
Hierarchical brain network for face and voice integration of emotion expression

Rezk, M.; Benetti, S.; van Ackeren, M.; Collignon, O.
2019-01-01

Abstract

The brain has separate specialized computational units, located in occipital and temporal cortices, for processing faces and voices. However, humans seamlessly integrate signals from the faces and voices of others for optimal social interaction. How are emotional expressions, when delivered by different sensory modalities (faces and voices), integrated in the brain? In this study, we characterized the brain's response to faces, voices, and combined face-voice information (congruent, incongruent), which varied in expression (neutral, fearful). Using a whole-brain approach, we found that only the right posterior superior temporal sulcus (rpSTS) responded more to bimodal stimuli than to face or voice alone, but only when the stimuli contained emotional expression. Face- and voice-selective regions of interest, extracted from independent functional localizers, similarly revealed multisensory integration in the face-selective rpSTS only; furthermore, this was the only face-selective region that also responded significantly to voices. Dynamic causal modeling revealed that the rpSTS receives unidirectional information from the face-selective fusiform face area and the voice-selective temporal voice area, with emotional expression affecting the connection strength. Our study supports a hierarchical model of face and voice integration, with convergence in the rpSTS, in which integration depends on the (emotional) salience of the stimuli.
Davies-Thompson, J.; Elli, G. V.; Rezk, M.; Benetti, S.; van Ackeren, M.; Collignon, O.
Hierarchical brain network for face and voice integration of emotion expression / Davies-Thompson, J.; Elli, G. V.; Rezk, M.; Benetti, S.; van Ackeren, M.; Collignon, O. - In: Cerebral Cortex. - ISSN 1460-2199. - 29:9 (2019), pp. 3590-3605. [doi:10.1093/cercor/bhy240]
Files in this record:
DaviesThompson_2018_CerebCortex.pdf
  Access: archive administrators only
  Type: publisher's version (publisher's layout)
  License: all rights reserved
  Size: 1.23 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/272006
Citations
  • PubMed Central: 9
  • Scopus: 19
  • Web of Science: 19
  • OpenAlex: n/a