
Multimodal Engagement Classification for Affective Cinema

Khomami Abadi, Mojtaba; Staiano, Jacopo; Zancanaro, Massimo; Sebe, Niculae
2013-01-01

Abstract

This paper describes a multimodal approach to detect viewers' engagement through psycho-physiological affective signals. We investigate the individual contributions of the different modalities, and report experimental results obtained using several fusion strategies, in both per-clip and per-subject cross-validation settings. A sequence of clips from a short movie was shown to 15 participants, from whom we collected per-clip engagement self-assessments. Cues of the users' affective states were collected by means of (i) galvanic skin response (GSR), (ii) automatic facial tracking, and (iii) electroencephalogram (EEG) signals. The main findings of this study can be summarized as follows: (i) each individual modality significantly encodes the level of engagement of the viewers in response to movie clips, (ii) the GSR and EEG signals provide comparable contributions, and (iii) the best performance is obtained when the three modalities are used together. © 2013 IEEE.
2013
Proceedings of the 5th International Conference on Affective Computing and Intelligent Interaction (ACII 2013)
Piscataway
IEEE - Institute of Electrical and Electronics Engineers
9780769550480
Khomami Abadi, Mojtaba; Staiano, Jacopo; Cappelletti, A.; Zancanaro, Massimo; Sebe, Niculae
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/33039
Citations
  • Scopus 14
  • ISI 8
  • OpenAlex 15