Emotion Recognition Based on Joint Visual and Audio Cues

Sebe, Niculae
2006-01-01

Abstract

Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction, such as voice, gesture, and force feedback, are emerging. However, one ingredient necessary for natural interaction is still missing: emotions. This paper describes the problem of bimodal emotion recognition and advocates the use of probabilistic graphical models for fusing the different modalities. We test our audio-visual emotion recognition approach on 38 subjects with 11 HCI-related affect states. The experimental results show that average person-dependent emotion recognition accuracy improves substantially when both visual and audio information are used in classification. © 2006 IEEE.
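
As context for the fusion claim in the abstract: the paper advocates probabilistic graphical models for combining audio and visual cues. The sketch below is not the authors' model; it only illustrates the simplest probabilistic fusion rule, a naive-Bayes-style product of per-modality class posteriors under a conditional-independence assumption. All class names, probabilities, and priors here are hypothetical.

# Illustrative sketch only, NOT the paper's method: naive-Bayes-style
# late fusion of audio and visual classifier outputs. Assumes the two
# modalities are conditionally independent given the emotion class.
import numpy as np

# Hypothetical emotion classes (the paper uses 11 HCI-related affect states).
CLASSES = ["interest", "boredom", "confusion", "frustration"]

def fuse_posteriors(p_audio, p_video, prior):
    """Combine P(class | audio) and P(class | video).

    Under conditional independence, Bayes' rule gives
    P(c | a, v) proportional to P(c | a) * P(c | v) / P(c).
    """
    joint = p_audio * p_video / prior
    return joint / joint.sum()  # renormalize over classes

# Hypothetical outputs of two unimodal classifiers.
p_audio = np.array([0.50, 0.20, 0.20, 0.10])
p_video = np.array([0.30, 0.10, 0.40, 0.20])
prior = np.full(len(CLASSES), 1.0 / len(CLASSES))  # uniform class prior

fused = fuse_posteriors(p_audio, p_video, prior)
print(CLASSES[int(np.argmax(fused))], fused.round(3))

In this toy example the audio and visual classifiers disagree on the top class, and the fused posterior resolves the tie; the paper's graphical-model approach generalizes this idea beyond the independence assumption.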
Year: 2006
Conference: International Conference on Pattern Recognition
Place: Los Alamitos
Publisher: IEEE
ISBN: 9780769525211
Authors: Sebe, Niculae; Cohen, I.; Gevers, T.; Huang, T.
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/16947
Warning: the data shown have not been validated by the university.

Citations
  • PMC: n/a
  • Scopus: 153
  • Web of Science (ISI): 104
  • OpenAlex: 203