Constraint-based model for synthesis of multimodal sequential expressions of emotions / Niewiadomski, R; Hyniewska, Sj; Pelachaud, C. - In: IEEE TRANSACTIONS ON AFFECTIVE COMPUTING. - ISSN 1949-3045. - 2:3(2011), pp. 134-146. [10.1109/T-AFFC.2011.5]

Constraint-based model for synthesis of multimodal sequential expressions of emotions

Niewiadomski, R;
2011-01-01

Abstract

Emotional expressions play a key role in interactions between virtual agents and human users. In this paper, we present a new constraint-based approach to the generation of multimodal emotional displays. The displays generated with our method are not limited to the face: they are composed of signals from different modalities that are partially ordered in time. We also describe an evaluation of the main features of our approach, examining the role of multimodality, sequentiality, and constraints in the perception of synthesized emotional states. The results show that applying our algorithm improves the communication of a large spectrum of emotional states, and that the believability of the agent animations increases when constraints over the multimodal signals are used.
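
To make the idea of constraining partially ordered multimodal signals concrete, the following is a minimal illustrative sketch in Python. It is not the model described in the paper: the Signal and Constraint classes, their fields, and the satisfies function are hypothetical names introduced here only to show, under simple assumptions, how onset-ordering constraints across modalities could be checked for a candidate display.

    # Illustrative sketch only; not the paper's algorithm. All names are hypothetical.
    # A display is a set of signals on different modalities; a constraint requires
    # one signal's onset to precede another's by a minimum gap.
    from dataclasses import dataclass

    @dataclass
    class Signal:
        name: str        # e.g. "frown", "head_down"
        modality: str    # e.g. "face", "head", "gesture"
        start: float     # onset time in seconds
        end: float       # offset time in seconds

    @dataclass
    class Constraint:
        before: str            # signal whose onset must come first
        after: str             # signal whose onset must come later
        min_gap: float = 0.0   # minimum delay between the two onsets (seconds)

    def satisfies(display: list[Signal], constraints: list[Constraint]) -> bool:
        """Return True if every ordering constraint holds for the display."""
        onset = {s.name: s.start for s in display}
        for c in constraints:
            if c.before in onset and c.after in onset:
                if onset[c.after] - onset[c.before] < c.min_gap:
                    return False
        return True

    # Example: the head movement must not start before the facial signal it accompanies.
    display = [
        Signal("frown", "face", start=0.0, end=1.2),
        Signal("head_down", "head", start=0.3, end=1.0),
    ]
    rules = [Constraint(before="frown", after="head_down", min_gap=0.1)]
    print(satisfies(display, rules))  # True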
Year: 2011
Issue: 3
Authors: Niewiadomski, R; Hyniewska, Sj; Pelachaud, C

Files in this product:

File: TAC11_niewiadomski_draft.pdf
Access: Archive administrators only
Type: Post-print (refereed author's manuscript)
License: All rights reserved
Size: 1.01 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/280619
Citations
  • PMC: ND
  • Scopus: 39
  • Web of Science (ISI): 31
  • OpenAlex: ND