AVLaughterCycle: Enabling a virtual agent to join in laughing with a conversational partner using a similarity-driven audiovisual laughter animation

Niewiadomski, R.
2010-01-01

Abstract

The AVLaughterCycle project aims at developing an audiovisual laughing machine, able to detect and respond to a user's laughs. Laughter is an important cue for reinforcing engagement in human-computer interactions. As a first step toward this goal, we have implemented a system capable of recording the laugh of a user and responding to it with a similar laugh. The output laugh is automatically selected from an audiovisual laughter database by analyzing acoustic similarities with the input laugh. It is displayed by an Embodied Conversational Agent, animated using the audio-synchronized facial movements of the subject who originally uttered the laugh. The application is fully implemented, works in real time, and a large audiovisual laughter database has been recorded as part of the project. This paper presents AVLaughterCycle, its underlying components, the freely available laughter database and the application architecture. The paper also includes evaluations of several core components of the application. Objective tests show that the similarity search engine, though simple, significantly outperforms chance at grouping laughs by speaker or type. This result can be considered a first measurement for computing acoustic similarities between laughs. A subjective evaluation has also been conducted to measure the influence of visual cues on users' evaluation of similarity between laughs.
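
The abstract describes a similarity-driven retrieval step: the system summarizes the input laugh acoustically and picks the closest laugh in the database. The sketch below is only a minimal illustration of how such an acoustic nearest-neighbour lookup could work; the MFCC-based features, the Euclidean distance, and all function and file names are assumptions for illustration, not the components actually used in the paper.

```python
# Minimal sketch (not the paper's implementation): return the database laugh
# whose acoustic summary is closest to the input laugh.
import numpy as np
import librosa


def laugh_features(path, sr=16000, n_mfcc=13):
    """Summarize a laugh recording as its mean MFCC vector (assumed feature)."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)


def most_similar_laugh(input_path, database_paths):
    """Pick the database laugh with the smallest feature distance to the input."""
    query = laugh_features(input_path)
    feats = np.stack([laugh_features(p) for p in database_paths])
    distances = np.linalg.norm(feats - query, axis=1)
    return database_paths[int(np.argmin(distances))]


# Hypothetical usage: the selected laugh's audio-synchronized facial animation
# would then drive the Embodied Conversational Agent.
# best = most_similar_laugh("input_laugh.wav", ["db/laugh_001.wav", "db/laugh_002.wav"])
```
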
Year: 2010
Issue: 1
Authors: Urbain, J; Niewiadomski, R; Bevacqua, E; Dutoit, T; Moinet, A; Pelachaud, C; Picart, B; Tilmanne, J; Wagner, J
AVLaughterCycle: Enabling a virtual agent to join in laughing with a conversational partner using a similarity-driven audiovisual laughter animation / Urbain, J; Niewiadomski, R; Bevacqua, E; Dutoit, T; Moinet, A; Pelachaud, C; Picart, B; Tilmanne, J; Wagner, J. - In: JOURNAL ON MULTIMODAL USER INTERFACES. - ISSN 1783-7677. - 4:1(2010), pp. 47-58. [10.1007/s12193-010-0053-1]
Files in this record:
JMUI10_urbainetaldraft.pdf (access restricted to archive administrators)
  Type: Refereed author's manuscript (post-print)
  License: All rights reserved
  Size: 1.16 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/280634
Citations
  • PMC: ND (not available)
  • Scopus: 34
  • Web of Science (ISI): 22