Automated laughter detection from full-body movements

Niewiadomski, Radoslaw; Mancini, Maurizio; Varni, Giovanna; Volpe, Gualtiero; Camurri, Antonio
2016

Abstract

In this paper, we investigate the detection of laughter from the user's nonverbal full-body movement in social and ecological contexts. Eight hundred and one laughter and nonlaughter segments of full-body movement were examined from a corpus of motion capture data of subjects participating in social activities that stimulated laughter. A set of 13 full-body movement features was identified, and corresponding automated extraction algorithms were developed. These features were extracted from the laughter and nonlaughter segments, and the resulting dataset was provided as input to supervised machine learning techniques. Both discriminative (radial basis function support vector machines (RBF-SVMs), k-nearest neighbor, and random forest) and probabilistic (naive Bayes and logistic regression) classifiers were trained and evaluated. A comparison of automated classification with the ratings of human observers for the same laughter and nonlaughter segments showed that the performance of our approach to automated laughter detection is comparable with that of humans. The highest F-score (0.74) was obtained by the random forest classifier, whereas the F-score obtained by human observers was 0.70. Based on the analysis techniques introduced in the paper, a vision-based system prototype for automated laughter detection was designed and evaluated. SVMs and Kohonen's self-organizing maps were used for training, and the highest F-score (0.73) was obtained with the SVM.
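The paper itself ships no code. As a rough illustration of the setup the abstract describes (a 13-dimensional movement feature vector per segment, the five listed classifiers, and F-score evaluation, where F1 is the harmonic mean of precision P and recall R, i.e. F1 = 2PR/(P + R)), the following is a minimal sketch using scikit-learn. The synthetic data, feature semantics, and hyperparameters are placeholder assumptions for illustration, not the authors' actual configuration.

```python
"""Minimal sketch (not the authors' code) of the classification setup
the abstract describes: 13 movement features per segment, binary
laughter/non-laughter labels, and the five listed classifiers.
All data below is synthetic; hyperparameters are illustrative only."""
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for the real dataset: 801 segments x 13 movement features.
n_segments, n_features = 801, 13
X = rng.normal(size=(n_segments, n_features))
y = rng.integers(0, 2, size=n_segments)  # 1 = laughter, 0 = nonlaughter

classifiers = {
    "RBF-SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Naive Bayes": GaussianNB(),
    "Logistic regression": make_pipeline(StandardScaler(), LogisticRegression()),
}

for name, clf in classifiers.items():
    # F-score via 10-fold cross-validation (stratified by default for
    # classifiers), printed as a per-classifier mean.
    scores = cross_val_score(clf, X, y, cv=10, scoring="f1")
    print(f"{name:20s} mean F1 = {scores.mean():.2f}")
```

With the real 801-segment feature matrix in X and laughter labels in y, the per-classifier mean F1 printed here would be directly comparable to the 0.74 the abstract reports for the random forest classifier.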
Automated laughter detection from full-body movements / Niewiadomski, Radoslaw; Mancini, Maurizio; Varni, Giovanna; Volpe, Gualtiero; Camurri, Antonio. - In: IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS. - ISSN 2168-2291. - Print. - 46 (2016), pp. 113-123. [DOI: 10.1109/THMS.2015.2480843]
Files in this record:

THMS16_niewiadomskietal_draft.pdf
  Type: Refereed author's manuscript (post-print)
  License: All rights reserved
  Size: 4.73 MB
  Format: Adobe PDF
  Access: archive managers only

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/280581
Citations
  • PMC: n/a
  • Scopus: 25
  • Web of Science (ISI): 17
  • OpenAlex: n/a