
Using the audio respiration signal for multimodal discrimination of expressive movement qualities / Lussu, V.; Niewiadomski, R.; Volpe, G.; Camurri, A. - 9997:(2016), pp. 102-115. (Paper presented at the 7th International Workshop on Human Behavior Understanding, HBU 2016, held in the Netherlands in 2016) [10.1007/978-3-319-46843-3_7].

Using the audio respiration signal for multimodal discrimination of expressive movement qualities

Niewiadomski, R.
2016-01-01

Abstract

In this paper we propose a multimodal approach to distinguish between movements displaying three different expressive qualities: fluid, fragmented, and impulsive. Our approach is based on the Event Synchronization algorithm, which we apply to compute the amount of synchronization between two low-level features extracted from multimodal data: the energy of the audio respiration signal, captured by a standard microphone placed near the mouth, and the whole-body kinetic energy, estimated from motion capture data. The method was evaluated on 90 movement segments performed by 5 dancers. Results show that fragmented movements display higher average synchronization than fluid and impulsive movements.
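The Event Synchronization measure mentioned in the abstract can be sketched as follows. This is a minimal illustration of the classic pairwise event-coincidence count (Quiroga et al.), not the authors' implementation: the event times, the coincidence window `tau`, and the variable names are all illustrative assumptions; in the paper, events would be extracted from the respiration-energy and kinetic-energy signals.

```python
# Minimal sketch of the Event Synchronization measure.
# Event times and tau below are illustrative, not from the paper.
import math

def event_sync(tx, ty, tau):
    """Return Q in [0, 1]: synchronization between two sorted lists of
    event times, using a fixed coincidence window tau (seconds)."""
    def count(a, b):
        # c(a|b): events in a that closely follow (or coincide with) events in b
        c = 0.0
        for ti in a:
            for tj in b:
                d = ti - tj
                if 0 < d <= tau:
                    c += 1.0
                elif d == 0:
                    c += 0.5  # simultaneous events count half per direction
        return c
    denom = math.sqrt(len(tx) * len(ty))
    return (count(tx, ty) + count(ty, tx)) / denom if denom else 0.0

# Hypothetical event onsets (seconds), e.g. peaks of the two energy signals
resp_events = [0.5, 1.5, 2.6, 4.0]
body_events = [0.6, 1.4, 2.5, 4.3]
q = event_sync(resp_events, body_events, tau=0.2)  # q = 0.75 here
```

Q is 1 when every event in one series has a near-coincident partner in the other, and 0 when no events fall within `tau` of each other; the paper compares such averages across fluid, fragmented, and impulsive segments.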
2016
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
GEWERBESTRASSE 11, CHAM, CH-6330, SWITZERLAND
Springer Verlag
978-3-319-46842-6
978-3-319-46843-3
Lussu, V.; Niewiadomski, R.; Volpe, G.; Camurri, A.
Files in this record:
HBU16_lussu.pdf — Refereed post-print (refereed author's manuscript); License: All rights reserved; 445.37 kB, Adobe PDF; access restricted to repository managers.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/279322
Citations
  • PMC: n/a
  • Scopus: 3
  • ISI (Web of Science): 2
  • OpenAlex: n/a