
The role of respiration audio in multimodal analysis of movement qualities / Lussu, Vincenzo; Niewiadomski, Radoslaw; Volpe, Gualtiero; Camurri, Antonio. - In: JOURNAL ON MULTIMODAL USER INTERFACES. - ISSN 1783-7677. - Electronic. - 14:1 (2020), pp. 1-15. [DOI: 10.1007/s12193-019-00302-1]

The role of respiration audio in multimodal analysis of movement qualities

Lussu, Vincenzo; Niewiadomski, Radoslaw; Volpe, Gualtiero; Camurri, Antonio
2020-01-01

Abstract

In this paper, we explore how the audio respiration signal can contribute to the multimodal analysis of movement qualities. To this aim, we propose two novel techniques that combine the audio respiration signal, captured by a standard microphone placed near the mouth, with supervised machine learning algorithms. The first approach classifies a set of acoustic features extracted from the exhalations of a person performing fluid or fragmented movements. The second approach uses the intrapersonal synchronization between respiration and the kinetic energy of body movements to distinguish the same qualities: the degree of synchronization between the two modalities is first computed with the Event Synchronization algorithm, and a set of features derived from this synchronization value is then fed to machine learning algorithms. Both approaches were applied to a multimodal corpus of short performances, about 17 minutes in total, in which three professionals performed fluid and fragmented movements. The highest F-score for the first approach (0.87) was obtained in the binary classification task with Support Vector Machines (SVM-LP); the best result for the same task with the second approach was obtained with the Naive Bayes algorithm (F-score of 0.72). These results confirm that information about movement qualities can be inferred from respiration audio.
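The second approach hinges on the Event Synchronization algorithm, which counts quasi-simultaneous events in two event trains within a time lag tau and normalizes by the event counts. The sketch below illustrates the general idea on toy signals standing in for a respiration-audio envelope and a kinetic-energy curve; the peak-picking rule, threshold, lag value, and signals are all illustrative assumptions, not the paper's actual features or parameters.

```python
# Minimal sketch of Event Synchronization between two event trains.
# All thresholds, the lag `tau`, and the toy signals are assumptions
# for illustration, not the parameters used in the paper.
import numpy as np

def event_synchronization(tx, ty, tau):
    """Return Q in [0, 1]: synchronization between event-time arrays
    tx and ty (seconds), counting event pairs closer than lag tau."""
    tx, ty = np.asarray(tx), np.asarray(ty)

    def count(a, b):
        c = 0.0
        for t_a in a:
            d = t_a - b
            c += np.sum((0 < d) & (d <= tau))  # events in b leading t_a within tau
            c += 0.5 * np.sum(d == 0)          # simultaneous events count half each way
        return c

    m = np.sqrt(len(tx) * len(ty))
    return (count(tx, ty) + count(ty, tx)) / m if m else 0.0

def peaks_to_events(signal, fs, threshold):
    """Crude event extraction: times (s) of local maxima above `threshold`."""
    s = np.asarray(signal)
    idx = np.where((s[1:-1] > s[:-2]) & (s[1:-1] >= s[2:]) & (s[1:-1] > threshold))[0] + 1
    return idx / fs

# Toy usage: two noisy signals sharing the same peak structure, one slightly lagged.
rng = np.random.default_rng(0)
fs = 50.0
t = np.arange(0, 10, 1 / fs)
resp_env = np.abs(np.sin(2 * np.pi * 0.4 * t)) + 0.05 * rng.random(t.size)
kin_energy = np.abs(np.sin(2 * np.pi * 0.4 * (t - 0.05))) + 0.05 * rng.random(t.size)
Q = event_synchronization(peaks_to_events(resp_env, fs, 0.8),
                          peaks_to_events(kin_energy, fs, 0.8), tau=0.2)
print(f"Event Synchronization Q = {Q:.2f}")
```

In the paper's second approach, features computed from such synchronization values (not reproduced here) are then fed to standard classifiers such as Naive Bayes.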
Files in this record:

File: 2020_journals_jmui_1.pdf
Access: open access
Type: Publisher's version (publisher's layout)
License: Creative Commons
Size: 1.33 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/278857
Citations
  • PMC: ND
  • Scopus: 5
  • Web of Science: 5
  • OpenAlex: ND