Training Computational Models of Group Processes without Groundtruth: the Self- vs External Assessment's Dilemma

Giovanna Varni
2022-01-01

Abstract

Supervised learning relies on the availability and reliability of the labels used to train computational models. In research areas such as Affective Computing and Social Signal Processing, such labels are usually extracted from multiple self- and/or external assessments. Labels are then either aggregated to produce a single groundtruth label, or all used during training, potentially degrading the models' performance. Defining a "true" label is, however, complex: labels can be gathered at different times, with different tools, and may contain biases. Furthermore, multiple assessments are usually available for the same sample, with potential contradictions. Thus, it is crucial to devise strategies that can take advantage of both self- and external assessments to train computational models without a reliable groundtruth. In this study, we designed and tested three such strategies with the aim of mitigating the biases and making the models more robust to uncertain labels. Results show that the strategy based on weighting the loss during training according to a measure of disagreement improved on the baseline's performance, underlining the potential of such an approach.
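The best-performing strategy weights each sample's training loss by a measure of annotator disagreement. The sketch below illustrates one plausible instantiation in PyTorch, assuming several assessments (self- and/or external) per sample; the use of label standard deviation as the disagreement measure, and all function names, are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def disagreement_weights(label_sets: torch.Tensor) -> torch.Tensor:
    """Map per-sample annotator disagreement to loss weights.

    label_sets: (batch, n_annotators) labels from self-/external assessments.
    Returns (batch,) weights in (0, 1]; higher agreement -> higher weight.
    NOTE: standard deviation across annotators is an assumed disagreement
    measure for illustration, not necessarily the paper's choice.
    """
    disagreement = label_sets.float().std(dim=1)
    return 1.0 / (1.0 + disagreement)  # down-weight contested samples

def disagreement_weighted_loss(logits: torch.Tensor,
                               label_sets: torch.Tensor) -> torch.Tensor:
    # Regress toward the mean assessment, but scale each sample's loss
    # by how much the assessors agreed on it.
    target = label_sets.float().mean(dim=1)
    per_sample = F.mse_loss(logits.squeeze(-1), target, reduction="none")
    return (disagreement_weights(label_sets) * per_sample).mean()

# Example: 4 samples, 3 annotators each; contested samples contribute less.
labels = torch.tensor([[1., 1., 1.], [0., 1., 2.], [2., 2., 3.], [0., 0., 0.]])
preds = torch.randn(4, 1)
loss = disagreement_weighted_loss(preds, labels)
```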
Year: 2022
Published in: ICMI '22 Companion: Companion Publication of the 2022 International Conference on Multimodal Interaction
Place: New York, NY
Publisher: Association for Computing Machinery
ISBN: 9781450393898
Authors: Maman, Lucien; Volpe, Gualtiero; Varni, Giovanna
Citation: Training Computational Models of Group Processes without Groundtruth: the Self- vs External Assessment's Dilemma / Maman, Lucien; Volpe, Gualtiero; Varni, Giovanna. - ELECTRONIC. - (2022), pp. 18-23. (24th ACM International Conference on Multimodal Interaction, ICMI 2022, Bangalore, India, 2022) [10.1145/3536220.3563687].
Files in this item:

File: 3536220.3563687.pdf
Access: Restricted (archive administrators only)
Type: Publisher's version (Publisher's layout)
License: All rights reserved
Size: 639.91 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/363465
Citations:
  • PMC: ND
  • Scopus: 3
  • Web of Science: 3
  • OpenAlex: ND