A Hitchhiker's Guide towards Transactive Memory System Modeling in Small Group Interactions / Tartaglione, Enzo; Biancardi, Beatrice; Mancini, Maurizio; Varni, Giovanna. - (2021), pp. 254-262. (Proceedings of the 23rd ACM International Conference on Multimodal Interaction, ICMI 2021, Montreal, QC, Canada, 18-22 October 2021) [10.1145/3461615.3485414].
A Hitchhiker's Guide towards Transactive Memory System Modeling in Small Group Interactions
Giovanna Varni
2021-01-01
Abstract
Modeling a Transactive Memory System (TMS) over time is a current challenge in Human-Centered Computing. A TMS is a group's meta-knowledge of "who knows what". Conceiving and developing machines able to deal with TMS is a relevant step in the field of Hybrid Intelligence, which aims at creating systems in which human and artificial teammates cooperate in a synergistic fashion. Recently, a TMS dataset was proposed in which a number of automated audio and visual features and manual annotations are extracted, taking inspiration from the Social Sciences literature. Is it possible, on top of these, to model the relationships between these engineered features and the TMS scores? In this work we first build and discuss a processing pipeline; we then propose four possible classifiers, two of which are based on artificial neural networks. We observe that the largest obstacle to modeling the target relationships currently lies in the limited amount of data available for training an automatic system. Our purpose with this work is to provide hints on how to avoid some common pitfalls when training these systems to learn TMS scores from audio/visual features.
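To make the data-scarcity pitfall mentioned in the abstract concrete, here is a minimal, hypothetical sketch (not the authors' code or pipeline): a small, regularized classifier trained on engineered audio/visual features, evaluated with leave-one-group-out cross-validation so that no group contributes to both training and testing. The feature matrix, labels, group identifiers, and all sizes are placeholder assumptions; scikit-learn is assumed available.

```python
# Hypothetical sketch of group-wise cross-validation on engineered features.
# All data below are random placeholders, not the TMS dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples, n_features, n_groups = 60, 12, 15          # assumed, illustrative sizes
X = rng.normal(size=(n_samples, n_features))          # engineered audio/visual features
y = rng.integers(0, 2, size=n_samples)                # binarized TMS score (placeholder)
groups = rng.integers(0, n_groups, size=n_samples)    # group identifier per sample

# A small, strongly regularized model is less prone to overfitting
# when very little training data are available.
clf = make_pipeline(StandardScaler(), LogisticRegression(C=0.1, max_iter=1000))

# Leave-one-group-out: each fold tests on one group unseen during training.
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"Mean accuracy over folds: {scores.mean():.2f}")
```

The group-wise split is the key design choice here: with few samples per group, a random split would leak group-specific patterns from training into testing and inflate the estimated performance.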
| File | Description | Type | License | Size | Format | Access |
|---|---|---|---|---|---|---|
| 3461615.3485414.pdf | Publisher's version, >10 MB | Other attachments | All rights reserved | 14.96 MB | Adobe PDF | Archive managers only |
| Compressed_Article.pdf | | Publisher's layout | All rights reserved | 1.46 MB | Adobe PDF | Archive managers only |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.



