Background: Computational approaches hold significant promise for enhancing diagnosis and therapy in child and adolescent clinical practice. Clinical procedures heavily depend on vocal exchanges and interpersonal dynamics conveyed through speech. Research highlights the importance of investigating acoustic features and dyadic interactions during child development. However, observational methods are labor-intensive, time-consuming, and suffer from limited objectivity and quantification, hindering translation to everyday care. Aims: We propose a novel AI-based system for fully automatic acoustic segmentation of clinical sessions with autistic preschool children. Methods and procedures: We focused on naturalistic and unconstrained clinical contexts, which are characterized by background noise and data scarcity. Our approach addresses key challenges in the field while remaining non-invasive. We carefully evaluated model performance and flexibility in diverse, challenging conditions by means of domain alignment. Outcomes and results: Results demonstrated promising outcomes in voice activity detection and speaker diarization. Notably, minimal annotation effort (just 30 seconds of target data) significantly improved model performance across all tested conditions. Our models exhibit satisfactory predictive performance and flexibility for deployment in everyday settings. Conclusions and implications: Automating data annotation in real-world clinical scenarios can enable the widespread exploitation of advanced computational methods for downstream modeling, fostering precision approaches that bridge research and clinical practice.

Automated segmentation of child-clinician speech in naturalistic clinical contexts / Bertamini, Giulio; Furlanello, Cesare; Chetouani, Mohamed; Cohen, David; Venuti, Paola. - In: RESEARCH IN DEVELOPMENTAL DISABILITIES. - ISSN 0891-4222. - 157:(2025). [10.1016/j.ridd.2024.104906]

Automated segmentation of child-clinician speech in naturalistic clinical contexts

Bertamini, Giulio; Furlanello, Cesare; Venuti, Paola
2025-01-01

Files for this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11572/446922
Warning: the displayed data have not been validated by the university.

Citations
  • PubMed Central: n/a
  • Scopus: 2
  • Web of Science: 1
  • OpenAlex: 2