Simultaneous dialog act segmentation and classification from human-human spoken conversations

Quarteroni, Silvia Alessandra; Ivanou, Aliaksei; Riccardi, Giuseppe
2011-01-01

Abstract

An accurate identification of dialog acts (DAs), which represent the illocutionary aspect of communication, is essential to support the understanding of human conversations. This requires (1) the segmentation of human-human dialogs into turns, (2) the intra-turn segmentation into DA boundaries and (3) the classification of each segment according to a DA tag. This process is particularly challenging when both segmentation and tagging are automated and utterance hypotheses derive from the erroneous results of automatic speech recognition (ASR). In this paper, we use Conditional Random Fields to learn models for the simultaneous segmentation and labeling of DAs from whole human-human spoken dialogs. We identify the best-performing lexical feature combinations on the LUNA and SWITCHBOARD human-human dialog corpora and compare their performance to that of discriminative DA classifiers based on manually segmented utterances. Additionally, we assess our models' robustness to recognition errors, showing that DA identification is robust in the presence of high word error rates.
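
The following is a minimal sketch (not the authors' implementation) of how joint DA segmentation and labeling can be cast as word-level sequence tagging with a linear-chain CRF. It assumes the third-party sklearn_crfsuite package and a begin/inside label encoding (e.g. B-Statement / I-Statement) so that segment boundaries and DA tags are predicted in a single pass; the toy dialog turn, tag names and lexical features are illustrative only.

    # Sketch: joint DA segmentation and labeling as CRF sequence tagging.
    # Assumes sklearn_crfsuite; labels use a hypothetical B-/I- encoding.
    import sklearn_crfsuite

    def word_features(words, i):
        """Lexical features for token i: current, previous and next word."""
        return {
            "bias": 1.0,
            "word.lower": words[i].lower(),
            "prev.lower": words[i - 1].lower() if i > 0 else "<BOS>",
            "next.lower": words[i + 1].lower() if i < len(words) - 1 else "<EOS>",
        }

    def featurize(turn):
        return [word_features(turn, i) for i in range(len(turn))]

    # One toy turn containing two DAs; boundaries are encoded in the B-/I- labels.
    train_turns = [["yes", "i", "agree", "can", "you", "repeat", "that"]]
    train_labels = [["B-Agreement", "I-Agreement", "I-Agreement",
                     "B-Question", "I-Question", "I-Question", "I-Question"]]

    X_train = [featurize(t) for t in train_turns]
    y_train = train_labels

    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
    crf.fit(X_train, y_train)

    # Decoding a new turn yields the segmentation (each B- tag opens a new DA)
    # and the DA class of every segment simultaneously.
    test_turn = ["i", "agree", "can", "you", "repeat"]
    print(crf.predict([featurize(test_turn)])[0])

In this setup, evaluating on ASR hypotheses instead of manual transcripts only changes the input word sequences, which is what makes robustness to word errors straightforward to measure.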
2011
2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Prague
IEEE
9781457705397
Use this identifier to cite or link to this document: https://hdl.handle.net/11572/89871