Guiding Attention in Sequence-to-Sequence Models for Dialogue Act Prediction

Giovanna Varni;
2020-01-01

Abstract

The task of predicting dialog acts (DA) based on conversational dialog is a key component in the development of conversational agents. Accurately predicting DAs requires precise modeling of both the conversation and the global tag dependencies. We leverage seq2seq approaches widely adopted in Neural Machine Translation (NMT) to improve the modeling of tag sequentiality. Seq2seq models are known to learn complex global dependencies, while currently proposed approaches based on linear conditional random fields (CRF) only model local tag dependencies. In this work, we introduce a seq2seq model tailored for DA classification that uses a hierarchical encoder, a novel guided attention mechanism, and beam search applied to both training and inference. Compared to the state of the art, our model does not require handcrafted features and is trained end-to-end. Furthermore, the proposed approach achieves an unmatched accuracy score of 85% on SwDA and a state-of-the-art accuracy score of 91.6% on MRDA.
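The architecture sketched in the abstract can be illustrated with a minimal PyTorch example (single-conversation batches): a word-level GRU feeds an utterance-level GRU (the hierarchical encoder), and a decoder emits one DA tag per utterance while attending over utterance representations with a diagonal Gaussian prior standing in for the guided attention mechanism. Module names, dimensions, and the exact form of the prior are illustrative assumptions, not the authors' published implementation; beam search and training details are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.word_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)  # word-level encoder
        self.utt_rnn = nn.GRU(hid_dim, hid_dim, batch_first=True)   # utterance-level encoder

    def forward(self, dialog):
        # dialog: (n_utts, n_words) token ids for a single conversation
        _, h = self.word_rnn(self.emb(dialog))             # h: (1, n_utts, hid_dim), one vector per utterance
        ctx, _ = self.utt_rnn(h.squeeze(0).unsqueeze(0))   # contextualize utterances across the dialog
        return ctx.squeeze(0)                              # (n_utts, hid_dim)

class GuidedAttentionDecoder(nn.Module):
    def __init__(self, n_tags, hid_dim=128, sigma=1.0):
        super().__init__()
        self.tag_emb = nn.Embedding(n_tags, hid_dim)
        self.cell = nn.GRUCell(2 * hid_dim, hid_dim)
        self.out = nn.Linear(2 * hid_dim, n_tags)
        self.sigma = sigma  # width of the diagonal attention prior (assumed form)

    def forward(self, enc, gold_tags):
        # enc: (n_utts, hid_dim); gold_tags: (n_utts,) previous-tag inputs (teacher forcing)
        n_utts, hid = enc.shape
        positions = torch.arange(n_utts, dtype=torch.float, device=enc.device)
        h = enc.new_zeros(hid)      # decoder state
        prev = enc.new_zeros(hid)   # embedding of the previous tag (start symbol = zeros)
        logits = []
        for t in range(n_utts):
            # content scores plus a Gaussian "guide" centered on the utterance being tagged
            content = enc @ h
            prior = -((positions - t) ** 2) / (2 * self.sigma ** 2)
            alpha = F.softmax(content + prior, dim=0)
            context = alpha @ enc
            h = self.cell(torch.cat([prev, context]).unsqueeze(0), h.unsqueeze(0)).squeeze(0)
            logits.append(self.out(torch.cat([h, context])))
            prev = self.tag_emb(gold_tags[t])
        return torch.stack(logits)  # (n_utts, n_tags)

# Toy usage: a 3-utterance dialog, 5 tokens per utterance, 100-word vocabulary, 4 DA tags.
encoder = HierarchicalEncoder(vocab_size=100)
decoder = GuidedAttentionDecoder(n_tags=4)
dialog = torch.randint(0, 100, (3, 5))
gold = torch.tensor([0, 2, 1])
loss = F.cross_entropy(decoder(encoder(dialog), gold), gold)

Treating the per-utterance tag sequence as the decoder's target is what lets such a model capture global tag dependencies that a linear CRF, which only conditions on adjacent tags, cannot.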
2020
Proceedings of the AAAI Conference on Artificial Intelligence
Palo Alto, California USA
AAAI Press, Palo Alto, California USA
9781577358350
Colombo, Pierre; Chapuis, Emile; Manica, Matteo; Vignon, Emmanuel; Varni, Giovanna; Clavel, Chloe
Guiding Attention in Sequence-to-Sequence Models for Dialogue Act Prediction / Colombo, Pierre; Chapuis, Emile; Manica, Matteo; Vignon, Emmanuel; Varni, Giovanna; Clavel, Chloe. - 34:05(2020), pp. 7594-7601. (Paper presented at the 34th AAAI Conference on Artificial Intelligence, AAAI 2020, held in New York, NY, United States, February 7–12, 2020) [10.1609/aaai.v34i05.6259].
Files in this record:

AAAI_2020.pdf
  Access: open access
  Type: Publisher's version (Publisher's layout)
  License: All rights reserved
  Size: 681.59 kB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/365584
Citations
  • PMC: ND
  • Scopus: 47
  • Web of Science (ISI): 31
  • OpenAlex: ND