Beyond task success: A closer look at jointly learning to see, ask, and GuessWhat

Shekhar, Ravi; Bernardi, Raffaella
2019-01-01

Abstract

We propose a grounded dialogue state encoder which addresses a foundational issue: how to integrate visual grounding with dialogue system components. As a test-bed, we focus on the GuessWhat?! game, a two-player game in which the goal is to identify an object in a complex visual scene by asking a sequence of yes/no questions. Our visually-grounded encoder leverages synergies between guessing and asking questions, as it is trained jointly using multi-task learning. We further enrich our model via a cooperative learning regime. We show that the introduction of both the joint architecture and cooperative learning leads to accuracy improvements over the baseline system. We compare our approach to an alternative system which extends the baseline with reinforcement learning. Our in-depth analysis shows that the linguistic skills of the two models differ dramatically, despite reaching comparable performance levels. This points to the importance of analyzing the linguistic output of competing systems beyond numeric comparisons based solely on task success.
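To make the multi-task setup concrete, the following is a minimal sketch of a joint objective for a shared grounded dialogue state encoder, assuming the question-generation and guesser losses are combined as a weighted sum; the mixing weight \(\lambda\) and the exact combination scheme are illustrative assumptions, not the formulation reported in the paper.

\[
\mathcal{L}_{\text{joint}} \;=\; \lambda\,\mathcal{L}_{\text{QGen}} \;+\; (1-\lambda)\,\mathcal{L}_{\text{Guesser}},
\]

where \(\mathcal{L}_{\text{QGen}}\) is the cross-entropy over the generated question tokens, \(\mathcal{L}_{\text{Guesser}}\) is the cross-entropy over the candidate objects in the scene, and both losses back-propagate through the shared visually-grounded encoder.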
2019
NAACL HLT 2019: The 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Proceedings of the Conference, Vol. 1: Long and Short Papers
Stroudsburg, PA
ACL
978-1-950737-13-0
Shekhar, Ravi; Venkatesh, Aashish; Baumgärtner, Tim; Bruni, Elia; Plank, Barbara; Bernardi, Raffaella; Fernández, Raquel
Beyond task success: A closer look at jointly learning to see, ask, and GuessWhat / Shekhar, Ravi; Venkatesh, Aashish; Baumgärtner, Tim; Bruni, Elia; Plank, Barbara; Bernardi, Raffaella; Fernández, Raquel. - ELECTRONIC. - (2019), pp. 2578-2587. (Paper presented at the NAACL HLT 2019 conference, held in Minneapolis, MN, 2nd-5th June 2019) [10.18653/v1/N19-1265].
Files in this item:

File: naacl19.pdf (open access)
Type: Publisher's version (publisher's layout)
License: Creative Commons
Size: 898.27 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11572/250569
Citations
  • PMC: not available
  • Scopus: 38
  • Web of Science (ISI): 21
  • OpenAlex: not available