
Evaluating the Representational Hub of Language and Vision Models / Shekhar, Ravi; Takmaz, Ece; Fernández, Raquel; Bernardi, Raffaella. - ELECTRONIC. - (2019), pp. 211-222. (Paper presented at the IWCS 2019 conference, held in Gothenburg, Sweden, 23rd-27th May 2019) [10.18653/v1/W19-0418].

Evaluating the Representational Hub of Language and Vision Models

Shekhar, Ravi; Bernardi, Raffaella
2019-01-01

Abstract

The multimodal models used in the emerging field at the intersection of computational linguistics and computer vision implement the bottom-up processing of the “Hub and Spoke” architecture proposed in cognitive science to represent how the brain processes and combines multi-sensory inputs. In particular, the Hub is implemented as a neural network encoder. We investigate the effect on this encoder of various vision-and-language tasks proposed in the literature: visual question answering, visual reference resolution, and visually grounded dialogue. To measure the quality of the representations learned by the encoder, we use two kinds of analyses. First, we evaluate the encoder pre-trained on the different vision-and-language tasks on an existing “diagnostic task” designed to assess multimodal semantic understanding. Second, we carry out a battery of analyses aimed at studying how the encoder merges and exploits the two modalities.
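As a rough illustration of the "Hub" idea described in the abstract, the sketch below fuses a visual feature vector and a linguistic feature vector into a single multimodal representation via concatenation and a linear projection. This is a minimal toy sketch, not the paper's actual encoder: the dimensions, weights, and the tanh non-linearity are placeholder assumptions chosen only to make the fusion step concrete.

```python
import math
import random

def hub_encoder(visual_feats, lang_feats, weights, bias):
    # Concatenate the two "spoke" outputs (vision and language), then
    # apply a linear map followed by tanh to obtain a single multimodal
    # "hub" representation. Weights here are untrained placeholders.
    fused = visual_feats + lang_feats  # list concatenation
    return [math.tanh(sum(w * x for w, x in zip(row, fused)) + b)
            for row, b in zip(weights, bias)]

random.seed(0)
VIS_DIM, LANG_DIM, HUB_DIM = 8, 4, 3   # toy sizes, not the paper's
v = [random.gauss(0, 1) for _ in range(VIS_DIM)]    # stand-in image features
l = [random.gauss(0, 1) for _ in range(LANG_DIM)]   # stand-in text encoding
W = [[random.gauss(0, 0.1) for _ in range(VIS_DIM + LANG_DIM)]
     for _ in range(HUB_DIM)]
b = [0.0] * HUB_DIM
h = hub_encoder(v, l, W, b)
print(len(h))  # 3
```

In the paper's setting the analogous encoder is trained end-to-end on tasks such as visual question answering, and it is this learned fused representation that the diagnostic analyses probe.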
2019
IWCS 2019: Proceedings of the 13th International Conference on Computational Semantics: Long Papers
Stroudsburg, PA
ACL
978-1-950737-19-2
Shekhar, Ravi; Takmaz, Ece; Fernández, Raquel; Bernardi, Raffaella
Files in this record:

File: iwcs19.pdf
Access: open access
Type: Publisher's version (Publisher's layout)
License: All rights reserved
Size: 1.42 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/250571
Citations
  • Scopus: 10