
Grounded Textual Entailment / Vu, Hoa Trong; Greco, Claudio; Erofeeva, Aliia; Jafaritazehjan, Somayeh; Linders, Guido; Tanti, Marc; Testoni, Alberto; Bernardi, Raffaella; Gatt, Albert. - ELECTRONIC. - (2018), pp. 2354-2368. (Paper presented at COLING 2018, held in Santa Fe, New Mexico, USA, 20th-26th August 2018).

Grounded Textual Entailment

Greco, Claudio; Bernardi, Raffaella
2018-01-01

Abstract

Capturing semantic relations between sentences, such as entailment, is a long-standing challenge for computational semantics. Logic-based models analyse entailment in terms of possible worlds (interpretations, or situations) where a premise P entails a hypothesis H iff in all worlds where P is true, H is also true. Statistical models view this relationship probabilistically, addressing it in terms of whether a human would likely infer H from P. In this paper, we wish to bridge these two perspectives, by arguing for a visually-grounded version of the Textual Entailment task. Specifically, we ask whether models can perform better if, in addition to P and H, there is also an image (corresponding to the relevant “world” or “situation”). We use a multimodal version of the SNLI dataset (Bowman et al., 2015) and we compare “blind” and visually-augmented models of textual entailment. We show that visual information is beneficial, but we also conduct an in-depth error analysis that reveals that current multimodal models are not performing “grounding” in an optimal fashion.
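The abstract describes the grounded entailment setup: given a premise P, a hypothesis H, and an image standing in for the relevant "world" or "situation", a model predicts one of entailment, contradiction, or neutral. The sketch below is only illustrative and is not the architecture from the paper: it assumes an LSTM sentence encoder, precomputed CNN image features, and a use_image flag that switches between a "blind" and a visually-augmented classifier; all names and dimensions (GroundedEntailmentClassifier, image_dim=2048, and so on) are hypothetical choices for the example.

# A minimal sketch (not the authors' implementation) of a visually-augmented
# entailment classifier: premise and hypothesis are encoded with an LSTM,
# the image is represented by a precomputed feature vector (e.g. from a CNN),
# and the representations are concatenated before a 3-way softmax over
# {entailment, contradiction, neutral}. All sizes are illustrative.
import torch
import torch.nn as nn

class GroundedEntailmentClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=300, hidden_dim=256,
                 image_dim=2048, num_classes=3, use_image=True):
        super().__init__()
        self.use_image = use_image                      # False -> "blind" model
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        joint_dim = 2 * hidden_dim + (image_dim if use_image else 0)
        self.classifier = nn.Sequential(
            nn.Linear(joint_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def encode(self, token_ids):
        # Use the final LSTM hidden state as the sentence representation.
        _, (h, _) = self.encoder(self.embedding(token_ids))
        return h[-1]

    def forward(self, premise_ids, hypothesis_ids, image_feats=None):
        p = self.encode(premise_ids)
        h = self.encode(hypothesis_ids)
        feats = [p, h]
        if self.use_image:
            feats.append(image_feats)                   # precomputed image features
        return self.classifier(torch.cat(feats, dim=-1))

# Toy usage: a batch of 2 premise/hypothesis pairs with random token ids
# and random image feature vectors.
model = GroundedEntailmentClassifier()
premise = torch.randint(0, 20000, (2, 12))
hypothesis = torch.randint(0, 20000, (2, 9))
image = torch.randn(2, 2048)
logits = model(premise, hypothesis, image)              # shape: (2, 3)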
2018
Proceedings of the 27th International Conference on Computational Linguistics
Santa Fe, New Mexico
Association for Computational Linguistics
978-1-948087-50-6
Files in this record:

coling-2018.pdf

Open access

Type: Publisher's version (publisher's layout)
License: Creative Commons
Size: 1.36 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/221835
Citations
  • PMC: ND
  • Scopus: 14
  • Web of Science: ND
  • OpenAlex: ND