What BERT Sees: Cross-Modal Transfer for Visual Question Generation

Staiano, Jacopo (last author)

2020-01-01

Abstract

Pre-trained language models have recently contributed to significant advances in NLP tasks. Multi-modal variants of BERT have since been developed, relying on heavy pre-training over vast corpora of aligned textual and visual data, and applied primarily to classification tasks such as VQA. In this paper, we evaluate the visual capabilities of BERT out-of-the-box, without any pre-training on supplementary data. We study Visual Question Generation, a task of great interest for grounded dialog, which allows us to measure the impact of each modality, since the input can be visual and/or textual. Moreover, the generative nature of the task requires an adaptation, as BERT is primarily designed as an encoder. We introduce BERT-gen, a BERT-based architecture for text generation that can leverage either mono- or multi-modal representations. The results reported under different configurations indicate an innate capacity of BERT-gen to adapt to multi-modal data and text generation, even with little data available, avoiding expensive pre-training. The proposed model obtains substantial improvements over the state of the art on two established VQG datasets.
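
The abstract describes feeding visual and/or textual inputs to an off-the-shelf BERT and adapting it for generation. The snippet below is a minimal illustrative sketch of that general idea, not the authors' BERT-gen code: pre-extracted image region features (assumed here to be 2048-d detector vectors) are projected into BERT's embedding space, concatenated with text token embeddings, and a language-modelling head scores the next question token. The names `img_proj` and `lm_head`, the 36-region input, and the projection itself are assumptions for illustration; in practice they would be learned during fine-tuning.

```python
# Illustrative sketch (not the paper's released implementation): multi-modal
# input to a vanilla BERT encoder, followed by a token-prediction head.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
hidden = bert.config.hidden_size                      # 768 for bert-base

img_proj = nn.Linear(2048, hidden)                    # hypothetical: map region features into BERT's embedding space
lm_head = nn.Linear(hidden, bert.config.vocab_size)   # hypothetical: predicts the next question token

# Dummy inputs: 36 image regions (placeholder for detector features) and a textual prompt.
regions = torch.randn(1, 36, 2048)
text = tokenizer("a man riding a horse", return_tensors="pt")

# Embed text with BERT's own embedding table, project image regions,
# and concatenate both modalities into a single input sequence.
text_emb = bert.embeddings.word_embeddings(text["input_ids"])
img_emb = img_proj(regions)
inputs_embeds = torch.cat([img_emb, text_emb], dim=1)
attention_mask = torch.cat(
    [torch.ones(1, img_emb.size(1), dtype=torch.long), text["attention_mask"]], dim=1
)

out = bert(inputs_embeds=inputs_embeds, attention_mask=attention_mask)
next_token_logits = lm_head(out.last_hidden_state[:, -1, :])  # greedy decoding would take argmax here
```

Under this kind of setup, ablating one modality amounts to dropping either the image embeddings or the text embeddings from the concatenated sequence, which is how input-modality impact could be compared.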
2020
The 13th International Conference on Natural Language Generation: Proceedings of the Conference
Stroudsburg, PA, USA
Association for Computational Linguistics (ACL)
978-1-952148-54-5
Scialom, Thomas; Bordes, Patrick; Dray, Paul-Alexis; Staiano, Jacopo; Gallinari, Patrick
What BERT Sees: Cross-Modal Transfer for Visual Question Generation / Scialom, Thomas; Bordes, Patrick; Dray, Paul-Alexis; Staiano, Jacopo; Gallinari, Patrick. - (2020), pp. 327-337. (Paper presented at INLG 2020, held in Dublin, Ireland, 15th-18th December 2020) [10.18653/v1/2020.inlg-1.39].
Files in this record:
File: 2020.inlg-1.39v1.pdf
Access: Archive administrators only
Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 3.54 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/362932
Citations
  • PMC: n/a
  • Scopus: 8
  • Web of Science: n/a
  • OpenAlex: n/a