
Comparatives, Quantifiers, Proportions: A Multi-Task Model for the Learning of Quantities from Vision / Pezzelle, Sandro; Sorodoc, Ionut-Teodor; Bernardi, Raffaella. - ELECTRONIC. - (2018), pp. 419-430. (Paper presented at NAACL HLT 2018, held in New Orleans, LA, 1st-6th June) [10.18653/v1/N18-1039].

Comparatives, Quantifiers, Proportions: A Multi-Task Model for the Learning of Quantities from Vision

Sandro Pezzelle; Ionut Sorodoc; Raffaella Bernardi
2018-01-01

Abstract

The present work investigates whether different quantification mechanisms (set comparison, vague quantification, and proportional estimation) can be jointly learned from visual scenes by a multi-task computational model. The motivation is that, in humans, these processes underlie the same cognitive, non-symbolic ability, which allows the automatic estimation and comparison of set magnitudes. We show that when information about the lower-complexity tasks is available, the higher-level proportional task becomes more accurate than when performed in isolation. Moreover, the multi-task model is able to generalize to unseen combinations of target/non-target objects. Consistent with behavioral evidence showing the interference of absolute number in the proportional task, the multi-task model no longer works when asked to provide the number of target objects in the scene.
2018
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1
Stroudsburg, PA
Association for Computational Linguistics
978-1-948087-27-8
Pezzelle, Sandro; Sorodoc, Ionut-teodor; Bernardi, Raffaella
Files in this record:

naacl-2018.pdf

Open access

Type: Editorial version (publisher's layout)
License: All rights reserved
Size: 1.6 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/221829