Pezzelle, Sandro; Steinert-Threlkeld, Shane; Bernardi, Raffaella; Szymanik, Jakub. Some of Them Can Be Guessed! Exploring the Effect of Linguistic Context in Predicting Quantifiers. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), Melbourne, Australia, 15–20 July 2018, vol. 2, pp. 114–119. doi:10.18653/v1/P18-2019.
Some of Them Can Be Guessed! Exploring the Effect of Linguistic Context in Predicting Quantifiers
Sandro Pezzelle; Shane Steinert-Threlkeld; Raffaella Bernardi; Jakub Szymanik
2018
Abstract
We study the role of linguistic context in predicting quantifiers (‘few’, ‘all’). We collect crowdsourced data from human participants and test various models in a local (single-sentence) and a global (multi-sentence) context condition. Models significantly outperform humans in the former setting and are only slightly better in the latter. While human performance improves with more linguistic context (especially on proportional quantifiers), model performance suffers. Models are very effective at exploiting lexical and morpho-syntactic patterns; humans are better at genuinely understanding the meaning of the (global) context.

| File | Access | Type | License | Size | Format | |
|---|---|---|---|---|---|---|
| acl-2018.pdf | Open access | Publisher's version (publisher's layout) | All rights reserved | 537.68 kB | Adobe PDF | View/Open |
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.
