Are word embeddings really a bad fit for the estimation of thematic fit? / Chersoni, E.; Pannitto, L.; Santus, E.; Lenci, A.; Huang, C.-R. - (2020), pp. 5708-5713. (Paper presented at the 12th International Conference on Language Resources and Evaluation, LREC 2020, held at the Palais du Pharo, France, in 2020).
Are word embeddings really a bad fit for the estimation of thematic fit?
Pannitto L.;
2020-01-01
Abstract
While neural embeddings are a popular choice for word representation in a wide variety of NLP tasks, their use for thematic fit modeling has been limited, as they have been reported to lag behind syntax-based count models. In this paper, we propose a comprehensive evaluation of count models and word embeddings on thematic fit estimation, taking into account a larger number of parameters and verb roles and also introducing dependency-based embeddings into the comparison. Our results show a complex scenario, in which a key factor for performance appears to be whether the model has access to reliable syntactic information for building the distributional representations of the roles.