SemEval-2016 Task 3: Community question answering / Nakov, Preslav; Màrquez, Lluís; Moschitti, Alessandro; Magdy, Walid; Mubarak, Hamdy; Freihat, Abed Alhakim; Glass, James; Randeree, Bilal. - ELECTRONIC. - (2016), pp. 525-545. (Paper presented at the 10th International Workshop on Semantic Evaluation, SemEval 2016, held in San Diego, United States, 16-17 June 2016) [10.18653/v1/s16-1083].

SemEval-2016 Task 3: Community question answering

Moschitti, Alessandro; Freihat, Abed Alhakim
2016-01-01

Abstract

This paper describes the SemEval-2016 Task 3 on Community Question Answering, which we offered in English and Arabic. For English, we had three subtasks: Question–Comment Similarity (subtask A), Question–Question Similarity (B), and Question–External Comment Similarity (C). For Arabic, we had another subtask: Rerank the correct answers for a new question (D). Eighteen teams participated in the task, submitting a total of 95 runs (38 primary and 57 contrastive) for the four subtasks. A variety of approaches and features were used by the participating systems to address the different subtasks, which are summarized in this paper. The best systems achieved an official score (MAP) of 79.19, 76.70, 55.41, and 45.83 in subtasks A, B, C, and D, respectively. These scores are significantly better than those for the baselines that we provided. For subtask A, the best system improved over the 2015 winner by 3 points absolute in terms of Accuracy.
2016
SemEval 2016 - 10th International Workshop on Semantic Evaluation, Proceedings
P. Nakov, L. Màrquez, A. Moschitti, W. Magdy, H. Mubarak, A. A. Freihat, J. Glass, and B. Randeree
San Diego, California, USA
Association for Computational Linguistics (ACL)
9781941643952
Nakov, Preslav; Màrquez, Lluís; Moschitti, Alessandro; Magdy, Walid; Mubarak, Hamdy; Freihat, Abed Alhakim; Glass, James; Randeree, Bilal
Files in this record:

2016_SemEval_Nakov_Task3_cQA.pdf

Access: open access
Type: Publisher's version (Publisher's layout)
License: Creative Commons
Size: 412.33 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/204843
Citations
  • PMC: not available
  • Scopus: 196
  • Web of Science: not available
  • OpenAlex: not available