QuestEval: Summarization Asks for Fact-based Evaluation

Staiano, Jacopo
2021-01-01

Abstract

Summarization evaluation remains an open research problem: current metrics such as ROUGE are known to be limited and to correlate poorly with human judgments. To alleviate this issue, recent work has proposed evaluation metrics which rely on question answering models to assess whether a summary contains all the relevant information in its source document. Though promising, the proposed approaches have so far failed to correlate better than ROUGE with human judgments. In this paper, we extend previous approaches and propose a unified framework, named QuestEval. In contrast to established metrics such as ROUGE or BERTScore, QuestEval does not require any ground-truth reference. Nonetheless, QuestEval substantially improves the correlation with human judgments over four evaluation dimensions (consistency, coherence, fluency, and relevance), as shown in extensive experiments.
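For illustration, the sketch below shows the general shape of such a QA-based, reference-free metric: questions generated from the summary should be answerable from the source (a precision-like score that penalizes inconsistent or hallucinated content), and questions generated from the source should be answerable from the summary (a recall-like score that penalizes missing content). This is a toy approximation, not the authors' implementation (see the paper for that); the question-generation checkpoint, its prompt format, and the use of raw QA confidence as a stand-in for QuestEval's answer-overlap comparison are all assumptions.

# Minimal sketch of a QA-based, reference-free summary metric in the
# spirit of QuestEval. Not the authors' implementation; the QG checkpoint
# and its prompt format are assumptions.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")
qg = pipeline("text2text-generation",
              model="valhalla/t5-base-e2e-qg")  # assumed QG checkpoint

def generate_questions(text, n=4):
    # Toy end-to-end QG; QuestEval instead conditions question
    # generation on answer spans extracted from the text.
    outputs = qg("generate questions: " + text,
                 num_beams=n, num_return_sequences=n, max_length=64)
    return [o["generated_text"] for o in outputs]

def answerability(questions, context):
    # Mean QA confidence when answering the questions against `context`.
    # QuestEval compares predicted answers (e.g. token-level F1) rather
    # than using raw model confidence as done here.
    scores = [qa(question=q, context=context)["score"] for q in questions]
    return sum(scores) / max(len(scores), 1)

def qa_based_score(source, summary):
    # Precision: what the summary asserts must be answerable from the
    # source (penalizes factual inconsistency).
    precision = answerability(generate_questions(summary), source)
    # Recall: source content must be answerable from the summary.
    # QuestEval additionally weights source questions by a learned
    # importance model, omitted in this sketch.
    recall = answerability(generate_questions(source), summary)
    return (precision + recall) / 2

source = ("The Eiffel Tower, completed in 1889 for the World's Fair, "
          "is 330 metres tall and stands in Paris.")
summary = "The Eiffel Tower in Paris is 330 metres tall."
print(f"QA-based score: {qa_based_score(source, summary):.3f}")

Note that, because both scores are computed directly between the summary and its source document, no ground-truth reference summary enters the computation; this is what makes the metric reference-free.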
2021
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
209 N. Eighth Street, Stroudsburg PA 18360, USA
Association for Computational Linguistics
Scialom, Thomas; Dray, Paul-Alexis; Lamprier, Sylvain; Piwowarski, Benjamin; Staiano, Jacopo; Wang, Alex; Gallinari, Patrick
QuestEval: Summarization Asks for Fact-based Evaluation / Scialom, Thomas; Dray, Paul-Alexis; Lamprier, Sylvain; Piwowarski, Benjamin; Staiano, Jacopo; Wang, Alex; Gallinari, Patrick. - (2021), pp. 6594-6604. (Paper presented at the EMNLP conference, held in Punta Cana, Dominican Republic, 7 November - 9 November 2021) [10.18653/v1/2021.emnlp-main.529].
Files in this record:

2021.emnlp-main.529.pdf
Access: open access
Type: Publisher's version (publisher's layout)
License: Creative Commons
Size: 578.62 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/362906
Citations
  • PMC: not available
  • Scopus: 123
  • Web of Science (ISI): 37
  • OpenAlex: not available