
Benchmarking the Generation of Fact Checking Explanations

Russo, Daniel; Tekiroğlu, Serra Sinem; Guerini, Marco

2023

Abstract

Fighting misinformation is a challenging, yet crucial, task. Despite the growing number of experts involved in manual fact-checking, this activity is time-consuming and cannot keep up with the ever-increasing amount of fake news produced daily. Hence, automating this process is necessary to help curb misinformation. Thus far, researchers have mainly focused on claim veracity classification. In this paper, instead, we address the generation of justifications (textual explanations of why a claim is classified as true or false) and benchmark it with novel datasets and advanced baselines. In particular, we focus on summarization approaches over unstructured knowledge (i.e., news articles) and we experiment with several extractive and abstractive strategies. We employ two datasets with different styles and structures in order to assess the generalizability of our findings. Results show that, in justification production, summarization benefits from claim information and, in particular, that a claim-driven extractive step improves abstractive summarization performance. Finally, we show that although cross-dataset experiments suffer from performance degradation, a single model trained on a combination of the two datasets retains style information in an efficient manner.
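To make the claim-driven extractive step concrete, the sketch below ranks article sentences by embedding similarity to the claim and feeds only the top-ranked ones to an abstractive summarizer. This is a minimal illustration of the general idea, not the paper's actual pipeline: the encoder (all-MiniLM-L6-v2), the summarization checkpoint (facebook/bart-large-cnn), and the top_k value are all placeholder assumptions.

# Minimal sketch of a claim-driven extractive -> abstractive pipeline.
# Illustrative only: model names, top_k, and generation lengths are
# placeholder assumptions, not the paper's actual configuration.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed sentence encoder
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")  # assumed abstractive model

def claim_driven_summary(claim, article_sentences, top_k=5):
    """Select the sentences most similar to the claim, then summarize them."""
    claim_emb = encoder.encode(claim, convert_to_tensor=True)
    sent_embs = encoder.encode(article_sentences, convert_to_tensor=True)
    scores = util.cos_sim(claim_emb, sent_embs)[0]  # cosine similarity to the claim
    k = min(top_k, len(article_sentences))
    top_idx = scores.topk(k).indices.tolist()
    top_idx.sort()  # restore original article order for a coherent extract
    extract = " ".join(article_sentences[i] for i in top_idx)
    return summarizer(extract, max_length=120, min_length=30, do_sample=False)[0]["summary_text"]

Sorting the selected indices restores the article's original sentence order before summarization, which tends to keep the extract coherent for the abstractive model.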
Citation: Russo, Daniel; Tekiroğlu, Serra Sinem; Guerini, Marco. Benchmarking the Generation of Fact Checking Explanations. Transactions of the Association for Computational Linguistics, 11 (2023), pp. 1250–1264. ISSN 2307-387X. DOI: 10.1162/tacl_a_00601.

Files for this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11572/399147

Citations
  • PMC: not available
  • Scopus: 5
  • Web of Science: 1
  • OpenAlex: not available