Using Pre-Trained Language Models for Producing Counter Narratives Against Hate Speech: a Comparative Study / Tekiroglu, Serra Sinem; Bonaldi, Helena; Fanton, Margherita; Guerini, Marco. - (2022), pp. 3099-3114. (Paper presented at the Annual Meeting of the Association for Computational Linguistics (ACL), held in Dublin, Ireland, 22nd-27th May 2022) [10.18653/v1/2022.findings-acl.245].

Using Pre-Trained Language Models for Producing Counter Narratives Against Hate Speech: a Comparative Study

Tekiroglu, Serra Sinem (First); Bonaldi, Helena (Second); Guerini, Marco (Last)

2022-01-01

Abstract

In this work, we present an extensive study on the use of pre-trained language models for the task of automatic Counter Narrative (CN) generation to fight online hate speech in English. We first present a comparative study to determine whether there is a particular Language Model (or class of LMs) and a particular decoding mechanism that are the most appropriate to generate CNs. Findings show that autoregressive models combined with stochastic decodings are the most promising. We then investigate how an LM performs in generating a CN with regard to an unseen target of hate. We find that a key element for successful 'out of target' experiments is not an overall similarity with the training data but the presence of a specific subset of training data, i.e. a target that shares some commonalities with the test target that can be defined a priori. We finally introduce the idea of a pipeline based on the addition of an automatic post-editing step to refine generated CNs.
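The abstract reports that autoregressive models combined with stochastic decodings work best. The record does not spell out which stochastic decoders the paper compared, but nucleus (top-p) sampling is a standard example of the class; the pure-Python sketch below is illustrative only, not the authors' implementation.

```python
import math
import random

def top_p_sample(logits, p=0.9, rng=None):
    """Nucleus (top-p) sampling: a common stochastic decoding scheme.

    Keeps the smallest set of highest-probability tokens whose cumulative
    probability reaches p, renormalizes, and samples one token from it.
    """
    rng = rng or random.Random()
    # Softmax over the logits (shift by max for numerical stability).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sort token indices by probability, descending, and build the nucleus.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    nucleus, cum = [], 0.0
    for i in order:
        nucleus.append(i)
        cum += probs[i]
        if cum >= p:
            break
    # Renormalize within the nucleus and draw one token.
    mass = sum(probs[i] for i in nucleus)
    r = rng.random() * mass
    for i in nucleus:
        r -= probs[i]
        if r <= 0:
            return i
    return nucleus[-1]
```

Unlike greedy or beam decoding, each call can return a different token, which is what makes the generated counter narratives varied rather than repetitive.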
2022
Findings of the Association for Computational Linguistics: ACL 2022
209 N. Eighth Street, Stroudsburg PA 18360, USA
Association for Computational Linguistics
978-1-955917-25-4
Tekiroglu, Serra Sinem; Bonaldi, Helena; Fanton, Margherita; Guerini, Marco
Files in this record:

File: 2022.findings-acl.245.pdf
Access: open access
Type: Publisher's version (Publisher's layout)
License: Creative Commons
Size: 403.12 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/370031
Citations
  • PMC: not available
  • Scopus: 9
  • Web of Science: 1