
Breeding Gender-aware Direct Speech Translation Systems

Marco Gaido; Beatrice Savoldi; Luisa Bentivogli; Matteo Negri; Marco Turchi
2020-01-01

Abstract

In automatic speech translation (ST), traditional cascade approaches involving separate transcription and translation steps are giving ground to increasingly competitive and more robust direct solutions. In particular, by translating speech audio data without intermediate transcription, direct ST models are able to leverage and preserve essential information present in the input (e.g., the speaker's vocal characteristics) that is otherwise lost in the cascade framework. Although this ability has proved useful for gender translation, direct ST is nonetheless affected by gender bias, just like its cascade counterpart, machine translation, and numerous other natural language processing applications. Moreover, direct ST systems that exclusively rely on vocal biometric features as a gender cue can be unsuitable or even potentially problematic for certain users. Going beyond speech signals, in this paper we compare different approaches to informing direct ST models about the speaker's gender and test their ability to handle gender translation from English into Italian and French. To this end, we manually annotated large datasets with speakers' gender information and used them for experiments reflecting different possible real-world scenarios. Our results show that gender-aware direct ST solutions can significantly outperform strong -- but gender-unaware -- direct ST models. In particular, the accuracy of gender-marked word translation can increase by up to 30 points while preserving overall translation quality.
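As an illustration of what "informing a direct ST model about the speaker's gender" can look like in practice, the minimal sketch below prepends a speaker-gender tag to the target sentence, so a sequence-to-sequence decoder can condition on it. This is a generic tagging scheme given only as an assumption for illustration: the tag names, data layout, and helper function are hypothetical and not necessarily the exact recipe used in the paper.

```python
# Minimal sketch of a gender-tagging preprocessing step (illustrative only,
# not the paper's exact method). A speaker-gender tag is prepended to the
# target-language sentence so the model can condition generation on it.

GENDER_TAGS = {"F": "<speaker:she>", "M": "<speaker:he>"}  # hypothetical tag set


def add_gender_tag(target_text: str, speaker_gender: str) -> str:
    """Prepend a speaker-gender tag token to a target-language sentence."""
    tag = GENDER_TAGS.get(speaker_gender)
    if tag is None:
        raise ValueError(f"Unknown gender label: {speaker_gender!r}")
    return f"{tag} {target_text}"


if __name__ == "__main__":
    # Hypothetical English-to-Italian samples: (speaker gender, Italian reference).
    samples = [
        ("F", "Sono stata invitata a parlare."),
        ("M", "Sono stato invitato a parlare."),
    ]
    for gender, it_ref in samples:
        print(add_gender_tag(it_ref, gender))
```

In such a setup, the tagged references would be used during training, and at inference time the appropriate tag would be supplied as external metadata rather than inferred from the audio alone.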
2020
Proceedings of the 28th International Conference on Computational Linguistics
Barcelona
International Committee on Computational Linguistics
Gaido, Marco; Savoldi, Beatrice; Bentivogli, Luisa; Negri, Matteo; Turchi, Marco
Breeding Gender-aware Direct Speech Translation Systems / Gaido, Marco; Savoldi, Beatrice; Bentivogli, Luisa; Negri, Matteo; Turchi, Marco. - ELECTRONIC. - (2020), pp. 3951-3964. (Paper presented at the COLING conference held in Barcelona - Online in October) [10.18653/v1/2020.coling-main.350].
Use this identifier to cite or link to this document: https://hdl.handle.net/11572/335504