Contextualized translation of automatically segmented speech

Gaido, M.; Di Gangi, M. A.; Negri, M.; Cettolo, M.; Turchi, M.
2020

Abstract

Direct speech-to-text translation (ST) models are usually trained on corpora segmented at sentence level, but at inference time they are commonly fed with audio split by a voice activity detector (VAD). Since VAD segmentation is not syntax-informed, the resulting segments do not necessarily correspond to well-formed sentences uttered by the speaker but, most likely, to fragments of one or more sentences. This segmentation mismatch considerably degrades the quality of ST models' output. So far, researchers have focused on improving audio segmentation toward producing sentence-like splits. In this paper, we instead address the issue on the model side, making it more robust to different, potentially sub-optimal segmentations. To this end, we train our models on randomly segmented data and compare two approaches: fine-tuning and adding the previous segment as context. We show that our context-aware solution is more robust to VAD-segmented input, outperforming both a strong base model and the fine-tuned variant on different VAD segmentations of an English-German test set by up to 4.25 BLEU points.
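To make the two training strategies in the abstract concrete, below is a minimal sketch (not the paper's released code; all names such as random_resegment, with_context, min_len, and max_len are hypothetical) of how sentence-level audio from one talk could be re-split at random boundaries, simulating VAD-like segments that cross sentence boundaries, and how each resulting segment could be paired with the previous one as context:

```python
import random

def random_resegment(sentence_waveforms, min_len=16000, max_len=160000, seed=0):
    """Concatenate sentence-level audio and cut it at random lengths.

    sentence_waveforms: list of 1-D sample sequences for one talk.
    min_len/max_len: segment length bounds in samples (e.g. 1-10 s at 16 kHz).
    Returns segments that no longer align with sentence boundaries.
    """
    rng = random.Random(seed)
    # Rebuild the whole talk as one continuous sample stream.
    stream = [s for wav in sentence_waveforms for s in wav]
    segments, start = [], 0
    while start < len(stream):
        cut = start + rng.randint(min_len, max_len)
        segments.append(stream[start:cut])  # slice is safely clipped at the end
        start = cut
    return segments

def with_context(segments):
    """Pair each segment with the preceding one (empty for the first),
    mimicking the 'previous segment as context' setup from the abstract."""
    return [(segments[i - 1] if i > 0 else [], seg)
            for i, seg in enumerate(segments)]
```

Under this reading, the fine-tuning approach would train directly on the output of random_resegment, while the context-aware approach would additionally feed the first element of each with_context pair to the model; how the context is consumed (e.g. by the encoder or decoder) is a design choice of the paper not reproduced here.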
Year: 2020
Published in: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Venue: Online
Publisher: International Speech Communication Association
Authors: Gaido, M.; Di Gangi, M. A.; Negri, M.; Cettolo, M.; Turchi, M.
Contextualized translation of automatically segmented speech / Gaido, M.; Di Gangi, M. A.; Negri, M.; Cettolo, M.; Turchi, M. (2020), pp. 1471-1475. (Paper presented at the 21st Annual Conference of the International Speech Communication Association, INTERSPEECH 2020, held in Shanghai, China, 25-29 October 2020) [10.21437/Interspeech.2020-2860].
Files in this record:

gaido20_interspeech.pdf

Open access

Description: Main article
Type: Publisher's version (Publisher's layout)
License: All rights reserved
Size: 687.51 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/333809
Citations
  • PMC: ND
  • Scopus: 13
  • Web of Science: 7
  • OpenAlex: ND