Token-Level Serialized Output Training for Joint Streaming ASR and ST Leveraging Textual Alignments

Sara Papi (first author)
2023-01-01

Abstract

In real-world applications, users often require both translations and transcriptions of speech to enhance their comprehension, particularly in streaming scenarios where incremental generation is necessary. This paper introduces a streaming Transformer-Transducer that jointly generates automatic speech recognition (ASR) and speech translation (ST) outputs using a single decoder. To produce ASR and ST content effectively with minimal latency, we propose a joint token-level serialized output training method that interleaves source and target words by leveraging an off-the-shelf textual aligner. Experiments in monolingual (it-en) and multilingual ({de,es,it}-en) settings demonstrate that our approach achieves the best quality-latency balance. With an average ASR latency of 1 s and an ST latency of 1.3 s, our model shows no quality degradation, and in some cases an improvement, compared to separate ASR and ST models, with an average gain of 1.1 WER and 0.4 BLEU in the multilingual case.
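To make the serialization idea above concrete, below is a minimal Python sketch of how a transcript and its translation could be interleaved into a single token stream using word alignments. The (source_index, target_index) alignment format, the <t> tag marking translation tokens, and the policy of emitting a target word once all of its aligned source words have appeared are illustrative assumptions, not the paper's exact recipe.

    # Hypothetical sketch of token-level serialized output data preparation:
    # interleave source (ASR) and target (ST) words into one stream, guided
    # by word alignments from an off-the-shelf textual aligner. The alignment
    # representation and interleaving policy are assumptions for illustration.
    from typing import Dict, List, Tuple

    def interleave(
        src_words: List[str],
        tgt_words: List[str],
        alignment: List[Tuple[int, int]],  # (src_idx, tgt_idx) pairs from the aligner
    ) -> List[str]:
        """Emit each source word, then every target word that has become
        "ready" (all of its aligned source words already emitted), so that
        translation never precedes the speech evidence it depends on."""
        # For each target word, record the last source word it is aligned to.
        last_src_for_tgt: Dict[int, int] = {}
        for s, t in alignment:
            last_src_for_tgt[t] = max(last_src_for_tgt.get(t, -1), s)

        serialized: List[str] = []
        next_tgt = 0
        for s, src_word in enumerate(src_words):
            serialized.append(src_word)  # ASR token
            # Flush target words whose aligned source words are all emitted;
            # unaligned target words (default -1) are treated as ready at once.
            while next_tgt < len(tgt_words) and last_src_for_tgt.get(next_tgt, -1) <= s:
                serialized.append("<t>" + tgt_words[next_tgt])  # ST token, tagged
                next_tgt += 1
        # Edge case: flush any remaining target words (e.g. empty source).
        serialized.extend("<t>" + w for w in tgt_words[next_tgt:])
        return serialized

    if __name__ == "__main__":
        src = "oggi piove molto".split()
        tgt = "it is raining a lot today".split()
        # Hypothetical aligner output: oggi->today, piove->is/raining, molto->a/lot
        align = [(0, 5), (1, 1), (1, 2), (2, 3), (2, 4)]
        print(interleave(src, tgt, align))
        # ['oggi', '<t>it', 'piove', '<t>is', '<t>raining', 'molto', '<t>a', '<t>lot', '<t>today']

Gating each target word on its aligned source words while preserving target-word order keeps the translation stream monotone with the incoming speech, which is what makes low-latency joint emission from a single decoder plausible.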
Year: 2023
Venue: 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)
Location: Beitou, Taipei
Publisher: IEEE
Authors: Papi, Sara; Wang, Peidong; Chen, Junkun; Xue, Jian; Li, Jinyu; Gaur, Yashesh
Token-Level Serialized Output Training for Joint Streaming ASR and ST Leveraging Textual Alignments / Papi, Sara; Wang, Peidong; Chen, Junkun; Xue, Jian; Li, Jinyu; Gaur, Yashesh. - (2023). (Paper presented at the ASRU 2023 conference, held in Beitou, Taipei, in December 2023).
Files for this product:
No files are associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/399367
Warning: the displayed data have not been validated by the university.
