
ColdGANs: Taming language GANs with cautious sampling strategies

Staiano J.
2020-01-01

Abstract

Training regimes based on Maximum Likelihood Estimation (MLE) suffer from known limitations, often leading to poorly generated text sequences. At the root of these limitations is the mismatch between training and inference, i.e. the so-called exposure bias, exacerbated by considering only the reference texts as correct, while in practice several alternative formulations could be as good. Generative Adversarial Networks (GANs) can mitigate those limitations but the discrete nature of text has hindered their application to language generation: the approaches proposed so far, based on Reinforcement Learning, have been shown to underperform MLE. Departing from previous works, we analyze the exploration step in GANs applied to text generation, and show how classical sampling results in unstable training. We propose to consider alternative exploration strategies in a GAN framework that we name ColdGANs, where we force the sampling to be close to the distribution modes to get smoother learning dynamics. For the first time, to the best of our knowledge, the proposed language GANs compare favorably to MLE, and obtain improvements over the state-of-the-art on three generative tasks, namely unconditional text generation, question generation, and abstractive summarization.
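
The core mechanism behind the "cold" exploration described above can be illustrated with temperature-scaled sampling: dividing the logits by a temperature below 1 sharpens the next-token distribution, so sampled tokens stay close to the distribution modes rather than drifting into the low-probability tail. The following is a minimal sketch of that mechanism in PyTorch, assuming a toy next-token distribution; the function name and temperature values are illustrative and not taken from the paper, whose exact ColdGAN exploration strategies may differ.

    import torch
    import torch.nn.functional as F

    def sample_with_temperature(logits, temperature=1.0):
        # Dividing logits by a temperature < 1 sharpens the softmax
        # distribution around its modes, so samples stay close to the
        # most likely tokens; temperature = 1 recovers classical sampling.
        probs = F.softmax(logits / temperature, dim=-1)
        return torch.multinomial(probs, num_samples=1).item()

    # Toy next-token logits over a 5-word vocabulary (illustrative values).
    logits = torch.tensor([2.0, 1.0, 0.5, 0.1, -1.0])
    classical = sample_with_temperature(logits, temperature=1.0)  # standard exploration
    cold = sample_with_temperature(logits, temperature=0.3)       # "cold", mode-seeking exploration
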
Year: 2020
Venue: Advances in Neural Information Processing Systems
Publisher: Neural Information Processing Systems Foundation, San Mateo, CA
Authors: Scialom, T.; Dray, P.-A.; Lamprier, S.; Piwowarski, B.; Staiano, J.
ColdGANs: Taming language GANs with cautious sampling strategies / Scialom, T.; Dray, P.-A.; Lamprier, S.; Piwowarski, B.; Staiano, J. (2020). (Paper presented at the 34th Conference on Neural Information Processing Systems, NeurIPS 2020, held online, 2020).

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/363005

Citations
  • Scopus: 8