To Beam Or Not To Beam: That is a Question of Cooperation for Language GANs / Scialom, T.; Dray, P.-A.; Lamprier, S.; Piwowarski, B.; Staiano, J. - 32:(2021), pp. 26585-26597. (Paper presented at the 35th Conference on Neural Information Processing Systems, NeurIPS 2021, held virtual/online in 2021).

To Beam Or Not To Beam: That is a Question of Cooperation for Language GANs

Staiano J.
2021-01-01

Abstract

Due to the discrete nature of words, language GANs must be optimized using rewards provided by discriminator networks via reinforcement learning methods. This is a much harder setting than for continuous tasks, which enjoy gradient flow from the discriminator to the generator, and it usually leads to dramatic learning instabilities. However, we claim that this can be solved by making the discriminator and generator networks cooperate to produce output sequences during training. These cooperative outputs, inherently built to obtain higher discrimination scores, not only provide denser rewards for training but also form a more compact artificial set for discriminator training, improving its accuracy and stability. In this paper, we show that our SelfGAN framework, built on this cooperative principle, outperforms Teacher Forcing and obtains state-of-the-art results on two challenging tasks: Summarization and Question Generation.
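To illustrate the cooperative decoding idea described in the abstract, the sketch below shows a toy discriminator-guided beam search. It is not the paper's actual SelfGAN procedure: the generator and discriminator are dummy stand-ins, and the scoring rule (generator log-probability plus a weighted discriminator score) is an assumption made only for this example.

# Hedged sketch of "cooperative" decoding: a beam search whose candidates are
# ranked by generator log-probability plus a discriminator bonus.
# toy_generator_logprobs and toy_discriminator_score are dummy stand-ins,
# NOT the paper's models; the mixing rule logp + alpha * disc is an assumption.
import math

VOCAB = ["the", "cat", "sat", "mat", "<eos>"]

def toy_generator_logprobs(prefix):
    """Dummy next-token distribution; a real system would query a seq2seq generator."""
    scores = [1.0 / (1 + abs(len(prefix) - i)) for i in range(len(VOCAB))]
    total = sum(scores)
    return {tok: math.log(s / total) for tok, s in zip(VOCAB, scores)}

def toy_discriminator_score(sequence):
    """Dummy 'realness' score in [0, 1]; a real system would use the trained discriminator."""
    return 1.0 / (1.0 + abs(len(sequence) - 4))

def cooperative_beam_search(beam_size=3, max_len=6, alpha=1.0):
    beams = [([], 0.0)]  # (tokens, cumulative generator log-prob)
    for _ in range(max_len):
        candidates = []
        for tokens, logp in beams:
            if tokens and tokens[-1] == "<eos>":
                candidates.append((tokens, logp))  # finished hypothesis, keep as-is
                continue
            for tok, tok_logp in toy_generator_logprobs(tokens).items():
                candidates.append((tokens + [tok], logp + tok_logp))
        # Cooperative step: rank hypotheses by generator log-prob plus a weighted
        # discriminator score, steering the beam toward sequences the
        # discriminator considers realistic.
        candidates.sort(
            key=lambda c: c[1] + alpha * toy_discriminator_score(c[0]),
            reverse=True,
        )
        beams = candidates[:beam_size]
    return beams[0][0]

if __name__ == "__main__":
    print(cooperative_beam_search())

In this toy setting, larger alpha gives the discriminator more influence over which hypotheses survive each beam step; the actual weighting and reward scheme used by SelfGAN is described in the paper itself, not in this record.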
2021
Advances in Neural Information Processing Systems
San Mateo, CA
Neural Information Processing Systems Foundation
Scialom, T.; Dray, P.-A.; Lamprier, S.; Piwowarski, B.; Staiano, J.
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/362926
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 10
  • Web of Science: ND