Playpen: An Environment for Exploring Learning Through Conversational Interaction / Horst, Nicola; Mazzaccara, Davide; Schmidt, Antonia; Sullivan, Michael; Momentè, Filippo; Franceschetti, Luca; Sadler, Philipp; Hakimov, Sherzod; Testoni, Alberto; Bernardi, Raffaella; Fernández, Raquel; Koller, Alexander; Lemon, Oliver; Schlangen, David; Giulianelli, Mario; Suglia, Alessandro. - (2025), pp. 29842-29880. (Paper presented at EMNLP, held in Suzhou, China, 4th-9th November 2025) [10.18653/v1/2025.emnlp-main.1517].
Playpen: An Environment for Exploring Learning Through Conversational Interaction
Davide Mazzaccara; Alberto Testoni; Raffaella Bernardi; David Schlangen
2025-01-01
Abstract
Interaction between learner and feedback-giver has come into focus recently for post-training of Large Language Models (LLMs), through the use of reward models that judge the appropriateness of a model's response. In this paper, we investigate whether Dialogue Games -- goal-directed and rule-governed activities driven predominantly by verbal actions -- can also serve as a source of feedback signals for learning. We introduce Playpen, an environment for off- and online learning through Dialogue Game self-play, and investigate a representative set of post-training methods: supervised fine-tuning; direct alignment (DPO); and reinforcement learning with GRPO. We experiment with post-training a small LLM (Llama-3.1-8B-Instruct), evaluating performance on unseen instances of training games as well as unseen games, and on standard benchmarks. We find that imitation learning through SFT improves performance on unseen instances, but negatively impacts other skills, while interactive learning with GRPO shows balanced improvements without loss of skills. We release the framework and the baseline training setups to foster research in the promising new direction of learning in (synthetic) interaction.
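
The abstract singles out GRPO as the interactive-learning method that improved game performance without loss of other skills. As a rough illustration only (not the authors' released setup, which is in the framework they publish), here is a minimal sketch of game-driven GRPO post-training using Hugging Face TRL's `GRPOTrainer`; the `game_reward` function and the toy prompt set are hypothetical stand-ins for Playpen's Dialogue Game scoring.

```python
# Hedged sketch: GRPO post-training with a game-style reward signal.
# Assumes Hugging Face TRL (pip install trl datasets); the reward below is a
# hypothetical stand-in for a Dialogue Game score, NOT the paper's code.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy prompts standing in for Dialogue Game instances (assumption).
train_dataset = Dataset.from_dict({
    "prompt": ["You are the Guesser in a word-guessing game. Ask one question."] * 64
})

def game_reward(completions, **kwargs):
    """Hypothetical stand-in: a real setup would play out the game episode
    in the environment and return its score; here we simply reward
    question-shaped turns so the sketch is self-contained."""
    return [1.0 if c.strip().endswith("?") else 0.0 for c in completions]

args = GRPOConfig(
    output_dir="llama-playpen-grpo",
    num_generations=8,  # completions sampled per prompt, grouped for advantages
    logging_steps=10,
)

trainer = GRPOTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",  # the small LLM from the abstract
    reward_funcs=game_reward,
    args=args,
    train_dataset=train_dataset,
)
trainer.train()
```

In the setting the abstract describes, the scalar reward would instead come from playing out a Playpen Dialogue Game episode in self-play, with the rule-governed game outcome serving as the feedback signal in place of a learned reward model.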
| File | Access | Type | License | Size | Format |
|---|---|---|---|---|---|
| PlayPenArxiv.pdf | Open access | Non-refereed preprint | Creative Commons | 826.86 kB | Adobe PDF |
| 2025.emnlp-main.1517.pdf | Open access | Publisher's layout | Creative Commons | 676.62 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.