
Learning to Ask Informative Questions: Enhancing LLMs with Preference Optimization and Expected Information Gain / Mazzaccara, Davide; Testoni, Alberto; Bernardi, Raffaella. - ELECTRONIC. - (2024), pp. 5064-5074. (Paper presented at the EMNLP conference held in Miami, Florida, USA, 11th-16th November 2024) [10.18653/v1/2024.findings-emnlp.291].

Learning to Ask Informative Questions: Enhancing LLMs with Preference Optimization and Expected Information Gain

Davide Mazzaccara; Alberto Testoni; Raffaella Bernardi
2024-01-01

Abstract

Questions are essential tools for acquiring the necessary information to complete information-seeking tasks. However, large language models (LLMs), especially open-source models, often perform poorly in generating informative questions, as measured by expected information gain (EIG). In this paper, we propose a method to enhance the informativeness of LLM-generated questions in 20-question game dialogues. We sample multiple questions from the same model (LLAMA 2-CHAT 7B) for each game and create pairs of low-EIG and high-EIG questions to apply a Direct Preference Optimization (DPO) algorithm. Our results show that this method produces more effective questions (in terms of EIG), even in domains different from those used to train the DPO model.
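The abstract ranks candidate questions by expected information gain: how much a yes/no question is expected to reduce uncertainty over the remaining candidate set in a 20-questions game. As a minimal sketch (not the paper's implementation; the candidate items, prior, and function names below are illustrative assumptions), EIG can be computed as the prior entropy minus the answer-weighted posterior entropy:

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_information_gain(prior, yes_mask):
    """EIG of a yes/no question over a candidate set.

    prior    : dict mapping candidate -> probability (sums to 1)
    yes_mask : set of candidates for which the answer would be "yes"
    """
    p_yes = sum(p for c, p in prior.items() if c in yes_mask)
    p_no = 1.0 - p_yes
    h_prior = entropy(prior.values())

    # Posterior entropy under each answer (renormalised partitions).
    h_yes = entropy([p / p_yes for c, p in prior.items() if c in yes_mask]) if p_yes > 0 else 0.0
    h_no = entropy([p / p_no for c, p in prior.items() if c not in yes_mask]) if p_no > 0 else 0.0

    # EIG = prior entropy minus expected posterior entropy.
    return h_prior - (p_yes * h_yes + p_no * h_no)

# Uniform prior over four candidates: a question that splits the set
# in half yields the maximal 1 bit; an unbalanced split yields less.
prior = {"dog": 0.25, "cat": 0.25, "car": 0.25, "bus": 0.25}
print(expected_information_gain(prior, {"dog", "cat"}))  # 1.0
print(expected_information_gain(prior, {"dog"}))         # ~0.811
```

Under this scoring, two questions sampled for the same game state can be paired as (high-EIG, low-EIG) preference data for DPO training, which is the pairing scheme the abstract describes.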
2024
Findings of the Association for Computational Linguistics: EMNLP 2024
Miami, Florida, USA
Association for Computational Linguistics
Mazzaccara, Davide; Testoni, Alberto; Bernardi, Raffaella
Files for this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/432270
Warning: the data displayed here have not been validated by the university.
