
ChatGPT's Information Seeking Strategy: Insights From the 20-Questions Game

Mazzaccara, Davide; Bernardi, Raffaella
2023-01-01

Abstract

Large Language Models, and ChatGPT in particular, have recently grabbed the attention of the community and the media. Now that these models have reached high language proficiency, attention has been shifting toward their reasoning capabilities. In this paper, our main aim is to evaluate ChatGPT's question generation in a task where language production should be driven by an implicit reasoning process. To this end, we employ the 20-Questions game, traditionally used within the Cognitive Science community to study the development of information-seeking strategies. This task requires a series of interconnected skills: asking informative questions, updating the hypothesis space step by step, and stopping once enough information has been collected. We build hierarchical hypothesis spaces, exploiting feature norms collected from humans as well as from ChatGPT itself, and we inspect the efficiency and informativeness of ChatGPT's strategy. Our results show that ChatGPT's performance approaches that of an optimal agent only when the model is prompted to explicitly list the updated hypothesis space at each step.
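To make concrete what an optimal agent amounts to in this setting, the sketch below (a purely illustrative example, not the authors' implementation) plays the game over a toy hypothesis space with invented binary feature norms: at each turn it asks about the feature with the highest expected information gain, filters the space according to the answer, and stops once a single candidate remains, mirroring the three skills listed in the abstract.

# Illustrative sketch (not the paper's code): an idealized 20-Questions agent
# that always asks about the feature whose yes/no answer best splits the
# remaining hypothesis space, i.e. the question with maximal expected
# information gain under a uniform prior.
import math

# Toy hypothesis space with binary feature norms (invented for illustration).
SPACE = {
    "dog":    {"is_animal": 1, "can_fly": 0, "lives_in_water": 0},
    "eagle":  {"is_animal": 1, "can_fly": 1, "lives_in_water": 0},
    "salmon": {"is_animal": 1, "can_fly": 0, "lives_in_water": 1},
    "chair":  {"is_animal": 0, "can_fly": 0, "lives_in_water": 0},
}

def expected_information_gain(candidates, feature):
    """Expected entropy reduction from asking a yes/no question about `feature`,
    assuming a uniform prior over the remaining candidates."""
    n = len(candidates)
    yes = sum(SPACE[c][feature] for c in candidates)
    no = n - yes
    if yes == 0 or no == 0:          # question cannot split the space
        return 0.0
    h_before = math.log2(n)
    h_after = (yes / n) * math.log2(yes) + (no / n) * math.log2(no)
    return h_before - h_after

def play(target, features):
    candidates = set(SPACE)
    turns = 0
    while len(candidates) > 1:       # stopping rule: one hypothesis left
        best = max(features, key=lambda f: expected_information_gain(candidates, f))
        answer = SPACE[target][best]
        candidates = {c for c in candidates if SPACE[c][best] == answer}
        turns += 1
        print(f"Q{turns}: {best}? -> {'yes' if answer else 'no'}; "
              f"space = {sorted(candidates)}")
    return candidates.pop(), turns

if __name__ == "__main__":
    guess, turns = play("eagle", ["is_animal", "can_fly", "lives_in_water"])
    print(f"Guess: {guess} after {turns} questions")

On this toy space the target is identified in two questions; an agent that asks questions with zero information gain would need more, which is the kind of efficiency gap the abstract refers to.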
Year: 2023
Published in: Proceedings of the 15th International Conference on Natural Language Generation
Venue: Prague, Czech Republic
Publisher: Association for Computational Linguistics
Authors: Bertolazzi, Leonardo; Mazzaccara, Davide; Merlo, Filippo; Bernardi, Raffaella
Citation: ChatGPT's Information Seeking Strategy: Insights From the 20-Questions Game / Bertolazzi, Leonardo; Mazzaccara, Davide; Merlo, Filippo; Bernardi, Raffaella. - (2023). (Paper presented at the INLG23 conference, held in Prague, Czech Republic, 11th-15th September 2023).
Files in this record:

File: INLG23.pdf (open access)
Type: Publisher's version (Publisher's layout)
License: Creative Commons
Format: Adobe PDF
Size: 448.64 kB

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/390111