ChatGPT's Information Seeking Strategy: Insights From the 20-Questions Game / Bertolazzi, Leonardo; Mazzaccara, Davide; Merlo, Filippo; Bernardi, Raffaella. - (2023), pp. 153-162. (16th International Natural Language Generation Conference, INLG 2023, held jointly with the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDial 2023, Prague, Czech Republic, 11th-15th September 2023) [10.18653/v1/2023.inlg-main.11].

ChatGPT's Information Seeking Strategy: Insights From the 20-Questions Game

Bertolazzi, Leonardo; Mazzaccara, Davide; Merlo, Filippo; Bernardi, Raffaella
2023-01-01

Abstract

Large Language Models, and ChatGPT in particular, have recently captured the attention of both the research community and the media. Now that these models have reached high language proficiency, attention has been shifting toward their reasoning capabilities. In this paper, our main aim is to evaluate ChatGPT's question generation in a task where language production should be driven by an implicit reasoning process. To this end, we employ the 20-Questions game, traditionally used within the Cognitive Science community to study the development of information-seeking strategies. This task requires a series of interconnected skills: asking informative questions, updating the hypothesis space stepwise, and stopping once enough information has been collected. We build hierarchical hypothesis spaces, exploiting feature norms collected from humans as well as from ChatGPT itself, and we inspect the efficiency and informativeness of ChatGPT's strategy. Our results show that ChatGPT's performance approaches that of an optimal agent only when it is prompted to explicitly list the updated hypothesis space at each step.
2023
Proceedings of the 16th International Natural Language Generation Conference
Prague, Czech Republic
Association for Computational Linguistics
9798891760011
Files in this record:

File: INLG23.pdf
Access: open access
Type: Non-refereed preprint
License: Creative Commons
Size: 448.64 kB
Format: Adobe PDF

File: 2023.inlg-main.11.pdf
Access: open access
Type: Publisher's layout (editorial version)
License: Creative Commons
Size: 520.69 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/390111
Citations
  • PMC: ND
  • Scopus: 9
  • Web of Science: ND
  • OpenAlex: 3