
Should We Fine-Tune or RAG? Evaluating Different Techniques to Adapt LLMs for Dialogue / Alghisi, Simone; Rizzoli, Massimo; Roccabruna, Gabriel; Mousavi, Seyed Mahed; Riccardi, Giuseppe. - ELECTRONIC. - (2024), pp. 180-197. (Paper presented at the INLG 2024 conference, held in Tokyo, 23/09/2024 - 27/09/2024).

Should We Fine-Tune or RAG? Evaluating Different Techniques to Adapt LLMs for Dialogue

Alghisi, Simone; Rizzoli, Massimo; Roccabruna, Gabriel; Mousavi, Seyed Mahed; Riccardi, Giuseppe
2024-01-01

Abstract

We study the limitations of Large Language Models (LLMs) for the task of response generation in human-machine dialogue. Several techniques have been proposed in the literature for different dialogue types (e.g., Open-Domain). However, the evaluations of these techniques have been limited in terms of base LLMs, dialogue types, and evaluation metrics. In this work, we extensively analyze different LLM adaptation techniques when applied to different dialogue types. We have selected two base LLMs, Llama-2 and Mistral, and four dialogue types: Open-Domain, Knowledge-Grounded, Task-Oriented, and Question Answering. We evaluate the performance of in-context learning and fine-tuning techniques across datasets selected for each dialogue type. We assess the impact of incorporating external knowledge to ground the generation in both scenarios of Retrieval-Augmented Generation (RAG) and gold knowledge. We adopt consistent evaluation and explainability criteria for automatic metrics and human evaluation protocols. Our analysis shows that there is no universal best technique for adapting large language models, as the efficacy of each technique depends on both the base LLM and the specific type of dialogue. Finally, the assessment of the best adaptation technique should include human evaluation to avoid false expectations and outcomes derived from automatic metrics.
2024
Proceedings of the 17th International Natural Language Generation Conference
Association for Computational Linguistics
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/437336
Warning! The displayed data have not been validated by the university.
