Teaching Small Language Models to Learn Logic through Meta-Learning / Bertolazzi, Leonardo; Vargas Guzmán, Manuel; Bernardi, Raffaella; Malicki, Maciej; Szymanik, Jakub. - (2026), pp. 8049-8080. (EACL 2026, Rabat, Morocco, 2026) [10.18653/v1/2026.eacl-long.376].

Teaching Small Language Models to Learn Logic through Meta-Learning

Leonardo Bertolazzi; Raffaella Bernardi; Jakub Szymanik
2026-01-01

Abstract

Large language models (LLMs) are increasingly evaluated on reasoning tasks, yet their logical abilities remain contested. To address this, we study LLMs’ reasoning in a well-defined fragment of logic: syllogistic reasoning. We cast the problem as premise selection and construct controlled datasets to isolate logical competence. Beyond evaluation, an open challenge is enabling LLMs to acquire abstract inference patterns that generalize to novel structures. We propose to apply few-shot meta-learning to this domain, thereby encouraging models to extract rules across tasks rather than memorize patterns within tasks. Although meta-learning has been little explored in the context of logic learnability, our experiments show that it is effective: small models (1.5B–7B) fine-tuned with meta-learning demonstrate strong gains in generalization, with especially pronounced benefits in low-data regimes. These meta-learned models outperform GPT-4o and o3-mini on our syllogistic reasoning task.
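To make the approach described in the abstract concrete, the sketch below illustrates one common way to set up episodic, Reptile-style meta-learning for a premise-selection task. It is only an illustrative assumption: the toy `PremiseScorer` model, the synthetic `make_episode` data, and all hyperparameters are stand-ins, not the paper's actual models, datasets, or training procedure.

```python
# Hypothetical sketch of episodic (Reptile-style) meta-learning for a
# premise-selection task. The model, data, and hyperparameters are
# illustrative stand-ins, not the paper's actual setup.
import copy
import random
import torch
import torch.nn as nn

class PremiseScorer(nn.Module):
    """Toy stand-in for a small language model scoring candidate premises."""
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, dim)   # mean-pools token embeddings
        self.head = nn.Linear(dim, 1)            # score: is this premise needed?

    def forward(self, token_ids):
        return self.head(self.emb(token_ids)).squeeze(-1)

def make_episode(pattern_id, n_examples=8, vocab=100):
    """Fake episode: examples drawn from one syllogistic inference pattern.
    Each example is (token_ids, label), where the label marks a required premise."""
    xs = torch.randint(0, vocab, (n_examples, 12))
    ys = torch.randint(0, 2, (n_examples,)).float()
    return xs, ys

def inner_adapt(model, episode, steps=5, lr=1e-2):
    """Adapt a copy of the model to a single episode (its support set)."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    xs, ys = episode
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(adapted(xs), ys).backward()
        opt.step()
    return adapted

def reptile_step(model, episodes, meta_lr=0.1):
    """Move the meta-parameters toward each episode's adapted parameters."""
    for ep in episodes:
        adapted = inner_adapt(model, ep)
        with torch.no_grad():
            for p, q in zip(model.parameters(), adapted.parameters()):
                p += meta_lr * (q - p)

model = PremiseScorer()
for meta_iter in range(20):
    # Each meta-batch mixes episodes from different inference patterns,
    # so the update favors parameters that adapt quickly to any pattern
    # rather than memorizing a single one.
    episodes = [make_episode(random.randrange(10)) for _ in range(4)]
    reptile_step(model, episodes)
```

The key design point this sketch tries to convey is the episodic structure: each inner loop adapts to one inference pattern, and the outer update rewards fast adaptation across patterns, which is the mechanism the abstract credits for rule extraction rather than within-task memorization.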
2026
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Rabat, Morocco
Association for Computational Linguistics
Bertolazzi, Leonardo; Vargas Guzmán, Manuel; Bernardi, Raffaella; Malicki, Maciej; Szymanik, Jakub
Files in this record:
2026.eacl-long.376.pdf
Access: open access
Type: Editorial version (publisher's layout)
License: Creative Commons
Size: 9.03 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/482390