
Reflexive Composition: Bidirectional Enhancement of Language Models and Knowledge Graphs / Mehta, Virendra Kumar. - (2025 Jun 20), pp. 1-178.

Reflexive Composition: Bidirectional Enhancement of Language Models and Knowledge Graphs

Mehta, Virendra Kumar
2025-06-20

Abstract

Large Language Models (LLMs) have significantly advanced natural language processing, yet they continue to face limitations such as hallucinations, factual inconsistencies, and restricted domain-specific knowledge. Knowledge Graphs (KGs), by contrast, provide structured and verifiable information but are expensive to build and maintain manually. This thesis introduces Reflexive Composition, a bidirectional integration framework in which LLMs and KGs iteratively refine each other’s outputs. The framework consists of three interconnected components: (1) LLM2KG, where LLMs assist in the construction and updating of domain-specific knowledge graphs; (2) Human-in-the-Loop (HITL) validation, which supports structured expert review; and (3) KG2LLM, which conditions LLM outputs on verified knowledge to reduce hallucinations and improve consistency. The methodology is evaluated across three case studies: temporal knowledge management, privacy-preserving data integration, and historical bias mitigation. Results include a 23% increase in knowledge extraction accuracy (F1 score from 0.65 to 0.80), a 28.7% reduction in LLM hallucination rates, and measurable improvements in validation efficiency through structured workflows. Reflexive Composition offers a reproducible approach for improving the reliability, scalability, and transparency of AI systems in dynamic or high-risk domains.
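To make the data flow described in the abstract concrete, the following Python sketch shows one possible wiring of the three components (LLM2KG extraction, HITL validation, KG2LLM grounding). It is a minimal illustration only: every name in it (Triple, KnowledgeGraph, llm2kg_extract, hitl_validate, kg2llm_prompt) is hypothetical and not taken from the thesis, and both the LLM call and the human-review step are stubbed.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Triple:
    """A single (subject, predicate, object) fact proposed from text."""
    subject: str
    predicate: str
    obj: str


@dataclass
class KnowledgeGraph:
    """Minimal in-memory KG: a list of validated triples."""
    triples: List[Triple] = field(default_factory=list)

    def add(self, triple: Triple) -> None:
        self.triples.append(triple)

    def as_context(self) -> str:
        # Serialize validated facts so they can condition a prompt (KG2LLM).
        return "\n".join(f"{t.subject} {t.predicate} {t.obj}." for t in self.triples)


def llm2kg_extract(text: str) -> List[Triple]:
    """LLM2KG step: propose candidate triples from raw text.
    Stubbed with a naive whitespace split; a real system would call an LLM."""
    candidates = []
    for line in text.splitlines():
        parts = line.strip().rstrip(".").split(" ", 2)
        if len(parts) == 3:
            candidates.append(Triple(*parts))
    return candidates


def hitl_validate(candidates: List[Triple]) -> List[Triple]:
    """HITL step: route candidate triples to human review.
    Stubbed as accept-all; a real workflow would queue low-confidence items."""
    return list(candidates)


def kg2llm_prompt(kg: KnowledgeGraph, question: str) -> str:
    """KG2LLM step: ground the LLM on validated facts to curb hallucination."""
    return ("Answer using ONLY the facts below; otherwise reply 'unknown'.\n"
            f"Facts:\n{kg.as_context()}\n\nQuestion: {question}")


if __name__ == "__main__":
    kg = KnowledgeGraph()
    source_text = "Initech acquired Acme.\nInitech headquartered_in Austin."
    # One reflexive iteration: extract -> validate -> ground the next query.
    for triple in hitl_validate(llm2kg_extract(source_text)):
        kg.add(triple)
    print(kg2llm_prompt(kg, "Where is Initech headquartered?"))

In the framework proper, extraction and grounding would be backed by an actual LLM and a persistent graph store, and validation would route candidates through the structured expert-review workflow described above; the stub only shows the direction of the reflexive loop.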
Cycle: XXXVI
Academic year: 2024-2025
University: Università degli Studi di Trento
Doctoral programme: Information and Communication Technology
Supervisors: Giunchiglia, Fausto; Casati, Fabio
Language: English
Files in this item:

Reflexive_Composition.pdf
Access: Open access
Type: Doctoral thesis (Tesi di dottorato)
License: All rights reserved
Size: 1.36 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11572/457410