Knowledge-Aware Self-Correction in Language Models via Structured Memory Graphs

📅 2025-07-06
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the pervasive problem of factual hallucination in large language models (LLMs), this paper proposes a knowledge-aware self-correction framework. The method constructs a structured memory graph from RDF triples, which serves as an external semantic memory enabling post-hoc correction of model outputs without fine-tuning or retraining. Leveraging semantic matching and factual verification, the framework delivers lightweight, interpretable, real-time error correction. Experiments on DistilGPT-2 show that even minimal factual prompting significantly improves the factual consistency of generated text. The core contribution is integrating structured knowledge graphs into the inference-time post-processing pipeline, achieving high efficiency, full interpretability, and zero-shot deployability.
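The summary's correction loop (RDF-triple memory graph, factual verification, substitution of conflicting facts) could be sketched roughly as follows. All names, the dictionary-based graph, and the exact-match lookup are assumptions for illustration; the paper's actual semantic matching is not specified here and likely uses softer matching:

```python
# Hypothetical sketch, NOT the authors' implementation: an RDF-style triple
# store acts as external semantic memory; a generated claim is verified
# against it and corrected when it conflicts with a stored fact.

class MemoryGraph:
    def __init__(self, triples):
        # Index (subject, predicate) -> object for O(1) fact lookup.
        self.facts = {(s.lower(), p.lower()): o for s, p, o in triples}

    def lookup(self, subject, predicate):
        return self.facts.get((subject.lower(), predicate.lower()))

def correct_claim(graph, subject, predicate, claimed_object):
    """Verify a (subject, predicate, object) claim; return (object, status)."""
    truth = graph.lookup(subject, predicate)
    if truth is None:                        # no stored knowledge: leave as-is
        return claimed_object, "unverified"
    if truth.lower() == claimed_object.lower():
        return claimed_object, "verified"
    return truth, "corrected"                # conflict: substitute stored fact

graph = MemoryGraph([
    ("Paris", "capitalOf", "France"),
    ("Einstein", "bornIn", "Ulm"),
])
print(correct_claim(graph, "Einstein", "bornIn", "Munich"))
# -> ('Ulm', 'corrected')
```

Because the graph is consulted only at post-processing time, the underlying model's weights never change, which is what makes the approach retraining-free.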

๐Ÿ“ Abstract
Large Language Models (LLMs) are powerful yet prone to generating factual errors, commonly referred to as hallucinations. We present a lightweight, interpretable framework for knowledge-aware self-correction of LLM outputs using structured memory graphs based on RDF triples. Without retraining or fine-tuning, our method post-processes model outputs and corrects factual inconsistencies via external semantic memory. We demonstrate the approach using DistilGPT-2 and show promising results on simple factual prompts.
Problem

Research questions and friction points this paper is trying to address.

Reducing factual errors in the outputs of Large Language Models
Correcting hallucinations without retraining or fine-tuning
Using structured memory graphs for knowledge-aware self-correction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses structured memory graphs for correction
Post-processes outputs without retraining
Leverages external semantic memory
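The "post-processes outputs" step above implies mapping generated text to checkable triples before verification. A toy sketch of that stage, assuming a single hard-coded surface pattern (the paper's extraction and matching are surely more general):

```python
import re

# Toy illustration (an assumption, not the paper's extractor): map one simple
# surface pattern in generated text to an RDF-style claim, check it against
# the external memory, and splice in the stored fact on conflict.

FACTS = {("paris", "capitalof"): "France"}  # tiny stand-in memory graph

PATTERN = re.compile(r"(\w+) is the capital of (\w+)", re.IGNORECASE)

def post_process(sentence):
    m = PATTERN.search(sentence)
    if not m:
        return sentence                      # no checkable claim found
    subj, obj = m.group(1), m.group(2)
    truth = FACTS.get((subj.lower(), "capitalof"))
    if truth and truth.lower() != obj.lower():
        # Replace only the object span, leaving the rest of the text intact.
        return sentence[:m.start(2)] + truth + sentence[m.end(2):]
    return sentence

print(post_process("Paris is the capital of Italy."))
# -> Paris is the capital of France.
```

Sentences with no matching pattern pass through unchanged, so the correction is strictly post-hoc and leaves unverifiable text alone.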