AI Summary
To address the pervasive issue of factual hallucinations in large language models (LLMs), this paper proposes a knowledge-aware self-correction framework. The method constructs a structured memory graph from RDF triples, which serves as an external semantic memory enabling post-hoc correction of model outputs without fine-tuning or retraining. By combining semantic matching with factual verification, the framework delivers lightweight, interpretable, real-time error correction. Experiments on DistilGPT-2 show that even minimal factual prompting significantly improves the factual consistency of generated text. The core contribution is the integration of structured knowledge graphs into the inference-time post-processing pipeline, yielding high efficiency, full interpretability, and zero-shot deployability.
Abstract
Large Language Models (LLMs) are powerful yet prone to generating factual errors, commonly referred to as hallucinations. We present a lightweight, interpretable framework for knowledge-aware self-correction of LLM outputs using structured memory graphs based on RDF triples. Without retraining or fine-tuning, our method post-processes model outputs and corrects factual inconsistencies via external semantic memory. We demonstrate the approach using DistilGPT-2 and show promising results on simple factual prompts.
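The abstract describes a pipeline in which generated claims are checked against an external semantic memory built from RDF triples and corrected when they conflict with stored facts. The paper does not specify its matching or correction logic here, so the following is a minimal illustrative sketch under assumed names: a dictionary-backed triple store keyed on (subject, relation), and a `verify_and_correct` helper that replaces a claimed object when it contradicts the memory.

```python
# Minimal sketch of RDF-triple-backed post-hoc correction.
# MEMORY_GRAPH, verify_and_correct, and the status labels are
# assumptions for illustration, not the paper's actual API.

# External semantic memory: (subject, relation) -> object,
# as one might extract from RDF triples.
MEMORY_GRAPH = {
    ("Paris", "capital_of"): "France",
    ("Einstein", "born_in"): "Ulm",
}

def verify_and_correct(subject, relation, claimed_object):
    """Look up (subject, relation) in the memory graph and
    correct the claimed object if it conflicts with the stored fact."""
    fact = MEMORY_GRAPH.get((subject, relation))
    if fact is None:
        # No stored knowledge: leave the model's claim untouched.
        return claimed_object, "unverified"
    if fact == claimed_object:
        return claimed_object, "verified"
    # Stored fact contradicts the claim: substitute the correction.
    return fact, "corrected"

print(verify_and_correct("Paris", "capital_of", "Germany"))
```

Because the correction is a direct graph lookup rather than a learned component, the pipeline needs no retraining and every fix is traceable to a specific triple, which is the interpretability property the abstract highlights.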