KG-TRACES: Enhancing Large Language Models with Knowledge Graph-constrained Trajectory Reasoning and Attribution Supervision

📅 2025-06-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the weak interpretability, low trustworthiness, and hallucination susceptibility of large language models (LLMs) in complex reasoning, this paper proposes KG-TRACES, a knowledge graph (KG)-constrained trajectory reasoning and attribution supervision framework. Its core contribution is a joint supervision mechanism over three signals: symbolic relation-path prediction, triple-level reasoning-path prediction, and attribution-aware reasoning-process generation grounded in those paths. At inference time, the model supports both KG-available and KG-unavailable modes, retrieving reasoning paths from a KG when one is accessible and otherwise predicting plausible paths from intrinsic knowledge alone. Evaluated on WebQSP and ComplexWebQuestions (CWQ), the framework achieves absolute improvements of +1.6% and +4.8% in Hits@1, and +4.7% and +2.1% in F1, respectively, over prior state of the art. It also demonstrates transferability to specialized domains such as medicine.
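The triple joint supervision described above can be sketched as a weighted combination of three per-objective losses. This is a minimal illustrative sketch, not the paper's implementation: the function name, argument names, and equal default weights are all assumptions.

```python
# Hedged sketch of KG-TRACES-style joint supervision: the total training loss
# combines the three objectives named in the summary. All names and weights
# here are illustrative assumptions, not the paper's actual code.

def joint_supervision_loss(relation_path_loss: float,
                           triple_path_loss: float,
                           attribution_loss: float,
                           weights=(1.0, 1.0, 1.0)) -> float:
    """Weighted sum of the three supervision signals:
    (1) symbolic relation-path prediction,
    (2) triple-level reasoning-path prediction,
    (3) attribution-aware reasoning-process generation.
    """
    w_rel, w_triple, w_attr = weights
    return (w_rel * relation_path_loss
            + w_triple * triple_path_loss
            + w_attr * attribution_loss)

# Example with made-up per-objective loss values and equal weighting.
total = joint_supervision_loss(0.9, 1.2, 0.6)
```

In practice the weights would be tuned (or scheduled) per objective; the sketch only makes the structure of the joint objective explicit.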

📝 Abstract
Large language models (LLMs) have made remarkable strides in various natural language processing tasks, but their performance on complex reasoning problems remains hindered by a lack of explainability and trustworthiness. This issue, often manifesting as hallucinations or unattributable reasoning processes, limits their applicability in complex reasoning scenarios. To address this, we propose Knowledge Graph-constrained Trajectory Reasoning Attribution and Chain Explanation Supervision (KG-TRACES), a novel framework that enhances the reasoning ability of LLMs through explicit supervision over reasoning paths and processes. KG-TRACES jointly supervises the model to: (1) predict symbolic relation paths, (2) predict full triple-level reasoning paths, and (3) generate attribution-aware reasoning processes grounded in the reasoning paths. At inference time, the model adapts to both KG-available and KG-unavailable scenarios, retrieving reasoning paths from a KG when possible or predicting plausible reasoning paths with only intrinsic knowledge when not. This design enables the model to reason in an explainable and source-attributable pattern. Through extensive experiments on complex reasoning tasks, we demonstrate that KG-TRACES significantly outperforms existing SOTA methods: it improves Hits@1 by 1.6% and F1 by 4.7% on WebQSP, and achieves improvements of 4.8% in Hits@1 and 2.1% in F1 on CWQ. Moreover, we show its transferability to specialized domains such as medicine. By visualizing the intermediate steps of the reasoning process, we further show that the explicit supervision introduced by KG-TRACES leads to more stable and goal-directed reasoning, aligning closely with correct answers. Code is available at https://github.com/Edaizi/KG-TRACES.
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLMs' reasoning with knowledge graph constraints
Improving explainability and trustworthiness in complex reasoning tasks
Supervising reasoning paths for attribution-aware model outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge Graph-constrained reasoning paths
Attribution-aware reasoning processes
Dual-mode KG-available and KG-unavailable adaptation
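The dual-mode adaptation listed above can be sketched as a simple routing step: retrieve reasoning paths from a KG when one is available, otherwise fall back to model-predicted paths, recording the source for attribution. This is an illustrative sketch under assumptions; the function names and the dict-based toy KG are not from the paper.

```python
# Hedged sketch of dual-mode inference: KG-available mode retrieves paths,
# KG-unavailable mode predicts them. All names here are hypothetical.

def get_reasoning_paths(question, kg=None, predict_fn=None):
    """Return (paths, source), where `source` tags the attribution mode."""
    if kg is not None and question in kg:
        # KG-available mode: ground reasoning in retrieved triples.
        return kg[question], "kg_retrieved"
    if predict_fn is not None:
        # KG-unavailable mode: fall back to the model's intrinsic knowledge.
        return predict_fn(question), "model_predicted"
    return [], "none"

# Toy usage: a one-entry KG mapping a question to a reasoning path of triples.
toy_kg = {"who founded X?": [("X", "founded_by", "Alice")]}
paths, source = get_reasoning_paths("who founded X?", kg=toy_kg)
```

Tagging each path with its source is what makes the output attributable: downstream explanations can state whether a step was KG-grounded or model-predicted.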