KG-Attention: Knowledge Graph-Guided Attention at Test-Time via Bidirectional Information Aggregation

📅 2025-07-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Existing knowledge graph (KG)-enhanced methods rely on parameter fine-tuning, and therefore suffer from catastrophic forgetting, degraded generalization, and poor adaptability to real-time knowledge updates. Method: We propose the first test-time KG enhancement framework that operates without updating model parameters. It employs bidirectional (outward and inward) information aggregation to enable input-driven knowledge fusion and KG-guided representation refinement. Crucially, we introduce the first KG-guided attention mechanism, establishing a closed-loop enhancement system that supports real-time knowledge injection and adaptive filtering. Contribution/Results: Our approach effectively mitigates forgetting while significantly improving dynamic knowledge adaptation. Evaluated on five benchmarks, it achieves performance comparable to state-of-the-art fine-tuning methods, demonstrating effective knowledge integration without any parameter updates.

📝 Abstract
Knowledge graphs (KGs) play a critical role in enhancing large language models (LLMs) by introducing structured and grounded knowledge into the learning process. However, most existing KG-enhanced approaches rely on parameter-intensive fine-tuning, which risks catastrophic forgetting and degrades the pretrained model's generalization. Moreover, they exhibit limited adaptability to real-time knowledge updates due to their static integration frameworks. To address these issues, we introduce the first test-time KG-augmented framework for LLMs, built around a dedicated knowledge graph-guided attention (KGA) module that enables dynamic knowledge fusion without any parameter updates. The proposed KGA module augments the standard self-attention mechanism with two synergistic pathways: outward and inward aggregation. Specifically, the outward pathway dynamically integrates external knowledge into input representations via input-driven KG fusion. The inward pathway complements it by refining input representations through KG-guided filtering, suppressing task-irrelevant signals and amplifying knowledge-relevant patterns. Importantly, while the outward pathway handles knowledge fusion, the inward pathway selects the most relevant triples and feeds them back into the fusion process, forming a closed-loop enhancement mechanism. By synergistically combining these two pathways, the proposed method supports real-time knowledge fusion exclusively at test time, without any parameter modification. Extensive experiments on five benchmarks show that KGA achieves knowledge fusion performance comparable to fine-tuning-based methods.
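The abstract describes two coupled pathways: an inward one that scores and filters KG triples against the input, and an outward one that fuses the selected triples into the token representations through attention. The sketch below is a minimal, hedged illustration of that closed-loop idea in plain numpy; the function name `kg_guided_attention`, the top-k triple selection, and the fusion-by-concatenated-keys scheme are assumptions for illustration, not the paper's actual KGA module.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kg_guided_attention(H, T, top_k=2):
    """Illustrative test-time KG fusion (no parameters, no updates).

    H: (n, d) input token representations.
    T: (m, d) embeddings of candidate KG triples.
    Inward pathway: score each triple's relevance to the input and keep
    only the top_k triples (adaptive filtering of task-irrelevant knowledge).
    Outward pathway: attend over [tokens; selected triples] so external
    knowledge is fused into the refined token representations.
    """
    # Inward aggregation: average per-token relevance of each triple.
    rel = softmax(H @ T.T, axis=-1).mean(axis=0)        # (m,)
    idx = np.argsort(rel)[-top_k:]                      # most relevant triples
    T_sel = T[idx]                                      # (top_k, d)

    # Outward aggregation: input-driven fusion over tokens + selected triples.
    KV = np.vstack([H, T_sel])                          # (n + top_k, d)
    d = H.shape[1]
    attn = softmax(H @ KV.T / np.sqrt(d), axis=-1)      # (n, n + top_k)
    return attn @ KV                                    # refined (n, d)
```

In the paper's closed-loop formulation, the inward selection would be re-run on the refined representations and fed back into the next fusion step; here a single pass is shown for brevity.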
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLMs with dynamic KG fusion without fine-tuning
Preventing catastrophic forgetting in KG-augmented LLMs
Enabling real-time knowledge updates in static KG frameworks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Test-time KG-augmented framework without parameter updates
Bidirectional knowledge fusion via outward and inward pathways
Dynamic KG-guided attention for real-time knowledge integration
Authors
Songlin Zhai (Southeast University)
Guilin Qi (Southeast University)
Yuan Meng (School of Computer Science and Engineering, Southeast University, Nanjing, China)

Topics: Artificial Intelligence, Ontology