Knowledge Graph-Infused Fine-Tuning for Structured Reasoning in Large Language Models

📅 2025-08-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address reasoning chain fragmentation and insufficient entity-level semantic modeling in large language models (LLMs) for structured knowledge reasoning, this paper proposes a knowledge graph (KG)-enhanced fine-tuning framework. The method employs graph neural networks to encode KG topological structure and introduces a dynamic gating mechanism to adaptively fuse contextual representations with structured knowledge. A joint loss function is designed to simultaneously optimize task performance and KG structural alignment, mitigating representation space conflicts. Key contributions include: (i) a learnable, multi-source knowledge fusion gating architecture; and (ii) a joint optimization objective explicitly enforcing structural consistency. Extensive experiments demonstrate significant improvements over baselines on entity recognition, question answering, and text generation tasks. Moreover, the framework exhibits strong robustness and generalization across varying learning rates, KG coverage ratios, and structural perturbations.
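The paper does not include released code, but the dynamic gating fusion it describes can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: all names, dimensions, and the concatenation-based gate are assumptions. The gate `g` is computed from both the contextual and the KG vectors, then interpolates between them per dimension, which is the adaptive balancing the summary describes.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(h_ctx, h_kg, W_g, b_g):
    """Fuse a contextual vector h_ctx with a KG-structure vector h_kg.

    The gate g is computed from both inputs; the output is a per-dimension
    convex combination of linguistic semantics and structured knowledge.
    """
    g = sigmoid(np.concatenate([h_ctx, h_kg]) @ W_g + b_g)  # gate in (0, 1)^d
    return g * h_ctx + (1.0 - g) * h_kg

rng = np.random.default_rng(0)
d = 8
h_ctx = rng.normal(size=d)               # e.g. from the LM encoder
h_kg = rng.normal(size=d)                # e.g. from a GNN over the KG
W_g = rng.normal(size=(2 * d, d)) * 0.1  # learnable gate parameters
b_g = np.zeros(d)

fused = gated_fusion(h_ctx, h_kg, W_g, b_g)
assert fused.shape == (d,)
```

Because the gate output lies in (0, 1), each fused coordinate stays between the corresponding contextual and KG coordinates, which is one way to keep the two representation spaces from overwriting each other.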

📝 Abstract
This paper addresses the problems of missing reasoning chains and insufficient entity-level semantic understanding that arise when large language models handle tasks requiring structured knowledge, and proposes a fine-tuning framework based on knowledge-graph injection. Building on a pretrained language model, the method introduces structured graph information for auxiliary learning: a graph neural network encodes entities and their relations to construct a graph-based semantic representation, and a fusion mechanism jointly models the knowledge graph embeddings with the contextual representations from the language model. To make knowledge integration more robust, a gating mechanism dynamically balances the contributions of linguistic semantics and structural knowledge, effectively mitigating conflicts between the two representation spaces. During training, a joint loss function accounts for both task performance and structural alignment objectives, improving the accuracy of entity prediction and semantic reasoning. The study also includes systematic sensitivity experiments evaluating the effects of learning rate, graph coverage, and structural perturbations on model performance, which further validate the effectiveness and stability of the proposed method on entity recognition, question answering, and language generation tasks. Experimental findings show that the proposed structure-aware fine-tuning framework significantly enhances the model's ability to represent complex semantic units, demonstrating better semantic consistency and contextual logic modeling in scenarios involving structural reasoning and entity extraction.
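The joint objective described in the abstract, task performance plus structural alignment, could take many forms; the paper's exact formulation is not given here. Below is one hypothetical sketch in NumPy: cross-entropy for the downstream task combined with a TransE-style translation penalty (head + relation ≈ tail) as the alignment term, weighted by a coefficient `lam`. Both the TransE choice and the names are illustrative assumptions.

```python
import numpy as np

def task_loss(logits, label):
    """Cross-entropy for the downstream task (e.g. entity prediction)."""
    z = logits - logits.max()                 # stabilize the softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def alignment_loss(head, rel, tail):
    """TransE-style structural alignment: head + relation should be near tail."""
    return float(np.sum((head + rel - tail) ** 2))

def joint_loss(logits, label, head, rel, tail, lam=0.1):
    """Task objective plus weighted KG structural-alignment objective."""
    return task_loss(logits, label) + lam * alignment_loss(head, rel, tail)

rng = np.random.default_rng(1)
logits = rng.normal(size=5)                       # scores over 5 candidates
head, rel, tail = (rng.normal(size=4) for _ in range(3))
loss = joint_loss(logits, label=2, head=head, rel=rel, tail=tail)
assert loss > 0
```

Setting `lam=0` recovers the pure task objective, so the weight directly controls how strongly KG structure is enforced during fine-tuning.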
Problem

Research questions and friction points this paper is trying to address.

Addresses missing reasoning chains in LLMs for structured knowledge tasks
Improves entity-level semantic understanding in language models
Mitigates conflicts between linguistic and structural knowledge representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge graph-infused fine-tuning for structured reasoning
Graph neural network encodes entities and relations
Gating mechanism balances linguistic and structural knowledge
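The GNN encoding of entities and relations listed above can be illustrated with a single relation-aware message-passing step. This is a simplified sketch in NumPy, not the paper's architecture: the translation-style messages (neighbor embedding plus relation embedding) and degree normalization are assumptions chosen for brevity.

```python
import numpy as np

def rgcn_layer(ent_emb, triples, rel_emb):
    """One relation-aware message-passing step over KG triples.

    Each tail entity aggregates messages from its incoming neighbors,
    where a message is the head embedding translated by the relation
    embedding; a self-connection and degree normalization keep the
    update stable.
    """
    n, _ = ent_emb.shape
    out = ent_emb.copy()         # self-connection
    deg = np.ones(n)             # count the self-connection
    for h, r, t in triples:
        out[t] += ent_emb[h] + rel_emb[r]  # message: head -> tail via relation
        deg[t] += 1
    out /= deg[:, None]          # mean aggregation
    return np.tanh(out)          # nonlinearity

rng = np.random.default_rng(2)
ent = rng.normal(size=(4, 6))                  # 4 entities, dim 6
rel = rng.normal(size=(2, 6))                  # 2 relation types
triples = [(0, 0, 1), (1, 1, 2), (2, 0, 3)]    # (head, relation, tail)
h1 = rgcn_layer(ent, triples, rel)
assert h1.shape == (4, 6)
```

Stacking such layers lets entity representations absorb multi-hop KG topology before being fused with the language model's contextual representations.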