🤖 AI Summary
Large language models (LLMs) are susceptible to textual noise in complex reasoning tasks and often produce logically inconsistent or factually incorrect outputs due to insufficient structured knowledge support. To address this, we propose Subgraph-Guided Reasoning (SGR), a novel progressive reasoning framework that dynamically constructs query-relevant knowledge subgraphs and performs stepwise, structured inference via graph-guided multi-path chain-of-thought reasoning. SGR integrates four core components: subgraph generation, multi-step reasoning control, multi-path ensemble, and prompt optimization, which jointly suppress noise and enforce factual consistency. Evaluated across multiple reasoning benchmarks, SGR achieves an average accuracy improvement of 7.2% over strong baselines, demonstrating that external structured knowledge effectively enhances LLMs' deep reasoning capabilities.
📝 Abstract
Large Language Models (LLMs) have achieved strong performance across a wide range of natural language processing tasks in recent years, including machine translation, text generation, and question answering. As their applications extend to increasingly complex scenarios, however, LLMs continue to face challenges in tasks that require deep reasoning and logical inference. In particular, models trained on large-scale textual corpora may incorporate noisy or irrelevant information during generation, which can lead to incorrect predictions or outputs that are inconsistent with factual knowledge. To address this limitation, we propose a stepwise reasoning enhancement framework for LLMs based on external subgraph generation, termed SGR. The proposed framework dynamically constructs query-relevant subgraphs from external knowledge bases and leverages their semantic structure to guide the reasoning process. By performing reasoning step by step over structured subgraphs, SGR reduces the influence of noisy information and improves reasoning accuracy. Specifically, the framework first generates an external subgraph tailored to the input query, then guides the model to conduct multi-step reasoning grounded in the subgraph, and finally integrates multiple reasoning paths to produce the final answer. Experimental results on multiple benchmark datasets demonstrate that SGR consistently outperforms strong baselines, indicating its effectiveness in enhancing the reasoning capabilities of LLMs.
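The three-stage pipeline the abstract describes (query-relevant subgraph generation, multi-step reasoning grounded in the subgraph, and a multi-path ensemble for the final answer) can be sketched in miniature as follows. This is a minimal illustration, not the paper's implementation: the toy triple store, entity names, and function names are all assumptions, and a deterministic breadth-first graph search stands in for the LLM's chain-of-thought steps.

```python
from collections import Counter, deque

# Toy knowledge base of (head, relation, tail) triples -- illustrative only.
TRIPLES = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Berlin", "capital_of", "Germany"),
    ("Germany", "located_in", "Europe"),
    ("Europe", "is_a", "Continent"),
]

def build_adjacency(triples):
    """Index triples by head entity for fast traversal."""
    adj = {}
    for h, r, t in triples:
        adj.setdefault(h, []).append((r, t))
    return adj

def extract_subgraph(query_entities, triples, max_hops=2):
    """Stage 1: keep only triples reachable within max_hops of the query
    entities -- a stand-in for query-relevant subgraph generation."""
    adj = build_adjacency(triples)
    frontier, seen, kept = set(query_entities), set(query_entities), []
    for _ in range(max_hops):
        next_frontier = set()
        for h in frontier:
            for r, t in adj.get(h, []):
                kept.append((h, r, t))
                if t not in seen:
                    seen.add(t)
                    next_frontier.add(t)
        frontier = next_frontier
    return kept

def reasoning_paths(subgraph, start, max_len=3):
    """Stage 2: enumerate multi-step chains over the subgraph; each chain
    plays the role of one graph-grounded reasoning path."""
    adj = build_adjacency(subgraph)
    paths, queue = [], deque([[(None, start)]])
    while queue:
        path = queue.popleft()
        _, last = path[-1]
        for r, t in adj.get(last, []):
            new_path = path + [(r, t)]
            paths.append(new_path)
            if len(new_path) <= max_len:
                queue.append(new_path)
    return paths

def ensemble_answer(all_paths, answer_relation):
    """Stage 3: majority-vote over terminal entities of paths whose final
    step uses the relation the question asks about."""
    votes = Counter(p[-1][1] for p in all_paths if p[-1][0] == answer_relation)
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    # "Which continent are these capitals in?"
    sub = extract_subgraph(["Paris", "Berlin"], TRIPLES, max_hops=2)
    paths = reasoning_paths(sub, "Paris") + reasoning_paths(sub, "Berlin")
    print(ensemble_answer(paths, "located_in"))  # prints "Europe"
```

In the full framework an LLM, rather than a graph search, would propose each reasoning step, and prompt optimization would shape how the subgraph is serialized into the prompt; the sketch only shows how restricting reasoning to a query-relevant subgraph and voting across paths can suppress irrelevant triples such as the unrelated `is_a` edge above.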