Paths-over-Graph: Knowledge Graph Empowered Large Language Model Reasoning

📅 2024-10-18
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from hallucination and insufficient factual grounding in knowledge-intensive, complex reasoning tasks. To address this, we propose PoG (Paths-over-Graph), a novel framework featuring a three-phase dynamic multi-hop path exploration guided by graph structure; it is the first method to implement multi-entity deep-path detection on knowledge graphs for LLM reasoning, combined with semantic relevance-based path pruning. PoG integrates knowledge graph (KG) path retrieval, SBERT-based semantic matching, LLM prompting, and graph-structure pruning into an end-to-end KG-augmented reasoning pipeline. Evaluated on five KGQA benchmarks, PoG achieves an average accuracy gain of 18.9% over ToG. Remarkably, PoG powered by GPT-3.5-Turbo outperforms ToG driven by GPT-4 by up to 23.9%, demonstrating substantial improvements in reasoning faithfulness and interpretability.

📝 Abstract
Large Language Models (LLMs) have achieved impressive results in various tasks but struggle with hallucination and a lack of relevant knowledge, especially in deep, complex reasoning and knowledge-intensive tasks. Knowledge Graphs (KGs), which capture vast amounts of facts in a structured format, offer a reliable source of knowledge for reasoning. However, existing KG-based LLM reasoning methods face challenges such as handling multi-hop reasoning and multi-entity questions, and effectively utilizing graph structures. To address these issues, we propose Paths-over-Graph (PoG), a novel method that enhances LLM reasoning by integrating knowledge reasoning paths from KGs, improving the interpretability and faithfulness of LLM outputs. PoG tackles multi-hop and multi-entity questions through a three-phase dynamic multi-hop path exploration, which combines the inherent knowledge of LLMs with factual knowledge from KGs. To improve efficiency, PoG first prunes irrelevant information from the graph exploration and then introduces an efficient three-step pruning technique that incorporates graph structures, LLM prompting, and a pre-trained language model (e.g., SBERT) to effectively narrow down the explored candidate paths. This ensures all reasoning paths contain highly relevant information captured from KGs, making the reasoning faithful and interpretable in problem-solving. PoG innovatively utilizes graph structure to prune irrelevant noise and represents the first method to implement multi-entity deep-path detection on KGs for LLM reasoning tasks. Comprehensive experiments on five benchmark KGQA datasets demonstrate that PoG outperforms the state-of-the-art method ToG across GPT-3.5-Turbo and GPT-4, achieving an average accuracy improvement of 18.9%. Notably, PoG with GPT-3.5-Turbo surpasses ToG with GPT-4 by up to 23.9%.
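The semantic pruning step described above — scoring candidate KG paths against the question and keeping only the most relevant ones — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the paper uses a pre-trained SBERT model for embeddings, while this sketch substitutes a toy bag-of-words embedding and cosine similarity; the question, paths, and `top_k` value are made-up examples.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy bag-of-words vector; a stand-in for SBERT sentence embeddings."""
    return Counter(text.lower().split())

def cosine(u: Counter, v: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[t] * v[t] for t in u)
    nu = sqrt(sum(c * c for c in u.values()))
    nv = sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def prune_paths(question: str, paths: list[list[str]], top_k: int = 2) -> list[list[str]]:
    """Keep the top_k candidate paths most semantically relevant to the question."""
    q_vec = embed(question)
    ranked = sorted(paths, key=lambda p: cosine(q_vec, embed(" ".join(p))), reverse=True)
    return ranked[:top_k]

# Hypothetical question and candidate paths (entity, relation, entity triples).
question = "Who directed the film starring Tom Hanks about a volleyball?"
candidate_paths = [
    ["Cast Away", "directed_by", "Robert Zemeckis"],
    ["Tom Hanks", "born_in", "Concord California"],
    ["Cast Away", "starring", "Tom Hanks"],
]
kept = prune_paths(question, candidate_paths, top_k=2)
```

With a real SBERT model, `embed` would return dense sentence embeddings, so the `directed_by` path would also score highly despite sharing no surface tokens with the question — which is exactly why the paper uses semantic rather than lexical matching.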
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Knowledge Graphs
Reasoning Capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Paths-over-Graph
Multi-step Reasoning
Knowledge Graph Utilization
Xingyu Tan
University of New South Wales, Sydney, Australia
Graph Processing · Database · LLMs · Knowledge Graph
Xiaoyang Wang
University of New South Wales, Sydney, Australia
Qing Liu
Data61, CSIRO, Sydney, Australia
Xiwei Xu
Data61, CSIRO, Sydney, Australia
Xin Yuan
Data61, CSIRO, Sydney, Australia
Wenjie Zhang
University of New South Wales, Sydney, Australia