Simple is Effective: The Roles of Graphs and Large Language Models in Knowledge-Graph-Based Retrieval-Augmented Generation

πŸ“… 2024-10-28
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 3
✨ Influential: 0
πŸ€– AI Summary
To simultaneously achieve efficient retrieval, accurate reasoning, and effective hallucination suppression when augmenting large language models (LLMs) with knowledge graphs (KGs), this paper proposes SubgraphRAG, a fine-tuning-free framework for flexible subgraph retrieval. Its core innovation is a lightweight MLP coupled with a parallel triple-scoring mechanism that explicitly encodes directed structural distances within the KG. Crucially, SubgraphRAG adjusts the retrieved subgraph size to match query difficulty and the downstream LLM's capability, jointly optimizing retrieval granularity and reasoning efficacy. The method integrates KG subgraph retrieval, structure-aware scoring, and LLM-based reasoning and answer generation. Evaluated on the WebQSP and ComplexWebQuestions (CWQ) benchmarks, SubgraphRAG markedly reduces hallucination while improving answer accuracy and interpretability: without fine-tuning, Llama3.1-8B-Instruct delivers competitive results with explainable reasoning, and GPT-4o reaches state-of-the-art accuracy on both datasets.
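
The parallel triple-scoring idea is easy to picture as code. The PyTorch sketch below scores every candidate triple in one batched MLP pass over concatenated query-text, triple-text, and directed-distance features, then keeps a tunable top-k as the retrieved subgraph. The feature layout, dimensions, and helper names (`TripleScorer`, `retrieve_subgraph`) are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of SubgraphRAG-style parallel triple scoring (assumption:
# exact feature layout, embedding model, and MLP shape are illustrative).
import torch
import torch.nn as nn


class TripleScorer(nn.Module):
    """Scores all KG triples in parallel from (query, triple) text embeddings
    plus a directional structural-distance encoding of each triple's entities."""

    def __init__(self, text_dim: int, dist_dim: int, hidden: int = 256):
        super().__init__()
        # Lightweight MLP producing one scalar relevance score per triple.
        self.mlp = nn.Sequential(
            nn.Linear(2 * text_dim + dist_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, query_emb, triple_embs, dist_feats):
        # query_emb:   (text_dim,)             embedding of the question
        # triple_embs: (num_triples, text_dim) embeddings of "(h, r, t)" strings
        # dist_feats:  (num_triples, dist_dim) directed hop-distance features
        q = query_emb.expand(triple_embs.size(0), -1)
        x = torch.cat([q, triple_embs, dist_feats], dim=-1)
        return self.mlp(x).squeeze(-1)  # (num_triples,) relevance scores


def retrieve_subgraph(scores, triples, k):
    """Keep the top-k triples as the retrieved subgraph; k can be tuned to the
    query's difficulty and the downstream LLM's context budget."""
    top = torch.topk(scores, k=min(k, scores.numel())).indices
    return [triples[i] for i in top.tolist()]
```

Because each triple is scored independently, the whole pass is a single batched forward call, which is what keeps retrieval lightweight and lets the subgraph size scale with the downstream LLM's capacity.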

πŸ“ Abstract
Large Language Models (LLMs) demonstrate strong reasoning abilities but face limitations such as hallucinations and outdated knowledge. Knowledge Graph (KG)-based Retrieval-Augmented Generation (RAG) addresses these issues by grounding LLM outputs in structured external knowledge from KGs. However, current KG-based RAG frameworks still struggle to optimize the trade-off between retrieval effectiveness and efficiency in identifying a suitable amount of relevant graph information for the LLM to digest. We introduce SubgraphRAG, extending the KG-based RAG framework that retrieves subgraphs and leverages LLMs for reasoning and answer prediction. Our approach innovatively integrates a lightweight multilayer perceptron with a parallel triple-scoring mechanism for efficient and flexible subgraph retrieval while encoding directional structural distances to enhance retrieval effectiveness. The size of retrieved subgraphs can be flexibly adjusted to match the query's need and the downstream LLM's capabilities. This design strikes a balance between model complexity and reasoning power, enabling scalable and generalizable retrieval processes. Notably, based on our retrieved subgraphs, smaller LLMs like Llama3.1-8B-Instruct deliver competitive results with explainable reasoning, while larger models like GPT-4o achieve state-of-the-art accuracy compared with previous baselines -- all without fine-tuning. Extensive evaluations on the WebQSP and CWQ benchmarks highlight SubgraphRAG's strengths in efficiency, accuracy, and reliability by reducing hallucinations and improving response grounding.
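
The abstract's "directional structural distances" can be pictured as directed BFS hop counts from the question's topic entities, computed once per query and attached to each triple before scoring. The encoding below is a hedged sketch under that assumption; the function names (`directed_hops`, `triple_distance_features`) and the four-value feature layout are illustrative, not the paper's exact formulation.

```python
# Hedged sketch: directed hop-distance features for each triple, assuming the
# distances are measured from the query's topic entities along and against
# edge direction (the paper's exact encoding may differ).
from collections import deque


def directed_hops(adj, sources, max_hops):
    """BFS hop distance from any source entity, following edges one way.
    adj maps entity -> list of neighbors; unreachable entities get max_hops + 1."""
    dist = {s: 0 for s in sources}
    queue = deque(sources)
    while queue:
        u = queue.popleft()
        if dist[u] >= max_hops:
            continue
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return lambda e: dist.get(e, max_hops + 1)


def triple_distance_features(triples, fwd_adj, bwd_adj, topic_entities, max_hops=3):
    """For each (h, r, t) triple, directed hop distances of head and tail
    from the topic entities, in both edge directions."""
    d_out = directed_hops(fwd_adj, topic_entities, max_hops)  # along edges
    d_in = directed_hops(bwd_adj, topic_entities, max_hops)   # against edges
    return [[d_out(h), d_in(h), d_out(t), d_in(t)] for h, _, t in triples]


# Toy usage: KG with one triple ("Paris", "capital_of", "France"), topic entity "Paris".
fwd = {"Paris": ["France"]}
bwd = {"France": ["Paris"]}
feats = triple_distance_features([("Paris", "capital_of", "France")], fwd, bwd, {"Paris"})
# -> [[0, 0, 1, 4]]: head is the topic entity; tail is one forward hop away.
```

In a full pipeline these per-triple features would be fed to the scorer as its distance input alongside the query and triple text embeddings.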
Problem

Research questions and friction points this paper is trying to address.

Knowledge Graph Optimization
Language Model Accuracy
Dynamic Information Retrieval
Innovation

Methods, ideas, or system contributions that make the work stand out.

SubgraphRAG
KnowledgeGraphRetrieval
LanguageModelEnhancement