CompactRAG: Reducing LLM Calls and Token Overhead in Multi-Hop Question Answering

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high token consumption, inefficient inference, and cross-hop entity inconsistency in existing retrieval-augmented generation (RAG) systems for multi-hop question answering, which stem from frequent large language model (LLM) invocations. To overcome these limitations, the authors propose CompactRAG, a novel framework that decouples knowledge preprocessing from online reasoning. In an offline phase, the corpus is restructured into atomic question-answer pairs; during inference, questions are decomposed in a consistency-preserving manner, and answers are extracted via dense retrieval combined with a RoBERTa-based reader. This approach requires only two LLM calls regardless of the number of reasoning hops. Evaluated on HotpotQA, 2WikiMultiHopQA, and MuSiQue, CompactRAG achieves accuracy comparable to state-of-the-art methods while substantially reducing token usage, demonstrating its efficiency and practicality.
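The fixed two-call budget described above can be sketched as a small class. This is a hypothetical skeleton under stated assumptions: `stub_llm`, the class layout, and the prompt strings are illustrative stand-ins, not the paper's implementation, and the offline QA conversion here is a trivial placeholder for the LLM-driven restructuring the paper describes.

```python
# Minimal sketch of CompactRAG's two-phase structure and its fixed
# per-query LLM budget. All names here are illustrative assumptions.

def stub_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call."""
    return f"<llm-output for: {prompt[:30]}>"

class CompactRAG:
    def __init__(self) -> None:
        self.llm_calls = 0
        self.atomic_kb: list[tuple[str, str]] = []

    def _llm(self, prompt: str) -> str:
        self.llm_calls += 1
        return stub_llm(prompt)

    # Offline phase: read the corpus once and emit atomic QA pairs.
    # (In the real system an LLM performs this conversion; it is a
    # one-time preprocessing cost, not per-query inference.)
    def build_knowledge_base(self, corpus: list[str]) -> None:
        for passage in corpus:
            self.atomic_kb.append((f"What is stated here? {passage}", passage))

    # Online phase: exactly two LLM calls per query, independent of hops.
    def answer(self, question: str, hops: int) -> str:
        self._llm(f"Decompose into sub-questions: {question}")  # call 1
        evidence = []
        for _ in range(hops):
            # Dense retrieval + extractive reader would run here; no LLM.
            evidence.append(self.atomic_kb[0][1] if self.atomic_kb else "")
        return self._llm(f"Synthesize from {evidence}: {question}")  # call 2
```

The point of the sketch is the invariant: `llm_calls` grows by exactly two per query no matter how large `hops` is, whereas an iterative RAG loop would call the LLM once (or more) per hop.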

📝 Abstract
Retrieval-augmented generation (RAG) has become a key paradigm for knowledge-intensive question answering. However, existing multi-hop RAG systems remain inefficient, as they alternate between retrieval and reasoning at each step, resulting in repeated LLM calls, high token consumption, and unstable entity grounding across hops. We propose CompactRAG, a simple yet effective framework that decouples offline corpus restructuring from online reasoning. In the offline stage, an LLM reads the corpus once and converts it into an atomic QA knowledge base, which represents knowledge as minimal, fine-grained question-answer pairs. In the online stage, complex queries are decomposed and carefully rewritten to preserve entity consistency, then resolved through dense retrieval followed by RoBERTa-based answer extraction. Notably, during inference the LLM is invoked only twice in total, once for sub-question decomposition and once for final answer synthesis, regardless of the number of reasoning hops. Experiments on HotpotQA, 2WikiMultiHopQA, and MuSiQue demonstrate that CompactRAG achieves competitive accuracy while substantially reducing token consumption compared to iterative RAG baselines, highlighting a cost-efficient and practical approach to multi-hop reasoning over large knowledge corpora. The implementation is available on GitHub.
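The online stage described in the abstract (entity-consistent rewriting, then dense retrieval over atomic QA pairs, then answer extraction) can be sketched as follows. This is a toy stand-in under loud assumptions: `embed` uses bag-of-words counts in place of a dense encoder, returning the stored answer replaces RoBERTa span extraction, and the `#PREV` placeholder convention for carrying the previous hop's entity forward is illustrative, not the paper's notation.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Bag-of-words stand-in for a dense sentence encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def resolve_hops(sub_questions: list[str],
                 atomic_kb: list[tuple[str, str]]) -> list[str]:
    """Resolve decomposed sub-questions against an atomic QA knowledge base.

    '#PREV' in a sub-question is replaced by the previous hop's answer,
    mimicking the entity-consistent rewriting the abstract describes.
    No LLM is involved in this loop.
    """
    answers: list[str] = []
    for sq in sub_questions:
        if answers:
            sq = sq.replace("#PREV", answers[-1])
        qv = embed(sq)
        # Rank stored atomic questions by similarity to the rewritten query.
        _, best_answer = max(atomic_kb, key=lambda qa: cosine(qv, embed(qa[0])))
        answers.append(best_answer)  # stand-in for extractive reading
    return answers
```

The design point this illustrates is why entity grounding matters: if hop 2's sub-question kept an unresolved reference ("Where was the director born?") instead of substituting hop 1's answer, retrieval over atomic pairs would have no entity to match against.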
Problem

Research questions and friction points this paper is trying to address.

multi-hop question answering
retrieval-augmented generation
LLM efficiency
token overhead
entity grounding
Innovation

Methods, ideas, or system contributions that make the work stand out.

CompactRAG
multi-hop QA
retrieval-augmented generation
LLM efficiency
atomic QA knowledge base
Hao Yang
Professor, College of Automation Engineering, Nanjing University of Aeronautics and Astronautics
Fault tolerant control · Switched systems · Interconnected systems
Zhiyu Yang
Erik Jonsson School of Engineering and Computer Science, University of Texas at Dallas
Xupeng Zhang
Isoftstone Information Technology (Group) Co., Ltd.
Wei Wei
College of Electronic and Information Engineering, Tongji University
Yunjie Zhang
School of Electronic Information, Central South University
Lin Yang
Nanjing University
Learning Theory · Online Optimization · Model and Analysis for Computing Systems