Beyond Chunks and Graphs: Retrieval-Augmented Generation through Triplet-Driven Thinking

📅 2025-08-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing RAG approaches face a trade-off between performance and efficiency: multi-hop RAG achieves strong reasoning capability but incurs high LLM invocation overhead, while graph-based RAG suffers from expensive computation, error-prone graph construction, and redundant retrieval. To address these limitations, the authors propose T²RAG, a novel framework that abandons conventional text chunking and explicit graph structures in favor of a lightweight knowledge base of atomic triplets. Given a query, an LLM decomposes it into searchable triplets containing placeholders; T²RAG then iteratively matches these triplets against the database and aggregates evidence to perform end-to-end retrieval-augmented reasoning. This triplet-driven mechanism eliminates complex graph construction and substantially reduces retrieval redundancy. Evaluated on six benchmark datasets, T²RAG achieves an accuracy improvement of up to 11% and a retrieval-cost reduction of up to 45%, outperforming state-of-the-art multi-hop and graph-based RAG methods in both effectiveness and efficiency.

📝 Abstract
Retrieval-augmented generation (RAG) is critical for reducing hallucinations and incorporating external knowledge into Large Language Models (LLMs). However, advanced RAG systems face a trade-off between performance and efficiency. Multi-round RAG approaches achieve strong reasoning but incur excessive LLM calls and token costs, while Graph RAG methods suffer from computationally expensive, error-prone graph construction and retrieval redundancy. To address these challenges, we propose T²RAG, a novel framework that operates on a simple, graph-free knowledge base of atomic triplets. T²RAG leverages an LLM to decompose questions into searchable triplets with placeholders, which it then iteratively resolves by retrieving evidence from the triplet database. Empirical results show that T²RAG significantly outperforms state-of-the-art multi-round and Graph RAG methods, achieving an average performance gain of up to 11% across six datasets while reducing retrieval costs by up to 45%. Our code is available at https://github.com/rockcor/T2RAG
Problem

Research questions and friction points this paper is trying to address.

Balancing performance and efficiency in RAG systems
Reducing LLM calls and token costs in multi-round RAG
Avoiding complex graph construction in Graph RAG methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses triplet-driven knowledge base
Decomposes questions into searchable triplets
Iteratively resolves triplets with evidence
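The decompose-then-resolve idea above can be sketched as a small Python loop. This is an illustrative toy, not the paper's implementation: the triplet database, the `match` and `resolve` helpers, and the hand-written decomposition are all hypothetical stand-ins for steps T²RAG performs with an LLM and dense retrieval.

```python
# Toy knowledge base of atomic (subject, relation, object) triplets.
TRIPLET_DB = [
    ("Marie Curie", "born_in", "Warsaw"),
    ("Warsaw", "capital_of", "Poland"),
]

def match(query_triplet, db):
    """Return DB triplets consistent with the query triplet.
    Fields starting with '?' are placeholders and match anything."""
    def ok(q, v):
        return q.startswith("?") or q == v
    return [t for t in db if all(ok(q, v) for q, v in zip(query_triplet, t))]

def resolve(query_triplets, db):
    """Iteratively resolve placeholder triplets: each round, fill in
    placeholders from matched evidence, then retry unresolved triplets."""
    bindings = {}
    pending = list(query_triplets)
    while pending:
        progress = False
        still_pending = []
        for qt in pending:
            # Substitute placeholders already bound in earlier steps.
            qt = tuple(bindings.get(f, f) for f in qt)
            hits = match(qt, db)
            if hits:
                # Bind each placeholder to the value from the first hit.
                for q, v in zip(qt, hits[0]):
                    if q.startswith("?"):
                        bindings[q] = v
                progress = True
            else:
                still_pending.append(qt)
        if not progress:
            break  # nothing more can be resolved
        pending = still_pending
    return bindings

# Multi-hop question: "In which country was Marie Curie born?"
# Decomposed (here by hand; T²RAG uses an LLM) into placeholder triplets:
query = [("Marie Curie", "born_in", "?city"),
         ("?city", "capital_of", "?country")]
print(resolve(query, TRIPLET_DB))
# → {'?city': 'Warsaw', '?country': 'Poland'}
```

Each round resolves whichever triplets have become fully searchable, so a two-hop question finishes in two passes without any graph traversal, which is the source of the method's efficiency claim.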