🤖 AI Summary
Large language models (LLMs) exhibit limited reasoning capabilities in multi-hop question answering, while conventional retrieval-augmented generation (RAG) suffers from retrieval drift, information redundancy, and inflexible strategies. To address these challenges, the paper proposes KunLunBaizeRAG, a reinforcement learning-driven adaptive reasoning framework. Its key contributions are: (1) a RAG-driven Reasoning Alignment (RDRA) mechanism that jointly optimizes retrieval and reasoning; (2) a Search-Think Iterative Enhancement (STIE) mechanism enabling dynamic calibration of the reasoning path; (3) a dual-granularity Network-Local Intelligent Routing (NLR) strategy improving evidence-selection precision; and (4) a progressive hybrid training paradigm balancing stability and generalization. Evaluated on four multi-hop QA benchmarks, KunLunBaizeRAG achieves significant improvements in both exact match and LLM-judged scores, demonstrating its effectiveness and robustness on complex, multi-step reasoning tasks.
📝 Abstract
This paper introduces KunLunBaizeRAG, a reinforcement learning-driven reasoning framework designed to enhance the reasoning capabilities of large language models (LLMs) in complex multi-hop question-answering tasks. The framework addresses key limitations of traditional retrieval-augmented generation (RAG), such as retrieval drift, information redundancy, and strategy rigidity. Key innovations include the RAG-driven Reasoning Alignment (RDRA) mechanism, the Search-Think Iterative Enhancement (STIE) mechanism, the Network-Local Intelligent Routing (NLR) mechanism, and a progressive hybrid training strategy. Experimental results demonstrate significant improvements in exact match (EM) and LLM-judged (LJ) scores across four benchmarks, highlighting the framework's robustness and effectiveness in complex reasoning scenarios.