Evaluation of retrieval-based QA on QUEST-LOFT

📅 2025-11-08
🤖 AI Summary
Retrieval-augmented generation (RAG) performs poorly on questions whose answers are distributed across many documents or require complex reasoning, a weakness that is especially pronounced on the QUEST-LOFT benchmark. This paper analyzes the causes of that gap, publishes updated benchmark numbers based on a thorough human evaluation, and shows that RAG can be substantially improved without ultra-long-context modeling: the pipeline enforces a structured output format that explicitly articulates both the reasoning chain and the supporting evidence, optionally followed by an answer re-verification step. Experiments demonstrate that this optimized RAG approach significantly outperforms long-context language models and mainstream RAG baselines on QUEST-LOFT, indicating that structured reasoning guidance and verification are key to robustness in complex question answering over distributed knowledge.

📝 Abstract
Despite the popularity of retrieval-augmented generation (RAG) as a solution for grounded QA in both academia and industry, current RAG methods struggle with questions where the necessary information is distributed across many documents or where retrieval needs to be combined with complex reasoning. Recently, the LOFT study has shown that this limitation also applies to approaches based on long-context language models, with the QUEST benchmark exhibiting particularly large headroom. In this paper, we provide an in-depth analysis of the factors contributing to the poor performance on QUEST-LOFT, publish updated numbers based on a thorough human evaluation, and demonstrate that RAG can be optimized to significantly outperform long-context approaches when combined with a structured output format containing reasoning and evidence, optionally followed by answer re-verification.
Problem

Research questions and friction points this paper is trying to address.

RAG methods struggle with information distributed across multiple documents
Current approaches fail when retrieval requires complex reasoning processes
Long-context models show significant performance gaps on the QUEST-LOFT benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combining RAG with structured reasoning output format
Adding answer re-verification to enhance accuracy
Optimizing retrieval for multi-document distributed information
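The structured output and re-verification idea above can be sketched in code. This is a minimal illustration, not the paper's implementation: the section markers (`REASONING`/`EVIDENCE`/`ANSWER`), the semicolon-separated evidence format, and both helper names are assumptions made for the sketch.

```python
import re

def parse_structured_answer(response: str) -> dict:
    """Split a model response into reasoning, evidence, and answer sections.

    Assumes the model was prompted to emit labeled sections, e.g.:
        REASONING: ...
        EVIDENCE: <snippet>; <snippet>
        ANSWER: ...
    """
    sections = {}
    for key in ("REASONING", "EVIDENCE", "ANSWER"):
        # Capture everything after "KEY:" up to the next ALL-CAPS label or end.
        m = re.search(rf"{key}:\s*(.*?)(?=\n[A-Z]+:|\Z)", response, re.S)
        sections[key.lower()] = m.group(1).strip() if m else ""
    return sections

def verify_evidence(parsed: dict, retrieved_docs: list[str]) -> bool:
    """Re-verification step (sketch): every cited evidence snippet must
    actually occur in the retrieved documents; otherwise the answer is
    flagged for a second retrieval/generation pass."""
    snippets = [s.strip() for s in parsed["evidence"].split(";") if s.strip()]
    return bool(snippets) and all(
        any(snippet in doc for doc in retrieved_docs) for snippet in snippets
    )
```

In a full pipeline, a failed `verify_evidence` check would trigger re-retrieval or regeneration rather than returning the unsupported answer directly.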