🤖 AI Summary
Long-form question answering (LFQA) suffers from low factual completeness, accumulated hallucinations, and a lack of reliable evaluation metrics. To address these challenges, we propose RioRAG, a retrieval-augmented generation (RAG) framework optimized via reinforcement learning (RL) that requires no supervised training data and explicitly targets both the informativeness and the factual consistency of long-form answers. Our key contributions are: (1) a nugget-centric, three-stage hierarchical reward model that precisely quantifies factual alignment by decomposing answers into atomic information units; and (2) an end-to-end hallucination-suppression mechanism that integrates nugget extraction, fact verification, and RAG. On the LongFact and RAGChecker benchmarks, RioRAG achieves significant gains in factual completeness and answer coherence while substantially reducing hallucination rates. The implementation is publicly available.
📝 Abstract
Long-form question answering (LFQA) presents unique challenges for large language models, requiring the synthesis of coherent, paragraph-length answers. While retrieval-augmented generation (RAG) systems have emerged as a promising solution, existing approaches struggle with key limitations: the scarcity of high-quality training data for long-form generation, the compounding risk of hallucination in extended outputs, and the absence of reliable evaluation metrics for factual completeness. In this paper, we propose RioRAG, a novel reinforcement learning (RL) framework that advances long-form RAG through reinforced informativeness optimization. Our approach introduces two fundamental innovations to address the core challenges. First, we develop an RL training paradigm of reinforced informativeness optimization that directly optimizes informativeness and effectively addresses the slow-thinking deficit in conventional RAG systems, bypassing the need for expensive supervised data. Second, we propose a nugget-centric hierarchical reward modeling approach that enables precise assessment of long-form answers through a three-stage process: extracting nuggets from each source webpage, constructing a nugget claim checklist, and computing rewards based on factual alignment. Extensive experiments on two LFQA benchmarks, LongFact and RAGChecker, demonstrate the effectiveness of the proposed method. Our code is available at https://github.com/RUCAIBox/RioRAG.
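To make the three-stage reward concrete, here is a minimal, hypothetical sketch of the pipeline the abstract describes. It is not the paper's implementation: RioRAG presumably uses LLM-based nugget extraction and fact verification, whereas this toy version treats sentences as nuggets and "verification" as case-insensitive substring matching; all function names are illustrative.

```python
# Hypothetical sketch of nugget-centric reward modeling, simplified:
# nuggets are sentences, and factual alignment is naive substring matching.

def extract_nuggets(webpage: str) -> list[str]:
    """Stage 1: split a source webpage into atomic information units."""
    return [s.strip() for s in webpage.split(".") if s.strip()]

def build_checklist(webpages: list[str]) -> list[str]:
    """Stage 2: merge nuggets from all retrieved pages into a
    deduplicated claim checklist."""
    seen: set[str] = set()
    checklist: list[str] = []
    for page in webpages:
        for nugget in extract_nuggets(page):
            key = nugget.lower()
            if key not in seen:
                seen.add(key)
                checklist.append(nugget)
    return checklist

def informativeness_reward(answer: str, checklist: list[str]) -> float:
    """Stage 3: reward = fraction of checklist nuggets the answer covers."""
    if not checklist:
        return 0.0
    covered = sum(1 for n in checklist if n.lower() in answer.lower())
    return covered / len(checklist)

pages = ["Paris is the capital of France. The Seine runs through Paris."]
checklist = build_checklist(pages)
reward = informativeness_reward("Paris is the capital of France", checklist)
print(reward)  # covers 1 of 2 nuggets -> 0.5
```

In the RL loop, a scalar like `reward` would score each sampled long-form answer, so training pushes the policy toward answers that cover more of the checklist rather than merely sounding fluent.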