AI Summary
Existing RAG evaluation frameworks face two key bottlenecks: high computational overhead from multi-step LLM prompting and difficulty in generating precise, interpretable pointwise reward signals. This paper proposes RAG-Zeval, a framework that models faithfulness and correctness assessment as rule-guided end-to-end reasoning tasks, trained via reinforcement learning to yield lightweight, single-pass evaluators that generate holistic scores with explicit attribution chains. It introduces a novel ranking-driven preference reward mechanism and a zero-annotation synthetic ranking reference method, eliminating reliance on human annotations and pointwise rewards. Experiments demonstrate that RAG-Zeval achieves the highest correlation with human judgments across multiple benchmarks, significantly outperforming mainstream LLM-based evaluators while reducing computational cost by 10–100×. Notably, it is the first approach to enable small-scale models to surpass hundred-billion-parameter LLMs in both evaluation accuracy and interpretability.
Abstract
Robust evaluation is critical for deploying trustworthy retrieval-augmented generation (RAG) systems. However, current LLM-based evaluation frameworks predominantly rely on directly prompting resource-intensive models with complex multi-stage prompts, underutilizing models' reasoning capabilities and introducing significant computational cost. In this paper, we present RAG-Zeval (RAG-Zero Evaluator), a novel end-to-end framework that formulates faithfulness and correctness evaluation as a rule-guided reasoning task. Our approach trains evaluators with reinforcement learning, enabling compact models to generate comprehensive and sound assessments with detailed explanations in a single pass. We introduce a ranking-based outcome reward mechanism, using preference judgments rather than absolute scores, to address the challenge of obtaining precise pointwise reward signals. To this end, we synthesize the ranking references by generating quality-controlled responses with zero human annotation. Experiments demonstrate RAG-Zeval's superior performance, achieving the strongest correlation with human judgments and outperforming baselines that rely on LLMs with 10–100 times more parameters. Our approach also exhibits superior interpretability in response evaluation.
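To make the ranking-based outcome reward concrete, here is a minimal sketch of one plausible instantiation: the evaluator's scores for a set of candidate responses are compared against a synthetic reference ranking, and the reward is the fraction of candidate pairs ordered consistently (a normalized Kendall-tau-style agreement). The function name, signature, and exact agreement measure are illustrative assumptions, not the paper's actual formulation.

```python
from itertools import combinations

def ranking_reward(predicted_scores, reference_ranking):
    """Hypothetical pairwise-agreement reward in [0, 1].

    predicted_scores:  evaluator-assigned score per candidate response
    reference_ranking: candidate indices ordered best-first, obtained
                       from quality-controlled synthetic generation
                       (zero human annotation)
    """
    # Map each candidate index to its position in the reference ranking
    # (smaller position = preferred by the reference).
    rank_pos = {idx: pos for pos, idx in enumerate(reference_ranking)}

    pairs = list(combinations(range(len(predicted_scores)), 2))
    agree = 0
    for i, j in pairs:
        ref_prefers_i = rank_pos[i] < rank_pos[j]
        pred_prefers_i = predicted_scores[i] > predicted_scores[j]
        if ref_prefers_i == pred_prefers_i:
            agree += 1
    # Fraction of pairwise preferences the evaluator got right.
    return agree / len(pairs)
```

For example, `ranking_reward([0.9, 0.2, 0.5], [0, 2, 1])` returns `1.0` because every predicted pairwise preference matches the reference order. Using pairwise preferences rather than absolute score targets is what lets the reward signal remain well-defined without precise pointwise labels.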