🤖 AI Summary
This work addresses the tendency of large language models to generate hallucinated responses unsupported by evidence in high-stakes scenarios. To mitigate this, the authors propose EvidenceRL, a novel framework that, for the first time, incorporates evidence consistency as a reinforcement learning reward signal, jointly optimizing both answer correctness and fidelity to supporting evidence. Built upon the Group Relative Policy Optimization (GRPO) algorithm, EvidenceRL combines evidence entailment scoring with reference-answer consistency evaluation. Experiments on cardiac diagnosis and legal reasoning tasks demonstrate that EvidenceRL substantially increases the rate of evidence-supported responses (from 31.8% to 61.6% on cardiac diagnosis) and reduces hallucinations nearly fivefold, all while preserving baseline task accuracy. These results highlight the framework's effectiveness in enabling trustworthy, evidence-grounded generation across diverse domains.
📝 Abstract
Large Language Models (LLMs) are fluent but prone to hallucinations, producing answers that appear plausible yet are unsupported by available evidence. This failure is especially problematic in high-stakes domains where decisions must be justified by verifiable information. We introduce \textbf{EvidenceRL}, a reinforcement learning framework that enforces evidence adherence during training. EvidenceRL scores candidate responses for grounding (entailment with retrieved evidence and context) and correctness (agreement with reference answers) and optimizes the generator using Group Relative Policy Optimization (GRPO). We evaluate across two high-stakes domains, cardiac diagnosis and legal reasoning, where EvidenceRL consistently improves evidence grounding and faithfulness without sacrificing task accuracy. On cardiac diagnosis, F1@3 increases from 37.0 to 54.5 on Llama-3.2-3B while grounding ($G_{\max}@3$) rises from 47.6 to 78.2; hallucinations drop nearly 5$\times$ and evidence-supported diagnoses increase from 31.8\% to 61.6\%. On legal reasoning, EvidenceRL raises Faithfulness from 32.8\% to 67.6\% on Llama-3.1-8B, demonstrating consistent behavioral change across domains. Our code is open-sourced at https://github.com/Wizaaard/EvidenceRL.git.
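The abstract describes a two-part reward, grounding plus correctness, optimized with GRPO, which scores groups of sampled responses relative to one another rather than with a learned critic. A minimal sketch of that idea follows; the reward weighting and the scalar scores fed in are illustrative assumptions, not values or functions from the paper.

```python
from statistics import mean, pstdev

def evidence_reward(grounding: float, correctness: float,
                    w_grounding: float = 0.5) -> float:
    """Blend a grounding score (e.g. entailment with retrieved evidence)
    and a correctness score (agreement with the reference answer) into
    one scalar reward. Inputs assumed in [0, 1]; the 0.5 weight is a
    hypothetical choice, not taken from the paper."""
    return w_grounding * grounding + (1.0 - w_grounding) * correctness

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """GRPO-style advantage: normalize each sampled response's reward
    by the mean and std of its own group, so responses that are better
    grounded than their siblings get positive advantage without any
    learned value function."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    if sigma == 0.0:  # all candidates tied: no learning signal
        return [0.0 for _ in rewards]
    return [(r - mu) / sigma for r in rewards]

# Example: four candidate answers sampled for one prompt, each with a
# (grounding, correctness) score pair from some external scorer.
rewards = [evidence_reward(g, c) for g, c in
           [(0.9, 0.8), (0.2, 0.9), (0.7, 0.1), (0.1, 0.2)]]
advantages = group_relative_advantages(rewards)
```

In a full pipeline these advantages would weight the policy-gradient update for each candidate; here they simply illustrate why a well-grounded, correct answer is pushed up relative to its group.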