🤖 AI Summary
This work proposes the Verifiable Process Reward Model (VPRM) to address the limitations of existing process supervision methods. Such methods rely on neural discriminators to evaluate the intermediate reasoning steps of large language models, leaving them prone to opacity, bias, and reward hacking, and often unable to enforce domain-specific reasoning rules. VPRM introduces, for the first time, a deterministic, rule-based programmatic verifier into a reinforcement learning framework, enabling interpretable, auditable, and deception-resistant supervision of intermediate reasoning. Evaluated on bias risk assessment in medical evidence synthesis, VPRM substantially improves the logical consistency and evidential grounding of model reasoning, achieving up to a 20% absolute gain in F1 score across multiple datasets and outperforming outcome-only verification baselines by 6.5%.
📝 Abstract
Recent work on reinforcement learning with verifiable rewards (RLVR) has shown that large language models (LLMs) can be substantially improved using outcome-level verification signals, such as unit tests for code or exact-match checks for mathematics. In parallel, process supervision has long been explored as a way to shape the intermediate reasoning behaviour of LLMs, but existing approaches rely on neural judges to score chain-of-thought steps, leaving them vulnerable to opacity, bias, and reward hacking. To address this gap, we introduce Verifiable Process Reward Models (VPRMs), a reinforcement-learning framework in which intermediate reasoning steps are checked by deterministic, rule-based verifiers. We apply VPRMs to risk-of-bias assessment for medical evidence synthesis, a domain where guideline-defined criteria and rule-based decision paths enable programmatic verification of reasoning traces. Across multiple datasets, we find that VPRMs generate reasoning that adheres closely to domain rules and achieve substantially higher coherence between step-level decisions and final labels. Results show that VPRMs achieve up to 20% higher F1 than state-of-the-art models and 6.5% higher than verifiable outcome rewards, with substantial gains in evidence grounding and logical coherence.
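The core idea (a deterministic, rule-based check over each reasoning step, coupled with a coherence check between step-level decisions and the final label) can be sketched as follows. This is a minimal illustration under assumed conventions: the step format, rule set, judgement labels, and aggregation rule are hypothetical stand-ins, not the paper's actual verifier.

```python
# Hypothetical sketch of a deterministic process verifier for
# risk-of-bias reasoning traces. Step format, rules, label set, and
# reward scheme are illustrative assumptions, not the paper's code.

def verify_step(step: dict) -> bool:
    """Check one reasoning step against guideline-style rules."""
    # Rule 1: a substantive judgement must cite at least one evidence span.
    if step["judgement"] != "no_information" and not step["evidence"]:
        return False
    # Rule 2: the judgement must come from a closed, guideline-defined label set.
    if step["judgement"] not in {"low", "high", "some_concerns", "no_information"}:
        return False
    return True

def process_reward(trace: list[dict], final_label: str) -> float:
    """Fraction of rule-satisfying steps, zeroed when the final label
    contradicts the step-level decisions (coherence check)."""
    if not trace:
        return 0.0
    step_score = sum(verify_step(s) for s in trace) / len(trace)
    # Deterministic aggregation rule: any 'high'-risk step forces 'high' overall.
    implied = "high" if any(s["judgement"] == "high" for s in trace) else "low"
    return step_score if final_label == implied else 0.0

trace = [
    {"judgement": "low", "evidence": ["randomised via computer sequence"]},
    {"judgement": "high", "evidence": ["outcome assessors unblinded"]},
]
print(process_reward(trace, "high"))  # coherent trace, all rules satisfied -> 1.0
```

Because every check is a pure function of the trace, the reward is reproducible and auditable, which is what distinguishes this kind of supervision from a neural judge.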