Beyond Outcome Verification: Verifiable Process Reward Models for Structured Reasoning

📅 2026-01-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes Verifiable Process Reward Models (VPRMs) to address a key limitation of existing process-supervision methods: because they rely on neural discriminators to evaluate the intermediate reasoning steps of large language models, they are prone to opacity, bias, and reward hacking, and often fail to enforce domain-specific reasoning rules. VPRM introduces, for the first time, a deterministic, rule-based programmatic verifier into a reinforcement learning framework, enabling interpretable, auditable, and deception-resistant supervision of intermediate reasoning. Evaluated on risk-of-bias assessment in medical evidence synthesis, VPRM substantially improves the logical consistency and evidential grounding of model reasoning, achieving up to a 20% absolute gain in F1 score across multiple datasets and outperforming outcome-only verification baselines by 6.5%.

📝 Abstract
Recent work on reinforcement learning with verifiable rewards (RLVR) has shown that large language models (LLMs) can be substantially improved using outcome-level verification signals, such as unit tests for code or exact-match checks for mathematics. In parallel, process supervision has long been explored as a way to shape the intermediate reasoning behaviour of LLMs, but existing approaches rely on neural judges to score chain-of-thought steps, leaving them vulnerable to opacity, bias, and reward hacking. To address this gap, we introduce Verifiable Process Reward Models (VPRMs), a reinforcement-learning framework in which intermediate reasoning steps are checked by deterministic, rule-based verifiers. We apply VPRMs to risk-of-bias assessment for medical evidence synthesis, a domain where guideline-defined criteria and rule-based decision paths enable programmatic verification of reasoning traces. Across multiple datasets, we find that VPRMs generate reasoning that adheres closely to domain rules and achieve substantially higher coherence between step-level decisions and final labels. Results show that VPRMs achieve up to 20% higher F1 than state-of-the-art models and 6.5% higher than verifiable outcome rewards, with substantial gains in evidence grounding and logical coherence.
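The abstract describes deterministic, rule-based verifiers that check intermediate reasoning steps and combine step-level checks with an outcome signal. A minimal sketch of that idea is below; the rule table, step format, and reward weighting are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical rule-based process verifier for risk-of-bias reasoning traces.
# Each step is a (criterion, evidence, judgement) triple; the rule table
# (an assumption for illustration) says which judgements require quoted
# evidence from the study being assessed.
RULES = {
    # criterion: judgements that require supporting evidence
    "random_sequence_generation": {"low", "high"},
    "allocation_concealment": {"low", "high"},
}

def verify_step(step):
    """Return True if a single reasoning step satisfies the domain rules."""
    criterion, evidence, judgement = step
    if criterion not in RULES:
        return False                      # unknown criterion: reject
    if judgement in RULES[criterion] and not evidence.strip():
        return False                      # a definitive call needs evidence
    return judgement in {"low", "high", "unclear"}

def process_reward(trace, final_correct, w_process=0.5, w_outcome=0.5):
    """Combine step-level verification with an outcome check."""
    step_score = sum(verify_step(s) for s in trace) / max(len(trace), 1)
    return w_process * step_score + w_outcome * float(final_correct)
```

Because every check is a deterministic rule lookup rather than a neural judge, the reward is auditable step by step, which is the property the paper credits for resisting reward hacking.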
Problem

Research questions and friction points this paper is trying to address.

process supervision
verifiable rewards
reasoning coherence
reward hacking
structured reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Verifiable Process Reward Models
Rule-based Verification
Reinforcement Learning with Verifiable Rewards
Process Supervision
Structured Reasoning
Massimiliano Pronesti
IBM Research Europe - Ireland, Dublin City University
Anya Belz
Professor of Computer Science, ADAPT Research Centre, Dublin City University, Ireland
Natural Language Generation · AI · Natural Language Processing · Evaluation · Reproducibility
Yufang Hou
IBM Research Europe - Ireland, IT:U Interdisciplinary Transformation University Austria