SPARK: Stepwise Process-Aware Rewards for Reference-Free Reinforcement Learning

📅 2025-12-02
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing process reward models (PRMs) rely on costly step-level human annotations or ground-truth reference solutions, limiting their applicability in domains where gold-standard process annotations are unavailable. Method: We propose SPARK, the first framework for ground-truth-free process-level reward modeling. It employs a generator-verifier collaborative paradigm to produce diverse solution paths; it combines parallel self-consistency scoring, sequence-level meta-critique, and chain-of-thought verification (PRM-CoT) to construct synthetic verification data for fine-tuning a generative PRM; and it adds format constraints to mitigate reward hacking. Contribution/Results: On ProcessBench, SPARK achieves 67.5 F1, surpassing the ground-truth-supervised baseline (66.4). Across six mathematical reasoning benchmarks, it attains a mean accuracy of 47.4%, significantly outperforming RLVR (43.9%) and establishing the first effective process-supervised reinforcement learning method that requires no reference answers.

πŸ“ Abstract
Process reward models (PRMs) that provide dense, step-level feedback have shown promise for reinforcement learning, yet their adoption remains limited by the need for expensive step-level annotations or ground-truth references. We propose SPARK, a three-stage framework. In the first stage, a generator model produces diverse solutions and a verifier model evaluates them using parallel scaling (self-consistency) and sequential scaling (meta-critique). In the second stage, we use these verification outputs as synthetic training data to fine-tune generative process reward models, which subsequently serve as reward signals during training. We show that aggregating multiple independent verifications at the step level produces training data for process reward models that surpasses ground-truth outcome supervision, achieving 67.5 F1 on ProcessBench (a benchmark for identifying erroneous steps in mathematical reasoning) compared to 66.4 for reference-guided training and 61.9 for GPT-4o. In the final stage, we apply our generative PRM with chain-of-thought verification (PRM-CoT) as the reward model in RL experiments on mathematical reasoning, and introduce format constraints to prevent reward hacking. Using Qwen2.5-Math-7B, we achieve 47.4% average accuracy across six mathematical reasoning benchmarks, outperforming ground-truth-based RLVR (43.9%). Our work enables reference-free RL training that exceeds ground-truth methods, opening new possibilities for domains lacking verifiable answers or accessible ground truth.
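The core aggregation idea, combining multiple independent verifier judgments per step to produce step-level labels, can be sketched as follows. This is an illustrative sketch only: the function names, the majority-vote rule, and the ProcessBench-style first-error convention are assumptions, not the paper's exact procedure.

```python
from collections import Counter

def aggregate_step_verdicts(verdicts_per_run):
    """Majority-vote step-level correctness labels across k independent
    verifier runs. `verdicts_per_run` is a list of k lists, each holding a
    True/False (correct/incorrect) verdict for every step of one solution."""
    num_steps = len(verdicts_per_run[0])
    labels = []
    for step in range(num_steps):
        votes = Counter(run[step] for run in verdicts_per_run)
        labels.append(votes[True] >= votes[False])  # tie breaks toward correct
    return labels

def first_error_step(labels):
    """Index of the first step judged incorrect, or -1 if all steps pass
    (the error-localization format used by ProcessBench-style evaluation)."""
    for i, ok in enumerate(labels):
        if not ok:
            return i
    return -1
```

With three verifier runs over a four-step solution, the aggregated labels mark the earliest step where the majority of runs flag an error; that (step index, label sequence) pair is the kind of synthetic supervision the second stage could fine-tune a generative PRM on.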
Problem

Research questions and friction points this paper is trying to address.

Removes the reliance on expensive step-level human annotations for training process reward models.
Proposes a reference-free reinforcement learning framework built on synthetic verification data.
Improves mathematical reasoning accuracy by aggregating multiple independent step-level verifications.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Three-stage framework with generator and verifier models
Synthetic training data from verification outputs for fine-tuning
Generative process reward models with chain-of-thought verification
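The format constraints used to curb reward hacking can be sketched as a simple gate: the policy only earns the PRM's score when its output follows a required template. This is a minimal sketch under assumptions; the `\boxed{...}` answer requirement, the regex, and the penalty value are illustrative, not the paper's exact constraint.

```python
import re

# Hypothetical format gate: require a final \boxed{...} answer before the
# PRM score is paid out, so degenerate outputs cannot farm reward.
BOXED_ANSWER = re.compile(r"\\boxed\{[^{}]+\}")

def format_constrained_reward(response, prm_score, penalty=-1.0):
    """Return the PRM score if the response is well-formed, otherwise a
    fixed penalty that discourages reward-hacking outputs."""
    if BOXED_ANSWER.search(response):
        return prm_score
    return penalty
```

In an RL loop this gate would wrap the generative PRM's verification score, making malformed or answer-free completions strictly unprofitable regardless of how the PRM rates them.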