Beyond Variance: Prompt-Efficient RLVR via Rare-Event Amplification and Bidirectional Pairing

📅 2026-02-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the instability and poor generalization of traditional reinforcement learning with verifiable rewards (RLVR) methods, which rely on variance in training accuracy for prompt selection. The authors propose a mechanism-driven prompt selection strategy that constructs paired positive and negative prompts—defined as “hard but solvable” versus “easy but fragile”—and integrates them with a weighted GRPO algorithm, group-normalized advantage estimation, and multi-trajectory success-rate evaluation. This approach delivers both reliable positive anchors and explicit negative signals within a single batch update, substantially improving sample efficiency and exploration stability. Evaluated on Qwen2.5-Math-7B, the method achieves a Pass@8 score of 22.2 on AIME 2025 (↑5.4) and a Pass@64 score of 97.0 on AMC23 (↑3.0) using only a single prompt pair, matching the performance of state-of-the-art RLVR approaches that employ prompt pools of thousands.

📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) is effective for training large language models on deterministic outcome reasoning tasks. Prior work shows RLVR works with few prompts, but prompt selection is often based only on training-accuracy variance, leading to unstable optimization directions and weaker transfer. We revisit prompt selection from a mechanism-level view and argue that an effective minibatch should provide both (i) a reliable positive anchor and (ii) explicit negative learning signals from rare failures. Based on this principle, we propose \emph{positive--negative pairing}: at each update, we sample a hard-but-solvable prompt $q^{+}$ and an easy-but-brittle prompt $q^{-}$ (high success rate but not perfect), characterized by low and high empirical success rates under multiple rollouts. We further introduce Weighted GRPO, which reweights binary outcomes at the pair level and uses group-normalized advantages to amplify rare successes on $q^{+}$ into sharp positive guidance while turning rare failures on $q^{-}$ into strong negative penalties. This bidirectional signal provides informative learning feedback for both successes and failures, improving sample efficiency without suppressing exploration. On Qwen2.5-Math-7B, a single paired minibatch per update consistently outperforms a GRPO baseline that selects two prompts via commonly used variance-based selection heuristics: AIME~2025 Pass@8 improves from 16.8 to 22.2, and AMC23 Pass@64 from 94.0 to 97.0, while remaining competitive with large-scale RLVR trained from a pool of 1209 training prompts. Similar gains are observed on Qwen2.5-Math-7B-Instruct.
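The pairing mechanism described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes binary outcome rewards per rollout, and the pair-level weights `w_pos` / `w_neg` are hypothetical stand-ins for the paper's Weighted GRPO reweighting. It only shows how group normalization turns a rare success on $q^{+}$ into a large positive advantage and a rare failure on $q^{-}$ into a large negative one.

```python
import statistics

def group_normalized_advantages(rewards):
    """Group-normalized advantage (as in GRPO): (r - mean) / std within one
    rollout group. With binary rewards, the rarer outcome gets the larger
    absolute advantage."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against all-identical groups
    return [(r - mean) / std for r in rewards]

def paired_minibatch_advantages(rewards_pos, rewards_neg, w_pos=1.0, w_neg=1.0):
    """Positive-negative pairing (sketch). `rewards_pos` are rollout outcomes
    on a hard-but-solvable prompt q+ (low success rate), `rewards_neg` on an
    easy-but-brittle prompt q- (high but imperfect success rate). `w_pos` and
    `w_neg` are assumed pair-level reweighting factors."""
    adv_pos = [w_pos * a for a in group_normalized_advantages(rewards_pos)]
    adv_neg = [w_neg * a for a in group_normalized_advantages(rewards_neg)]
    return adv_pos + adv_neg

# Rare success on q+ (1 of 4 rollouts) vs. rare failure on q- (1 of 4 rollouts):
adv = paired_minibatch_advantages([1, 0, 0, 0], [1, 1, 1, 0])
```

In this toy batch the single success on $q^{+}$ receives an advantage of about +1.73 while the single failure on $q^{-}$ receives about -1.73, i.e. both rare events dominate their groups' gradient signal.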
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning with Verifiable Rewards
Prompt Selection
Training Stability
Transfer Performance
Rare-Event Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rare-Event Amplification
Bidirectional Pairing
Prompt-Efficient RLVR
Weighted GRPO
Positive-Negative Prompt Selection
Xin Sheng
Beijing University of Posts and Telecommunications
Jiaxin Li
Sichuan Agricultural University
Yujuan Pang
Beijing University of Posts and Telecommunications
Ran Peng
Sichuan Agricultural University
Yong Ma
Wuhan University
Infrared image processing, remote sensing