When Right Meets Wrong: Bilateral Context Conditioning with Reward-Confidence Correction for GRPO

📅 2026-03-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a key limitation in existing Group Relative Policy Optimization (GRPO) methods, which overlook the contrastive structure between correct and incorrect reasoning paths within the same group and thereby fail to leverage inter-sample comparison signals. To overcome this, we propose the Bilateral Context Conditioning (BICC) mechanism, which explicitly cross-references successful and failed reasoning trajectories during policy optimization. Additionally, we introduce Reward-Confidence Correction (RCC), which dynamically adjusts the advantage baseline using reward-confidence covariance. Both components require no additional sampling or auxiliary models and can be seamlessly integrated into various GRPO variants. Extensive experiments across multiple mathematical reasoning benchmarks demonstrate consistent performance improvements under diverse model architectures and algorithmic configurations.

📝 Abstract
Group Relative Policy Optimization (GRPO) has emerged as an effective method for training reasoning models. While it computes advantages relative to the group mean, GRPO treats each output as an independent sample during optimization and overlooks a vital structural signal: the natural contrast between correct and incorrect solutions within the same group. It thus ignores the rich comparative information that could be exploited by explicitly pitting successful reasoning traces against failed ones. To capitalize on this, we present a contrastive reformulation of GRPO, showing that the GRPO objective implicitly maximizes the margin between the policy ratios of correct and incorrect samples. Building on this insight, we propose Bilateral Context Conditioning (BICC), a mechanism that allows the model to cross-reference successful and failed reasoning traces during optimization, enabling direct information flow across samples. We further introduce Reward-Confidence Correction (RCC) to stabilize training by dynamically adjusting the advantage baseline in GRPO using the reward-confidence covariance derived from a first-order approximation of the variance-minimizing estimator. Both mechanisms require no additional sampling or auxiliary models and can be adapted to all GRPO variants. Experiments on mathematical reasoning benchmarks demonstrate consistent improvements across a comprehensive range of models and algorithms. Code is available at \href{https://github.com/Skylanding/BiCC}{https://github.com/Skylanding/BiCC}.
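To make the baseline-correction idea concrete, here is a minimal sketch of group-relative advantages with a covariance-based, variance-minimizing baseline shift. The standard GRPO normalization (reward minus group mean, divided by group standard deviation) is well established; the control-variate coefficient `beta = Cov(r, c) / Var(c)` applied to per-sample confidences `c` is an illustrative reading of the RCC idea, not the paper's exact estimator, and the function name and signature are hypothetical.

```python
import numpy as np

def grpo_advantages(rewards, confidences, eps=1e-8):
    """Group-relative advantages with a covariance-corrected baseline.

    Standard GRPO: A_i = (r_i - mean(r)) / std(r).
    Illustrative RCC-style correction: shift each sample's baseline by a
    control-variate term beta * (c_i - mean(c)), where
    beta = Cov(r, c) / Var(c) is the (first-order) variance-minimizing
    coefficient. With beta = 0 this reduces to plain GRPO.
    """
    r = np.asarray(rewards, dtype=float)
    c = np.asarray(confidences, dtype=float)  # e.g. mean token log-probs
    beta = np.cov(r, c, bias=True)[0, 1] / (np.var(c) + eps)
    baseline = r.mean() + beta * (c - c.mean())
    adv = r - baseline
    return adv / (r.std() + eps)
```

Because the correction term is mean-zero over the group, the corrected advantages still sum to zero, so the baseline shift redistributes credit within the group without biasing the policy gradient.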
Problem

Research questions and friction points this paper is trying to address.

Group Relative Policy Optimization
contrastive learning
reasoning models
policy optimization
reward-confidence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bilateral Context Conditioning
Reward-Confidence Correction
Group Relative Policy Optimization
contrastive learning
policy optimization
Yu Li
Department of Electrical and Computer Engineering, George Washington University
Tian Lan
George Washington University
Machine Learning · Optimization · Cyber Security
Zhengling Qi
School of Business, George Washington University