🤖 AI Summary
This work addresses a key limitation of existing Group Relative Policy Optimization (GRPO) methods: they overlook the contrastive structure between correct and incorrect reasoning paths within the same group and therefore fail to exploit inter-sample comparison signals. To overcome this, we propose Bilateral Context Conditioning (BICC), a mechanism that explicitly cross-references successful and failed reasoning trajectories during policy optimization. We additionally introduce Reward-Confidence Correction (RCC), which dynamically adjusts the advantage baseline using the covariance between rewards and model confidence. Both components require no additional sampling or auxiliary models and can be seamlessly integrated into existing GRPO variants. Extensive experiments on multiple mathematical reasoning benchmarks demonstrate consistent improvements across diverse model architectures and algorithmic configurations.
📝 Abstract
Group Relative Policy Optimization (GRPO) has emerged as an effective method for training reasoning models. However, while GRPO computes advantages relative to the group mean, it treats each output as an independent sample during optimization and overlooks a vital structural signal: the natural contrast between correct and incorrect solutions within the same group, leaving unused the comparative information that comes from explicitly pitting successful reasoning traces against failed ones. To capitalize on this, we present a contrastive reformulation of GRPO, showing that the GRPO objective implicitly maximizes the margin between the policy ratios of correct and incorrect samples. Building on this insight, we propose Bilateral Context Conditioning (BICC), a mechanism that allows the model to cross-reference successful and failed reasoning traces during optimization, enabling a direct information flow across samples. We further introduce Reward-Confidence Correction (RCC), which stabilizes training by dynamically adjusting the advantage baseline in GRPO using the reward-confidence covariance derived from a first-order approximation of the variance-minimizing estimator. Both mechanisms require no additional sampling or auxiliary models and can be adapted to any GRPO variant. Experiments on mathematical reasoning benchmarks demonstrate consistent improvements across a broad range of models and algorithms. Code is available at \href{https://github.com/Skylanding/BiCC}{https://github.com/Skylanding/BiCC}.
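The two mechanisms are described only at a high level above, so the following is a minimal NumPy sketch of the quantities the abstract refers to: a group-relative advantage with a covariance-adjusted baseline (the RCC idea) and the policy-ratio margin between correct and incorrect samples that the contrastive reformulation highlights. The confidence proxy (sequence log-probability), the control-variate form of the correction, and the function names are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np


def group_relative_advantages(rewards, log_probs, use_rcc=True):
    """GRPO-style advantages for one group of sampled responses, with an
    optional covariance-based baseline correction in the spirit of RCC.
    Illustrative sketch only; not the authors' exact estimator."""
    rewards = np.asarray(rewards, dtype=np.float64)
    log_probs = np.asarray(log_probs, dtype=np.float64)

    baseline = rewards.mean()                       # vanilla GRPO group-mean baseline
    if use_rcc and rewards.size > 1:
        conf = log_probs - log_probs.mean()         # centered "confidence" (assumed proxy)
        cov = np.mean((rewards - baseline) * conf)  # reward-confidence covariance
        var = np.mean(conf ** 2) + 1e-8
        # Control-variate-style first-order correction of the baseline
        # (one plausible reading of the abstract's description).
        baseline = baseline + (cov / var) * conf    # per-sample adjusted baseline

    return (rewards - baseline) / (rewards.std() + 1e-8)  # normalized advantages


def ratio_margin(ratios, correct_mask):
    """Mean policy-ratio gap between correct and incorrect samples in a group,
    i.e. the margin the contrastive reformulation says GRPO implicitly enlarges."""
    ratios = np.asarray(ratios, dtype=np.float64)
    correct = np.asarray(correct_mask, dtype=bool)
    if correct.all() or not correct.any():
        return 0.0                                  # undefined for one-sided groups
    return ratios[correct].mean() - ratios[~correct].mean()
```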