When Right Meets Wrong: Bilateral Context Conditioning with Reward-Confidence Correction for GRPO

2026-03-13
Artificial Intelligence
AI summary

The authors identify that the existing Group Relative Policy Optimization (GRPO) method doesn't fully use the differences between correct and incorrect reasoning outcomes within the same group. They show that GRPO is actually trying to separate these outcomes but isn't directly using this contrast during training. To fix this, they propose Bilateral Context Conditioning (BICC), which lets the model compare successful and failed attempts directly. They also introduce Reward-Confidence Correction (RCC) to make training more stable by adjusting how advantages are calculated. Their methods improve performance on math reasoning tests without needing extra data or models.

Group Relative Policy Optimization (GRPO), contrastive learning, policy optimization, advantage estimation, Bilateral Context Conditioning (BICC), Reward-Confidence Correction (RCC), reinforcement learning, reasoning models, training stability, mathematical reasoning benchmarks
Authors
Yu Li, Tian Lan, Zhengling Qi
Abstract
Group Relative Policy Optimization (GRPO) has emerged as an effective method for training reasoning models. While it computes advantages relative to the group mean, GRPO treats each output as an independent sample during optimization and overlooks a vital structural signal: the natural contrast between correct and incorrect solutions within the same group, thereby ignoring comparative information that could be leveraged by explicitly pitting successful reasoning traces against failed ones. To capitalize on this, we present a contrastive reformulation of GRPO, showing that the GRPO objective implicitly maximizes the margin between the policy ratios of correct and incorrect samples. Building on this insight, we propose Bilateral Context Conditioning (BICC), a mechanism that allows the model to cross-reference successful and failed reasoning traces during optimization, enabling a direct information flow across samples. We further introduce Reward-Confidence Correction (RCC), which stabilizes training by dynamically adjusting the advantage baseline in GRPO using the reward-confidence covariance derived from a first-order approximation of the variance-minimizing estimator. Both mechanisms require no additional sampling or auxiliary models and can be adapted to all GRPO variants. Experiments on mathematical reasoning benchmarks demonstrate consistent improvements across a range of models and algorithms. Code is available at https://github.com/Skylanding/BiCC.
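To make the group-relative advantage the abstract refers to concrete, here is a minimal sketch of the standard GRPO advantage computation (the function name and `eps` smoothing term are illustrative choices, not taken from the paper; BICC and RCC build on top of this baseline estimate):

```python
import statistics

def group_relative_advantages(rewards, eps=1e-8):
    """Standard GRPO-style advantages: center each sampled reward on the
    group mean and scale by the group standard deviation."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# A group mixing correct (reward 1.0) and incorrect (reward 0.0) samples:
advs = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
# Correct samples receive positive advantages and incorrect ones negative;
# this implicit correct-vs-incorrect contrast within a group is what the
# paper's contrastive reformulation makes explicit.
```

Note that each output is scored only against the group statistics, not against any other individual sample, which is the independence assumption the abstract criticizes.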