AI summary
The authors study how to improve Group Relative Policy Optimization (GRPO), a method used to teach language models to solve math problems. They find that GRPO restricts exploration of new solutions too rigidly and treats all training problems as equally valuable, even though moderately difficult ones teach the model the most. To fix this, they introduce Exploration-Prioritized Policy Optimization (EXPO), which relaxes or tightens a regularization penalty based on the model's current accuracy and samples problems of moderate difficulty more often. Their experiments show that EXPO helps models perform better on challenging math benchmarks, especially when generating multiple solution attempts per problem. This suggests that EXPO lets the model explore more effectively without increasing the cost of running it.
Reinforcement Learning, Verifiable Rewards, Group Relative Policy Optimization, KL Penalty, Curriculum Sampling, Mathematical Reasoning, Policy Exploration, Accuracy-Conditioned KL Scaling, Gaussian Curriculum Sampling, Pass@k
Authors
Mingxiong Lin, Zhangquan Gong, Maowen Tang, Qian Li, Chuangchuang Wang, Jian Ma, Sutian Huang, Kai Tang, Haonan Lu
Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has become the standard paradigm for LLM mathematical reasoning, with Group Relative Policy Optimization (GRPO) as the mainstream algorithm. We point out two understudied inefficiencies in GRPO. First, the fixed KL penalty coefficient overly restricts policy exploration at stages where the model needs to deviate significantly from the reference policy. Second, uniform sampling of training questions ignores the fact that moderately difficult problems provide the most informative gradient signals for optimization. We propose Exploration-Prioritized Policy Optimization (EXPO) with two lightweight plug-in modules. Accuracy-Conditioned KL Scaling (AKL) dynamically adjusts the KL regularization strength through a smooth nonlinear function of batch average accuracy, relaxing the penalty when the model underperforms and strengthening it when the model achieves good results. Gaussian Curriculum Sampling (GCS) assigns sampling weights to questions following a Gaussian distribution centered at a moderate accuracy of 0.5, focusing training on the model's learning frontier. We conduct extensive experiments on DeepSeek-R1-Distill-Qwen-1.5B and Qwen3-8B-Base over six mathematical reasoning benchmarks. The results show that EXPO consistently surpasses vanilla GRPO: it obtains an absolute gain of 13.34 points on AIME 2025 pass@32, rising from 63.33% to 76.67%, and achieves an average pass@32 improvement of 2.66 points on the 8B model. The much larger gains on pass@32 than on pass@1 demonstrate that EXPO effectively enlarges the model's exploration boundary under a fixed inference cost budget.
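The abstract describes the behavior of the two modules but not their exact functional forms. The following minimal Python sketch illustrates one plausible reading: a sigmoid-shaped scaling of the KL coefficient for AKL and an unnormalized Gaussian over per-question accuracy for GCS. The constants beta_base, k, midpoint, and sigma are illustrative assumptions, not values from the paper.

```python
import numpy as np

def akl_coefficient(batch_accuracy, beta_base=0.04, k=10.0, midpoint=0.5):
    """Accuracy-Conditioned KL Scaling (AKL), sketched: scale the KL
    penalty coefficient by a smooth nonlinear function of batch average
    accuracy. The sigmoid form and all constants are assumptions; the
    abstract only states the penalty is relaxed when the model
    underperforms and strengthened when it does well."""
    # Sigmoid rises from near 0 (weak penalty, freer exploration) toward
    # 1 (full penalty) as accuracy passes the midpoint.
    scale = 1.0 / (1.0 + np.exp(-k * (batch_accuracy - midpoint)))
    return beta_base * scale

def gcs_weights(question_accuracies, mu=0.5, sigma=0.2):
    """Gaussian Curriculum Sampling (GCS), sketched: weight each training
    question by a Gaussian in its current accuracy, centered at moderate
    difficulty (mu = 0.5). sigma is an assumed bandwidth."""
    acc = np.asarray(question_accuracies, dtype=float)
    w = np.exp(-0.5 * ((acc - mu) / sigma) ** 2)
    return w / w.sum()  # normalize into a sampling distribution

# Questions near 50% accuracy (the learning frontier) dominate the batch;
# nearly solved (0.9) and nearly unsolved (0.05) questions are down-weighted.
print(gcs_weights([0.05, 0.3, 0.5, 0.7, 0.9]))
# Relaxed KL penalty at low accuracy vs. strengthened at high accuracy.
print(akl_coefficient(0.2), akl_coefficient(0.8))
```

Under these assumptions, both modules stay lightweight plug-ins: AKL only rescales the existing KL coefficient from batch statistics, and GCS only reweights the question sampler, so neither changes the GRPO update itself or the inference cost.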