Verifier-Free RL for LLMs via Intrinsic Gradient-Norm Reward

2026-05-11 · Machine Learning

Machine Learning · Artificial Intelligence
AI summary

The authors propose VIGOR, a new way to improve language models without needing external reward signals or labels. Instead, VIGOR measures how well each sampled output fits the model's current behavior by looking at the gradients of the model's own predictions: outputs that induce smaller gradient norms are rewarded more, on the premise that they align better with what the model already knows. Trained this way on math data, the model improves on mathematical reasoning and even transfers some of the gains to code tasks. Overall, VIGOR offers a simpler and more stable way to fine-tune models using their internal feedback.
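
To make the mechanism concrete, here is a minimal sketch of how such an intrinsic score could be computed for one sampled completion, assuming a Hugging Face-style causal LM; the function name and the restriction of the loss to completion tokens are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def intrinsic_grad_norm(model, prompt_ids, completion_ids):
    """l2 norm of the teacher-forced NLL gradient for one sampled completion.

    A smaller norm is read as the completion fitting the current policy better,
    so it would receive a higher intrinsic reward within its group.
    """
    model.zero_grad(set_to_none=True)
    input_ids = torch.cat([prompt_ids, completion_ids]).unsqueeze(0)
    labels = input_ids.clone()
    labels[:, : prompt_ids.numel()] = -100   # score only the completion tokens
    loss = model(input_ids=input_ids, labels=labels).loss  # mean NLL under teacher forcing
    loss.backward()
    squared = sum(p.grad.pow(2).sum() for p in model.parameters() if p.grad is not None)
    return squared.sqrt().item()
```

Note that the mean reduction over completion tokens is what introduces the length bias the abstract corrects with a $\sqrt{T}$ scaling.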

Reinforcement Learning · Large Language Models · Gradient Norm · Reward Function · Intrinsic Reward · Policy Optimization · Negative Log-Likelihood · Cross-Domain Transfer · Mathematical Reasoning · Code Generation
Authors
Xuexiang Wen, Hang Yu, Linchao Zhu, Gaoang Wang
Abstract
While Reinforcement Learning with Verifiable Rewards (RLVR) has recently emerged as a promising post-training paradigm for Large Language Models (LLMs), its dependence on gold labels or domain-specific verifiers limits its scalability to new tasks and domains. In this work, we propose Verifier-free Intrinsic Gradient-Norm Reward (VIGOR), a simple reward that uses only the policy model itself. Given a prompt, VIGOR samples a group of completions and assigns higher within-group rewards to outputs that induce smaller $\ell_2$ norms of the teacher-forced negative log-likelihood gradients under the current parameters. Intuitively, a lower gradient norm suggests that the completion aligns better with the current policy, serving as an intrinsic preference signal for policy optimization. To make this intrinsic signal practical for RL, we correct the systematic length bias of averaged token-level gradients with a $\sqrt{T}$ scaling and apply group-wise rank shaping to stabilize reward scales across prompts. Across mathematical reasoning benchmarks, VIGOR outperforms the state-of-the-art Reinforcement Learning from Internal Feedback (RLIF) baseline, and it also exhibits cross-domain transfer to code benchmarks when trained only on math data. For instance, on Qwen2.5-7B-Base post-trained on MATH, VIGOR improves average math accuracy by +3.31% and average code accuracy by +1.91% over this baseline, while exhibiting more stable training dynamics. The code is available at https://github.com/ZJUSCL/VIGOR.
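
As a rough illustration of the reward shaping described in the abstract, the sketch below multiplies each averaged gradient norm by $\sqrt{T}$ to undo the length bias and then converts the corrected norms into rank-based rewards within a prompt's group. The direction of the $\sqrt{T}$ correction and the linear rank-to-[0, 1] mapping are assumptions made for illustration; the paper's exact shaping may differ.

```python
import numpy as np

def vigor_rewards(grad_norms, lengths):
    """Turn raw per-completion gradient norms into within-group rewards.

    grad_norms : averaged token-level NLL gradient norms for one prompt's group
    lengths    : completion lengths T, used for the sqrt(T) length correction
    """
    g = np.asarray(grad_norms, dtype=float)
    T = np.asarray(lengths, dtype=float)
    corrected = g * np.sqrt(T)          # undo the bias from averaging over T tokens
    order = corrected.argsort()         # smallest corrected norm ranks first
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(g))
    # map ranks to [0, 1]; the completion with the smallest norm gets reward 1
    return 1.0 - ranks / max(len(g) - 1, 1)
```

Rank shaping keeps rewards on the same scale for every prompt regardless of the absolute gradient magnitudes, which is what the abstract credits for the more stable training dynamics.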