Demystifying OPD: Length Inflation and Stabilization Strategies for Large Language Models

2026-04-09

Computation and Language · Machine Learning
AI summary

The authors study a problem in on-policy distillation, where a student model learns from a teacher while training on data the student itself generates. They find that the student's own rollouts can become dominated by long, repetitive sequences that exceed the length limit and get truncated, which destabilizes training and degrades performance. To fix this, the authors propose StableOPD, a method that constrains these repetitive patterns and mixes different sources of training rollouts to keep learning steady. They show that their method improves both stability and accuracy on several math reasoning tasks.

on-policy distillation, student model, teacher supervision, rollouts, trajectory truncation, training instability, divergence constraint, mixture distillation, math reasoning datasets, gradient bias
Authors
Feng Luo, Yu-Neng Chuang, Guanchu Wang, Zicheng Xu, Xiaotian Han, Tianyi Zhang, Vladimir Braverman
Abstract
On-policy distillation (OPD) trains student models under their own induced distribution while leveraging supervision from stronger teachers. We identify a failure mode of OPD: as training progresses, on-policy rollouts can undergo abrupt length inflation, causing truncated trajectories to dominate the training data. This truncation collapse coincides with abrupt repetition saturation and induces biased gradient signals, leading to severe training instability and sharp degradation in validation performance. We attribute this problem to the interaction between student-induced data collection and the distillation objective, which implicitly favors long and repetitive rollouts. To address this issue, we propose StableOPD, a stabilized OPD framework that combines a reference-based divergence constraint with rollout mixture distillation. Together, these components mitigate repetition-induced length inflation and stabilize OPD training. Across multiple math reasoning datasets, our approach prevents truncation collapse, stabilizes training dynamics, and improves performance by 7.2% on average.
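As a rough illustration of the two ingredients the abstract names, the sketch below combines a distillation term with a reference-based divergence penalty and a rollout-mixing helper. Everything here is an assumption for exposition: the function names, the specific KL forms, and the coefficients `beta` and `alpha` are hypothetical and are not taken from the paper's actual objective.

```python
import numpy as np

def kl(p, q):
    """KL divergence between two discrete distributions (assumed strictly positive)."""
    return float(np.sum(p * np.log(p / q)))

def stable_opd_loss(student, teacher, reference, beta=0.1):
    """Hypothetical StableOPD-style objective: distill the student toward the
    teacher on student-generated rollouts, plus a penalty on drift from a
    frozen reference policy to curb repetition-driven length inflation."""
    distill = kl(student, teacher)        # on-policy distillation term
    constraint = kl(student, reference)   # reference-based divergence constraint
    return distill + beta * constraint

def mix_rollouts(student_rollouts, teacher_rollouts, alpha=0.5, seed=0):
    """Hypothetical rollout mixture distillation: draw each training
    trajectory from the student with probability alpha, else the teacher."""
    rng = np.random.default_rng(seed)
    return [s if rng.random() < alpha else t
            for s, t in zip(student_rollouts, teacher_rollouts)]
```

When the student matches the reference, the constraint vanishes and only the distillation term remains; as the student drifts (e.g., toward degenerate repetitive modes), the penalty grows, which is the intuition behind the stabilization the abstract describes.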