Stepwise Credit Assignment for GRPO on Flow-Matching Models

2026-03-30 · Machine Learning

Machine Learning · Artificial Intelligence · Computer Vision and Pattern Recognition
AI summary

The authors study how reinforcement learning is applied to flow-matching models, which generate images step by step much like diffusion models. They note that previous methods gave equal importance to every step, even though early steps shape the big picture and later steps add details. To fix this, the authors propose Stepwise-Flow-GRPO, which rewards each step based on how much it improves the expected outcome, leading to better sample efficiency and faster convergence. They also introduce a DDIM-inspired variant of the sampling process that yields more accurate intermediate reward estimates while keeping the randomness needed for policy-gradient training.

Reinforcement Learning · Flow Models · Diffusion Models · Credit Assignment · Tweedie's Formula · Policy Gradient · DDIM · Stochastic Differential Equations · Stepwise Reward
Authors
Yash Savani, Branislav Kveton, Yuchen Liu, Yilin Wang, Jing Shi, Subhojyoti Mukherjee, Nikos Vlassis, Krishna Kumar Singh
Abstract
Flow-GRPO successfully applies reinforcement learning to flow models, but uses uniform credit assignment across all steps. This ignores the temporal structure of diffusion generation: early steps determine composition and content (low-frequency structure), while late steps resolve details and textures (high-frequency details). Moreover, assigning uniform credit based solely on the final image can inadvertently reward suboptimal intermediate steps, especially when errors are corrected later in the diffusion trajectory. We propose Stepwise-Flow-GRPO, which assigns credit based on each step's reward improvement. By leveraging Tweedie's formula to obtain intermediate reward estimates and introducing gain-based advantages, our method achieves superior sample efficiency and faster convergence. We also introduce a DDIM-inspired SDE that improves reward quality while preserving stochasticity for policy gradients.
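The gain-based credit idea in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes a rectified-flow convention for the one-step clean-sample estimate (the paper uses Tweedie's formula, whose exact form depends on the noise schedule) and assumes GRPO-style whitening of the per-step gains; `predict_clean` and `gain_advantages` are hypothetical names.

```python
import numpy as np

def predict_clean(x_t, v_t, t):
    """Hypothetical one-step clean-sample estimate for a rectified-flow
    model with linear path x_t = (1 - t) * x_0 + t * noise, where the
    network predicts the velocity v_t ~ noise - x_0, so x_0 ~ x_t - t * v_t.
    Illustrative stand-in for the Tweedie estimate used in the paper."""
    return x_t - t * v_t

def gain_advantages(step_rewards, eps=1e-8):
    """Gain-based credit assignment: each denoising step's advantage is
    how much the intermediate reward estimate improved at that step.
    step_rewards[k] is the reward of the clean-sample estimate after
    step k (step_rewards[0] is the estimate before any step).
    Whitening across the trajectory is an assumption, mirroring GRPO's
    group-normalized advantages."""
    r = np.asarray(step_rewards, dtype=float)
    gains = np.diff(r)  # gains[k] = r[k+1] - r[k], improvement from step k
    return (gains - gains.mean()) / (gains.std() + eps)

# Example: a trajectory whose early step contributes the largest gain
adv = gain_advantages([0.1, 0.4, 0.5, 0.55])
```

Under uniform credit assignment every step in this trajectory would receive the same advantage from the final reward 0.55; with gain-based credit the first step, which improved the reward estimate most, receives the largest advantage.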