The two clocks and the innovation window: When and how generative models learn rules

2026-05-11 · Machine Learning

Machine Learning · Artificial Intelligence · Computational Complexity
AI summary

The authors study when generative models learn rules versus memorize data by tracking two key moments in training: when the model first generates rule-following outputs and when it begins copying exact training examples. They find that the time to learn a rule depends on the rule's complexity and the model's capacity, while the onset of memorization depends mostly on the size of the training dataset. The gap between these two times, which they call the "innovation window," marks the regime in which a model produces novel, rule-valid outputs rather than memorized ones. The same two-clock behavior appears in both diffusion and autoregressive models, and the authors connect these findings to an evolution of the model's optimization landscape.

generative models · score-matching · next-token prediction · rule complexity · model capacity · memorization · innovation window · diffusion models · autoregressive models · optimization landscape
Authors
Binxu Wang, Emma Lucia Byrnes Finn, Bingbin Liu
Abstract
Generative models trained on finite data face a fundamental tension: their score-matching or next-token objective converges to the empirical training distribution rather than the population distribution we seek to learn. Using synthetic rule-based tasks, we trace this tension across two training timescales: $\tau_{\mathrm{rule}}$, the step at which generations first become rule-valid, and $\tau_{\mathrm{mem}}$, the step at which models begin reproducing training samples. Focusing on parity and extending to other binary rules and combinatorial puzzles, we characterize how these two clocks, $\tau_{\mathrm{rule}}$ and $\tau_{\mathrm{mem}}$, depend on key aspects of the learning setup. Specifically, we show that $\tau_{\mathrm{rule}}$ increases with rule complexity and decreases with model capacity, while $\tau_{\mathrm{mem}}$ is approximately invariant to the rule and scales nearly linearly with dataset size $N$. We define the \emph{innovation window} as the interval $[\tau_{\mathrm{rule}}, \tau_{\mathrm{mem}}]$. This window widens with increasing $N$, narrows with rule complexity, and may vanish entirely when $\tau_{\mathrm{rule}} \geq \tau_{\mathrm{mem}}$. The same two-clock structure arises in both diffusion (DiT) and autoregressive (GPT) models, with architecture-dependent offsets. Dissecting the learned score of DiT models reveals a corresponding evolution of the optimization landscape: the basins of rule-valid samples expand substantially around $\tau_{\mathrm{rule}}$, while the basins of individual training samples begin to dominate around $\tau_{\mathrm{mem}}$. Together, these results yield a unified and predictive account of when and how generative models exhibit genuine innovation.
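The two clocks in the abstract can be made concrete with a small sketch. Assuming we log sampled generations at each training step, $\tau_{\mathrm{rule}}$ is the first step at which (most) generations satisfy the rule, and $\tau_{\mathrm{mem}}$ is the first step at which generations start duplicating training samples. The function names, the even-parity rule, and the 90% validity threshold below are illustrative assumptions, not the paper's actual code or criteria.

```python
def parity_valid(x):
    """Rule check for the parity task: bit-string x has even parity."""
    return sum(x) % 2 == 0

def two_clocks(generations_by_step, train_set, rule=parity_valid, frac=0.9):
    """Estimate tau_rule, tau_mem, and the innovation window.

    generations_by_step: dict mapping training step -> list of sampled
        bit-strings (tuples); train_set: set of training bit-strings.
    """
    tau_rule = tau_mem = None
    for step in sorted(generations_by_step):
        gens = generations_by_step[step]
        # tau_rule: first step where at least `frac` of samples are rule-valid.
        if tau_rule is None and sum(map(rule, gens)) >= frac * len(gens):
            tau_rule = step
        # tau_mem: first step where any sample reproduces a training example.
        if tau_mem is None and any(g in train_set for g in gens):
            tau_mem = step
    # The innovation window [tau_rule, tau_mem] exists only if rule
    # learning precedes memorization.
    window = None
    if tau_rule is not None and tau_mem is not None and tau_rule < tau_mem:
        window = (tau_rule, tau_mem)
    return tau_rule, tau_mem, window

# Toy trace: samples become rule-valid at step 2, memorized at step 3.
train = {(1, 1, 0, 0), (0, 0, 1, 1)}
trace = {
    1: [(1, 0, 0, 0), (0, 1, 0, 0)],  # odd parity: neither clock fires
    2: [(1, 0, 1, 0), (0, 1, 0, 1)],  # even parity, novel: tau_rule = 2
    3: [(1, 1, 0, 0), (1, 0, 1, 0)],  # duplicates a training sample: tau_mem = 3
}
print(two_clocks(trace, train))  # -> (2, 3, (2, 3))
```

Under this toy criterion, shrinking `train` (smaller $N$) would tend to make duplicates appear earlier, pushing $\tau_{\mathrm{mem}}$ down and narrowing the window, in line with the abstract's near-linear scaling of $\tau_{\mathrm{mem}}$ with $N$.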