Learning, Fast and Slow: Towards LLMs That Adapt Continually
2026-05-12 • Machine Learning
Machine Learning · Artificial Intelligence
AI summary
The authors propose a way for large language models (LLMs) to learn using two speeds: slow changes to the model's parameters and fast adjustments in the context it sees during use. This method, called Fast-Slow Training (FST), lets the model quickly adapt to tasks without forgetting what it learned before, unlike traditional training that changes parameters and risks losing old knowledge. They show that FST learns tasks more efficiently, performs better overall, and keeps the model flexible to learn new tasks later. This approach mimics how humans might learn at different rates for different types of thinking.
Large Language Models · In-Context Learning · Parameter Update · Catastrophic Forgetting · Plasticity · Fast-Slow Learning · KL Divergence · Reinforcement Learning · Continual Learning · Prompt Optimization
Authors
Rishabh Tiwari, Kusha Sareen, Lakshya A Agrawal, Joseph E. Gonzalez, Matei Zaharia, Kurt Keutzer, Inderjit S Dhillon, Rishabh Agarwal, Devvrit Khatri
Abstract
Large language models (LLMs) are trained for downstream tasks by updating their parameters (e.g., via RL). However, updating parameters forces them to absorb task-specific information, which can result in catastrophic forgetting and loss of plasticity. In contrast, in-context learning with fixed LLM parameters can cheaply and rapidly adapt to task-specific requirements (e.g., prompt optimization), but typically cannot by itself match the performance gains available through updating LLM parameters. There is no fundamental reason to restrict learning to being either in-context or in-weights; moreover, humans likely learn at different time scales as well (e.g., System 1 vs. System 2). To this end, we introduce a fast-slow learning framework for LLMs, with model parameters as "slow" weights and optimized context as "fast" weights. These fast "weights" can learn from textual feedback to absorb task-specific information, allowing the slow weights to stay closer to the base model and preserve general reasoning behaviors. Fast-Slow Training (FST) is up to 3x more sample-efficient than slow-only learning (RL) across reasoning tasks, while consistently reaching a higher performance asymptote. Moreover, FST-trained models remain closer to the base LLM (up to 70% less KL divergence), resulting in less catastrophic forgetting than RL training. This reduced drift also preserves plasticity: after training on one task, FST-trained models adapt more effectively to a subsequent task than parameter-only trained models. In continual learning scenarios, where task domains change on the fly, FST continues to acquire each new task while parameter-only RL stalls.
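The paper's actual FST algorithm is not reproduced here, but the two-timescale structure the abstract describes can be sketched as a toy loop: a "fast" update that accumulates task-specific information as textual context (parameters untouched), interleaved with a small "slow" parameter step. In this minimal sketch, a scalar parameter stands in for model weights and a signed error for the reward signal; the function name `fast_slow_step_loop` and all specifics are illustrative assumptions, not the authors' implementation.

```python
def fast_slow_step_loop(theta, target, steps=20, slow_lr=0.1, window=3):
    """Toy sketch of a fast-slow training loop (illustrative only).

    theta   -- scalar stand-in for the model's "slow" weights
    target  -- task-specific value theta should approach
    window  -- how many textual hints the "fast" context retains
    """
    context = []                          # fast "weights": optimized context
    for _ in range(steps):
        error = target - theta            # stand-in for a reward signal
        # Fast update: absorb task-specific feedback as text,
        # leaving the parameters themselves untouched.
        context.append(f"hint: output was off by {error:+.3f}")
        context = context[-window:]       # bounded, continually rewritten
        # Slow update: small parameter step, so the weights drift
        # less from the base model than an RL-only update would.
        theta += slow_lr * error
    return theta, context

theta, ctx = fast_slow_step_loop(theta=0.0, target=1.0)
```

Because the fast context absorbs most of the task-specific signal, the slow learning rate can stay small, which is the mechanism the abstract credits for reduced KL drift from the base model and preserved plasticity.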