Stop Probing, Start Coding: Why Linear Probes and Sparse Autoencoders Fail at Compositional Generalisation

2026-03-30

Machine Learning
AI summary

The authors studied how neural networks represent complex concepts as combinations of simpler parts. They found that a common method called sparse autoencoders (SAEs), which uses a fixed encoder to quickly guess these parts, often fails when facing new kinds of data. Their experiments showed that the problem isn't the fast guessing itself but learning the right building blocks (called dictionaries). If the dictionary is good, the problem becomes much easier to solve. The authors therefore suggest that better ways of learning these dictionaries are the key to improving such models.

neural network activations, linear representation hypothesis, superposition, sparse coding, sparse autoencoders, dictionary learning, compressed sensing, amortized inference, out-of-distribution generalization, FISTA
Authors
Vitória Barin Pacela, Shruti Joshi, Isabela Camacho, Simon Lacoste-Julien, David Klindt
Abstract
The linear representation hypothesis states that neural network activations encode high-level concepts as linear mixtures. However, under superposition, this encoding is a projection from a higher-dimensional concept space into a lower-dimensional activation space, and a linear decision boundary in the concept space need not remain linear after projection. In this setting, classical sparse coding methods with per-sample iterative inference leverage compressed sensing guarantees to recover latent factors. Sparse autoencoders (SAEs), on the other hand, amortise sparse inference into a fixed encoder, introducing a systematic gap. We show this amortisation gap persists across training set sizes, latent dimensions, and sparsity levels, causing SAEs to fail under out-of-distribution (OOD) compositional shifts. Through controlled experiments that decompose the failure, we identify dictionary learning -- not the inference procedure -- as the binding constraint: SAE-learned dictionaries point in substantially wrong directions, and replacing the encoder with per-sample FISTA on the same dictionary does not close the gap. An oracle baseline proves the problem is solvable with a good dictionary at all scales tested. Our results reframe the SAE failure as a dictionary learning challenge, not an amortisation problem, and point to scalable dictionary learning as the key open problem for sparse inference under superposition.
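The contrast in the abstract between per-sample iterative inference and an amortised encoder can be made concrete with a toy sketch. The snippet below is an illustrative example, not the paper's code: it builds a random unit-norm dictionary in superposition (more concepts than activation dimensions), generates a sparse ground-truth code, and compares per-sample FISTA inference against a one-shot tied-weight encoder of the form ReLU(Dᵀx), a common simplification of an SAE encoder. All names, dimensions, and the regularisation weight are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy superposition setup: m concepts projected into d < m activation dims.
m, d, k = 32, 16, 3                      # concepts, activation dims, active concepts
D = rng.normal(size=(d, m))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary columns

z_true = np.zeros(m)
z_true[rng.choice(m, k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
x = D @ z_true                           # observed activation vector

lam = 0.05                               # l1 penalty weight (demo choice)

def fista(x, D, lam, n_iter=500):
    """Per-sample sparse inference: min_z 0.5||x - Dz||^2 + lam*||z||_1."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the smooth part
    z = np.zeros(D.shape[1])
    y, t = z.copy(), 1.0
    for _ in range(n_iter):
        v = y - D.T @ (D @ y - x) / L    # gradient step
        z_new = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0)  # soft-threshold
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = z_new + ((t - 1) / t_new) * (z_new - z)              # momentum step
        z, t = z_new, t_new
    return z

z_fista = fista(x, D, lam)

# Amortised one-shot encoder in the style of a tied-weight SAE: ReLU(D^T x).
z_amortised = np.maximum(D.T @ x, 0)

obj = lambda z: 0.5 * np.sum((x - D @ z) ** 2) + lam * np.sum(np.abs(z))
print("FISTA objective:    ", obj(z_fista))
print("amortised objective:", obj(z_amortised))
print("FISTA nonzeros:", np.count_nonzero(np.abs(z_fista) > 1e-8),
      "| amortised nonzeros:", np.count_nonzero(z_amortised > 1e-8))
```

Note that both methods share the same dictionary D; the paper's point is that when D itself is learned badly, even swapping the amortised encoder for per-sample FISTA (as above) does not recover the latent factors, whereas an oracle dictionary makes the inference step easy.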