Prospective Compression in Human Abstraction Learning
2026-05-11 • Artificial Intelligence
Artificial Intelligence · Machine Learning · Neural and Evolutionary Computing
AI summary
The authors study how people learn reusable building blocks (abstractions) while solving problems that change over time. They suggest that instead of summarizing past tasks, humans forecast future tasks in order to build better tools. Using a visual pattern-construction task, they found that people adapt which abstractions they form to the evolving structure of the problems. Their results show that human learning does not match older methods that only compress past tasks, nor can it be explained by LLM-based program synthesis models.
program synthesis, library learning, abstraction, non-stationary domains, prospective compression, retrospective compression, visual program synthesis, latent curricula, large language models, inductive bias
Authors
Leonardo Hernandez Cano, Ivan Zareski, Luisa El Amouri, Pinzhe Zhao, Max Mascini, Emanuele Sansone, Yewen Pu, Bonan Zhao, Marta Kryven
Abstract
A core challenge in program synthesis is online library learning: the incremental acquisition of reusable abstractions under uncertainty about future task demands. Existing algorithms treat library learning as retrospective compression over a static task distribution, where the learned library is determined by the corpus of past tasks. However, real-world learning domains are often non-stationary, with tasks arising from a generative process that evolves over time. We propose and test the hypothesis that, in non-stationary domains, human library learning selects abstractions prospectively, targeting compression of future tasks. We study this question using the Pattern Builder Task, a visual program synthesis paradigm in which participants construct increasingly complex geometric patterns from a small set of primitives, transformations, and custom helpers that carry forward across trials. Using this task, we conduct two experiments with complementary latent curricula, designed to dissociate behaviors consistent with prospective compression from alternative library-learning accounts. Using six computational models spanning online library learning strategies, we show that human abstraction behavior reflects sensitivity to latent, non-stationary structure in the task-generating process. This behavior is consistent with prospective compression, and cannot be captured by existing retrospective-compression algorithms or by the inductive biases of LLM-based program synthesis.
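The contrast between retrospective and prospective compression can be made concrete with a toy sketch. Here tasks are strings of primitive symbols and an abstraction is a substring that gets replaced by a single new symbol; the only difference between the two strategies is which corpus the compression gain is scored against: the tasks already seen, or a forecast of upcoming tasks. This is an illustrative simplification, not the paper's models; the `past` and `future` corpora and all function names are hypothetical, and a real prospective learner would have to infer the forecast from the drifting task-generating process rather than receive it directly.

```python
def compression_gain(abstraction: str, tasks: list[str]) -> int:
    """Symbols saved by replacing each (non-overlapping) occurrence of
    `abstraction` with one new symbol, minus the cost of defining it once."""
    saved = sum(t.count(abstraction) * (len(abstraction) - 1) for t in tasks)
    return saved - len(abstraction)

def best_abstraction(tasks: list[str], min_len: int = 2, max_len: int = 4) -> str:
    """Greedy library learning: pick the substring with the highest gain."""
    candidates = {t[i:i + n]
                  for t in tasks
                  for n in range(min_len, max_len + 1)
                  for i in range(len(t) - n + 1)}
    return max(candidates, key=lambda a: compression_gain(a, tasks))

past = ["abab", "abba", "aabb"]        # tasks solved so far
future = ["cdcd", "cdabcd", "cdcdcd"]  # hypothetical forecast of the drifting curriculum

retro = best_abstraction(past)    # retrospective: compresses the past corpus -> "ab"
prosp = best_abstraction(future)  # prospective: compresses forecast tasks   -> "cd"
```

When the task distribution drifts (here, from `ab`-heavy to `cd`-heavy patterns), the two objectives diverge: the retrospective learner keeps investing in `"ab"`, while the prospective learner builds the helper that will pay off on what comes next.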