Pretrained Model Representations as Acquisition Signals for Active Learning of MLIPs

2026-05-05

Machine Learning
AI summary

The authors explored ways to make training machine learning models for reactive chemistry more data-efficient, reducing the number of costly quantum chemistry calculations needed. They focused on using information already present inside a pretrained model to decide which new data points to label next, avoiding extra uncertainty machinery or additional training. By building kernels from the model's internal features, they selected more informative samples and reached target accuracy with less data. Their approach outperformed traditional acquisition methods and produced similarity measures that better reflect chemical structure and model error.

Machine Learning Interatomic Potentials, Active Learning, Pretrained Models, Neural Tangent Kernel, Latent Space, Transition States, Quantum Chemistry, Reactive Chemistry, Model Uncertainty, Data Acquisition
Authors
Eszter Varga-Umbrich, Shikha Surana, Paul Duckworth, Jules Tilly, Olivier Peltre, Zachary Weller-Davies
Abstract
Training machine learning interatomic potentials (MLIPs) for reactive chemistry is often bottlenecked by the high cost of quantum chemical labels and the scarcity of transition state configurations in candidate pools. Active learning (AL) can mitigate these costs, but its effectiveness hinges on the acquisition rule. We investigate whether the latent space of a pretrained MLIP already contains the information necessary for effective acquisition, eliminating the need for auxiliary uncertainty heads, Bayesian training and fine-tuning, or committee ensembles. We introduce two acquisition signals derived directly from a pretrained MACE potential: a finite-width neural tangent kernel (NTK) and an activation kernel built from hidden latent space features. On reactive-chemistry benchmarks, both kernels consistently outperform fixed-descriptor baselines, committee disagreement, and random acquisition, reducing the data required to reach performance targets by an average of 38% for energy error and 28% for force error. We further show that the pretrained model induces similarity spaces that preserve chemically meaningful structure and provide more reliable residual uncertainty estimates than randomly initialised or fixed-descriptor-based kernels. Our results suggest that pretraining aligns latent-space geometry with model error, yielding a practical and sufficient acquisition signal for reactive MLIP fine-tuning.
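The abstract does not spell out how a latent-space kernel turns into an acquisition rule, so here is a minimal, illustrative sketch of one common recipe: form a linear (activation) kernel from per-structure feature vectors and greedily pick the pool points with the largest posterior variance under a Gaussian-process surrogate built on that kernel. This is not the paper's exact procedure; the feature extraction from the pretrained MACE model and the finite-width NTK variant are omitted, and all function names and shapes below are assumptions for the sake of the example.

```python
import numpy as np


def activation_kernel(feats_a, feats_b):
    """Linear kernel on latent features: K[i, j] = <phi(x_i), phi(x_j)>."""
    return feats_a @ feats_b.T


def greedy_variance_acquisition(pool_feats, train_feats, n_select, jitter=1e-6):
    """Greedily select pool structures with the largest posterior variance
    under a GP surrogate whose covariance is the activation kernel.

    pool_feats:  (n_pool, d) latent features of unlabelled candidates
    train_feats: (n_train, d) latent features of already-labelled data
    """
    selected = []
    train = train_feats.copy()
    for _ in range(n_select):
        K_tt = activation_kernel(train, train) + jitter * np.eye(len(train))
        K_pt = activation_kernel(pool_feats, train)          # (n_pool, n_train)
        K_pp_diag = np.einsum("ij,ij->i", pool_feats, pool_feats)
        # Posterior variance of each pool point given the current training set:
        # var_i = K(x_i, x_i) - k_i^T K_tt^{-1} k_i
        solve = np.linalg.solve(K_tt, K_pt.T)                # (n_train, n_pool)
        var = K_pp_diag - np.einsum("ij,ji->i", K_pt, solve)
        var[selected] = -np.inf                              # never pick twice
        idx = int(np.argmax(var))
        selected.append(idx)
        train = np.vstack([train, pool_feats[idx]])
    return selected


# Toy usage with random vectors standing in for extracted latent features.
rng = np.random.default_rng(0)
pool = rng.normal(size=(500, 64))
labelled = rng.normal(size=(20, 64))
picks = greedy_variance_acquisition(pool, labelled, n_select=10)
```

Swapping the activation kernel for a finite-width NTK amounts to replacing the hidden-layer features with per-structure parameter gradients of the model output; the greedy selection loop itself is unchanged.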