Scalable Gaussian process inference via neural feature maps
2026-05-11 • Machine Learning
AI summary
The authors developed a new way to improve Gaussian process models by using neural networks to create better feature maps, which helps the model understand data more effectively. They showed that these learned features work like a smart shortcut to simplify complex calculations while keeping the model accurate. Their method also fixes common smoothing issues and works well for different types of data, like tables and images. When tested, their approach was both faster and more accurate than older methods for tasks like regression and classification.
Keywords
Gaussian process, neural feature map, kernel, reproducing kernel Hilbert space (RKHS), Gram matrix, low-rank approximation, spectral properties, product kernels, regression, classification
Authors
Anthony Stephenson
Abstract
We present a theoretically grounded Gaussian process framework that leverages neural feature maps to construct expressive kernels. We show that the learned feature map can be interpreted as an optimal low-rank approximation to a Gram matrix derived from an implied RKHS, from which we establish consistency of the GP posterior. We further analyse the spectral properties of the induced kernels and introduce product feature-map kernels to address oversmoothing. This simple yet powerful approach enables fast, scalable, and accurate exact GP inference with minimal upfront work. The flexibility of kernel design supports seamless application to both regression and classification tasks across diverse data modalities, including tabular inputs and structured domains such as images. On benchmark datasets, this approach surpasses existing methods in accuracy as well as in training and prediction efficiency.
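To make the core idea concrete, here is a minimal sketch of how a feature-map kernel enables fast exact GP regression. The "neural" feature map below is a hypothetical fixed single-layer network with random weights standing in for a trained one; with a finite feature dimension, the kernel k(x, x') = φ(x)ᵀφ(x') yields a Gram matrix of rank at most D_feat, so the posterior can be computed via the Woodbury identity at cost cubic in D_feat rather than in the number of data points. This is a generic illustration of low-rank GP inference, not the authors' exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed "neural" feature map: one tanh hidden layer with random
# weights, standing in for a trained network.
D_in, D_feat = 2, 64
W = rng.normal(size=(D_in, D_feat)) / np.sqrt(D_in)
b = rng.normal(size=D_feat)

def phi(X):
    """Feature map inducing the kernel k(x, x') = phi(x) @ phi(x')."""
    return np.tanh(X @ W + b) / np.sqrt(D_feat)

# Toy regression data
N = 200
X = rng.uniform(-3, 3, size=(N, D_in))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=N)

noise = 0.1 ** 2
Phi = phi(X)  # N x D_feat, so K = Phi @ Phi.T has rank <= D_feat

# Exact GP posterior weights via the Woodbury identity:
# (K + s2*I)^{-1} y = (y - Phi (s2*I + Phi^T Phi)^{-1} Phi^T y) / s2
A = noise * np.eye(D_feat) + Phi.T @ Phi      # D_feat x D_feat, cheap to factor
alpha = (y - Phi @ np.linalg.solve(A, Phi.T @ y)) / noise

# Posterior mean at a few test points
Xs = rng.uniform(-3, 3, size=(5, D_in))
mean = phi(Xs) @ (Phi.T @ alpha)
print(mean.shape)
```

The same structure extends to classification (with a suitable likelihood approximation) and to other feature maps; a product feature-map kernel, as mentioned in the abstract, would multiply kernels built from separate feature maps.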