The Recipe Matters More Than the Kitchen: Mathematical Foundations of the AI Weather Prediction Pipeline

2026-04-01

Machine Learning · Artificial Intelligence
AI summary

The authors present a unified mathematical framework for understanding what determines AI weather prediction accuracy, looking beyond the model's architecture to include training methods, loss functions, and data diversity. They show that errors arising from training and data currently outweigh errors from the model architecture itself. Combining theory with experiments on architecturally diverse AI weather models, they find that common training techniques cause models to lose fine-scale detail in predictions and to underestimate extreme weather events. They also offer a way to evaluate every part of a forecasting pipeline mathematically before the model is trained.

Approximation Theory · Dynamical Systems · Information Theory · Statistical Learning Theory · Loss Function · Mean Squared Error (MSE) · Spherical Harmonics · Out-of-Distribution Generalization · Forecast Skill · Data Diversity
Authors
Piyush Garg, Diana R. Gergel, Andrew E. Shao, Galen J. Yacalis
Abstract
AI weather prediction has advanced rapidly, yet no unified mathematical framework explains what determines forecast skill. Existing theory addresses specific architectural choices rather than the learning pipeline as a whole, while operational evidence from 2023-2026 demonstrates that training methodology, loss function design, and data diversity matter at least as much as architecture selection. This paper makes two interleaved contributions. Theoretically, we construct a framework rooted in approximation theory on the sphere, dynamical systems theory, information theory, and statistical learning theory that treats the complete learning pipeline (architecture, loss function, training strategy, data distribution) rather than architecture alone. We establish a Learning Pipeline Error Decomposition showing that estimation error (loss- and data-dependent) dominates approximation error (architecture-dependent) at current scales. We develop a Loss Function Spectral Theory formalizing MSE-induced spectral blurring in spherical harmonic coordinates, and derive Out-of-Distribution Extrapolation Bounds proving that data-driven models systematically underestimate record-breaking extremes with bias growing linearly in record exceedance. Empirically, we validate these predictions via inference across ten architecturally diverse AI weather models using NVIDIA Earth2Studio with ERA5 initial conditions, evaluating six metrics across 30 initialization dates spanning all seasons. Results confirm universal spectral energy loss at high wavenumbers for MSE-trained models, rising Error Consensus Ratios showing that the majority of forecast error is shared across architectures, and linear negative bias during extreme events. A Holistic Model Assessment Score provides unified multi-dimensional evaluation, and a prescriptive framework enables mathematical evaluation of proposed pipelines before training.
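As a toy illustration (not taken from the paper) of why MSE training produces the spectral blurring the abstract describes: the MSE-optimal point forecast is the conditional mean of the truth, and averaging over realizations whose small-scale phases are unpredictable cancels high-wavenumber energy while leaving large scales intact. A minimal NumPy sketch in one dimension, using plain Fourier modes in place of the paper's spherical harmonics:

```python
import numpy as np

rng = np.random.default_rng(0)
n, members = 256, 500
x = np.linspace(0, 2 * np.pi, n, endpoint=False)

# Shared large-scale signal (low wavenumbers): predictable across realizations.
large_scale = np.sin(3 * x) + 0.5 * np.cos(5 * x)

# Each "truth" realization adds small-scale detail with a random phase,
# i.e. detail that is unpredictable given the initial condition.
truths = np.stack([
    large_scale + 0.3 * np.sin(40 * x + rng.uniform(0, 2 * np.pi))
    for _ in range(members)
])

# The MSE-optimal forecast is the conditional mean over realizations.
mse_optimal = truths.mean(axis=0)

def power(field):
    """Power spectrum of a real 1-D field via the real FFT."""
    return np.abs(np.fft.rfft(field)) ** 2 / field.size

p_truth = power(truths[0])
p_forecast = power(mse_optimal)

# Low-wavenumber energy is retained; high-wavenumber energy is strongly damped.
print(f"power at k=3:  truth {p_truth[3]:.2f}, forecast {p_forecast[3]:.2f}")
print(f"power at k=40: truth {p_truth[40]:.3f}, forecast {p_forecast[40]:.5f}")
```

The forecast's spectrum matches any individual truth at low wavenumbers but loses nearly all energy at the unpredictable wavenumber, mirroring the universal high-wavenumber energy loss the paper reports for MSE-trained models.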