Benchmarking Optimizers for MLPs in Tabular Deep Learning
2026-04-16 • Machine Learning
AI summary
The authors studied different optimizers used to train simple neural networks (MLPs) on table-like data. They tested many optimizers on many datasets to see which works best. Their main finding is that the Muon optimizer usually trains these models better than the commonly used AdamW. They also found that using an exponential moving average of the model’s weights helps improve AdamW training, though this improvement is less reliable with other model types.
MLP, tabular data, optimizer, AdamW, Muon optimizer, exponential moving average, supervised learning, deep learning, benchmarking, training efficiency
Authors
Yury Gorishniy, Ivan Rubachev, Dmitrii Feoktistov, Artem Babenko
Abstract
MLP is a heavily used backbone in modern deep learning (DL) architectures for supervised learning on tabular data, and AdamW is the go-to optimizer for training tabular DL models. Unlike architecture design, however, the choice of optimizer for tabular DL has not been examined systematically, despite new optimizers showing promise in other domains. To fill this gap, we benchmark \Noptimizers optimizers on \Ndatasets tabular datasets for training MLP-based models in the standard supervised learning setting under a shared experiment protocol. Our main finding is that the Muon optimizer consistently outperforms AdamW, and thus should be considered a strong and practical choice for practitioners and researchers, provided the associated training efficiency overhead is affordable. Additionally, we find the exponential moving average (EMA) of model weights to be a simple yet effective technique that improves AdamW on vanilla MLPs, though its effect is less consistent across model variants.
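The weight-EMA technique mentioned in the abstract keeps a second, smoothed copy of the model's parameters alongside the ones the optimizer updates, and that smoothed copy is what gets evaluated. A minimal sketch of the idea, with parameters represented as plain lists of floats for illustration (the function name and the toy "optimizer step" are illustrative, not the authors' code):

```python
def ema_update(ema_params, params, decay=0.999):
    """In-place EMA of weights: ema <- decay * ema + (1 - decay) * current."""
    for i, (e, p) in enumerate(zip(ema_params, params)):
        ema_params[i] = decay * e + (1 - decay) * p
    return ema_params


# Toy loop: the raw parameters drift each step (a stand-in for an
# optimizer update); the EMA copy trails them along a smoothed path.
params = [0.0, 0.0]
ema = list(params)
for step in range(100):
    params = [p + 0.1 for p in params]  # hypothetical training step
    ema = ema_update(ema, params, decay=0.9)

# The EMA lags the raw weights but follows the same trajectory,
# which in practice averages out noise in the final iterates.
print(params[0], ema[0])
```

In real training one would hold the EMA copy in a second model object and update it after every optimizer step; the decay (often 0.99–0.9999) controls how far back the average effectively reaches.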