Pretraining large language models with MXFP4
2026-05-11 • Machine Learning
Machine Learning • Artificial Intelligence
AI summary
The authors studied why training large language models with fully quantized FP4 precision often fails even when some training signals appear stable. They found that quantizing the weight gradients (Wgrad) is the primary cause of convergence problems, while quantizing other parts of the pipeline, such as the forward pass or the activation gradients, has much less impact. Testing different mitigation techniques, they discovered that deterministic Hadamard rotations fix the training instability caused by Wgrad quantization, whereas stochastic methods do not help. These findings suggest the problem stems from structured scaling errors in the gradients rather than from a lack of randomness. The experiments were run on hardware with native FP4 support rather than software emulation.
FP4 quantization, weight gradients, forward propagation, activation gradients, Hadamard rotations, stochastic rounding, large language models, gradient scaling, MXFP4, training stability
Authors
Musa Cim, Poovaiah Palangappa, Miro Hodak, Ravi Dwivedula, Meena Arunachalam, Mahmut Taylan Kandemir
Abstract
Why does full-pipeline FP4 training of large language models often diverge, even when forward activations and activation gradients remain stable? We address this question through a controlled study of MXFP4 quantization in transformer training, progressively enabling FP4 across forward propagation (Fprop), activation gradients (Dgrad), and weight gradients (Wgrad) while holding all other factors fixed. In full pretraining of Llama 3.1-8B on the C4 dataset, we observe that quantizing Wgrad is the primary driver of convergence degradation, whereas FP4 in Fprop and Dgrad alone introduces only modest additional token requirements. To interpret this behavior, we evaluate both structured and stochastic interventions under a controlled experimental setting. We find that stochastic rounding and randomized Hadamard rotations fail to stabilize training once Wgrad is quantized, whereas deterministic Hadamard rotations consistently restore stable optimization. These results suggest that FP4 training instability is driven by structured micro-scaling errors along sensitive gradient paths, rather than by insufficient stochasticity. We run experiments with native MXFP4 support on AMD Instinct MI355X GPUs, enabling controlled investigation of these effects without reliance on software emulation.
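To make the quantization scheme discussed above concrete, here is a minimal NumPy sketch of MXFP4-style micro-scaling fake quantization with an optional deterministic Hadamard pre-rotation. The block size of 32 and the FP4 (E2M1) value grid follow the OCP Microscaling format; the power-of-two scale rule, function names, and the toy outlier experiment are simplifying assumptions for illustration, not the authors' implementation or the MI355X hardware path.

```python
# Minimal sketch (not the paper's code) of MXFP4-style micro-scaling
# fake quantization with an optional deterministic Hadamard rotation.
import numpy as np

# Representable magnitudes of the FP4 E2M1 element format.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def hadamard(n: int) -> np.ndarray:
    """Orthonormal Sylvester-construction Hadamard matrix; n must be a power of two."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h / np.sqrt(n)  # orthonormal, so H.T undoes the rotation

def mxfp4_quant_dequant(x: np.ndarray, block: int = 32,
                        rotate: bool = False) -> np.ndarray:
    """Fake-quantize a 1-D tensor: per-block power-of-two scale plus
    round-to-nearest onto the FP4 E2M1 grid (a simplified scale rule)."""
    n = x.size
    pad = (-n) % block
    v = np.pad(x.astype(np.float64), (0, pad)).reshape(-1, block)

    if rotate:
        H = hadamard(block)
        v = v @ H  # deterministic rotation spreads outliers across the block

    # Shared scale: smallest power of two that maps the block max within ±6.
    amax = np.abs(v).max(axis=1, keepdims=True)
    amax = np.where(amax == 0, 1.0, amax)
    scale = 2.0 ** np.ceil(np.log2(amax / FP4_GRID[-1]))

    # Round each scaled element to the nearest representable FP4 magnitude.
    scaled = v / scale
    idx = np.abs(np.abs(scaled)[..., None] - FP4_GRID).argmin(axis=-1)
    q = np.sign(scaled) * FP4_GRID[idx]
    out = q * scale

    if rotate:
        out = out @ H.T  # invert the rotation after dequantization
    return out.reshape(-1)[:n].astype(x.dtype)

# Toy check: quantization error with and without the Hadamard rotation,
# on a gradient-like tensor with injected outliers that dominate block scales.
g = np.random.randn(4096).astype(np.float32)
g[::128] *= 50
for rot in (False, True):
    err = np.abs(mxfp4_quant_dequant(g, rotate=rot) - g).mean()
    print(f"rotate={rot}: mean abs error {err:.4f}")
```

The sketch illustrates why a deterministic rotation can matter for Wgrad: a single outlier forces a large shared block scale, crushing the other 31 values toward zero, while the orthonormal Hadamard transform spreads that energy across the block before the scale is chosen and is exactly inverted after dequantization.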