Search Your Block Floating Point Scales!
2026-05-12 • Machine Learning
Machine Learning · Hardware Architecture · Performance
AI summary
The authors focus on improving Block Floating Point (BFP) quantization, which helps run large language models faster by using fewer bits. They find that the usual way of picking scale factors for BFP can cause larger quantization errors than necessary. To fix this, they propose ScaleSearch, a technique that searches for better scale settings to reduce these errors. They also create ScaleSearchAttention, a faster attention method for language models that uses ScaleSearch to keep accuracy high while speeding up computation. Their experiments show that these methods reduce quantization error and improve downstream accuracy while closely matching the speed of existing baselines.
Quantization · Block Floating Point (BFP) · Scale factor · Post Training Quantization (PTQ) · Microscaling · Attention mechanism · Causal language modeling · NVFP4 format · Perplexity · Language models
Authors
Tanmaey Gupta, Hayden Prairie, Xiaoxia Wu, Reyna Abhyankar, Qingyang Wu, Austin Silveria, Pragaash Ponnusamy, Jue Wang, Ben Athiwaratkun, Leon Song, Tri Dao, Daniel Y. Fu, Chris De Sa
Abstract
Quantization has emerged as a standard technique for accelerating inference for generative models by enabling faster low-precision computations and reduced memory transfers. Recently, GPU accelerators have added first-class support for microscaling Block Floating Point (BFP) formats. Standard BFP algorithms use a fixed scale based on the maximum magnitude of the block. We observe that this scale choice can be suboptimal with respect to quantization error. In this work, we propose ScaleSearch, an alternative strategy for selecting these scale factors: a fine-grained search that leverages the mantissa bits in microscaling formats to minimize the quantization error for the given distribution. ScaleSearch can be integrated with existing quantization methods such as Post Training Quantization and low-precision attention, and is shown to improve their performance. Additionally, we introduce ScaleSearchAttention, an accelerated NVFP4-based attention algorithm, which uses ScaleSearch and adapted prior techniques to ensure near-zero performance loss for causal language modeling. Experiments show that ScaleSearch reduces quantization error by 27% for NVFP4 and improves language model PTQ by up to 15 points on MATH500 (Qwen3-8B), while ScaleSearchAttention improves Wikitext-2 perplexity by up to 0.77 points for Llama 3.1 70B. The proposed methods closely match baseline performance while providing quantization accuracy improvements.
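To make the scale-selection idea concrete, here is a minimal NumPy sketch (not the authors' implementation). It quantizes one 16-element block to the standard FP4 (E2M1) value grid with a shared per-block scale, then compares the usual max-magnitude scale against a brute-force search over nearby candidate scales, keeping whichever gives the lowest mean squared error. The candidate multiplier range and the MSE objective are illustrative assumptions; the paper's search instead exploits the mantissa bits of the microscaling scale format itself.

```python
import numpy as np

# Representable magnitudes of an FP4 (E2M1) element, as used in NVFP4-style
# microscaling blocks: 1 sign bit, 2 exponent bits, 1 mantissa bit.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_block(block, scale):
    """Quantize a block to the FP4 grid using a shared per-block scale."""
    scaled = block / scale
    # Snap each magnitude to the nearest representable FP4 value
    # (magnitudes beyond the largest code saturate to it).
    diffs = np.abs(scaled)[:, None] - FP4_GRID[None, :]
    nearest = FP4_GRID[np.argmin(np.abs(diffs), axis=1)]
    return np.sign(scaled) * nearest * scale

def max_scale(block):
    """Standard BFP choice: map the block's max magnitude to the top code."""
    return np.max(np.abs(block)) / FP4_GRID[-1]

def scale_search(block, num_candidates=16):
    """ScaleSearch-style selection (sketch): try a fine grid of candidate
    scales around the max-based one and keep the one with the lowest MSE."""
    base = max_scale(block)
    best_scale = base
    best_err = np.mean((block - quantize_block(block, base)) ** 2)
    # The candidate multipliers below are an illustrative assumption; the
    # paper searches over the mantissa bits of the microscaling scale format.
    for mult in np.linspace(0.75, 1.25, num_candidates):
        candidate = base * mult
        err = np.mean((block - quantize_block(block, candidate)) ** 2)
        if err < best_err:
            best_scale, best_err = candidate, err
    return best_scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.normal(size=16).astype(np.float32)  # one 16-element block
    for name, s in [("max-based", max_scale(block)), ("searched", scale_search(block))]:
        mse = np.mean((block - quantize_block(block, s)) ** 2)
        print(f"{name:>10s}  scale={s:.4f}  MSE={mse:.6f}")
```

The search can trade a small amount of clipping at the top of the range for lower rounding error across the rest of the block, which is why a scale smaller than the max-based one can reduce overall error. On real hardware the per-block scale is itself stored in a low-precision floating-point format, which is what makes a mantissa-level search over scale candidates meaningful.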