CARE: Covariance-Aware and Rank-Enhanced Decomposition for Enabling Multi-Head Latent Attention
2026-03-18 • Machine Learning • Artificial Intelligence
AI summary
The authors study how to convert the attention modules of pretrained models into a format called multi-head latent attention (MLA), which can improve expressivity without increasing memory costs. They identify that current conversion methods focus on matching weight matrices while ignoring how input activations actually behave, which degrades performance. To fix this, they propose CARE, which aligns the conversion with real input statistics and allocates capacity to the layers that need it most while keeping the KV-cache size fixed. Their approach substantially reduces perplexity and improves accuracy over prior conversion methods, and a brief fine-tuning stage fully restores the original model's accuracy.
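The per-layer capacity allocation mentioned above can be illustrated with a simple greedy scheme: spend a fixed total rank budget one unit at a time on the layer whose next singular value (marginal error reduction) is largest. This is a minimal sketch under assumed toy spectra and a generic greedy rule, not the paper's actual adjusted-rank allocation procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n_layers, max_rank, budget = 4, 32, 64  # budget of 64 = uniform 16 per layer

# Hypothetical per-layer singular value spectra, decaying at different rates;
# layers whose spectra decay slowly "need" more rank to reach the same error.
spectra = [np.sort(rng.exponential(scale=s, size=max_rank))[::-1]
           for s in (4.0, 2.0, 1.0, 0.5)]

# Greedy allocation: repeatedly grant one unit of rank to the layer whose
# next (largest unused) singular value is biggest.
ranks = [0] * n_layers
for _ in range(budget):
    gains = [spectra[i][ranks[i]] if ranks[i] < max_rank else -np.inf
             for i in range(n_layers)]
    ranks[int(np.argmax(gains))] += 1

print(ranks, sum(ranks))  # non-uniform per-layer ranks, fixed total budget
```

The total is exactly the budget, so (as in the paper's setting) memory stays constant while layers with heavier spectra receive more capacity than a uniform split would give them.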
attention module · grouped-query attention (GQA) · multi-head latent attention (MLA) · KV-cache · low-rank approximation · singular value decomposition (SVD) · activation covariance · model conversion · perplexity · fine-tuning
Authors
Zhongzhu Zhou, Fengxiang Bie, Ziyan Chen, Zhenyu Zhang, Yibo Yang, Junxiong Wang, Ben Athiwaratkun, Xiaoxia Wu, Shuaiwen Leon Song
Abstract
Converting pretrained attention modules such as grouped-query attention (GQA) into multi-head latent attention (MLA) can improve expressivity without increasing KV-cache cost, making it attractive for efficient inference. However, many practical conversion baselines rely on weight-only low-rank approximations (e.g., SVD-style initializations) and uniform rank allocation. They focus on minimizing the difference between weight matrices rather than on how those weights affect input activations, ignore the covariance structure of activations, and enforce uniform rank across layers, causing activation drift and degraded attention fidelity. To address these issues, we propose CARE, a Covariance-Aware, Rank-Enhanced MLA conversion pipeline under a fixed KV width. CARE introduces three key steps: (i) activation-preserving factorization, which aligns the approximation with the actual input activations rather than just the weights; (ii) adjusted-rank allocation, which spreads a fixed KV budget across layers by giving more capacity to layers that need it most; and (iii) KV-parity mapping, which reparameterizes the converted K and V to fit the MLA format while keeping the KV-cache size unchanged. Our method outperforms a uniform-rank SVD baseline on Qwen3-4B/30B-A3B-Instruct-2507 and Llama-3.1-8B/70B-Instruct, reducing one-shot perplexity by up to 215x and improving mean accuracy by up to 1.70x at matched KV budgets. With a brief post-SVD healing fine-tune, we fully recover the original model's accuracy.