Tucker Attention: A generalization of approximate attention mechanisms

2026-03-31

Machine Learning, Artificial Intelligence
AI summary

The authors study ways to make the self-attention part of machine learning models use less memory. They look at previous methods that simplify attention by factoring its weights into smaller pieces, but these methods depart from classical low-rank approximation, so it is unclear what they actually approximate. To understand and improve on them, the authors create a new approach called Tucker Attention that uses far fewer parameters while keeping similar performance. Their method also explains how the older methods relate to each other and works well with existing techniques.

Keywords
self-attention, multi-headed attention, low-rank approximation, Tucker decomposition, group-query attention, multi-head latent attention, flash-attention, rotary position embeddings, transformers, parameter efficiency
Authors
Timon Klein, Jonas Kusch, Sebastian Sager, Stefan Schnake, Steffen Schotthöfer
Abstract
The pursuit of reducing the memory footprint of the self-attention mechanism in multi-headed self-attention (MHA) spawned a rich portfolio of methods, e.g., group-query attention (GQA) and multi-head latent attention (MLA). These methods leverage specialized low-rank factorizations across embedding dimensions or attention heads. From the point of view of classical low-rank approximation, these methods are unconventional and raise the questions of which objects they really approximate and how to interpret the low-rank behavior of the resulting representations. To answer these questions, this work proposes a generalized view on the weight objects in the self-attention layer and a factorization strategy, which allows us to construct a parameter-efficient scheme, called Tucker Attention. Tucker Attention requires an order of magnitude fewer parameters for comparable validation metrics, compared to GQA and MLA, as evaluated in LLM and ViT test cases. Additionally, Tucker Attention encompasses GQA, MLA, and MHA as special cases and is fully compatible with flash-attention and rotary position embeddings (RoPE). This generalization strategy yields insights into the actual ranks achieved by MHA, GQA, and MLA, and further enables simplifications for MLA.
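The abstract does not spell out the factorization, but the general idea behind a Tucker decomposition of stacked attention weights can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the authors' method: the weight tensor is taken to be the per-head projection matrices of MHA stacked into a 3-tensor of shape (heads, model dim, head dim), and it is compressed with a plain truncated HOSVD; the rank choices (4, 32, 8) are arbitrary.

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along `mode` (mode fibers become rows)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Multiply tensor T by matrix M along `mode`."""
    Tm = np.moveaxis(T, mode, 0)
    out = np.tensordot(M, Tm, axes=([1], [0]))
    return np.moveaxis(out, 0, mode)

def tucker_hosvd(T, ranks):
    """Truncated HOSVD: core tensor plus one factor matrix per mode."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_multiply(core, U.T, m)
    return core, factors

def tucker_reconstruct(core, factors):
    """Expand the Tucker format back to a dense tensor."""
    out = core
    for m, U in enumerate(factors):
        out = mode_multiply(out, U, m)
    return out

# Illustrative sizes: H heads, model dim d, head dim dh.
rng = np.random.default_rng(0)
H, d, dh = 8, 64, 16
W = rng.standard_normal((H, d, dh))   # stacked per-head projection weights

core, factors = tucker_hosvd(W, ranks=(4, 32, 8))
W_hat = tucker_reconstruct(core, factors)

full_params = H * d * dh                                # dense MHA storage
tucker_params = core.size + sum(U.size for U in factors)  # factored storage
print(full_params, tucker_params)
```

With these (arbitrary) ranks the factored form stores 3232 numbers instead of 8192; how far the ranks can be pushed down without hurting validation metrics is exactly the empirical question the paper studies for GQA, MLA, and Tucker Attention.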