QLAM: A Quantum Long-Attention Memory Approach to Long-Sequence Token Modeling

2026-05-13

Machine Learning, Computer Vision and Pattern Recognition
AI summary

The authors explore a new way to improve how machines understand long sequences, like sentences or image pixels in a row. They combine ideas from quantum physics and machine learning to create a model called Quantum Long-Attention Memory (QLAM), which remembers information using quantum states instead of traditional methods. This approach aims to capture complex connections in data more efficiently than existing models. Their tests show that QLAM performs better than some popular sequence models on image-based tasks.

Transformers, State-space models, Quantum superposition, Quantum circuits, Sequential data, Attention mechanisms, Recurrent models, Image classification, Linear-time computation
Authors
Hoang-Quan Nguyen, Sankalp Pandey, Khoa Luu
Abstract
Modeling long-range dependencies in sequential data remains a central challenge in machine learning. Transformers address this challenge through attention mechanisms, but their quadratic complexity with respect to sequence length limits scalability to long contexts. State-space models (SSMs) provide an efficient alternative with linear-time computation by evolving a latent state through recurrent updates, but their memory is typically formed via additive or linear transitions, which can limit their ability to capture complex global interactions across tokens. In this work, we introduce one of the first studies to leverage the superposition property of quantum systems to enhance state-based sequence modeling. In particular, we propose Quantum Long-Attention Memory (QLAM), a hybrid quantum-classical memory mechanism that can be viewed as a quantum extension of state-space models. Instead of maintaining a classical latent state updated through additive dynamics, QLAM represents the hidden state as a quantum state whose amplitudes encode a superposition of historical information. The state evolves through parameterized quantum circuits conditioned on the input, enabling a non-classical, global update mechanism. In this way, QLAM preserves the recurrent and linear-time structure of SSMs while fundamentally enriching the memory representation through quantum superposition. Unlike attention mechanisms that explicitly compute pairwise interactions, QLAM implicitly captures global dependencies through the evolution of the quantum state and retrieves task-relevant information via query-dependent measurements. We evaluate QLAM on sequential variants of standard image classification benchmarks, including sMNIST, sFashion-MNIST, and sCIFAR-10, where images are flattened into token sequences. Across all tasks, QLAM consistently improves over recurrent baselines and transformer-based models.
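
The abstract does not give implementation details, but the update rule it describes (a statevector memory evolved by input-conditioned parameterized circuits and read out by measurement) can be sketched classically. The toy NumPy simulation below is only an illustration of that recipe: the gate choices (RY rotations followed by a fixed CNOT chain), the linear map from tokens to rotation angles, and the Pauli-Z readout are placeholder assumptions, not the authors' architecture.

```python
# Minimal sketch of a QLAM-style recurrent cell, simulated with NumPy.
# Hidden state = normalized complex statevector; update = input-conditioned
# unitary; output = per-qubit measurement expectations. Illustrative only.
import numpy as np

N_QUBITS = 3
DIM = 2 ** N_QUBITS

def ry(theta):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rotation_layer(angles):
    """Tensor product of per-qubit RY rotations (one angle per qubit)."""
    U = np.array([[1.0 + 0j]])
    for theta in angles:
        U = np.kron(U, ry(theta))
    return U

def cnot_chain():
    """Fixed entangling layer: CNOTs between neighbouring qubits."""
    U = np.eye(DIM, dtype=complex)
    for ctrl in range(N_QUBITS - 1):
        tgt = ctrl + 1
        P = np.zeros((DIM, DIM), dtype=complex)
        for b in range(DIM):
            bits = [(b >> (N_QUBITS - 1 - q)) & 1 for q in range(N_QUBITS)]
            if bits[ctrl] == 1:
                bits[tgt] ^= 1
            b2 = sum(bit << (N_QUBITS - 1 - q) for q, bit in enumerate(bits))
            P[b2, b] = 1.0
        U = P @ U
    return U

ENTANGLER = cnot_chain()

def z_expectations(state):
    """Pauli-Z expectation of each qubit, used as readout features."""
    probs = np.abs(state) ** 2
    feats = []
    for q in range(N_QUBITS):
        signs = np.array([1.0 if ((b >> (N_QUBITS - 1 - q)) & 1) == 0 else -1.0
                          for b in range(DIM)])
        feats.append(float(probs @ signs))
    return np.array(feats)

def qlam_cell(tokens, W, b):
    """Run the recurrent quantum memory over a token sequence.

    tokens: (T, d) array of classical inputs.
    W, b:   classical parameters mapping a token to N_QUBITS rotation angles
            (hypothetical conditioning; the paper's exact scheme is not given).
    """
    state = np.zeros(DIM, dtype=complex)
    state[0] = 1.0  # |0...0> initial memory
    for x in tokens:
        angles = W @ x + b                      # input-conditioned circuit parameters
        state = ENTANGLER @ (rotation_layer(angles) @ state)
    return z_expectations(state)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tokens = rng.normal(size=(16, 4))
    W = rng.normal(scale=0.1, size=(N_QUBITS, 4))
    print(qlam_cell(tokens, W, np.zeros(N_QUBITS)))  # feed into a classifier head
```

The loop visits each token exactly once, so the cost grows linearly with sequence length, mirroring the recurrent, linear-time structure the abstract attributes to QLAM; the final measurement expectations would then be passed to a classical head for tasks such as image classification.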