Shot-Based Quantum Encoding: A Data-Loading Paradigm for Quantum Neural Networks

2026-04-07

Artificial Intelligence, Machine Learning
AI summary

The authors introduce Shot-Based Quantum Encoding (SBQE), a new way to load data into quantum computers. SBQE uses the number of times a quantum measurement is repeated (the shot count) to represent data, making the loading process efficient without requiring deep circuits. They show that SBQE behaves like a simple neural network whose weights are realised by quantum circuits. On image-recognition benchmarks, the method performs as well as or better than existing quantum and classical approaches while avoiding complicated data-encoding steps.

quantum machine learning, data encoding, shots, mixed-state representation, Hilbert space, quantum circuits, multilayer perceptron, Fashion MNIST, Semeion dataset, quantum coherence
Authors
Basil Kyriacou, Viktoria Patapovich, Maniraman Periyasamy, Alexey Melnikov
Abstract
Efficient data loading remains a bottleneck for near-term quantum machine learning. Existing schemes (angle, amplitude, and basis encoding) either underuse the exponential Hilbert-space capacity or require circuit depths that exceed the coherence budgets of noisy intermediate-scale quantum hardware. We introduce Shot-Based Quantum Encoding (SBQE), a data-embedding strategy that distributes the hardware's native resource, shots, according to a data-dependent classical distribution over multiple initial quantum states. By treating the shot counts as a learnable degree of freedom, SBQE produces a mixed-state representation whose expectation values are linear in the classical probabilities and can therefore be composed with non-linear activation functions. We show that SBQE is structurally equivalent to a multilayer perceptron whose weights are realised by quantum circuits, and we describe a hardware-compatible implementation protocol. Benchmarks on Fashion MNIST and Semeion handwritten digits, with ten independent initialisations per model, show that SBQE achieves 89.1% ± 0.9% test accuracy on Semeion (reducing error by 5.3% relative to amplitude encoding and matching a width-matched classical network) and 80.95% ± 0.10% on Fashion MNIST (exceeding amplitude encoding by +2.0% and a linear multilayer perceptron by +1.3%), all without any data-encoding gates.
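The core mechanism described in the abstract, allocating a fixed shot budget across several initial states according to a data-dependent distribution so that the shot-averaged measurement outcome is linear in the classical probabilities, can be illustrated with a small classical simulation. The sketch below is an assumption-laden toy, not the authors' implementation: the "circuits" are random unitaries, the observable is a diagonal Pauli-Z-like operator, and each shot contributes its state's exact expectation value rather than a sampled eigenvalue (real hardware would sample ±1 outcomes, adding further shot noise).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(dim, rng):
    # Random unitary via QR decomposition of a complex Gaussian matrix,
    # with the phases of R's diagonal absorbed to fix the column phases.
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def sbqe_layer(p, n_shots, circuits, observable, rng):
    """Hypothetical SBQE 'neuron': split n_shots across initial basis
    states |k> according to the classical distribution p, run circuit
    U_k on each, and return the shot-averaged expectation, which is
    (up to multinomial fluctuation) linear in p."""
    dim = observable.shape[0]
    counts = rng.multinomial(n_shots, p)      # data-dependent shot allocation
    total = 0.0
    for k, (n_k, U) in enumerate(zip(counts, circuits)):
        if n_k == 0:
            continue
        psi = U[:, k]                         # U applied to basis state |k>
        exp_val = np.real(psi.conj() @ observable @ psi)
        total += n_k * exp_val                # average contribution per shot
    return total / n_shots                    # ~ sum_k p[k] * <k|U_k† O U_k|k>

# Toy usage: one 4-state "neuron" followed by a classical non-linearity,
# mirroring the MLP-equivalence claim in the abstract.
dim = 4
circuits = [random_unitary(dim, rng) for _ in range(dim)]
Z = np.diag([1.0, -1.0, 1.0, -1.0])           # diagonal observable
p = np.array([0.4, 0.3, 0.2, 0.1])            # normalised classical input
out = sbqe_layer(p, 10_000, circuits, Z, rng)
hidden = np.tanh(out)                         # compose with a non-linear activation
```

Because the outcome is linear in `p` before the activation, the shot allocation plays the role of a weighted sum in a perceptron layer, with the fixed circuits supplying the (here random, in the paper learnable) weights.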