Hardness Amplification for (Sparse) LPN

2026-05-11

Cryptography and Security; Computational Complexity
AI summary

The authors study Learning Parity with Noise (LPN), where the goal is to recover a hidden secret from noisy linear samples. They show that any algorithm solving LPN on even a small fraction of instances can be transformed into one that succeeds on almost all instances (with suitably scaled parameters). This process, called hardness amplification, means that solving LPN on a few instances is essentially as hard as solving it almost everywhere. They extend these results to variants of LPN, including sparse samples and larger finite fields, strengthening the belief that LPN-based problems are hard on average.

Learning Parity with Noise (LPN), Hardness Amplification, Linear Algebra, Noise, Ber(η) Distribution, Finite Fields (F_2, F_q), Sparse Vectors, Average-case Hardness, Cryptography
Authors
Divesh Aggarwal, Rishav Gupta, Li Zeyong
Abstract
We prove new hardness amplification results for Learning Parity with Noise ($\mathsf{LPN}$) and its sparse variants. In $\mathsf{LPN}_{\eta,n,m}$, the goal is to recover a secret $\vec s\in\mathbb{F}_2^n$ from $m$ noisy linear samples $(\vec a,b)$, where $\vec a\leftarrow \mathbb{F}_2^n$ is uniform and $b=\langle \vec a,\vec s\rangle + e$ with $e\leftarrow \mathrm{Ber}(\eta)$. Building on the direct-product framework introduced by Hirahara and Shimizu [HS23], we show an 'instance-fraction amplification' theorem: for any $\varepsilon,\delta>0$, any algorithm that solves $\mathsf{LPN}_{\eta,n,m}$ with success probability $\varepsilon$ can be transformed into an algorithm that succeeds with probability $1-\delta$ on a related $\mathsf{LPN}$ distribution with scaled parameters $\mathsf{LPN}_{\eta/k,\;n/k,\;m}$, where $k=\Theta\!\left(\frac{1}{\delta}\log\frac{1}{\varepsilon}\right)$. Equivalently, an algorithm that solves $\mathsf{LPN}$ on a 'small fraction of instances' can be converted into an algorithm that solves $\mathsf{LPN}$ on 'almost all instances', yielding a self-amplification for a wide range of parameters. We extend the same amplification approach to $\mathsf{LPN}$ over $\mathbb{F}_q$ and to Sparse-$\mathsf{LPN}$, where each query vector $\vec a$ has exactly $\sigma$ nonzero entries. Together, these results establish hardness self-amplification for a broad family of $\mathsf{LPN}$-type problems, strengthening the foundations for assuming the average-case hardness of $\mathsf{LPN}$ and its sparse variants.
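
To make the sampling model concrete, here is a minimal Python sketch of the two distributions defined in the abstract: standard $\mathsf{LPN}_{\eta,n,m}$ over $\mathbb{F}_2$ and the Sparse-$\mathsf{LPN}$ variant in which each query vector has exactly $\sigma$ nonzero entries. This is illustrative only, not code from the paper; the function names and the use of NumPy are our own.

```python
import numpy as np

def lpn_samples(n, m, eta, rng=None):
    """Draw an instance of LPN_{eta,n,m} over F_2.

    Returns (A, b, s): A is an m x n uniform binary matrix of query
    vectors, s is a uniform secret in F_2^n, and b = A s + e (mod 2)
    with each noise bit e_i ~ Ber(eta) independently.
    """
    rng = rng or np.random.default_rng()
    s = rng.integers(0, 2, size=n)              # uniform secret in F_2^n
    A = rng.integers(0, 2, size=(m, n))         # uniform query vectors
    e = (rng.random(m) < eta).astype(int)       # Bernoulli(eta) noise bits
    b = (A @ s + e) % 2
    return A, b, s

def sparse_lpn_samples(n, m, eta, sigma, rng=None):
    """Sparse-LPN variant: each query vector has exactly sigma ones."""
    rng = rng or np.random.default_rng()
    s = rng.integers(0, 2, size=n)
    A = np.zeros((m, n), dtype=int)
    for i in range(m):
        # choose sigma distinct coordinates to set to 1
        A[i, rng.choice(n, size=sigma, replace=False)] = 1
    e = (rng.random(m) < eta).astype(int)
    b = (A @ s + e) % 2
    return A, b, s
```

For example, `lpn_samples(n=128, m=512, eta=0.125)` draws one instance at a typical parameter regime. An algorithm that recovers $\vec s$ from such instances with probability $\varepsilon$ over this draw is exactly the object the instance-fraction amplification theorem converts into a solver succeeding with probability $1-\delta$ at the scaled parameters.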