Paper 2020/870

Smoothing Out Binary Linear Codes and Worst-case Sub-exponential Hardness for LPN

Yu Yu and Jiang Zhang

Abstract

Learning parity with noise (LPN) is a notorious (average-case) hard problem that has been well studied in learning theory, coding theory and cryptography since the early 1990s. It further inspired the Learning with Errors (LWE) problem [Regev, STOC 2005], which has become one of the central building blocks for post-quantum cryptography and advanced cryptographic primitives. Unlike LWE, whose hardness is reducible from worst-case lattice problems, no corresponding worst-case hardness results were known for LPN until very recently. At Eurocrypt 2019, Brakerski et al. [BLVW19] established the first feasibility result: the worst-case hardness of the nearest codeword problem (NCP) on balanced linear codes at the extremely low noise rate $\frac{\log^2 n}{n}$ implies the quasi-polynomial hardness of LPN at the extremely high noise rate $1/2-1/\mathrm{poly}(n)$. It remained open whether a worst-case to average-case reduction could be established for standard (constant-noise) LPN, ideally with sub-exponential hardness.

We start with a simple observation that the hardness of high-noise LPN over large fields is implied by that of LWE with the same modulus, and is thus reducible from the worst-case hardness of lattice problems. We then revisit [BLVW19] and carry out the worst-case to average-case reduction for LPN, which is the main focus of this work. We first expand the underlying binary linear codes (of the worst-case NCP) to not only the balanced code considered in [BLVW19] but also another code (in some sense dual to the balanced code). At the core of our reduction is a new variant of the smoothing lemma (for both binary codes) that circumvents the barriers (inherent in the underlying worst-case randomness extraction) and admits tradeoffs for a wider spectrum of parameter choices.

In addition to the worst-case hardness result obtained in [BLVW19], we show that for any constant $0<c<1$ the constant-noise LPN problem is $(T=2^{\Omega(n^{1-c})}, \epsilon=2^{-\Omega(n^{\min(c,1-c)})}, q=2^{\Omega(n^{\min(c,1-c)})})$-hard assuming that the NCP (on either code) at the low noise rate $\tau=n^{-c}$ is $(T'=2^{\Omega(\tau n)}, \epsilon'=2^{-\Omega(\tau n)}, m=2^{\Omega(\tau n)})$-hard in the worst case, where $T$, $\epsilon$, $q$ and $m$ are time complexity, success rate, sample complexity, and codeword length, respectively. Moreover, refuting the worst-case hardness assumption would imply arbitrary polynomial speedups over the current state-of-the-art algorithms for solving the NCP (and LPN), which is a win-win result. Unfortunately, public-key encryption and collision-resistant hash functions would need constant-noise LPN with $(T=2^{\omega(\sqrt{n})}, \epsilon=2^{-\omega(\sqrt{n})}, q=2^{\sqrt{n}})$-hardness (Yu et al., CRYPTO 2016 & ASIACRYPT 2019), which is almost (up to an arbitrary $\omega(1)$ factor in the exponent) what is reducible from the worst-case NCP when $c=0.5$. We leave it as an open problem whether the gap can be closed or whether there is a separation in place.
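For readers less familiar with the average-case problem discussed above, the following is a minimal sketch (not from the paper) of how LPN samples are generated: the adversary sees noisy parities $(\mathbf{a}_i, \langle \mathbf{a}_i, \mathbf{s}\rangle \oplus e_i)$ with Bernoulli noise of rate $\tau$. The function name and parameters are illustrative only.

```python
import numpy as np

def lpn_samples(n, q, tau, rng=None):
    """Generate q LPN samples (a_i, <a_i, s> + e_i mod 2) for a random secret s.

    n   -- secret length
    q   -- number of samples
    tau -- Bernoulli noise rate (a constant for standard LPN,
           close to 1/2 in the high-noise regime)
    """
    rng = np.random.default_rng() if rng is None else rng
    s = rng.integers(0, 2, size=n)          # secret vector in {0,1}^n
    A = rng.integers(0, 2, size=(q, n))     # uniformly random public matrix
    e = (rng.random(q) < tau).astype(int)   # Bernoulli(tau) noise bits
    b = (A @ s + e) % 2                     # noisy parities
    return A, b, s
```

Recovering $\mathbf{s}$ from $(A, b)$ is the (search) LPN problem; the paper's reduction relates its hardness at constant $\tau$ to the worst-case hardness of the nearest codeword problem at low noise.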

Metadata
Available format(s)
PDF
Category
Foundations
Publication info
A minor revision of an IACR publication in CRYPTO 2021
DOI
10.1007/978-3-030-84252-9_16
Keywords
Learning Parity with Noise, Worst-case-to-average-case reduction, Smoothing Lemma
Contact author(s)
yuyu @ yuyu hk
jiangzhang09 @ gmail com
History
2022-05-04: last of 4 revisions
2020-07-12: received
Short URL
https://ia.cr/2020/870
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2020/870,
      author = {Yu Yu and Jiang Zhang},
      title = {Smoothing Out Binary Linear Codes and Worst-case Sub-exponential Hardness for {LPN}},
      howpublished = {Cryptology {ePrint} Archive, Paper 2020/870},
      year = {2020},
      doi = {10.1007/978-3-030-84252-9_16},
      url = {https://eprint.iacr.org/2020/870}
}