Paper 2020/870
Smoothing Out Binary Linear Codes and Worst-case Sub-exponential Hardness for LPN
Yu Yu and Jiang Zhang
Abstract
Learning parity with noise (LPN) is a notorious (average-case) hard problem that has been well studied in learning theory, coding theory and cryptography since the early '90s. It further inspired the Learning with Errors (LWE) problem [Regev, STOC 2005], which has become one of the central building blocks for post-quantum cryptography and advanced cryptographic primitives. Unlike LWE, whose hardness is reducible from worst-case lattice problems, no corresponding worst-case hardness results were known for LPN until very recently. At Eurocrypt 2019, Brakerski et al. [BLVW19] established the first feasibility result: the worst-case hardness of the nearest codeword problem (NCP) (on balanced linear codes) at the extremely low noise rate $\frac{\log^2 n}{n}$ implies the quasi-polynomial hardness of LPN at the extremely high noise rate $1/2 - 1/\mathrm{poly}(n)$. It remained open whether a worst-case to average-case reduction could be established for standard (constant-noise) LPN, ideally with sub-exponential hardness. We start with a simple observation that the hardness of high-noise LPN over large fields is implied by that of LWE with the same modulus, and is thus reducible from the worst-case hardness of lattice problems. We then revisit [BLVW19] and carry on the worst-case to average-case reduction for LPN, which is the main focus of this work. We first expand the underlying binary linear codes (of the worst-case NCP) beyond the balanced codes considered in [BLVW19] to another family of codes (in some sense dual to balanced codes). At the core of our reduction is a new variant of the smoothing lemma (for both binary codes) that circumvents the barriers (inherent in the underlying worst-case randomness extraction) and admits trade-offs for a wider spectrum of parameter choices.
In addition to the worst-case hardness result obtained in [BLVW19], we show that for any constant $0<c<1$ the constant-noise LPN problem is $(T=2^{\Omega(n^{1-c})}, \epsilon=2^{-\Omega(n^{\min(c,1-c)})}, q=2^{\Omega(n^{\min(c,1-c)})})$-hard assuming that the NCP (on either code) at the low-noise rate $\tau=n^{-c}$ is $(T'=2^{\Omega(\tau n)}, \epsilon'=2^{-\Omega(\tau n)}, m=2^{\Omega(\tau n)})$-hard in the worst case, where $T$, $\epsilon$, $q$ and $m$ denote time complexity, success rate, sample complexity, and codeword length respectively. Moreover, refuting the worst-case hardness assumption would imply arbitrary polynomial speedups over the current state-of-the-art algorithms for solving the NCP (and LPN), which is a win-win result. Unfortunately, public-key encryption and collision-resistant hash functions would require constant-noise LPN with $(T=2^{\omega(\sqrt{n})}, \epsilon=2^{-\omega(\sqrt{n})}, q=2^{\sqrt{n}})$-hardness (Yu et al., CRYPTO 2016 \& ASIACRYPT 2019), which is almost (up to an arbitrary $\omega(1)$ factor in the exponent) what is reducible from the worst-case NCP when $c=0.5$. We leave it as an open problem whether the gap can be closed or whether there is a separation in place.
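For readers unfamiliar with the problem, the LPN distribution discussed above can be sketched as follows. This is a minimal illustration only, not code from the paper; the function name and interface are ours. An LPN instance with secret length $n$, sample complexity $q$ and noise rate $\tau$ hands out pairs $(a, \langle a, s\rangle \oplus e)$ with $e$ drawn from a Bernoulli distribution of parameter $\tau$:

```python
import random


def lpn_samples(n, q, tau, secret=None, rng=random):
    """Generate q LPN samples (a, <a,s> XOR e) over GF(2).

    n:      secret length
    q:      number of samples
    tau:    noise rate (Bernoulli error parameter)
    secret: optional fixed secret s in {0,1}^n; sampled uniformly if None
    """
    if secret is None:
        secret = [rng.randrange(2) for _ in range(n)]
    samples = []
    for _ in range(q):
        a = [rng.randrange(2) for _ in range(n)]            # uniform a in {0,1}^n
        inner = sum(ai * si for ai, si in zip(a, secret)) % 2  # <a, s> mod 2
        e = 1 if rng.random() < tau else 0                   # Bernoulli(tau) error bit
        samples.append((a, inner ^ e))
    return secret, samples
```

In this notation, constant-noise LPN fixes $\tau$ to a constant (e.g. $\tau = 1/4$), while the high-noise regime of [BLVW19] takes $\tau = 1/2 - 1/\mathrm{poly}(n)$, where the samples become statistically close to uniform.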
Metadata
 Category
 Foundations
 Publication info
 A minor revision of an IACR publication in CRYPTO 2021
 DOI
 10.1007/978-3-030-84252-9_16
 Keywords
 Learning Parity with Noise, Worst-case to average-case reduction, Smoothing Lemma
 Contact author(s)

yuyu @ yuyu hk
jiangzhang09 @ gmail com
 History
 2022-05-04: last of 4 revisions
 2020-07-12: received
 Short URL
 https://ia.cr/2020/870
 License

CC BY
BibTeX
@misc{cryptoeprint:2020/870,
      author = {Yu Yu and Jiang Zhang},
      title = {Smoothing Out Binary Linear Codes and Worst-case Sub-exponential Hardness for {LPN}},
      howpublished = {Cryptology ePrint Archive, Paper 2020/870},
      year = {2020},
      doi = {10.1007/978-3-030-84252-9_16},
      note = {\url{https://eprint.iacr.org/2020/870}},
      url = {https://eprint.iacr.org/2020/870}
}