Paper 2021/994

BKW Meets Fourier: New Algorithms for LPN with Sparse Parities

Dana Dachman-Soled, Huijing Gong, Hunter Kippen, and Aria Shahverdi

Abstract

We consider the Learning Parity with Noise (LPN) problem with sparse secret, where the secret vector $\textbf{s}$ of dimension $n$ has Hamming weight at most $k$. We are interested in algorithms with asymptotic improvement in the $\textit{exponent}$ beyond the state of the art. Prior work in this setting presented algorithms with runtime $n^{c \cdot k}$ for constant $c < 1$, obtaining a constant-factor improvement over brute-force search, which runs in time ${n \choose k}$. We obtain the following results:

- We first consider the $\textit{constant}$ error rate setting, and in this case present a new algorithm that leverages a subroutine from the acclaimed BKW algorithm [Blum, Kalai, Wasserman, J.~ACM '03] as well as techniques from Fourier analysis for $p$-biased distributions. Our algorithm achieves an asymptotic improvement in the exponent compared to prior work when the sparsity $k = k(n) = \frac{n}{\log^{1+1/c}(n)}$, where $c \in o(\log \log(n))$ and $c \in \omega(1)$. The runtime and sample complexity of this algorithm are approximately the same.

- We next consider the $\textit{low noise}$ setting, where the error is subconstant. We present a new algorithm in this setting that requires only a $\textit{polynomial}$ number of samples and achieves an asymptotic improvement in the exponent compared to prior work when the sparsity $k = \frac{1}{\eta} \cdot \frac{\log(n)}{\log(f(n))}$, the noise rate $\eta \neq 1/2$, and $\eta^2 = \Theta\left(\frac{\log(n)}{n} \cdot f(n)\right)$, for $f(n) \in \omega(1) \cap n^{o(1)}$. To obtain the improvement in sample complexity, we create subsets of samples using the $\textit{design}$ of Nisan and Wigderson [J.~Comput.~Syst.~Sci. '94], so that any two subsets have a small intersection while the number of subsets is large (a sketch of such a design construction follows the abstract). Each of these subsets is used to generate a single $p$-biased sample for the Fourier analysis step. We then show that this allows us to bound the covariance of pairs of samples, which is sufficient for the Fourier analysis.

- Finally, we show that our first algorithm extends to the setting where the noise rate is very high, $1/2 - o(1)$, and in this case can be used as a subroutine to obtain new algorithms for learning DNFs and Juntas. Our algorithms achieve an asymptotic improvement in the exponent for certain regimes. For DNFs of size $s$ with approximation factor $\epsilon$, this regime is when $\log \frac{s}{\epsilon} \in \omega\left(\frac{c}{\log n \log \log c}\right)$ and $\log \frac{s}{\epsilon} \in n^{1 - o(1)}$, for $c \in n^{1 - o(1)}$. For Juntas on $k$ variables, the regime is when $k \in \omega\left(\frac{c}{\log n \log \log c}\right)$ and $k \in n^{1 - o(1)}$, for $c \in n^{1 - o(1)}$.
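To make the problem setup concrete, here is a minimal Python sketch of how LPN samples with a $k$-sparse secret are generated: the secret has Hamming weight $k$, and each sample is a uniform vector $\textbf{a}$ together with the label $\langle \textbf{a}, \textbf{s} \rangle + e \bmod 2$, where $e$ is Bernoulli with parameter $\eta$. This is not code from the paper; the function names and parameter values are illustrative assumptions.

import random

def sparse_secret(n: int, k: int) -> list[int]:
    # Illustrative assumption: a random 0/1 secret of dimension n
    # with Hamming weight exactly k.
    s = [0] * n
    for i in random.sample(range(n), k):
        s[i] = 1
    return s

def lpn_sample(s: list[int], eta: float) -> tuple[list[int], int]:
    # One LPN sample: a is uniform over {0,1}^n, the label is
    # <a, s> + e mod 2 with e ~ Bernoulli(eta).
    a = [random.randint(0, 1) for _ in s]
    inner = sum(ai * si for ai, si in zip(a, s)) % 2
    e = 1 if random.random() < eta else 0
    return a, (inner + e) % 2

if __name__ == "__main__":
    n, k, eta = 64, 4, 0.125   # illustrative parameters
    s = sparse_secret(n, k)
    print(lpn_sample(s, eta))

Brute-force search would test all ${n \choose k}$ weight-$k$ candidates against enough samples to rule out wrong guesses; the algorithms above improve on the exponent of that baseline.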
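The second result's sample-complexity improvement relies on a Nisan-Wigderson design: a large family of subsets, any two of which have a small intersection. The abstract does not specify the construction, so the sketch below uses the standard polynomial-based instantiation, with all names illustrative: over a prime field $\mathbb{F}_q$ the ground set is $\mathbb{F}_q \times \mathbb{F}_q$, and each polynomial $p$ of degree $< d$ contributes the set $S_p = \{(x, p(x)) : x \in \mathbb{F}_q\}$. Distinct polynomials agree on at most $d-1$ points, so any two sets intersect in at most $d-1$ elements, while there are $q^d$ sets in total.

from itertools import product

def nw_design(q: int, d: int) -> list[frozenset]:
    # All q^d sets S_p, one per polynomial
    # p(x) = c_0 + c_1 x + ... + c_{d-1} x^{d-1} over F_q (q prime).
    # Each set has size q; pairwise intersections have size < d.
    sets = []
    for coeffs in product(range(q), repeat=d):
        s_p = frozenset(
            (x, sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q)
            for x in range(q)
        )
        sets.append(s_p)
    return sets

if __name__ == "__main__":
    design = nw_design(q=5, d=2)   # 25 sets of size 5 over a 25-element ground set
    a, b = design[0], design[7]
    assert len(a) == 5 and len(a & b) <= 1
    print(len(design), len(a), len(a & b))

In the paper's use, each subset indexes a group of LPN samples that is combined into a single $p$-biased sample for the Fourier step; the small pairwise intersections are what keep the covariance between pairs of derived samples small.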

Metadata
Available format(s)
PDF
Category
Public-key cryptography
Publication info
Preprint. MINOR revision.
Keywords
Learning Parity with Noise, BKW, Fourier Analysis, DNF, Juntas
Contact author(s)
ariash @ umd edu
History
2021-07-28: received
Short URL
https://ia.cr/2021/994
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2021/994,
      author = {Dana Dachman-Soled and Huijing Gong and Hunter Kippen and Aria Shahverdi},
      title = {{BKW} Meets Fourier: New Algorithms for {LPN} with Sparse Parities},
      howpublished = {Cryptology {ePrint} Archive, Paper 2021/994},
      year = {2021},
      url = {https://eprint.iacr.org/2021/994}
}