Paper 2022/1744
Worst and Average Case Hardness of Decoding via Smoothing Bounds
Abstract
In this work, we consider the worst- and average-case hardness of the decoding problems that are the basis for code-based cryptography. By a decoding problem, we mean inputs of the form $(\mathbf{G}, \mathbf{m} \mathbf{G} + \mathbf{t})$ for a matrix $\mathbf{G}$ that generates a code and a noise vector $\mathbf{t}$, and the algorithm's goal is to recover $\mathbf{m}$. We consider a natural strategy for creating a reduction to an average-case problem: from our input we simulate a Learning Parity with Noise (LPN) oracle, where we recall that LPN is essentially an average-case decoding problem with no a priori lower bound on the rate of the code. More formally, the oracle $\mathcal{O}_{\mathbf{x}}$ outputs independent samples of the form $\langle \mathbf{x}, \mathbf{a} \rangle + e$, where $\mathbf{a}$ is a uniformly random vector and $e$ is a noise bit. Such an approach is implicit in the previous worst-case to average-case reductions for coding problems (Brakerski et al., Eurocrypt 2019; Yu and Zhang, CRYPTO 2021). To analyze the effectiveness of this reduction, we use a smoothing bound derived recently by Debris-Alazard et al. (IACR ePrint 2022), which quantifies the simulation error of this reduction. It is worth noting that this latter work crucially uses a bound, known as the second linear programming bound, on the weight distribution of the code generated here by $\mathbf{G}$. Our approach, which is Fourier-analytic in nature, applies to any smoothing distribution (so long as it is radial); for our purposes, the best choice appears to be Bernoulli (although for the analysis it is most effective to study the uniform distribution over a sphere, and subsequently translate the bound back to the Bernoulli distribution by applying a truncation trick). Our approach works naturally when reducing from a worst-case instance, as well as from an average-case instance.
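To make the objects in the abstract concrete, the following is a minimal, hypothetical sketch (not the paper's reduction) of the LPN oracle $\mathcal{O}_{\mathbf{x}}$: each query returns a uniformly random vector $\mathbf{a}$ over $\mathbb{F}_2$ together with $\langle \mathbf{x}, \mathbf{a} \rangle + e$, where $e$ is a Bernoulli noise bit with rate `tau`. The function name and parameters are illustrative choices, not notation from the paper.

```python
import random

def lpn_oracle(x, tau):
    """Return one LPN sample (a, <x, a> + e mod 2) for a secret bit vector x.

    x   : list of bits in {0, 1} (the secret)
    tau : Bernoulli noise rate in [0, 1/2)
    """
    a = [random.randrange(2) for _ in range(len(x))]   # uniformly random vector over F_2
    e = 1 if random.random() < tau else 0              # Bernoulli(tau) noise bit
    b = (sum(ai & xi for ai, xi in zip(a, x)) + e) % 2 # inner product <x, a> plus noise, mod 2
    return a, b
```

With `tau = 0` the oracle's answers are noiseless inner products, which is the easy sanity check; the cryptographic hardness only kicks in for noticeably biased noise, which is exactly the regime the average-case discussion below is concerned with.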
While we are unable to improve the parameters of the worst-case to average-case reductions of Brakerski et al. or Yu and Zhang, we think that our work highlights two important points. Firstly, in analyzing the average-case to average-case reduction we run into inherent limitations of this reduction template. Essentially, it appears hopeless to reduce to an LPN instance for which the noise rate is more than inverse-polynomially biased away from uniform. We furthermore uncover a surprising weakness in the second linear programming bound: we observe that it is essentially useless in the regime of parameters where the rate of the code is inverse polynomial in the blocklength. By highlighting these shortcomings, we hope to stimulate the development of new techniques for reductions between cryptographic decoding problems.
Metadata
 Category
 Foundations
 Publication info
 Preprint.
 Keywords
 code-based cryptography, hardness reductions, smoothing bounds
 Contact author(s)

thomas debris @ inria fr
n a resch @ uva nl
 History
 20221225: approved
 20221219: received
 Short URL
 https://ia.cr/2022/1744
 License

CC BY
BibTeX
@misc{cryptoeprint:2022/1744,
  author       = {Thomas Debris-Alazard and Nicolas Resch},
  title        = {Worst and Average Case Hardness of Decoding via Smoothing Bounds},
  howpublished = {Cryptology ePrint Archive, Paper 2022/1744},
  year         = {2022},
  note         = {\url{https://eprint.iacr.org/2022/1744}},
  url          = {https://eprint.iacr.org/2022/1744}
}