
Paper 2018/1005

Handling Correlated Errors: Hardness of LWE in the Exponent

Luke Demarest and Benjamin Fuller and Alexander Russell

Abstract

The hardness of decoding random linear codes with errors is a complexity-theoretic assumption with broad applications in cryptography. In contrast, Reed-Solomon codes permit efficient decoding in many representations. Despite this, a result of Peikert (TCC 2006) shows that in groups where discrete log is hard, Reed-Solomon error correction is difficult when each symbol is written in the exponent. We bring these two lines of work together, examining the hardness of decoding random linear codes in the exponent. Our main result is a pair of theorems showing hardness of decoding random linear codes in both the generic group model and the standard model. In the generic group model, our analysis applies to a quite general family of distributions for (the support of) the error terms: we show hardness of decoding as long as every subset of group elements of size at least the dimension of the code contains, with overwhelming probability, at least one random error. This family includes error distributions whose symbols are correlated. In the standard model, we show hardness of decoding random linear codes with a uniform input point; this improves on the result of Peikert (TCC 2006), which considered Reed-Solomon codes. We explore two applications of these results. First, we construct a reusable fuzzy extractor whose storage is independent of the number of errors to be corrected. This construction unifies the strengths of two prior constructions, those of Fuller, Meng, and Reyzin (Asiacrypt 2013) and Canetti et al. (Eurocrypt 2016). Second, we show how to build virtual black-box obfuscation for a class of functionality known as pattern matching. Bishop et al. (Crypto 2018) recently constructed such a scheme based on codes in the exponent and showed security for uniform inputs; we show that the same construction is secure for a broader class of distributions. In both applications, the security arguments concern inputs drawn from physical processes, which are best modeled by distributions with correlated bits.
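
The following display is a minimal sketch of "decoding random linear codes in the exponent" in illustrative notation; the symbols $\mathbf{A}$, $\mathbf{x}$, $\mathbf{e}$, $n$, $k$, and $\lambda$ are our labels for this sketch and are not necessarily the paper's.

% Illustrative sketch only; notation is ours, not necessarily the paper's.
% Let G = <g> be a cyclic group of prime order q in which discrete log is hard.
\[
\mathbf{A} \in \mathbb{Z}_q^{n \times k} \ \text{(random generator matrix)}, \quad
\mathbf{x} \in \mathbb{Z}_q^{k} \ \text{(message)}, \quad
\mathbf{e} \in \mathbb{Z}_q^{n} \ \text{(error vector)}.
\]
% The decoder sees A together with the noisy codeword written symbol-by-symbol in the exponent:
\[
g^{\mathbf{A}\mathbf{x} + \mathbf{e}} \;=\;
\bigl(g^{(\mathbf{A}\mathbf{x})_1 + e_1},\, \dots,\, g^{(\mathbf{A}\mathbf{x})_n + e_n}\bigr),
\]
% and must recover x (or distinguish the tuple from uniform group elements).
% One natural formalization of the error condition stated in the abstract:
% every index set of size at least the code's dimension contains an error
% with overwhelming probability; the individual e_i may be correlated.
\[
\forall\, S \subseteq [n],\ |S| \ge k:\qquad
\Pr\bigl[\exists\, i \in S \,:\, e_i \ne 0\bigr] \;\ge\; 1 - \mathrm{negl}(\lambda).
\]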

Metadata
Available format(s)
PDF
Publication info
Preprint. MINOR revision.
Keywords
Learning with errors, Error-Correction, Generic Group Model, Fuzzy Extractors, Pattern-matching obfuscation
Contact author(s)
luke johnson @ uconn edu
History
2021-02-19: last of 8 revisions
2018-10-22: received
Short URL
https://ia.cr/2018/1005
License
Creative Commons Attribution (CC BY)