**Code Offset in the Exponent**

*Luke Demarest and Benjamin Fuller and Alexander Russell*

**Abstract: **Fuzzy extractors derive stable, reliable keys from noisy sources. The standard construction is the code offset: the noisy secret w is hidden by a one-time pad, which is sampled as a codeword from an error-correcting code (Juels and Wattenberg, CCS 1999). The information-theoretic analysis of this construction has weaknesses: 1) it leaks information about w and 2) it does not allow reuse (Boyen, CCS 2004). Fuller, Meng, and Reyzin (Asiacrypt 2013) substitute a random linear code and reduce security to learning with errors when the symbols of w are independent and uniform.
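The classic code offset can be sketched in a few lines. This is a toy illustration only: the parameters and the repetition code below are hypothetical stand-ins for a real error-correcting code with efficient decoding.

```python
import secrets

n, k = 9, 3  # toy [n, k] code: 3 message bits, each repeated 3 times

def encode(msg):
    """Repetition code: repeat each message bit three times."""
    return [b for b in msg for _ in range(3)]

def decode(word):
    """Majority vote within each block of three bits (corrects 1 error per block)."""
    return [1 if sum(word[3*i:3*i+3]) >= 2 else 0 for i in range(k)]

def gen(w):
    """Hide noisy secret w (n bits) with a one-time pad sampled as a random codeword."""
    c = encode([secrets.randbelow(2) for _ in range(k)])
    return [wi ^ ci for wi, ci in zip(w, c)]  # public helper p = w XOR c

def rep(w_prime, p):
    """Recover the codeword from a nearby reading w' and the public helper p."""
    noisy = [wi ^ pi for wi, pi in zip(w_prime, p)]  # equals c XOR e, where e = w XOR w'
    return encode(decode(noisy))                     # decoding removes the error e

w = [1, 0, 1, 1, 1, 0, 0, 0, 1]
p = gen(w)
w_prime = list(w)
w_prime[4] ^= 1                          # one bit of sensor noise
assert rep(w_prime, p) == rep(w, p)      # same codeword recovered from both readings
```

The information-theoretic weakness the abstract mentions is visible here: p = w XOR c is only as hidden as the codeword c is random, so p leaks whatever the code's structure reveals about w.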
We introduce code offset in the exponent: a group generator is raised to the output of the code offset (instantiated with a random linear code). If the resulting vector is indistinguishable from random group elements, the construction 1) leaks nothing about w, 2) is reusable, and 3) corrects up to a subconstant fraction of errors (in the Hamming metric).
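A minimal sketch of the exponent variant follows. All parameters here are hypothetical and far too small for security; a toy safe prime stands in for a cryptographic group, and w is sampled at random rather than drawn from a real noisy source.

```python
import secrets

q = 1019           # toy prime: order of the subgroup (hypothetical parameter)
p = 2 * q + 1      # 2039, a safe prime, so Z_p^* contains a subgroup of order q
g = 4              # 2^2 mod p generates the order-q subgroup

n, k = 6, 2        # toy [n, k] random linear code over Z_q

# Public code description: random generator matrix A (n x k over Z_q)
A = [[secrets.randbelow(q) for _ in range(k)] for _ in range(n)]

def code_offset_exp(w):
    """Raise g to each symbol of the code offset: output g^(A x + w) componentwise."""
    x = [secrets.randbelow(q) for _ in range(k)]                   # random message
    c = [sum(A[i][j] * x[j] for j in range(k)) % q for i in range(n)]  # codeword A x
    return [pow(g, (ci + wi) % q, p) for ci, wi in zip(c, w)]

w = [secrets.randbelow(q) for _ in range(n)]  # stand-in for the noisy secret
Y = code_offset_exp(w)
assert all(pow(y, q, p) == 1 for y in Y)      # each output lies in the order-q subgroup
```

The security claim of the paper is about exactly this vector Y: when Y is indistinguishable from n independent random group elements, the helper reveals nothing about w.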
We characterize what error distributions make code offset indistinguishable from random group elements in the generic group model. This corresponds to error distributions that make learning with errors hard in the generic group model. The construction and adversary are both provided with the code description.
It suffices for all vectors in the null space of a random linear code to have an unpredictable inner product with the noisy distribution. Our primary result is that this condition is satisfied by all distributions whose min-entropy exceeds the log of the size of the null space of the code by any superlogarithmic amount. The condition is also satisfied by structured distributions, including:
1. Distributions with independent symbols that have superlogarithmic min-entropy, including the discretized Gaussian and the uniform interval.
2. Binary distributions lifted by multiplying by a random vector. It suffices for subsets of the binary string (whose size is the nullity of the code) to be unlikely to be all zero. Using this modification, we improve on the decoding of Canetti et al. (Eurocrypt 2016).
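The null-space condition above can be written out as follows. This is a paraphrase in our own notation; the paper's formal statement may differ in details.

```latex
% Hedged paraphrase of the null-space condition; notation is ours.
Let $\mathbf{A} \in \mathbb{Z}_q^{n \times k}$ generate the random linear code and let
$W$ be the noisy distribution. It suffices that for every nonzero $\mathbf{h}$ in the
null space, i.e., $\mathbf{h}^{\top}\mathbf{A} = \mathbf{0}$, the inner product
$\langle \mathbf{h}, W \rangle$ is unpredictable:
\[
  \max_{z \in \mathbb{Z}_q} \Pr\bigl[\langle \mathbf{h}, W \rangle = z\bigr]
  \le \mathrm{negl}(\lambda).
\]
In particular, the primary result states that
$\mathrm{H}_\infty(W) \ge \log\lvert \mathcal{C}^{\perp} \rvert + \omega(\log \lambda)$
suffices, where $\mathcal{C}^{\perp}$ is the null space of the code.
```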
Our construction also yields a more flexible construction of pattern matching obfuscation (Bishop et al., Crypto 2018). Lastly, we provide standard model results, showing hardness of bounded distance decoding of random linear codes with a uniform input point, quantitatively improving prior bounds of Peikert (TCC 2006).

**Category / Keywords: **foundations / fuzzy extractors; code offset; learning with errors; error correction; generic group model; pattern-matching obfuscation

**Date: **received 17 Oct 2018, last revised 16 Oct 2019

**Contact author: **luke johnson at uconn edu

**Available format(s): **PDF | BibTeX Citation

**Note: **Substantial new technical material. Now shows hardness of general high min-entropy distributions.

**Version: **20191016:135807

**Short URL: **ia.cr/2018/1005

[ Cryptology ePrint archive ]