Paper 2009/467

The LPN Problem with Auxiliary Input

Yu Yu

Abstract

This paper investigates the Learning from Parity with Noise (LPN) problem under the scenario where the unknowns (secret keys) are only unpredictable, rather than uniformly random, to the adversary. In practice, this corresponds to the case where an adversary already possesses some additional knowledge about the secret key. In the information-theoretic setting, we show that the problem is robust against arbitrary leakage as long as the unknowns retain a sufficient amount of min-entropy. In the computational setting, we prove Dodis et al.'s [STOC'09] conjecture that the auxiliary-input LPN assumption is implied by the standard LPN assumption, i.e., encryption schemes based on the standard LPN assumption are secure against any exponentially hard-to-invert auxiliary input.

Our setting is more general than the traditional model of auxiliary input, which deals with secret keys of sufficient min-entropy; in particular, we allow leakages that information-theoretically determine the secret keys, as long as the keys retain a linear amount of unpredictability pseudoentropy. Further, unlike most other schemes, our result reduces to a well-known hardness problem and does not quantify over all hard-to-invert auxiliary functions.
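As a rough illustration only (not taken from the paper), the following Python sketch generates LPN samples (a, <a, s> XOR e) over GF(2) and pairs them with a hypothetical leakage function that reveals part of the secret, modeling a key that is merely unpredictable rather than uniform; the function names, the 0.125 noise rate, and the choice of leaking a prefix of the key are illustrative assumptions, not the paper's construction.

    import random

    def lpn_samples(s, num_samples, noise_rate=0.125):
        # Generate LPN samples (a, <a, s> XOR e) for an n-bit secret s over GF(2).
        n = len(s)
        samples = []
        for _ in range(num_samples):
            a = [random.randrange(2) for _ in range(n)]
            b = sum(ai & si for ai, si in zip(a, s)) & 1   # inner product mod 2
            e = 1 if random.random() < noise_rate else 0   # Bernoulli noise bit
            samples.append((a, b ^ e))
        return samples

    def leakage(s, k):
        # Hypothetical auxiliary input: reveal the first k bits of the secret,
        # so only the remaining bits stay unpredictable (illustrative only).
        return s[:k]

    n = 128
    s = [random.randrange(2) for _ in range(n)]
    aux = leakage(s, 32)           # adversary's side information about s
    samples = lpn_samples(s, 256)  # LPN samples on the same (leaked-on) secret

The question studied in the paper is, informally, whether such samples remain pseudorandom given the auxiliary input, provided the secret stays sufficiently hard to predict.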

Metadata
Available format(s)
-- withdrawn --
Publication info
Published elsewhere. n.a.
Keywords
hard learning problem, cryptography with auxiliary input
Contact author(s)
yu yu @ uclouvain be
History
2009-09-29: withdrawn
2009-09-26: received
Short URL
https://ia.cr/2009/467
License
Creative Commons Attribution
CC BY