
Paper 2013/723

Amplifying Privacy in Privacy Amplification

Divesh Aggarwal and Yevgeniy Dodis and Zahra Jafargholi and Eric Miles and Leonid Reyzin

Abstract

We study the classical problem of privacy amplification, where two parties Alice and Bob share a weak secret $X$ of min-entropy $k$, and wish to agree on a secret key $R$ of length $m$ over a public communication channel completely controlled by a computationally unbounded attacker Eve.

Despite being extensively studied in the literature, the design of (efficient) "optimal" privacy amplification protocols is still open, in part because there are several important efficiency/security goals to balance when designing such protocols. The most basic goal is to minimize the {\em entropy loss} $L = k - m$; it is known that the optimal value is $L = O(\lambda)$, where $\epsilon = 2^{-\lambda}$ is the desired security of the protocol. Other important considerations include (1) minimizing the number of communication rounds, (2) achieving the strongest security notion, called {\em post-application robustness}, and (3) ensuring that the protocol $P$ does not leak "useful information" about the source $X$ (this is called {\em source privacy}). Additionally, when trying to extract a key $R$ which is much shorter than the source length $|X|$ (and, often, the min-entropy bound $k$), "Goal (0)" of minimizing the entropy loss is replaced by asking (4) whether $P$ can be made {\em locally computable}, meaning it reads only $O(|R|)$ bits of $X$ (this is called the {\em Bounded Retrieval Model} (BRM)), and/or (5) whether $P$ can be run sequentially to extract the optimal number $t = \Theta(k/\lambda)$ of session keys $R_1, \ldots, R_t$ of length $m = O(\lambda)$ each. As a result, {\em all} existing protocols in the literature fail to achieve at least two of Goals (0)-(3) (or, when $|R| \ll |X|$, Goals (1)-(5)).

In this work we improve upon the current state of the art by designing a variety of new privacy amplification protocols, in several cases achieving {\em optimal parameters for the first time}. Moreover, in most cases we do so by giving relatively {\em general transformations} which convert a given protocol $P$ into a "better" protocol $P'$. In particular, as special cases of these transformations (applied to the best known prior protocols), we achieve the following privacy amplification protocols for the first time:
\begin{itemize}
\item A 4-round (resp. 2-round) {\em source-private} protocol with {\em optimal entropy loss} $L = O(\lambda)$, whenever $k = \Omega(\lambda^2)$ (resp. $k > \frac{n}{2}(1-\alpha)$ for some universal constant $\alpha > 0$). The best previous constant-round source-private protocols achieved $L = \Omega(\lambda^2)$.
\item 3-round {\em post-application-robust} protocols with {\em optimal entropy loss} $L = O(\lambda)$, whenever $k = \Omega(\lambda^2)$ or $k > \frac{n}{2}(1-\alpha)$ (the latter is also {\em source-private}). The best previous post-application-robust protocols achieved $L = \Omega(\lambda^2)$.
\item The first BRM protocol capable of extracting the optimal number $\Theta(k/\lambda)$ of session keys, improving upon the previously best bound $\Theta(k/\lambda^2)$. (Additionally, our BRM protocol is post-application-robust, takes 2 rounds, and can be made source-private by increasing the number of rounds to 4.)
\end{itemize}
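For intuition on why $L = O(\lambda)$ is the benchmark, the following is a minimal sketch of the textbook one-round baseline against a {\em passive} Eve (it is not one of the protocols constructed in this paper, which must additionally handle active tampering, robustness, and source privacy). By the leftover hash lemma, if $\{h_S\}$ is a 2-universal family mapping $n$ bits to $m = k - 2\lambda$ bits and Alice sends a uniformly random seed $S$ in the clear, both parties can output $R = h_S(X)$, since

\[ \mathrm{SD}\bigl((S, h_S(X)),\,(S, U_m)\bigr) \;\le\; \tfrac{1}{2}\cdot 2^{(m-k)/2} \;=\; \tfrac{1}{2}\cdot 2^{-\lambda} \;\le\; \epsilon . \]

The entropy loss of this baseline is thus $L = k - m = 2\lambda = O(\lambda)$; the difficulty addressed in this work is matching that bound when Eve may actively modify the channel and the additional goals above must be met.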

Metadata
Available format(s)
PDF
Category
Cryptographic protocols
Publication info
Preprint. MINOR revision.
Keywords
Privacy Amplification, Source Privacy, BRM, Optimal Entropy Loss
Contact author(s)
divesh aggarwal @ gmail com
History
2013-11-24: last of 2 revisions
2013-11-07: received
Short URL
https://ia.cr/2013/723
License
Creative Commons Attribution
CC BY