Paper 2013/723
Amplifying Privacy in Privacy Amplification
Divesh Aggarwal, Yevgeniy Dodis, Zahra Jafargholi, Eric Miles, and Leonid Reyzin
Abstract
We study the classical problem of privacy amplification, where two parties Alice and Bob share a weak secret of min-entropy , and wish to agree on secret key of length over a public communication channel completely controlled by a computationally unbounded attacker Eve.
Despite being extensively studied in the literature, the problem of designing ``optimal'' efficient privacy amplification protocols is still open, because there are several optimization goals. The first of them is (1) minimizing the {\em entropy loss} (it is known that the optimal value for , where is the desired security of the protocol). Other important considerations include (2) minimizing the number of communication rounds, (3) maintaining security even after the secret key is used (this is called {\em post-application robustness}), and (4) ensuring that the protocol does not leak some ``useful information'' about the source (this is called {\em source privacy}). Additionally, when dealing with a very long source , as happens in the so-called Bounded Retrieval Model (BRM), extracting as long a key as possible is no longer the goal. Instead, the goals are (5) to touch as little of as possible (for efficiency), and (6) to be able to run the protocol many times on the same , extracting multiple secure keys.
Achieving goals (1)-(4) (or (2)-(6) in BRM) simultaneously has remained open, and, indeed, all known protocols fail to achieve at least two of them.
In this work we improve upon the current state-of-the-art, by designing a variety of new privacy amplification protocols, in several cases achieving {\em optimal parameters for the first time}. Moreover, in most cases we do it by giving relatively {\em general transformations} which convert a given protocol into a ``better'' protocol . In particular, as special cases of these transformations (applied to best known prior protocols), we achieve the following privacy amplification protocols for the first time:
\begin{itemize}
\item -round (resp. -round) {\em source-private} protocol with {\em optimal entropy loss} , whenever (resp. for some universal constant ). Best previous constant round source-private protocols achieved .
\item -round {\em post-application-robust} protocols with {\em optimal entropy loss} , whenever or (the latter is also {\em source-private}). Best previous post-application robust protocols achieved .
\item The first BRM protocol capable of extracting the optimal number of session keys, improving upon the previously best bound . (Additionally, our BRM protocol is post-application-robust, takes rounds, and can be made source-private by increasing the number of rounds to .)
\end{itemize}