As an important tool in the proof we use a newly established bound for min-entropy, which may be of independent interest. It can be viewed as an analogue of the chain rule -- a weaker form of the well-known formula $\mathbf{H}(X \mid Y) = \mathbf{H}(X, Y) - \mathbf{H}(Y)$ for random variables $X$, $Y$, and Shannon entropy, from which our result originates. For min-entropy only a much more limited version of this relation is known to hold: the min-entropy of $X$ may decrease by up to the bitlength of $Y$ when $X$ is conditioned on $Y$, in short, $\widetilde{\mathbf{H}}_\infty(X \mid Y) \geq \mathbf{H}_\infty(X) - \lvert Y\rvert$. In many cases this inequality does not give tight bounds, and such a significant entropy loss makes it inadequate for our particular application. In the quasi chain rule we propose, we inject carefully crafted side information (spoiling knowledge) to show that, with large probability, the average min-entropy of $X$ conditioned on both $Y$ and this side information is, up to a small loss, lower bounded by the min-entropy of $(X, Y)$ decreased by the min-entropy of $Y$ conditioned on the side information.
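For concreteness, the two bounds can be written side by side as follows. This is only a schematic rendering of the statement above: $Z$ denotes the injected side information, and $\delta$, $\epsilon$ are illustrative slack and failure parameters of our own naming, not quantities given in the abstract.
\[
\widetilde{\mathbf{H}}_\infty(X \mid Y) \;\geq\; \mathbf{H}_\infty(X, Y) - \lvert Y \rvert \;\geq\; \mathbf{H}_\infty(X) - \lvert Y \rvert \qquad \text{(known chain rule for average min-entropy)}
\]
whereas the quasi chain rule, schematically, asserts that with probability at least $1 - \epsilon$,
\[
\widetilde{\mathbf{H}}_\infty(X \mid Y, Z) \;\geq\; \mathbf{H}_\infty(X, Y) - \widetilde{\mathbf{H}}_\infty(Y \mid Z) - \delta.
\]
The first bound charges the full bitlength of $Y$ against the entropy of $X$; the second replaces $\lvert Y \rvert$ by the (possibly much smaller) remaining entropy of $Y$ given $Z$, which is what makes it useful when $Y$ is long but predictable from the side information.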
Category / Keywords: cryptographic protocols / leakage resilient cryptography, bounded retrieval model, key derivation, chain rule, min-entropy
Date: received 11 Mar 2015
Contact author: m zajac at mimuw edu pl
Available format(s): PDF
Version: 20150311:093315
Short URL: ia.cr/2015/228