
Paper 2019/264

Unifying computational entropies via Kullback-Leibler divergence

Rohit Agrawal and Yi-Hsiu Chen and Thibaut Horel and Salil Vadhan

Abstract

We introduce KL-hardness, a new notion of hardness for search problems which on the one hand is satisfied by all one-way functions and on the other hand implies both next-block pseudoentropy and inaccessible entropy, two forms of computational entropy used in recent constructions of pseudorandom generators and statistically hiding commitment schemes, respectively. Thus, KL-hardness unifies these two notions of computational entropy and sheds light on the apparent "duality" between them. Additionally, it yields a more modular and illuminating proof that one-way functions imply next-block inaccessible entropy, similar in structure to the proof that one-way functions imply next-block pseudoentropy (Vadhan and Zheng, STOC '12).
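For reference (standard background, not part of the paper's abstract): the Kullback-Leibler divergence of the title is the standard information-theoretic measure of how a distribution P diverges from a distribution Q over the same finite domain,

    KL(P \| Q) = \sum_x P(x) \log \frac{P(x)}{Q(x)},

which is nonnegative and equals zero exactly when P = Q.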

Metadata
Available format(s)
PDF
Category
Foundations
Publication info
Preprint. MINOR revision.
Keywords
one-way functions, pseudo-randomness, bit commitment, information theory, pseudoentropy, inaccessible entropy
Contact author(s)
thorel @ seas.harvard.edu
History
2019-08-20: revised
2019-03-06: received
Short URL
https://ia.cr/2019/264
License
Creative Commons Attribution
CC BY