Cryptology ePrint Archive: Report 2018/820
Privacy Loss Classes: The Central Limit Theorem in Differential Privacy
David Sommer and Sebastian Meiser and Esfandiar Mohammadi
Abstract: Quantifying the privacy loss of a privacy-preserving mechanism on potentially sensitive data is a complex and well-researched topic; the de-facto standard privacy measures are $\varepsilon$-differential privacy (DP) and its versatile relaxation, $(\varepsilon,\delta)$-approximate differential privacy (ADP). Recently, novel variants of (A)DP have focused on giving tighter privacy bounds under continual observation. In this paper we unify many previous works via the \emph{privacy loss distribution} (PLD) of a mechanism. We show that for non-adaptive mechanisms, the privacy loss under sequential composition undergoes a convolution and converges to a Gauss distribution (the central limit theorem for DP). We derive several relevant insights: we can now characterize mechanisms by their \emph{privacy loss class}, i.e., by the Gauss distribution to which their PLD converges, which allows us to give novel ADP bounds for mechanisms based on their privacy loss class; we derive \emph{exact} analytical guarantees for the approximate randomized response mechanism and an \emph{exact} analytical closed formula for the Gauss mechanism that, given $\varepsilon$, calculates $\delta$ such that the mechanism is $(\varepsilon,\delta)$-ADP (not an over-approximating bound).
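The composition result in the abstract can be illustrated concretely: for a mechanism with a discrete PLD, $n$-fold sequential composition corresponds to an $n$-fold convolution of the PLD, and $\delta(\varepsilon)$ is recovered as $\mathbb{E}[(1 - e^{\varepsilon - L})_+]$ over the privacy loss $L$. The following is a minimal, hedged sketch (not the paper's implementation) using randomized response with truth probability `p` as the example mechanism; the variable names and the dict-based convolution are illustrative choices, and distinguishing events (infinite loss) are omitted for simplicity:

```python
import math

def compose(pld1, pld2):
    """Convolve two discrete privacy loss distributions (dicts: loss value -> probability mass).
    This models one step of sequential composition of non-adaptive mechanisms."""
    out = {}
    for l1, m1 in pld1.items():
        for l2, m2 in pld2.items():
            out[l1 + l2] = out.get(l1 + l2, 0.0) + m1 * m2
    return out

def delta(pld, eps):
    """ADP delta(eps) = E[(1 - e^(eps - L))_+] taken over the PLD."""
    return sum(m * max(0.0, 1.0 - math.exp(eps - l)) for l, m in pld.items())

# PLD of randomized response that reports the true bit with probability p:
# the privacy loss takes the value +/- log(p / (1 - p)).
p = 0.75
l0 = math.log(p / (1 - p))
rr_pld = {l0: p, -l0: 1 - p}

# n-fold sequential composition = n-fold convolution of the PLD.
n = 10
pld_n = rr_pld
for _ in range(n - 1):
    pld_n = compose(pld_n, rr_pld)
```

A single randomized response is $(\ln\frac{p}{1-p}, 0)$-DP, so `delta(rr_pld, l0)` is (numerically) zero, while `delta(rr_pld, 0.0)` equals the total variation distance $2p - 1 = 0.5$. As $n$ grows, the composed PLD `pld_n` (a shifted binomial) approaches the Gauss shape that the paper's central limit theorem predicts.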
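The exact closed formula for the Gauss mechanism mentioned above has the well-known analytical form $\delta(\varepsilon) = \Phi\!\big(\tfrac{\Delta}{2\sigma} - \tfrac{\varepsilon\sigma}{\Delta}\big) - e^{\varepsilon}\,\Phi\!\big(-\tfrac{\Delta}{2\sigma} - \tfrac{\varepsilon\sigma}{\Delta}\big)$ for sensitivity $\Delta$ and noise scale $\sigma$, where $\Phi$ is the standard normal CDF. A small sketch under that assumption (the paper's exact presentation and parameterization may differ):

```python
from math import erf, exp, sqrt

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def gauss_delta(eps, sigma, sensitivity=1.0):
    """Exact delta(eps) for the Gauss mechanism with noise scale sigma,
    assuming the analytical formula stated in the lead-in (not a loose tail bound)."""
    a = sensitivity / (2.0 * sigma)   # half the privacy-loss mean, Delta/(2*sigma)
    b = eps * sigma / sensitivity     # eps rescaled by the noise-to-sensitivity ratio
    return Phi(a - b) - exp(eps) * Phi(-a - b)
```

Unlike the classical $\delta \geq \exp(-(\sigma\varepsilon)^2/2)$-style sufficient conditions, this expression is tight: it returns the smallest $\delta$ for which the mechanism is $(\varepsilon,\delta)$-ADP, and it decreases monotonically in both $\varepsilon$ and $\sigma$.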
Category / Keywords: foundations / differential privacy, privacy loss
Date: received 3 Sep 2018, last revised 14 Jan 2019
Contact author: s meiser at ucl ac uk
Available format(s): PDF | BibTeX Citation
Version: 20190114:161158 (All versions of this report)
Short URL: ia.cr/2018/820