Paper 2021/687

Towards Understanding Practical Randomness Beyond Noise: Differential Privacy and Mixup

Hanshen Xiao and Srinivas Devadas


Information-theoretic privacy relies on randomness. Most prominently, Differential Privacy (DP) has emerged as the gold standard for quantifying the individual privacy preservation provided by given randomness. However, almost all randomness in existing differentially private optimization and learning algorithms is restricted to noise perturbation. In this paper, we set out to provide a privacy analysis framework to understand the privacy guarantee produced by other randomness commonly used in optimization and learning algorithms (e.g., parameter randomness). We take mixup, a random linear aggregation of inputs, as a concrete example. Our contributions are twofold. First, we develop a rigorous analysis of the privacy amplification provided by mixup, applied either to samples or to updates, and find that the hybrid structure of mixup and the Laplace mechanism produces a new type of DP guarantee lying between Pure DP and Approximate DP. Such an average-case privacy amplification can produce tighter composition bounds. Second, both empirically and theoretically, we show that properly applied mixup incurs almost no utility compromise.
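For readers unfamiliar with the two primitives named in the abstract, the following is a minimal illustrative sketch of mixup (a random linear aggregation of two inputs, with the mixing weight drawn from a Beta distribution as in the standard mixup formulation) and the classical Laplace mechanism. This is context only, not the paper's exact construction or analysis; function names and parameters here are illustrative.

```python
import numpy as np


def mixup(x1, x2, alpha=1.0, rng=None):
    """Random linear aggregation of two inputs (mixup).

    The mixing weight lam is drawn from Beta(alpha, alpha), following
    the standard mixup formulation; the output lies on the line segment
    between x1 and x2.
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * np.asarray(x1) + (1.0 - lam) * np.asarray(x2)


def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Classical Laplace mechanism.

    Adds Laplace(sensitivity / epsilon) noise to a query output with the
    given l1-sensitivity, yielding an epsilon-DP release of that query.
    """
    if rng is None:
        rng = np.random.default_rng()
    value = np.asarray(value, dtype=float)
    return value + rng.laplace(scale=sensitivity / epsilon, size=value.shape)
```

A hybrid use, in the spirit the abstract describes, would aggregate samples (or updates) via `mixup` and then perturb the result with `laplace_mechanism`; the paper's contribution is the privacy analysis of that combination, not the code above.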

Publication info
Preprint. MINOR revision.
Differential Privacy, Practical Randomness, Convex Optimization, Statistical Divergence
Contact author(s)
hsxiao @ mit edu
devadas @ mit edu
2021-05-28: received
License
Creative Commons Attribution


@misc{cryptoeprint:2021/687,
      author = {Hanshen Xiao and Srinivas Devadas},
      title = {Towards Understanding Practical Randomness Beyond Noise: Differential Privacy and Mixup},
      howpublished = {Cryptology ePrint Archive, Paper 2021/687},
      year = {2021},
      note = {\url{https://eprint.iacr.org/2021/687}},
      url = {https://eprint.iacr.org/2021/687}
}