Paper 2024/373

Lower Bounds for Differential Privacy Under Continual Observation and Online Threshold Queries

Edith Cohen, Google Research and Tel Aviv University
Xin Lyu, UC Berkeley and Google Research
Jelani Nelson, UC Berkeley and Google Research
Tamás Sarlós, Google Research
Uri Stemmer, Tel Aviv University and Google Research
Abstract

One of the most basic problems for studying the "price of privacy over time" is the so-called private counter problem, introduced by Dwork et al. (2010) and Chan et al. (2010). In this problem, we aim to track the number of events that occur over time, while hiding the existence of every single event. More specifically, in every time step $t\in[T]$ we learn (in an online fashion) that $\Delta_t\geq 0$ new events have occurred, and must respond with an estimate $n_t\approx\sum_{j=1}^t \Delta_j$. The privacy requirement is that all of the outputs together, across all time steps, satisfy event-level differential privacy. The main question is how the error must depend on the total number of time steps $T$ and the total number of events $n$.

Dwork et al. (2015) showed an upper bound of $O\left(\log(T)+\log^2(n)\right)$, and Henzinger et al. (2023) showed a lower bound of $\Omega\left(\min\{\log n, \log T\}\right)$. We show a new lower bound of $\Omega\left(\min\{n,\log T\}\right)$, which is tight with respect to the dependence on $T$, and is tight in the sparse case where $\log^2 n=O(\log T)$. Our lower bound has the following implications:

(1) We show that our lower bound extends to the online thresholds problem, where the goal is to privately answer many "quantile queries" when these queries are presented one by one. This resolves an open question of Bun et al. (2017).

(2) Our lower bound implies, for the first time, a separation between the number of mistakes obtainable by a private online learner and a non-private online learner. This partially resolves a COLT'22 open question published by Sanyal and Ramponi.

(3) Our lower bound also yields the first separation between the standard model of private online learning and a recently proposed relaxed variant of it, called private online prediction.
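To make the setup concrete, the following is a minimal Python sketch of the binary (tree) mechanism of Dwork et al. (2010) and Chan et al. (2010), the classic approach to the private counter problem: every prefix count $n_t$ is assembled from the noisy partial sums of the $O(\log T)$ dyadic intervals given by the binary expansion of $t$. The function names, the even split of the privacy budget across levels, and the example stream are illustrative choices of this sketch; this simplified variant attains roughly $\log^{1.5}(T)/\varepsilon$ additive error, not the refined $O\left(\log(T)+\log^2(n)\right)$ bound cited above.

import math
import random


def laplace_noise(scale):
    # Sample Laplace(0, scale) as the difference of two i.i.d. exponentials.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)


def binary_mechanism(deltas, epsilon):
    # Event-level DP continual counting (simplified binary/tree mechanism sketch).
    # Each prefix count n_t is the sum of noisy dyadic partial sums, so a single
    # event affects at most `levels` of the released values.
    T = len(deltas)
    levels = max(1, math.ceil(math.log2(T + 1)))
    eps_per_level = epsilon / levels          # naive even split of the budget
    alpha = [0.0] * levels                    # exact partial sums, one per level
    alpha_hat = [0.0] * levels                # noisy partial sums, one per level
    estimates = []

    for t, delta in enumerate(deltas, start=1):
        i = (t & -t).bit_length() - 1         # index of the lowest set bit of t
        # Merge the now-closed lower-level intervals into level i.
        alpha[i] = sum(alpha[j] for j in range(i)) + delta
        for j in range(i):
            alpha[j] = 0.0
            alpha_hat[j] = 0.0
        alpha_hat[i] = alpha[i] + laplace_noise(1.0 / eps_per_level)
        # n_t is the sum of the noisy partial sums at the set bits of t.
        estimates.append(sum(alpha_hat[j] for j in range(levels) if (t >> j) & 1))
    return estimates


if __name__ == "__main__":
    stream = [1, 0, 2, 0, 0, 1, 0, 3]         # Delta_t values, t = 1..T
    print(binary_mechanism(stream, epsilon=1.0))

Each event influences at most $\lceil\log_2(T+1)\rceil$ of the released noisy values, which is why the even per-level budget split suffices for event-level $\varepsilon$-differential privacy; the paper's $\Omega\left(\min\{n,\log T\}\right)$ lower bound shows that the $\log T$ dependence in upper bounds of this kind cannot be removed.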

Metadata
Available format(s): PDF
Category: Foundations
Publication info: Preprint.
Keywords: Differential Privacy
Contact author(s):
edith @ cohenwang com
lyuxin1999 @ gmail com
minilek @ alum mit edu
stamas @ google com
u @ uri co il
History:
2024-03-01: approved
2024-02-29: received
Short URL: https://ia.cr/2024/373
License: Creative Commons Attribution (CC BY)

BibTeX

@misc{cryptoeprint:2024/373,
      author = {Edith Cohen and Xin Lyu and Jelani Nelson and Tamás Sarlós and Uri Stemmer},
      title = {Lower Bounds for Differential Privacy Under Continual Observation and Online Threshold Queries},
      howpublished = {Cryptology {ePrint} Archive, Paper 2024/373},
      year = {2024},
      url = {https://eprint.iacr.org/2024/373}
}