Paper 2023/392
Locally Covert Learning
Abstract
The goal of a covert learning algorithm is to learn a function $f$ by querying it, while ensuring that an adversary, who sees all queries and their responses, is unable to (efficiently) learn any more about $f$ than they could learn from random input-output pairs. We focus on a relaxation that we call local covertness, in which queries are distributed across $k$ servers and we only limit what is learnable by $k - 1$ colluding servers. For any constant $k$, we give a locally covert algorithm for efficiently learning any Fourier-sparse function (technically, our notion of learning is improper, agnostic, and with respect to the uniform distribution). Our result holds unconditionally and for computationally unbounded adversaries. Prior to our work, such an algorithm was known only for the special case of $O(\log n)$-juntas, and only with $k = 2$ servers (Ishai et al., Crypto 2019). Our main technical observation is that the original Goldreich-Levin algorithm only utilizes i.i.d. pairs of correlated queries, where each half of every pair is uniformly random. We give a simple generalization of this algorithm in which pairs are replaced by $k$-tuples in which any $k - 1$ components are jointly uniform. The cost of this generalization is that the number of queries needed grows exponentially with $k$.
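The key query structure can be illustrated with a small sketch. One natural way to build $k$-tuples in which any $k - 1$ components are jointly uniform is to sample the first $k - 1$ components uniformly and force the last one so that the whole tuple XORs to a fixed shift; for $k = 2$ this degenerates to the correlated pairs $(x, x \oplus s)$ used by the Goldreich-Levin algorithm. The construction below is an illustrative assumption for exposition, not necessarily the paper's exact sampler; the function name and parameters are ours.

```python
import random

def correlated_tuple(n: int, k: int, shift: int, rng: random.Random) -> list:
    """Return a k-tuple of n-bit strings (as ints) whose XOR equals `shift`.

    The first k - 1 components are sampled uniformly at random; the last is
    forced so that the XOR of all k components is `shift`.  Dropping any one
    component leaves the other k - 1 jointly uniform: the dropped component
    acts as a fresh one-time pad for the forced one.  (Illustrative sketch,
    not the paper's exact scheme.)
    """
    xs = [rng.getrandbits(n) for _ in range(k - 1)]
    last = shift
    for x in xs:
        last ^= x
    xs.append(last)
    return xs

rng = random.Random(0)
n, k = 8, 3
shift = rng.getrandbits(n)
tup = correlated_tuple(n, k, shift, rng)

acc = 0
for x in tup:
    acc ^= x
assert acc == shift  # the whole tuple XORs to the fixed shift

# For k = 2 this is exactly a Goldreich-Levin-style pair (x, x XOR shift):
pair = correlated_tuple(n, 2, shift, rng)
assert pair[0] ^ pair[1] == shift
```

In a $k$-server deployment, one would send component $i$ of each tuple to server $i$, so that any $k - 1$ colluding servers see only jointly uniform (and hence simulatable) queries.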
Note: Fixed typos in appendix.
Metadata
- Available format(s)
- Category
- Foundations
- Publication info
- Published elsewhere. ITC 2023
- Keywords
- learning theory, adversarial machine learning
- Contact author(s)
- justin holmgren @ ntt-research com
  jawale2 @ illinois edu
- History
- 2023-05-13: revised
- 2023-03-19: received
- See all versions
- Short URL
- https://ia.cr/2023/392
- License
- CC BY
BibTeX
@misc{cryptoeprint:2023/392,
      author = {Justin Holmgren and Ruta Jawale},
      title = {Locally Covert Learning},
      howpublished = {Cryptology {ePrint} Archive, Paper 2023/392},
      year = {2023},
      url = {https://eprint.iacr.org/2023/392}
}