Paper 2021/771

Securing Secure Aggregation: Mitigating Multi-Round Privacy Leakage in Federated Learning

Jinhyun So, Ramy E. Ali, Basak Guler, Jiantao Jiao, and Salman Avestimehr

Abstract

Secure aggregation is a critical component in federated learning, which enables the server to learn the aggregate model of the users without observing their local models. Conventionally, secure aggregation algorithms focus only on ensuring the privacy of individual users in a single training round. We contend that such designs can lead to significant privacy leakage over multiple training rounds, due to partial user selection/participation at each round of federated learning. In fact, we empirically show that conventional random user selection strategies for federated learning leak users' individual models within a number of rounds that is linear in the number of users. To address this challenge, we introduce a secure aggregation framework with multi-round privacy guarantees. In particular, we introduce a new metric to quantify the privacy guarantees of federated learning over multiple training rounds, and develop a structured user selection strategy that guarantees the long-term privacy of each user (over any number of training rounds). Our framework also carefully accounts for fairness and the average number of participating users at each round. We perform several experiments on the MNIST and CIFAR-10 datasets, in both the IID and non-IID settings, to demonstrate the performance improvement over baseline algorithms in terms of both privacy protection and test accuracy.
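The multi-round leakage described in the abstract can be illustrated with a toy sketch. This is not the paper's attack or protocol; it is a minimal hypothetical example assuming users' local models stay fixed across rounds (in practice they drift, which only slows the leakage). Because secure aggregation reveals the exact sum over each round's participants, a server observing two rounds whose participant sets differ by a single user can difference the aggregates and recover that user's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 3 users, each with a fixed local model vector
# (an assumption for illustration; real models change between rounds).
models = {u: rng.standard_normal(4) for u in (0, 1, 2)}

def secure_aggregate(participants):
    """Secure aggregation reveals only the sum over the round's participants."""
    return sum(models[u] for u in participants)

# Round t: users {0, 1, 2} participate; round t+1: only {0, 1}.
agg_t = secure_aggregate({0, 1, 2})
agg_t1 = secure_aggregate({0, 1})

# Differencing the two per-round aggregates isolates user 2's model,
# even though each individual round was "securely" aggregated.
leaked = agg_t - agg_t1
assert np.allclose(leaked, models[2])
```

With random user selection, such near-overlapping participant sets occur frequently, which is why the paper's structured selection strategy constrains which subsets of users may appear together across rounds.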

Metadata
Available format(s)
PDF
Category
Applications
Publication info
Preprint. MINOR revision.
Keywords
federated learning, secure aggregation, privacy-preserving machine learning
Contact author(s)
reali @ usc edu
History
2021-06-09: received
Short URL
https://ia.cr/2021/771
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2021/771,
      author = {Jinhyun So and Ramy E. Ali and Basak Guler and Jiantao Jiao and Salman Avestimehr},
      title = {Securing Secure Aggregation: Mitigating Multi-Round Privacy Leakage in Federated Learning},
      howpublished = {Cryptology ePrint Archive, Paper 2021/771},
      year = {2021},
      note = {\url{https://eprint.iacr.org/2021/771}},
      url = {https://eprint.iacr.org/2021/771}
}