Paper 2021/386
SAFELearn: Secure Aggregation for private FEderated Learning
Hossein Fereidooni, Samuel Marchal, Markus Miettinen, Azalia Mirhoseini, Helen Möllering, Thien Duc Nguyen, Phillip Rieger, Ahmad Reza Sadeghi, Thomas Schneider, Hossein Yalame, and Shaza Zeitouni
Abstract
Federated learning (FL) is an emerging distributed machine learning paradigm which addresses critical data privacy issues by enabling clients, via an aggregation server (aggregator), to jointly train a global model without revealing their training data. It thereby not only improves privacy but is also efficient, as it uses the computation power and data of potentially millions of clients for training in parallel. However, FL is vulnerable to so-called inference attacks by malicious aggregators, which can infer information about clients' data from their model updates. Secure aggregation restricts the central aggregator to learning only the summation or average of the clients' updates. Unfortunately, existing secure aggregation protocols for FL suffer from high communication and computation overheads and require many communication rounds. In this work, we present SAFELearn, a generic design for efficient private FL systems that uses secure aggregation to protect against inference attacks that require analyzing individual clients' model updates. It is flexibly adaptable to the efficiency and security requirements of various FL applications and can be instantiated with MPC or FHE. In contrast to previous works, we need only 2 rounds of communication in each training iteration, do not use any expensive cryptographic primitives on clients, tolerate dropouts, and do not rely on a trusted third party. We implement and benchmark an instantiation of our generic design with secure two-party computation. Our implementation aggregates 500 models with more than 300K parameters in less than 0.5 seconds.
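The secure-aggregation idea in a two-server (secure two-party computation) setting can be sketched with additive secret sharing: each client splits its quantized model update into two random shares, one per server, so that neither server alone learns anything about an individual update, yet combining the servers' share sums reveals exactly the aggregate. This is a minimal illustrative sketch, not SAFELearn's actual protocol; the function names, ring size, and fixed-point encoding are assumptions.

```python
import random

MOD = 2**32  # illustrative ring for additive secret sharing


def share(update):
    """Split an integer vector into two additive shares mod 2^32."""
    r = [random.randrange(MOD) for _ in update]
    return r, [(u - ri) % MOD for u, ri in zip(update, r)]


def server_sum(shares):
    """Each server locally sums the shares it received, component-wise."""
    out = [0] * len(shares[0])
    for s in shares:
        out = [(a + b) % MOD for a, b in zip(out, s)]
    return out


def reconstruct(sum0, sum1):
    """Combining the two servers' sums yields only the aggregate update."""
    return [(a + b) % MOD for a, b in zip(sum0, sum1)]


# Three clients with quantized (fixed-point integer) model updates.
clients = [[5, 10, 15], [1, 2, 3], [4, 4, 4]]
shares0, shares1 = zip(*(share(u) for u in clients))
total = reconstruct(server_sum(shares0), server_sum(shares1))
# total == [10, 16, 22], the element-wise sum; no individual update is exposed
```

Because the shares of each update are uniformly random and cancel only when combined, each server's view is statistically independent of any single client's update; this is the property that blocks inference attacks on individual updates.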
Metadata
- Category: Applications
- Publication info: Published elsewhere. Minor revision. Deep Learning and Security Workshop (DLS'21)
- Keywords: Federated Learning, Inference Attacks, Secure Computation, Data Privacy
- Contact author(s): yalame @ encrypto cs tu-darmstadt de
- History: 2021-03-27: received
- Short URL: https://ia.cr/2021/386
- License: CC BY
BibTeX
@misc{cryptoeprint:2021/386,
  author = {Hossein Fereidooni and Samuel Marchal and Markus Miettinen and Azalia Mirhoseini and Helen Möllering and Thien Duc Nguyen and Phillip Rieger and Ahmad Reza Sadeghi and Thomas Schneider and Hossein Yalame and Shaza Zeitouni},
  title = {{SAFELearn}: Secure Aggregation for private {FEderated} Learning},
  howpublished = {Cryptology {ePrint} Archive, Paper 2021/386},
  year = {2021},
  url = {https://eprint.iacr.org/2021/386}
}