
Paper 2021/386

SAFELearn: Secure Aggregation for private FEderated Learning

Hossein Fereidooni and Samuel Marchal and Markus Miettinen and Azalia Mirhoseini and Helen Möllering and Thien Duc Nguyen and Phillip Rieger and Ahmad-Reza Sadeghi and Thomas Schneider and Hossein Yalame and Shaza Zeitouni

Abstract

Federated learning (FL) is an emerging distributed machine learning paradigm that addresses critical data privacy issues by enabling clients to jointly train a global model via an aggregation server (aggregator) without revealing their training data. It thereby not only improves privacy but is also efficient, as it uses the computation power and data of potentially millions of clients for training in parallel. However, FL is vulnerable to so-called inference attacks by malicious aggregators, which can infer information about clients' data from their model updates. Secure aggregation restricts the central aggregator to learning only the summation or average of the clients' updates. Unfortunately, existing secure aggregation protocols for FL suffer from high communication and computation overheads and require many communication rounds. In this work, we present SAFELearn, a generic design for efficient private FL systems that uses secure aggregation to protect against inference attacks that require access to individual clients' model updates. It is flexibly adaptable to the efficiency and security requirements of various FL applications and can be instantiated with MPC or FHE. In contrast to previous works, we need only 2 rounds of communication in each training iteration, do not use any expensive cryptographic primitives on clients, tolerate dropouts, and do not rely on a trusted third party. We implement and benchmark an instantiation of our generic design with secure two-party computation. Our implementation aggregates 500 models with more than 300K parameters in less than 0.5 seconds.
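To make the secure-aggregation idea concrete, below is a minimal Python sketch, not the paper's implementation, of additive secret sharing between two non-colluding aggregation servers in the spirit of a two-party instantiation: each client splits its fixed-point-encoded model update into two random shares, each server sums only the shares it receives, and only the combination of both servers' sums reveals the aggregate. The modulus, fixed-point encoding, and helper names (encode, share_update) are illustrative assumptions.

```python
# Minimal sketch (assumptions, not SAFELearn's actual code) of two-server
# secure aggregation via additive secret sharing: neither server ever sees an
# individual client's update, yet the combined result equals the plain average.
import numpy as np

MODULUS = 2 ** 32   # ring Z_{2^32} for arithmetic shares (assumption)
SCALE = 2 ** 16     # fixed-point scaling factor for float updates (assumption)

def encode(update: np.ndarray) -> np.ndarray:
    """Encode a float update as fixed-point ring elements."""
    return (np.round(update * SCALE).astype(np.int64) % MODULUS).astype(np.uint64)

def decode(x: np.ndarray, n_clients: int) -> np.ndarray:
    """Decode aggregated ring elements back to a float average."""
    signed = x.astype(np.int64)
    signed[signed >= MODULUS // 2] -= MODULUS   # map back to signed range
    return signed / SCALE / n_clients

def share_update(update: np.ndarray):
    """Split an encoded update into two additive shares, one per server."""
    enc = encode(update)
    share0 = np.random.randint(0, MODULUS, size=enc.shape, dtype=np.uint64)
    share1 = (enc - share0) % MODULUS
    return share0, share1

# Toy run: 3 clients, a 5-parameter "model".
rng = np.random.default_rng(0)
updates = [rng.normal(size=5) for _ in range(3)]

server0 = np.zeros(5, np.uint64)
server1 = np.zeros(5, np.uint64)
for u in updates:
    s0, s1 = share_update(u)            # clients send one share to each server
    server0 = (server0 + s0) % MODULUS  # each server sees only random-looking shares
    server1 = (server1 + s1) % MODULUS

aggregate = (server0 + server1) % MODULUS   # servers combine their share sums
print(decode(aggregate, len(updates)))      # ~ element-wise average of the updates
print(np.mean(updates, axis=0))             # plaintext reference for comparison
```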


Metadata
Available format(s)
PDF
Category
Applications
Publication info
Published elsewhere. Minor revision. Deep Learning and Security Workshop (DLS'21)
Keywords
Federated Learning, Inference Attacks, Secure Computation, Data Privacy
Contact author(s)
yalame @ encrypto cs tu-darmstadt de
History
2021-03-27: received
Short URL
https://ia.cr/2021/386
License
Creative Commons Attribution
CC BY