Cryptology ePrint Archive: Report 2021/386

SAFELearn: Secure Aggregation for private FEderated Learning

Hossein Fereidooni and Samuel Marchal and Markus Miettinen and Azalia Mirhoseini and Helen Möllering and Thien Duc Nguyen and Phillip Rieger and Ahmad-Reza Sadeghi and Thomas Schneider and Hossein Yalame and Shaza Zeitouni

Abstract: Federated learning (FL) is an emerging distributed machine learning paradigm that addresses critical data privacy issues in machine learning by enabling clients, via an aggregation server (aggregator), to jointly train a global model without revealing their training data. It thereby not only improves privacy but is also efficient, as it uses the computation power and data of potentially millions of clients for training in parallel.

However, FL is vulnerable to so-called inference attacks by malicious aggregators, which can infer information about clients' data from their model updates. Secure aggregation restricts the central aggregator to learning only the summation or average of the clients' updates. Unfortunately, existing secure aggregation protocols for FL suffer from high communication and computation overheads and require many communication rounds.

In this work, we present SAFELearn, a generic design for efficient private FL systems that uses secure aggregation to protect against inference attacks that require access to individual clients' model updates. It is flexibly adaptable to the efficiency and security requirements of various FL applications and can be instantiated with MPC or FHE. In contrast to previous works, we need only 2 rounds of communication in each training iteration, do not use any expensive cryptographic primitives on clients, tolerate dropouts, and do not rely on a trusted third party. We implement and benchmark an instantiation of our generic design with secure two-party computation. Our implementation aggregates 500 models with more than 300K parameters in less than 0.5 seconds.
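To illustrate the core idea behind secure aggregation (the aggregator learns only the sum of client updates, never an individual update), here is a minimal sketch using additive secret sharing over a finite ring between two non-colluding servers. This is an illustrative toy, not the paper's actual two-party computation protocol: the function names and the three-client example are hypothetical, and real systems would fixed-point encode floating-point model parameters before sharing.

```python
# Toy sketch of secure aggregation via additive secret sharing over
# Z_{2^32}. Each client splits its (integer-encoded) update vector into
# two shares, one per non-colluding server. Each share alone is
# uniformly random, so neither server learns an individual update; yet
# combining the servers' aggregated shares yields the sum of all updates.
import secrets

MOD = 2 ** 32  # ring Z_{2^32}; real FL would fixed-point encode floats

def share(update):
    """Split an integer vector into two additive shares modulo MOD."""
    s0 = [secrets.randbelow(MOD) for _ in update]
    s1 = [(x - r) % MOD for x, r in zip(update, s0)]
    return s0, s1

def aggregate(share_vectors):
    """Component-wise modular sum of share vectors (run by each server)."""
    total = [0] * len(share_vectors[0])
    for vec in share_vectors:
        total = [(t + v) % MOD for t, v in zip(total, vec)]
    return total

def reconstruct(agg0, agg1):
    """Combine both servers' aggregates into the plaintext sum."""
    return [(x + y) % MOD for x, y in zip(agg0, agg1)]

# Hypothetical example: three clients with toy integer updates
updates = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
shares = [share(u) for u in updates]
agg0 = aggregate([s0 for s0, _ in shares])  # held by server 0
agg1 = aggregate([s1 for _, s1 in shares])  # held by server 1
assert reconstruct(agg0, agg1) == [12, 15, 18]  # element-wise sum
```

Note that each client performs only cheap modular arithmetic, matching the design goal of avoiding expensive cryptographic primitives on clients.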

Category / Keywords: applications / Federated Learning, Inference Attacks, Secure Computation, Data Privacy

Original Publication (with minor differences): Deep Learning and Security Workshop (DLS'21)

Date: received 23 Mar 2021

Contact author: yalame at encrypto cs tu-darmstadt de

Available format(s): PDF | BibTeX Citation

Note: SAFELearn is flexibly adaptable to the efficiency and security requirements of various FL applications and can be instantiated with MPC or FHE. SAFELearn only needs 2 rounds of communication in each training iteration, does not use any expensive cryptographic primitives on clients, tolerates dropouts, and does not rely on a trusted third party.

Version: 20210327:071223

Short URL: ia.cr/2021/386
