However, FL is vulnerable to so-called inference attacks, in which a malicious aggregator infers information about clients' data from their model updates. Secure aggregation restricts the central aggregator to learning only the sum or average of the clients' updates. Unfortunately, existing secure aggregation protocols for FL suffer from high communication and computation overheads and require many communication rounds.
In this work, we present SAFELearn, a generic design for efficient private FL systems that uses secure aggregation to protect against inference attacks that require access to individual clients' model updates. It is flexibly adaptable to the efficiency and security requirements of various FL applications and can be instantiated with MPC or FHE. In contrast to previous works, we need only 2 rounds of communication per training iteration, do not use any expensive cryptographic primitives on clients, tolerate dropouts, and do not rely on a trusted third party. We implement and benchmark an instantiation of our generic design with secure two-party computation. Our implementation aggregates 500 models with more than 300K parameters in less than 0.5 seconds.
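For intuition only, the following minimal Python sketch illustrates how additive secret sharing between two non-colluding aggregation servers lets them compute nothing beyond the sum of the clients' updates, which is the core idea behind a two-party-computation instantiation of secure aggregation. The ring size Q, the integer encoding, and all function names are illustrative assumptions, not the paper's actual protocol.

```python
# Sketch of secure aggregation via additive secret sharing over a ring Z_Q.
# Each server sees only uniformly random shares; reconstructing the two
# aggregate shares reveals only the sum of all clients' updates.
import secrets

Q = 2**32  # ring size for integer-encoded model updates (assumption)

def share(update):
    """Split a client's integer-encoded update vector into two additive shares."""
    s0 = [secrets.randbelow(Q) for _ in update]
    s1 = [(u - r) % Q for u, r in zip(update, s0)]
    return s0, s1  # s0 goes to server 0, s1 to server 1

def server_aggregate(received_shares):
    """Each server locally sums the shares it received from all clients."""
    agg = [0] * len(received_shares[0])
    for sh in received_shares:
        agg = [(a + s) % Q for a, s in zip(agg, sh)]
    return agg

# Toy example with three clients and three-parameter updates.
updates = [[5, 17, 3], [2, 8, 11], [9, 1, 6]]
to_s0, to_s1 = zip(*(share(u) for u in updates))
agg0 = server_aggregate(list(to_s0))
agg1 = server_aggregate(list(to_s1))
# Combining the two aggregate shares yields only the element-wise sum.
total = [(a + b) % Q for a, b in zip(agg0, agg1)]
assert total == [16, 26, 20]
```

Dividing the reconstructed sum by the number of contributing clients then gives the average update, so the aggregator never needs to see any individual contribution.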
Category / Keywords: applications / Federated Learning, Inference Attacks, Secure Computation, Data Privacy
Original Publication (with minor differences): Deep Learning and Security Workshop (DLS'21)
Date: received 23 Mar 2021
Contact author: yalame at encrypto cs tu-darmstadt de
Short URL: ia.cr/2021/386