Paper 2022/714

MicroFedML: Privacy Preserving Federated Learning for Small Weights

Yue Guo, J.P. Morgan AI Research
Antigoni Polychroniadou, J.P. Morgan AI Research
Elaine Shi, Carnegie Mellon University
David Byrd, Bowdoin College
Tucker Balch, J.P. Morgan AI Research
Abstract

Secure aggregation of user private data with the aid of an untrusted server provides strong privacy guarantees and has been well studied in the context of privacy-preserving federated learning. An important problem in privacy-preserving federated learning with users whose computation and wireless network resources are constrained is the computation and communication overhead, which wastes bandwidth, increases training time, and can even impact model accuracy if many users drop out. The seminal work of Bonawitz et al. and the work of Bell et al. constructed secure aggregation protocols that scale to a very large number of users and handle user dropouts in a federated learning setting. However, these works suffer from high round complexity (i.e., the number of times the users exchange messages with the server) and high overhead in every training iteration. In this work, we propose and implement MicroFedML, a new secure aggregation system with lower round complexity and lower computation overhead per training iteration. Compared to prior work, MicroFedML reduces the computational burden on the users by a factor of at least 100 for 500 users (and more as the number of users grows) and reduces the message size by a factor of 50. Our system is most suitable, and performs best, when the input domain is not too large, i.e., for small model weights. Notable examples include gradient sparsification, quantization, and weight regularization in federated learning.
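For intuition, below is a minimal Python sketch of pairwise-mask secure aggregation in the style of Bonawitz et al., not MicroFedML's actual protocol: each pair of users blinds its input with a shared random mask that cancels in the server's sum, so the server learns the aggregate of the small (e.g., quantized) inputs but no individual value. The modulus MOD, the toy inputs, and the centrally sampled masks are illustrative assumptions; a real protocol derives masks from pairwise key agreement and must additionally handle dropouts.

# Minimal sketch of pairwise-mask secure aggregation (in the style of
# Bonawitz et al.; NOT MicroFedML's actual protocol). Every pair of
# users (i, j) shares a random mask: user i adds it, user j subtracts
# it, so all masks cancel in the server's sum.
import secrets

MOD = 2**16           # small input domain, e.g., quantized model weights
NUM_USERS = 5

# Toy inputs: one small quantized weight per user (illustrative only).
inputs = [secrets.randbelow(100) for _ in range(NUM_USERS)]

# Pairwise masks; a real protocol derives these from pairwise key
# agreement (e.g., Diffie-Hellman) instead of a trusted sampler.
masks = {(i, j): secrets.randbelow(MOD)
         for i in range(NUM_USERS) for j in range(i + 1, NUM_USERS)}

def masked_input(i: int) -> int:
    """User i's single message to the server: input plus/minus masks."""
    y = inputs[i]
    for j in range(NUM_USERS):
        if i < j:
            y += masks[(i, j)]    # i holds the '+' side of this pair
        elif j < i:
            y -= masks[(j, i)]    # and the '-' side for earlier users
    return y % MOD

# The server only ever sees masked values, yet their sum equals the true
# aggregate because each mask appears exactly once with '+' and once '-'.
server_sum = sum(masked_input(i) for i in range(NUM_USERS)) % MOD
assert server_sum == sum(inputs) % MOD
print("aggregate of all user inputs:", server_sum)

The sketch also shows why a bounded input domain matters in this setting: the aggregate is recovered correctly only if the true sum stays below the modulus.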

Metadata
Available format(s)
PDF
Category
Cryptographic protocols
Publication info
Preprint.
Keywords
secure aggregation, federated learning, privacy
Contact author(s)
yue guo @ jpmchase com
antigoni polychroniadou @ jpmorgan com
runting @ gmail com
d byrd @ bowdoin edu
tucker balch @ jpmchase com
History
2022-06-06: revised
2022-06-05: received
Short URL
https://ia.cr/2022/714
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2022/714,
      author = {Yue Guo and Antigoni Polychroniadou and Elaine Shi and David Byrd and Tucker Balch},
      title = {MicroFedML: Privacy Preserving Federated Learning for Small Weights},
      howpublished = {Cryptology ePrint Archive, Paper 2022/714},
      year = {2022},
      note = {\url{https://eprint.iacr.org/2022/714}},
      url = {https://eprint.iacr.org/2022/714}
}