Paper 2017/281
Practical Secure Aggregation for Privacy Preserving Machine Learning
Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H. Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth
Abstract
We design a novel, communication-efficient, failure-robust protocol for secure aggregation of high-dimensional data. Our protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner (i.e. without learning each user's individual contribution), and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network. We prove the security of our protocol in the honest-but-curious and malicious settings, and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time. We evaluate the efficiency of our protocol and show, by complexity analysis and a concrete implementation, that its runtime and communication overhead remain low even on large data sets and client pools. For 16-bit input values, our protocol offers $1.73\times$ communication expansion for $2^{10}$ users and $2^{20}$-dimensional vectors, and $1.98\times$ expansion for $2^{14}$ users and $2^{24}$-dimensional vectors over sending data in the clear.
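As a rough illustration of the core idea, the minimal Python sketch below shows the pairwise-masking principle behind such secure aggregation: each pair of users derives a shared random mask, one user of the pair adds it and the other subtracts it, so the masks cancel in the server's sum and only the aggregate is revealed. This is not the authors' full protocol; helper names such as `derive_pair_seed` and `mask_from_seed` are illustrative stand-ins, and the real construction additionally uses key agreement, Shamir secret sharing, and double masking to remain secure when users drop out.

```python
# Minimal sketch of pairwise masking for secure aggregation (illustrative only).
# Each pair of users shares a mask; one adds it, the other subtracts it, so the
# masks cancel when the server sums the blinded vectors.

import random

VECTOR_LEN = 8       # dimension of each user's input vector (illustrative)
MODULUS = 1 << 16    # 16-bit inputs summed modulo 2^16, as in the paper's setting

def derive_pair_seed(u, v):
    # Stand-in for the pairwise seed users u and v would agree on via key exchange.
    return hash((min(u, v), max(u, v), "pair-seed"))

def mask_from_seed(seed):
    # Stand-in for PRG expansion of the shared seed into a full-length mask vector.
    rng = random.Random(seed)
    return [rng.randrange(MODULUS) for _ in range(VECTOR_LEN)]

def masked_input(u, x_u, all_users):
    # User u blinds its vector: add the pair mask when u < v, subtract when u > v.
    y = list(x_u)
    for v in all_users:
        if v == u:
            continue
        mask = mask_from_seed(derive_pair_seed(u, v))
        sign = 1 if u < v else -1
        y = [(yi + sign * mi) % MODULUS for yi, mi in zip(y, mask)]
    return y

# Toy run: three users; the server sees and sums only the masked vectors.
users = [1, 2, 3]
inputs = {u: [random.randrange(100) for _ in range(VECTOR_LEN)] for u in users}
masked = [masked_input(u, inputs[u], users) for u in users]

server_sum = [sum(col) % MODULUS for col in zip(*masked)]
true_sum = [sum(col) % MODULUS for col in zip(*inputs.values())]
assert server_sum == true_sum  # pairwise masks cancel, revealing only the sum
```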
Metadata
- Available format(s)
- PDF
- Category
- Cryptographic protocols
- Publication info
- Preprint. MINOR revision.
- Contact author(s)
- karn @ google com
- History
- 2018-03-16: last of 3 revisions
- 2017-03-30: received
- Short URL
- https://ia.cr/2017/281
- License
- CC BY
BibTeX
@misc{cryptoeprint:2017/281,
  author = {Keith Bonawitz and Vladimir Ivanov and Ben Kreuter and Antonio Marcedone and H. Brendan McMahan and Sarvar Patel and Daniel Ramage and Aaron Segal and Karn Seth},
  title = {Practical Secure Aggregation for Privacy Preserving Machine Learning},
  howpublished = {Cryptology {ePrint} Archive, Paper 2017/281},
  year = {2017},
  url = {https://eprint.iacr.org/2017/281}
}