Paper 2019/140

CodedPrivateML: A Fast and Privacy-Preserving Framework for Distributed Machine Learning

Jinhyun So, Basak Guler, A. Salman Avestimehr, and Payman Mohassel

Abstract

How can we train a machine learning model while keeping the data private and secure? We present CodedPrivateML, a fast and scalable approach to this critical problem. CodedPrivateML keeps both the data and the model information-theoretically private, while allowing efficient parallelization of training across distributed workers. We characterize CodedPrivateML's privacy threshold and prove its convergence for logistic (and linear) regression. Furthermore, via experiments over Amazon EC2, we demonstrate that CodedPrivateML provides an order-of-magnitude speedup (up to $\sim 34\times$) over state-of-the-art cryptographic approaches.
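
The information-theoretic privacy guarantee comes from quantizing the real-valued dataset into a finite field and encoding it across workers with Lagrange coded computing, so that no subset of up to T colluding workers learns anything about the data. As a rough illustration only (not the authors' implementation), the Python sketch below shows the underlying Shamir-style secret-sharing primitive; the field size P, quantization scale, worker counts, and all function names are illustrative assumptions.

import numpy as np

P = 2**13 - 1  # small illustrative prime field GF(P); a deployment would use a larger prime

def quantize(X, scale=2**6):
    # Fixed-point quantization: map reals to field elements (negatives wrap mod P).
    return np.mod(np.round(X * scale).astype(np.int64), P)

def share(Xq, n_workers, t, rng):
    # Shamir-style sharing: worker i receives f(alpha_i), where
    # f(z) = Xq + R_1*z + ... + R_t*z^t with uniformly random R_j,
    # so any t shares are statistically independent of Xq.
    alphas = list(range(1, n_workers + 1))  # distinct nonzero evaluation points
    coeffs = [Xq] + [rng.integers(0, P, size=Xq.shape) for _ in range(t)]
    shares = []
    for a in alphas:
        s = np.zeros_like(Xq)
        for j, C in enumerate(coeffs):
            s = np.mod(s + C * pow(a, j, P), P)
        shares.append(s)
    return alphas, shares

def reconstruct(alphas, shares):
    # Lagrange interpolation at z = 0 recovers the secret from any t+1 shares.
    secret = np.zeros_like(shares[0])
    for i, a_i in enumerate(alphas):
        num, den = 1, 1
        for j, a_j in enumerate(alphas):
            if i != j:
                num = (num * (-a_j)) % P
                den = (den * (a_i - a_j)) % P
        lag = (num * pow(den, -1, P)) % P  # num/den mod P (Python 3.8+)
        secret = np.mod(secret + shares[i] * lag, P)
    return secret

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))  # toy dataset
Xq = quantize(X)
alphas, shares = share(Xq, n_workers=5, t=2, rng=rng)
# Any t = 2 shares reveal nothing; t + 1 = 3 suffice to decode exactly.
assert np.array_equal(reconstruct(alphas[:3], shares[:3]), Xq)

Lagrange coded computing generalizes this primitive: it encodes the data so that workers can compute directly on their coded shares (here, gradient computations), rather than merely storing them.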

Metadata
Available format(s)
PDF
Category
Applications
Publication info
Published elsewhere. arXiv:1902.00641
Keywords
privacy-preserving machine learning, information-theoretic privacy
Contact author(s)
jinhyuns @ usc edu
bguler @ usc edu
History
2019-02-14: received
Short URL
https://ia.cr/2019/140
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2019/140,
      author = {Jinhyun So and Basak Guler and A. Salman Avestimehr and Payman Mohassel},
      title = {{CodedPrivateML}: A Fast and Privacy-Preserving Framework for Distributed Machine Learning},
      howpublished = {Cryptology {ePrint} Archive, Paper 2019/140},
      year = {2019},
      url = {https://eprint.iacr.org/2019/140}
}