Paper 2021/835
Practical, Label Private Deep Learning Training based on Secure Multiparty Computation and Differential Privacy
Sen Yuan, Milan Shen, Ilya Mironov, and Anderson C. A. Nascimento
Abstract
Secure Multiparty Computation (MPC) is an invaluable tool for training machine learning models when the training data cannot be directly accessed by the model trainer. Unfortunately, complex algorithms, such as deep learning models, incur computational overheads that are orders of magnitude larger when executed under MPC protocols. In this contribution, we study how to efficiently train, using MPC, an important class of machine learning problems in which the features are known to one of the computing parties and only the labels are private. We propose new protocols that combine differential privacy (DP) and MPC in order to train a deep learning model privately and efficiently in this scenario. More specifically, we release differentially private information during the MPC computation to dramatically reduce the training time. None of the released information compromises the privacy of the labels at the individual level. Our protocols achieve running times that are orders of magnitude better than a straightforward use of MPC, at a moderate cost in model accuracy.
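To make the idea concrete, the sketch below illustrates in plain NumPy the kind of differentially private release the abstract alludes to: the label-dependent part of a cross-entropy gradient is clipped per example and perturbed with Gaussian noise, DP-SGD style, before being revealed to the feature-holding party. This is a minimal sketch, not the paper's actual construction: in the authors' protocols the label-dependent computation would run under MPC over secret-shared labels, whereas here everything is in the clear, and the function name `dp_last_layer_gradient` and the defaults `clip_norm=1.0` and `noise_multiplier=1.1` are illustrative assumptions.

```python
import numpy as np

def dp_last_layer_gradient(features, logits, labels,
                           clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Gaussian-mechanism release of a final-layer gradient (DP-SGD style).

    Illustrative only: in the paper's setting the label-dependent part of
    this computation would be performed inside MPC over secret-shared
    labels. Names and parameter defaults are assumptions, not the paper's.
    """
    rng = rng or np.random.default_rng(0)
    n, k = logits.shape
    # Numerically stable softmax probabilities.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Error signal probs - one_hot(labels): the only label-dependent term
    # a cross-entropy backward pass needs.
    err = probs.copy()
    err[np.arange(n), labels] -= 1.0
    # Per-example gradient of the loss w.r.t. the last-layer weights:
    # the outer product of each feature vector with its error signal.
    per_ex = features[:, :, None] * err[:, None, :]          # shape (n, d, k)
    # Clip each example's gradient norm to bound its sensitivity.
    norms = np.linalg.norm(per_ex.reshape(n, -1), axis=1)
    per_ex *= np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))[:, None, None]
    # Sum over the batch and add Gaussian noise calibrated to clip_norm.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=per_ex.shape[1:])
    return per_ex.sum(axis=0) + noise

# Usage sketch: one noisy SGD step on a softmax layer over public features.
rng = np.random.default_rng(1)
X = rng.normal(size=(32, 10))        # public features, known to one party
y = rng.integers(0, 3, size=32)      # private labels (secret-shared in MPC)
W = np.zeros((10, 3))
g = dp_last_layer_gradient(X, X @ W, y)
W -= 0.1 * g / len(X)
```

Because only the noised aggregate crosses the trust boundary, the expensive forward and backward passes over the features can run in the clear, which is the source of the speedup the abstract claims over training the entire model under MPC.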
Metadata
- Category: Cryptographic protocols
- Publication info: Preprint. MINOR revision.
- Keywords: MPC, Differential Privacy, Privacy Preserving Machine Learning, DP-SGD
- Contact author(s): andclay @ uw edu
- History
  - 2021-06-29: last of 3 revisions
  - 2021-06-21: received
- Short URL: https://ia.cr/2021/835
- License: CC BY
BibTeX

@misc{cryptoeprint:2021/835,
  author       = {Sen Yuan and Milan Shen and Ilya Mironov and Anderson C. A. Nascimento},
  title        = {Practical, Label Private Deep Learning Training based on Secure Multiparty Computation and Differential Privacy},
  howpublished = {Cryptology {ePrint} Archive, Paper 2021/835},
  year         = {2021},
  url          = {https://eprint.iacr.org/2021/835}
}