Paper 2022/933

Secure Quantized Training for Deep Learning

Marcel Keller, CSIRO's Data61
Ke Sun, CSIRO's Data61
Abstract

We implement the training of neural networks in secure multi-party computation (MPC) using the quantization commonly used in that setting. We are the first to present an MNIST classifier purely trained in MPC that comes within 0.2% of the accuracy of the same convolutional neural network trained via plaintext computation. More concretely, we have trained a network with two convolutional and two dense layers to 99.2% accuracy in 3.5 hours (under one hour for 99% accuracy). We have also implemented AlexNet for CIFAR-10, which converges in a few hours. We develop novel protocols for exponentiation and inverse square root. Finally, we present experiments in a range of MPC security models for up to ten parties, with both honest and dishonest majority as well as both semi-honest and malicious security.
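
As a rough illustration of the quantization the abstract refers to, the plaintext sketch below encodes real numbers as integers with F fractional bits, rescales products by truncation, and approximates the inverse square root by Newton iteration. This is not the paper's MPC protocol: the bit width F, the initial guess, and the iteration count are illustrative assumptions, and an actual secure protocol would obtain the initial approximation obliviously on secret-shared values rather than from a cleartext float.

# Minimal plaintext sketch of fixed-point quantization (illustrative only).
import numpy as np

F = 16  # number of fractional bits (assumed, not taken from the paper)

def encode(x):
    """Quantize a real number to a fixed-point integer with F fractional bits."""
    return np.int64(np.round(x * (1 << F)))

def decode(a):
    """Map a fixed-point integer back to a real number."""
    return a / float(1 << F)

def fp_mul(a, b):
    """Fixed-point multiplication: multiply integers, then truncate F bits."""
    return (a * b) >> F

def fp_inv_sqrt(a, iters=4):
    """Approximate 1/sqrt(x) with Newton iteration y <- y * (3 - x*y^2) / 2.
    The starting guess here is derived in the clear, purely for illustration."""
    y = encode(1.0 / np.sqrt(decode(a)) * 0.9)   # deliberately rough start
    three = encode(3.0)
    for _ in range(iters):
        t = fp_mul(a, fp_mul(y, y))              # x * y^2 in fixed point
        y = fp_mul(y, three - t) >> 1            # y * (3 - x*y^2) / 2
    return y

if __name__ == "__main__":
    a = encode(2.0)
    print(decode(fp_mul(a, a)))       # ~4.0
    print(decode(fp_inv_sqrt(a)))     # ~0.7071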

Metadata
Available format(s)
PDF
Category
Implementation
Publication info
Published elsewhere. International Conference on Machine Learning
Keywords
Privacy-preserving machine learning, secure multi-party computation
Contact author(s)
mks keller @ gmail com
ke sun @ data61 csiro au
History
2022-07-18: approved
2022-07-18: received
Short URL
https://ia.cr/2022/933
License
Creative Commons Attribution-NonCommercial-ShareAlike
CC BY-NC-SA

BibTeX

@misc{cryptoeprint:2022/933,
      author = {Marcel Keller and Ke Sun},
      title = {Secure Quantized Training for Deep Learning},
      howpublished = {Cryptology ePrint Archive, Paper 2022/933},
      year = {2022},
      note = {\url{https://eprint.iacr.org/2022/933}},
      url = {https://eprint.iacr.org/2022/933}
}