Paper 2018/442
SecureNN: Efficient and Private Neural Network Training
Sameer Wagh, Divya Gupta, and Nishanth Chandran
Abstract
Neural Networks (NN) provide a powerful method for machine learning training and inference. To train effectively, it is desirable for multiple parties to combine their data -- however, doing so conflicts with data privacy. In this work, we provide novel three-party secure computation protocols for various NN building blocks such as matrix multiplication, convolutions, Rectified Linear Units, Maxpool, and normalization. This enables us to construct three-party secure protocols for training and inference of several NN architectures such that no single party learns any information about the data. Experimentally, we implement our system over Amazon EC2 servers in different settings.
Our work advances the state-of-the-art of secure computation for neural networks in three ways:
\begin{enumerate}
\item Scalability: We are the first work to provide neural network training on Convolutional Neural Networks (CNNs) that have an accuracy of
\end{enumerate}
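The page itself gives no protocol details, but SecureNN's building blocks operate on values that are 2-out-of-2 additively secret-shared over the ring Z_{2^64}, with linear operations computed locally on shares. A minimal sketch of that sharing idea (the helper names `share` and `reconstruct` are illustrative, not from the paper):

```python
import secrets

L = 2 ** 64  # SecureNN works over the ring Z_{2^64}

def share(x):
    """Split x into two additive shares with x = (x0 + x1) mod L."""
    x0 = secrets.randbelow(L)
    x1 = (x - x0) % L
    return x0, x1

def reconstruct(x0, x1):
    """Recombine the two shares to recover x."""
    return (x0 + x1) % L

# Addition is local: each party adds its own shares, no interaction needed.
a0, a1 = share(3)
b0, b1 = share(5)
assert reconstruct((a0 + b0) % L, (a1 + b1) % L) == 8
```

Each share on its own is uniformly random, so neither of the two share-holding parties learns anything about the underlying value; the paper's contribution is efficient three-party protocols for the non-linear steps (ReLU, Maxpool, comparisons) on such shares.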
Metadata
- Available format(s)
- PDF
- Category
- Cryptographic protocols
- Publication info
- Published elsewhere. 19th Privacy Enhancing Technologies Symposium (PETS 2019)
- Keywords
- secure computation, neural network training, information-theoretic security
- Contact author(s)
- nichandr @ microsoft com, t-digu @ microsoft com, snwagh @ gmail com
- History
- 2019-03-08: revised
- 2018-05-14: received
- See all versions
- Short URL
- https://ia.cr/2018/442
- License
- CC BY
BibTeX
@misc{cryptoeprint:2018/442,
  author = {Sameer Wagh and Divya Gupta and Nishanth Chandran},
  title = {{SecureNN}: Efficient and Private Neural Network Training},
  howpublished = {Cryptology {ePrint} Archive, Paper 2018/442},
  year = {2018},
  url = {https://eprint.iacr.org/2018/442}
}