Paper 2017/452
Oblivious Neural Network Predictions via MiniONN transformations
Jian Liu, Mika Juuti, Yao Lu, and N. Asokan
Abstract
Machine learning models hosted in a cloud service are increasingly popular but risk privacy: clients sending prediction requests to the service need to disclose potentially sensitive information. In this paper, we explore the problem of privacy-preserving predictions: after each prediction, the server learns nothing about the clients' inputs, and the clients learn nothing about the model. We present MiniONN, the first approach for transforming an existing neural network into an oblivious neural network that supports privacy-preserving predictions with reasonable efficiency. Unlike prior work, MiniONN requires no change to how models are trained. To this end, we design oblivious protocols for commonly used operations in neural network prediction models. We show that MiniONN outperforms existing work in terms of response latency and message sizes. We demonstrate the wide applicability of MiniONN by transforming several typical neural network models trained on standard datasets.
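Oblivious protocols of this kind are typically built on additive secret sharing, where each intermediate value is split between client and server so that neither party sees it in the clear. The sketch below illustrates only that basic primitive; the modulus choice and helper names are illustrative and not taken from the paper, which additionally uses homomorphic encryption and garbled circuits for multiplications and non-linear layers.

```python
import secrets

P = 2**61 - 1  # illustrative prime modulus, not from the paper

def share(x):
    """Split x into two additive shares that sum to x mod P.
    Each share alone is uniformly random and reveals nothing about x."""
    s1 = secrets.randbelow(P)
    s2 = (x - s1) % P
    return s1, s2

def reconstruct(s1, s2):
    """Recombine the two shares."""
    return (s1 + s2) % P

# Linear operations can be performed locally on shares:
# shares of x + y are just the share-wise sums.
x, y = 42, 100
x1, x2 = share(x)
y1, y2 = share(y)
z1, z2 = (x1 + y1) % P, (x2 + y2) % P
assert reconstruct(z1, z2) == (x + y) % P
```

Multiplying two shared values (as needed for weighted sums against a private model) requires an interactive step with precomputed material, which is where MiniONN's offline phase comes in.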
Metadata
- Publication info
- Preprint. MINOR revision.
- Keywords
- privacy, machine learning, neural network predictions
- Contact author(s)
- jian liu @ aalto fi
- History
- 2017-08-03: revised
- 2017-05-23: received
- Short URL
- https://ia.cr/2017/452
- License
- CC BY
BibTeX
@misc{cryptoeprint:2017/452,
  author = {Jian Liu and Mika Juuti and Yao Lu and N. Asokan},
  title = {Oblivious Neural Network Predictions via {MiniONN} transformations},
  howpublished = {Cryptology {ePrint} Archive, Paper 2017/452},
  year = {2017},
  url = {https://eprint.iacr.org/2017/452}
}