Paper 2017/452

Oblivious Neural Network Predictions via MiniONN transformations

Jian Liu, Mika Juuti, Yao Lu, and N. Asokan


Machine learning models hosted in a cloud service are increasingly popular but raise privacy concerns: clients sending prediction requests to the service must disclose potentially sensitive information. In this paper, we explore the problem of privacy-preserving predictions: after each prediction, the server learns nothing about the client's input and the client learns nothing about the model. We present MiniONN, the first approach for transforming an existing neural network into an oblivious neural network supporting privacy-preserving predictions with reasonable efficiency. Unlike prior work, MiniONN requires no change to how models are trained. To this end, we design oblivious protocols for commonly used operations in neural network prediction models. We show that MiniONN outperforms existing work in terms of response latency and message sizes. We demonstrate the wide applicability of MiniONN by transforming several typical neural network models trained on standard datasets.
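Oblivious protocols of the kind the abstract describes are typically built on additive secret sharing, where each party holds a uniformly random share of a value and neither share alone reveals anything. The sketch below illustrates that building block only; the modulus, function names, and the local-addition example are illustrative assumptions, not the paper's actual protocol.

```python
import secrets

# Ring size for the shares; a power-of-two modulus is a common choice
# in secure-computation frameworks (assumption for illustration).
MOD = 2**32

def share(x):
    """Split x into two additive shares; each share alone is uniformly random."""
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

def reconstruct(a, b):
    """Recombine two additive shares into the original value."""
    return (a + b) % MOD

# The client secret-shares its input and sends one share to the server;
# the server sees only a uniformly random value.
client_share, server_share = share(1234)
assert reconstruct(client_share, server_share) == 1234

# Linear operations can be applied to shares locally, with no interaction:
a0, a1 = share(10)
b0, b1 = share(32)
assert reconstruct((a0 + b0) % MOD, (a1 + b1) % MOD) == 42
```

Multiplications (and hence full neural-network layers) require additional interaction between the parties, which is where protocol-specific machinery comes in.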

Publication info
Preprint. Minor revision.
Keywords: privacy, machine learning, neural network predictions
Contact author(s)
jian liu @ aalto fi
2017-08-03: revised
2017-05-23: received
License: Creative Commons Attribution


@misc{cryptoeprint:2017/452,
      author = {Jian Liu and Mika Juuti and Yao Lu and N. Asokan},
      title = {Oblivious Neural Network Predictions via MiniONN transformations},
      howpublished = {Cryptology ePrint Archive, Paper 2017/452},
      year = {2017},
      note = {\url{}},
      url = {}
}