
Paper 2017/1114

Fast Homomorphic Evaluation of Deep Discretized Neural Networks

Florian Bourse, Michele Minelli, Matthias Minihold, and Pascal Paillier

Abstract

The rise of machine learning, and most particularly of deep neural networks, multiplies scenarios where one faces a privacy dilemma: either sensitive user data must be revealed to the entity that evaluates the cognitive model (e.g. in the Cloud), or the model itself must be revealed to the user so that the evaluation can take place locally. The use of homomorphic encryption promises to reconcile these conflicting interests in the Cloud-based scenario: the computation is performed remotely and homomorphically on an encrypted input, and the user decrypts the returned result. A typical task is that of classifying input patterns, and a number of works have already attempted to implement homomorphic classification based on Somewhat Homomorphic Encryption. The resulting running times are not only disappointing but also degrade quickly with the number of layers in the network. Clearly, this approach is not suited to deep neural networks, which are composed of tens or possibly hundreds of layers. This paper achieves unprecedentedly fast, scale-invariant homomorphic evaluation of neural networks. Scale-invariance here means that the computation carried out by every neuron in the network is independent of the total number of neurons and layers, thus opening the way to privacy-preserving applications of deep neural networks. We refine the recent R/LWE-based Torus-FHE construction by Chillotti et al. (ASIACRYPT 2016) and make use of its efficient bootstrapping to refresh ciphertexts propagated throughout the network. For our techniques to be applicable, we require the neural network to be discretized, meaning that signals are binarized values in $\{-1, 1\}$, weights are signed integers in a prescribed interval, and neurons are activated by the sign function. We show how to train such networks using a specifically designed backpropagation algorithm. We report experimental results on the MNIST dataset and showcase a discretized neural network that recognizes handwritten digits with over 92% accuracy and is evaluated homomorphically in about 0.88 seconds on a single core of an average-level laptop. We believe that this work can help bridge the gap between machine learning's capabilities and its practical, efficient, and privacy-preserving implementation.
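
To make the discretized model concrete, here is a minimal plaintext sketch (not the authors' code; the function names and layer sizes are hypothetical) of the forward pass the abstract describes: inputs are binarized to $\{-1, 1\}$, weights are small signed integers drawn from a prescribed interval, and every neuron computes an integer weighted sum followed by the sign activation.

    import numpy as np

    def sign(x):
        # The paper's activation function: non-negative sums map to +1, negative sums to -1.
        return np.where(x >= 0, 1, -1)

    def dinn_forward(x, layers):
        # x: input vector with entries in {-1, +1}
        # layers: list of integer weight matrices, one per layer
        for W in layers:
            x = sign(W @ x)  # integer multisum, then sign activation
        return x

    # Toy usage with random weights in a prescribed interval, e.g. [-10, 10]
    # (sizes are illustrative, not the paper's trained network):
    rng = np.random.default_rng(0)
    layers = [rng.integers(-10, 11, size=(30, 784)),  # hidden layer
              rng.integers(-10, 11, size=(10, 30))]   # output layer, one row per class
    x = rng.choice([-1, 1], size=784)                 # a binarized 28x28 MNIST-like input
    print(dinn_forward(x, layers))

In the homomorphic setting of the abstract, the same per-neuron computation is performed on encrypted inputs, with the sign obtained during TFHE's bootstrapping, which also refreshes the ciphertext after each neuron; this is what makes the evaluation scale-invariant, since a neuron's cost never depends on how many neurons or layers surround it.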

Metadata
Available format(s)
PDF
Publication info
Preprint. MAJOR revision.
Keywords
Fully Homomorphic Encryption, Neural Networks, Bootstrapping, MNIST
Contact author(s)
michele.minelli@ens.fr
History
2018-05-28: last of 2 revisions
2017-11-20: received
Short URL
https://ia.cr/2017/1114
License
Creative Commons Attribution
CC BY