
Paper 2019/131

Secure Evaluation of Quantized Neural Networks

Assi Barak and Daniel Escudero and Anders Dalskov and Marcel Keller

Abstract

Machine Learning models, and especially convolutional neural networks (CNNs), are at the heart of many day-to-day applications like image classification and speech recognition. The need to evaluate such models while preserving the privacy of the input grows as the models are used for more information-sensitive tasks like DNA analysis or facial recognition. Research on evaluating CNNs securely has been very active during the last couple of years, e.g.~Mohassel \& Zhang (S\&P'17) and Liu et al.~(CCS'17), leading to very efficient frameworks like SecureNN (ePrint:2018:442), which can evaluate some CNNs with a multiplicative overhead of only $17$--$33$ with respect to evaluation in the clear. We contribute to this line of research by introducing a technique from the Machine Learning domain, namely quantization, which allows us to scale secure evaluation of CNNs to much larger networks without the accuracy loss that may result from adapting the network to the MPC setting. Quantization is motivated by the deployment of ML models on resource-constrained devices, and we show it to be useful in the MPC setting as well. Our results show that it is possible to evaluate realistic models---specifically Google's MobileNets line of models for image recognition---within seconds. Our performance gain can be attributed mainly to two key ingredients. The first is the use of the three-party MPC protocol based on replicated secret sharing by Araki et al. (S\&P'17), whose multiplication requires sending only one number per party; moreover, it allows evaluating arbitrarily long dot products at the same communication cost as a single multiplication, which speeds up matrix multiplications considerably. The second main ingredient is the use of arithmetic modulo $2^{64}$, for which we develop a set of primitives of independent interest that are necessary for quantization, such as comparison and truncation by a secret shift.
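
The quantized arithmetic underlying this is simple enough to sketch in the clear. The following minimal NumPy illustration assumes the standard affine quantization scheme $r \approx S \cdot (q - Z)$ used by Google's quantized MobileNets; the function names and parameter values are illustrative, not the paper's code. It shows why a quantized dot product reduces to an integer dot product that fits in arithmetic modulo $2^{64}$, followed by a single rescaling, which is the step addressed by a secure truncation primitive.

import numpy as np

# Affine quantization: a real value r maps to an 8-bit integer q
# via r ~ scale * (q - zero_point).
def quantize(r, scale, zero_point):
    q = np.round(r / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.int64) - zero_point)

# A dot product of two quantized vectors reduces to an integer dot
# product (computable on shares modulo 2^64) plus one rescaling; the
# rescaling is where secure truncation comes in.
def quantized_dot(q1, s1, z1, q2, s2, z2, s_out, z_out):
    acc = np.dot(q1.astype(np.int64) - z1, q2.astype(np.int64) - z2)
    return np.clip(np.round(s1 * s2 / s_out * acc) + z_out, 0, 255).astype(np.uint8)

# Example: the quantized result tracks the real-valued dot product.
x = np.array([0.5, -1.0, 2.0]); w = np.array([1.5, 0.25, -0.75])
sx, zx = 0.02, 128; sw, zw = 0.01, 100; so, zo = 0.05, 128
qx, qw = quantize(x, sx, zx), quantize(w, sw, zw)
print(np.dot(x, w))                                                   # -1.0
print(dequantize(quantized_dot(qx, sx, zx, qw, sw, zw, so, zo), so, zo))  # -1.0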

Metadata
Available format(s)
PDF
Publication info
Preprint. MINOR revision.
Keywords
Machine Learning, Multi-Party Computation, Quantization
Contact author(s)
escudero @ cs au dk
History
2020-06-23: last of 3 revisions
2019-02-13: received
Short URL
https://ia.cr/2019/131
License
Creative Commons Attribution
CC BY