Paper 2020/042

BLAZE: Blazing Fast Privacy-Preserving Machine Learning

Arpita Patra and Ajith Suresh

Abstract

Machine learning tools have demonstrated their potential in many significant sectors such as healthcare and finance, to aid in deriving useful inferences. The sensitive and confidential nature of the data in such sectors raises natural concerns about data privacy. This motivated the area of Privacy-preserving Machine Learning (PPML), where the privacy of the data is guaranteed. Typically, ML techniques require large computing power, which leads clients with limited infrastructure to rely on the method of Secure Outsourced Computation (SOC). In the SOC setting, the computation is outsourced to a set of specialized and powerful cloud servers and the service is availed on a pay-per-use basis. In this work, we explore PPML techniques in the SOC setting for widely used ML algorithms: Linear Regression, Logistic Regression, and Neural Networks. We propose BLAZE, a blazing fast PPML framework in the three-server setting, tolerating one malicious corruption over a ring ($\mathbb{Z}_{2^{\ell}}$). BLAZE achieves the stronger security guarantee of fairness (all honest servers get the output whenever the corrupt server obtains the same). Leveraging an input-independent preprocessing phase, BLAZE has a fast input-dependent online phase relying on efficient PPML primitives such as: (i) a dot product protocol whose online-phase communication is independent of the vector size, the first of its kind in the three-server setting; (ii) a truncation method that avoids evaluating the expensive Ripple Carry Adder (RCA) circuit and achieves constant round complexity. This improves over the truncation method of ABY3 (Mohassel and Rindal, CCS 2018), which uses an RCA and consumes a round complexity of the order of the depth of the RCA (which is the same as the underlying ring size). Extensive benchmarking of BLAZE for the aforementioned ML algorithms over a 64-bit ring in both WAN and LAN settings shows massive improvements over ABY3.
Concretely, we observe improvements of up to $333\times$ for Linear Regression, $53\times$ for Logistic Regression, and $276\times$ for Neural Networks over WAN. Similarly, we show improvements of up to $2610\times$ for Linear Regression, $54\times$ for Logistic Regression, and $278\times$ for Neural Networks over LAN.
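For intuition on primitive (i), the following is a minimal single-machine sketch of the masked-evaluation idea that makes the online communication of a dot product independent of the vector length: each value $v$ is shared as a public masked value $\beta_v = v + \alpha_v$ together with additive shares of the mask $\alpha_v$, so the two online servers can locally aggregate the entire vector and exchange only one ring element each. This is a simplified semi-honest simulation with hypothetical names; it omits the third server, the actual distribution of shares across machines, and all of BLAZE's malicious-security machinery.

```python
import secrets

ELL = 64                   # ring Z_{2^ell} with ell = 64
MASK = (1 << ELL) - 1


def rand():
    return secrets.randbelow(1 << ELL)


def share(v):
    """<v> = (beta_v, alpha_1, alpha_2) with beta_v = v + alpha_1 + alpha_2."""
    a1, a2 = rand(), rand()
    return (v + a1 + a2) & MASK, a1, a2


def dot_product(xs, ys):
    """Dot product over Z_{2^64} where the online phase exchanges ONE ring
    element per server, regardless of len(xs)."""
    # Input sharing: masked values beta and additive mask shares per server.
    bx, ax1, ax2 = zip(*(share(x) for x in xs))
    by, ay1, ay2 = zip(*(share(y) for y in ys))

    # Preprocessing (input-independent in the real protocol): additive shares
    # of chi = sum_i alpha_xi * alpha_yi, and a fresh output mask alpha_z.
    chi = sum((a1 + a2) * (b1 + b2)
              for a1, a2, b1, b2 in zip(ax1, ax2, ay1, ay2)) & MASK
    chi1 = rand()
    chi2 = (chi - chi1) & MASK
    az1, az2 = rand(), rand()

    # Online phase: each server locally folds the whole vector into a single
    # additive share of beta_z = z + alpha_z, then sends just that element.
    m1 = (sum(b * c for b, c in zip(bx, by))
          - sum(b * a for b, a in zip(bx, ay1))
          - sum(b * a for b, a in zip(by, ax1))
          + chi1 + az1) & MASK
    m2 = (- sum(b * a for b, a in zip(bx, ay2))
          - sum(b * a for b, a in zip(by, ax2))
          + chi2 + az2) & MASK
    beta_z = (m1 + m2) & MASK            # the single-element exchange
    return (beta_z - az1 - az2) & MASK   # reconstruct z = beta_z - alpha_z
```

The local aggregation works because $z = \sum_i (\beta_{x_i} - \alpha_{x_i})(\beta_{y_i} - \alpha_{y_i})$ expands into terms each server can compute from what it already holds, plus the single preprocessed cross term $\chi$.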
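For intuition on primitive (ii), here is a minimal sketch of how fixed-point truncation can be done in constant rounds without an RCA circuit: mask the value with a random $r$, open the masked sum, shift it down, and unmask with the pre-computed $r \gg d$. This is an illustration of the general masking idea, not BLAZE's actual protocol; the parameters and the no-wraparound range restriction are assumptions made for this sketch.

```python
import secrets

ELL = 64                  # ring Z_{2^64}
MASK = (1 << ELL) - 1
D = 13                    # fractional bits to truncate (assumed for this sketch)


def truncate(x, d=D):
    """Constant-round truncation sketch for 0 <= x < 2^62: mask x, 'open'
    the masked value, shift, and unmask with r >> d.  The result equals
    x >> d up to an additive error of at most 1 (the carry out of the
    low d bits of r), independent of the ring size."""
    r = secrets.randbelow(1 << (ELL - 1))  # mask < 2^63, so x + r never wraps
    c = (x + r) & MASK                     # masked value that would be opened
    return ((c >> d) - (r >> d)) & MASK
```

For example, multiplying the fixed-point encodings of 3.25 and 1.5 (with 13 fractional bits each) yields a product with 26 fractional bits; one truncation by 13 bits restores the encoding of 4.875, up to one unit in the last place. The round cost here is a single opening, whereas an RCA-based truncation costs rounds proportional to the adder depth, i.e. the ring size.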

Note: This article is the full and extended version of an article published in The Network and Distributed System Security Symposium (NDSS) 2020. It also fixes a small bug present in one of the protocols of the earlier version.

Metadata
Available format(s)
PDF
Category
Cryptographic protocols
Publication info
Published elsewhere. Major revision. The Network and Distributed System Security Symposium (NDSS) 2020
DOI
10.14722/ndss.2020.24202
Keywords
MPC, PPML, Privacy-preserving Machine Learning, Multi-party Computation
Contact author(s)
ajith @ iisc ac in
arpita @ iisc ac in
History
2021-01-06: last of 6 revisions
2020-01-15: received
Short URL
https://ia.cr/2020/042
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2020/042,
      author = {Arpita Patra and Ajith Suresh},
      title = {BLAZE: Blazing Fast Privacy-Preserving Machine Learning},
      howpublished = {Cryptology ePrint Archive, Paper 2020/042},
      year = {2020},
      doi = {10.14722/ndss.2020.24202},
      note = {\url{https://eprint.iacr.org/2020/042}},
      url = {https://eprint.iacr.org/2020/042}
}