Paper 2019/1315

Trident: Efficient 4PC Framework for Privacy Preserving Machine Learning

Harsh Chaudhari, Rahul Rachuri, and Ajith Suresh

Abstract

Machine learning has begun to be deployed in fields such as healthcare and finance, which involve handling large amounts of sensitive data. This has propelled the need for, and growth of, privacy-preserving machine learning (PPML). We propose an actively secure four-party protocol (4PC) and a framework for PPML, showcasing its applications on four of the most widely known machine learning algorithms -- Linear Regression, Logistic Regression, Neural Networks, and Convolutional Neural Networks. Our 4PC protocol, which tolerates at most one malicious corruption, is more efficient in practice than that of Gordon et al. (ASIACRYPT 2018), since the fourth party in our protocol is inactive in the online phase except during the input-sharing and output-reconstruction stages. Concretely, our online communication is one ring element less than theirs. We use the protocol to build an efficient mixed-world framework (Trident) that switches between the Arithmetic, Boolean, and Garbled worlds. Our framework operates in the offline-online paradigm over rings and is instantiated in an outsourced setting for machine learning, where the data is secret-shared among the servers. We also propose conversions especially relevant to privacy-preserving machine learning. With the privilege of an extra honest party, we outperform the current state-of-the-art ABY3 (for three parties) in terms of both round and communication complexity. A highlight of our framework is that it uses fewer expensive circuits overall than ABY3. This is seen in our truncation technique, which does not affect the online cost of multiplication and removes the need for any circuits in the offline phase. Our B2A conversion improves by $\mathbf{7} \times$ in rounds and $\mathbf{18} \times$ in communication complexity. The practicality of our framework is demonstrated through improved benchmarks of the aforementioned algorithms against ABY3.
All protocols are implemented over a 64-bit ring in both LAN and WAN settings. Our improvements reach up to $\mathbf{187} \times$ for the training phase and $\mathbf{158} \times$ for the prediction phase over LAN and WAN.
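The outsourced setting above secret-shares the data among the servers over a 64-bit ring, and combines this with fixed-point truncation after multiplication. A minimal Python sketch of the generic textbook building blocks (plain additive sharing over $\mathbb{Z}_{2^{64}}$ and fixed-point encode/truncate in the clear -- not Trident's actual masked sharing or its truncation protocol; `FRAC_BITS` is a hypothetical precision parameter):

```python
import random

RING = 1 << 64   # the 64-bit ring Z_{2^64} used in the benchmarks
FRAC_BITS = 13   # hypothetical fixed-point fractional precision

def share(x, n=4):
    """Additively share x among n servers: shares sum to x mod 2^64."""
    shares = [random.randrange(RING) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % RING)
    return shares

def reconstruct(shares):
    """Recombine additive shares."""
    return sum(shares) % RING

def encode(v):
    """Fixed-point encode a real value as v * 2^FRAC_BITS mod 2^64."""
    return round(v * (1 << FRAC_BITS)) % RING

def decode(x):
    """Interpret a ring element as a signed fixed-point value."""
    if x >= RING // 2:
        x -= RING
    return x / (1 << FRAC_BITS)

# Multiplying two fixed-point values doubles the precision, so the
# product must be truncated back by 2^FRAC_BITS (here, in the clear).
a, b = encode(1.5), encode(-2.25)
prod = (a * b) % RING
signed = prod - RING if prod >= RING // 2 else prod
trunc = (signed >> FRAC_BITS) % RING  # arithmetic right shift
print(decode(trunc))  # -3.375
```

In the actual framework this truncation is performed on shared values without revealing the product, which is where the claimed savings over circuit-based approaches come from.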

Note: This work appeared at the 26th Annual Network and Distributed System Security Symposium (NDSS) 2020. Update: An improved version of this framework is available at https://eprint.iacr.org/2021/755

Metadata
Available format(s)
PDF
Category
Cryptographic protocols
Publication info
Published elsewhere. Major revision. The 26th Annual Network and Distributed System Security Symposium (NDSS) 2020
DOI
10.14722/ndss.2020.23005
Keywords
Secure Computation, Privacy-preserving Machine Learning, 4PC, Mixed World Conversions, Active Security
Contact author(s)
ajith @ iisc ac in
rachuri @ cs au dk
History
2021-06-08: last of 3 revisions
2019-11-17: received
Short URL
https://ia.cr/2019/1315
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2019/1315,
      author = {Harsh Chaudhari and Rahul Rachuri and Ajith Suresh},
      title = {Trident: Efficient {4PC} Framework for Privacy Preserving Machine Learning},
      howpublished = {Cryptology {ePrint} Archive, Paper 2019/1315},
      year = {2019},
      doi = {10.14722/ndss.2020.23005},
      url = {https://eprint.iacr.org/2019/1315}
}