
Paper 2020/592

SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning

Nishat Koti, Mahak Pancholi, Arpita Patra, and Ajith Suresh

Abstract

Performing machine learning (ML) computation on private data while maintaining data privacy, aka Privacy-preserving Machine Learning (PPML), is an emergent field of research. Recently, PPML has seen a visible shift towards the adoption of the Secure Outsourced Computation (SOC) paradigm due to the heavy computation that it entails. In the SOC paradigm, computation is outsourced to a set of powerful and specially equipped servers that provide service on a pay-per-use basis. In this work, we propose SWIFT, a robust PPML framework for a range of ML algorithms in the SOC setting, that guarantees output delivery to the users irrespective of any adversarial behaviour. Robustness, a highly desirable feature, encourages user participation without the fear of denial of service. At the heart of our framework lies a highly efficient, maliciously secure, three-party computation (3PC) over rings that provides guaranteed output delivery (GOD) in the honest-majority setting. To the best of our knowledge, SWIFT is the first robust and efficient PPML framework in the 3PC setting. SWIFT is as fast as (and is strictly better in some cases than) the best-known 3PC framework BLAZE (Patra et al. NDSS'20), which only achieves fairness. We extend our 3PC framework to four parties (4PC). In this regime, SWIFT is as fast as the best-known fair 4PC framework Trident (Chaudhari et al. NDSS'20) and twice as fast as the best-known robust 4PC framework FLASH (Byali et al. PETS'20). We demonstrate our framework's practical relevance by benchmarking popular ML algorithms such as Logistic Regression and deep Neural Networks such as VGG16 and LeNet, both over a 64-bit ring in a WAN setting. For deep NNs, our results testify to our claim that we provide improved security guarantees while incurring no additional overhead for 3PC and obtaining a 2x improvement for 4PC.
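
For readers unfamiliar with "3PC over rings", the sketch below illustrates the basic idea of secret-sharing data among three servers over the 64-bit ring Z_{2^64} mentioned in the abstract. It uses plain additive sharing for simplicity; SWIFT's actual protocol uses a different, maliciously secure sharing scheme with guaranteed output delivery, so this is only an illustrative assumption, not the paper's construction.

# Illustrative sketch only: additive 3-party secret sharing over Z_{2^64}.
# NOT SWIFT's sharing scheme; it merely shows computation over a 64-bit ring.
import secrets

MOD = 1 << 64  # arithmetic over the 64-bit ring Z_{2^64}

def share(x: int) -> list:
    """Split x into three additive shares that sum to x mod 2^64."""
    s0 = secrets.randbelow(MOD)
    s1 = secrets.randbelow(MOD)
    s2 = (x - s0 - s1) % MOD
    return [s0, s1, s2]

def reconstruct(shares: list) -> int:
    """Recombine all three shares; fewer than three reveal nothing about x."""
    return sum(shares) % MOD

def add_shares(a: list, b: list) -> list:
    """Linear operations are local: each server adds its own shares."""
    return [(ai + bi) % MOD for ai, bi in zip(a, b)]

if __name__ == "__main__":
    x, y = 42, 100
    xs, ys = share(x), share(y)
    assert reconstruct(add_shares(xs, ys)) == (x + y) % MOD

In a full PPML protocol, non-linear steps (multiplication, comparisons, activation functions) require interaction between the servers; the efficiency and robustness of those interactive steps are what the paper improves.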

Note: This article is the full and extended version of an article to appear in USENIX Security’21.

Metadata
Available format(s)
PDF
Category
Cryptographic protocols
Publication info
Published elsewhere. Major revision. 30th USENIX Security Symposium (USENIX Security '21)
Keywords
PPML, MPC, 3PC, 4PC, Multi-party Computation, Honest-majority, Robust, Guaranteed Output Delivery, Privacy Preserving Machine Learning
Contact author(s)
kotis @ iisc ac in
mahakp @ iisc ac in
arpita @ iisc ac in
ajith @ iisc ac in
History
2021-02-17: last of 4 revisions
2020-05-22: received
Short URL
https://ia.cr/2020/592
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2020/592,
      author = {Nishat Koti and Mahak Pancholi and Arpita Patra and Ajith Suresh},
      title = {SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning},
      howpublished = {Cryptology ePrint Archive, Paper 2020/592},
      year = {2020},
      note = {\url{https://eprint.iacr.org/2020/592}},
      url = {https://eprint.iacr.org/2020/592}
}