Paper 2022/663
SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning
Abstract
Secure multiparty computation (MPC) has been proposed to allow multiple mutually distrustful data owners to jointly train machine learning (ML) models on their combined data. However, by design, MPC protocols faithfully compute the training functionality, which the adversarial ML community has shown can both leak private information and be tampered with by poisoning attacks. In this work, we argue that model ensembles, implemented in our framework called SafeNet, are a highly MPC-amenable way to avoid many adversarial ML attacks. The natural partitioning of data amongst owners in MPC training allows this approach to be highly scalable at training time, to provide provable protection from poisoning attacks, and to provably defend against a number of privacy attacks. We demonstrate SafeNet's efficiency, accuracy, and resilience to poisoning on several machine learning datasets and models trained in end-to-end and transfer learning scenarios. For instance, SafeNet reduces backdoor attack success significantly, while achieving $39\times$ faster training and $36\times$ less communication than the four-party MPC framework of Dalskov et al. Our experiments show that ensembling retains these benefits even in many non-iid settings. The simplicity, cheap setup, and robustness properties of ensembling make it a strong first choice for training ML models privately in MPC.
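To make the ensemble idea from the abstract concrete, below is a minimal plaintext sketch of training one model per data owner's partition and predicting by majority vote. This is only an illustration of the ensemble structure, not SafeNet's MPC protocol (SafeNet evaluates the owners' models inside MPC); the function names and the use of scikit-learn's LogisticRegression are illustrative assumptions, not part of the paper's implementation.

```python
# Plaintext sketch of partitioned ensemble training and majority-vote prediction.
# Illustration only: SafeNet performs ensemble inference inside MPC, not shown here.
import numpy as np
from sklearn.linear_model import LogisticRegression


def train_local_models(owner_datasets):
    """Each data owner trains a model on its own partition only."""
    models = []
    for X_i, y_i in owner_datasets:
        models.append(LogisticRegression(max_iter=1000).fit(X_i, y_i))
    return models


def ensemble_predict(models, X):
    """Majority vote over the owners' models.

    A poisoning owner corrupts at most its own model, so as long as a
    majority of owners are honest, the per-sample vote is unaffected.
    """
    votes = np.stack([m.predict(X) for m in models])  # shape: (n_owners, n_samples)
    # Per-sample majority vote over non-negative integer class labels.
    return np.apply_along_axis(
        lambda col: np.bincount(col.astype(int)).argmax(), axis=0, arr=votes
    )
```

The design point this sketch highlights is the one the abstract relies on: because training happens independently per owner, a bounded number of poisoned partitions can only affect a bounded number of models, which the majority vote absorbs.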
Metadata
- Category: Cryptographic protocols
- Publication info: Preprint
- Keywords: Machine Learning, Poisoning Attacks, Secure Computation
- Contact author(s): chaudhari ha @ northeastern edu, jagielski @ google com, a oprea @ northeastern edu
- History:
  - 2022-09-08: last of 2 revisions
  - 2022-05-28: received
- Short URL: https://ia.cr/2022/663
- License: CC BY
BibTeX
@misc{cryptoeprint:2022/663,
      author = {Harsh Chaudhari and Matthew Jagielski and Alina Oprea},
      title = {{SafeNet}: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning},
      howpublished = {Cryptology {ePrint} Archive, Paper 2022/663},
      year = {2022},
      url = {https://eprint.iacr.org/2022/663}
}