Paper 2025/906
Covert Attacks on Machine Learning Training in Passively Secure MPC
Abstract
Secure multiparty computation (MPC) allows data owners to train machine learning models on combined data while keeping the underlying training data private. The MPC threat model either considers an adversary who passively corrupts some parties without affecting their overall behavior, or an adversary who actively modifies the behavior of corrupt parties. It has been argued that in some settings, active security is not a major concern, partly because of the potential risk of reputation loss if a party is detected cheating. In this work we show explicit, simple, and effective attacks that an active adversary can run on existing passively secure MPC training protocols, while keeping essentially zero risk of the attack being detected. The attacks we show can compromise both the integrity and privacy of the model, including attacks reconstructing exact training data. Our results challenge the belief that a threat model that does not include malicious behavior by the involved parties may be reasonable in the context of privacy-preserving machine learning (PPML), motivating the use of actively secure protocols for training.
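To illustrate why purely passive security leaves room for such covert deviations, consider additive secret sharing, a building block of many MPC training protocols. The sketch below is a generic Python example, not the paper's specific attacks; the field modulus `P` and the helpers `share`/`reconstruct` are illustrative choices. It shows that a single actively corrupt party can shift a secret-shared value by an arbitrary additive error, while the other parties still see only uniformly random shares and have nothing to check against.

```python
import secrets

# Illustrative prime modulus; real protocols fix their own field.
P = 2**61 - 1

def share(x: int, n: int = 3) -> list[int]:
    """Additively secret-share x among n parties: shares sum to x mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

x = 42
shares = share(x)
assert reconstruct(shares) == x

# An actively corrupt party deviates by adding an error e to its own share.
# A purely passively secure protocol performs no consistency checks on
# shares, so the shift goes through silently: each share is uniformly
# random either way, and no honest party can tell anything was modified.
e = 7
shares[0] = (shares[0] + e) % P
assert reconstruct(shares) == (x + e) % P  # covert additive error
```

In a training protocol, an error injected this way into an intermediate value such as a gradient can bias the trained model while leaving no trace in the protocol transcript, which is the kind of undetectable deviation the abstract describes.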
Metadata
- Available format(s): PDF
- Category: Applications
- Publication info: Preprint.
- Keywords: secure multiparty computation, privacy-preserving machine learning, adversarial attacks
- Contact author(s): jagielski @ google com, srachuri @ visa com, daniel escudero @ protonmail com, peter scholl @ cs au dk
- History: 2025-05-21 approved; 2025-05-21 received
- Short URL: https://ia.cr/2025/906
- License: CC BY
BibTeX

@misc{cryptoeprint:2025/906,
  author = {Matthew Jagielski and Rahul Rachuri and Daniel Escudero and Peter Scholl},
  title = {Covert Attacks on Machine Learning Training in Passively Secure {MPC}},
  howpublished = {Cryptology {ePrint} Archive, Paper 2025/906},
  year = {2025},
  url = {https://eprint.iacr.org/2025/906}
}