Paper 2018/854
Universal Multi-Party Poisoning Attacks
Saeed Mahloujifar, Mohammad Mahmoody, and Ameer Mohammed
Abstract
In this work, we demonstrate universal multi-party poisoning attacks that adapt and apply to any multi-party learning process with arbitrary interaction patterns between the parties. More generally, we introduce and study $(k,p)$-poisoning attacks in which an adversary controls $k\in[m]$ of the $m$ parties, and for each corrupted party $P_i$, the adversary submits some poisoned data $T'_i$ on behalf of $P_i$ that is still "$(1-p)$-close" to the correct data $T_i$ (e.g., a $1-p$ fraction of $T'_i$ is still honestly generated). We prove that for any "bad" property $B$ of the final trained hypothesis $h$ (e.g., $h$ failing on a particular test example or having "large" risk) that has an arbitrarily small constant probability of happening without the attack, there always exists a $(k,p)$-poisoning attack that increases the probability of $B$ from $\mu$ to $\mu^{1-p \cdot k/m} = \mu + \Omega(p \cdot k/m)$. Our attack only uses clean labels, and it is online. More generally, we prove that for any bounded function $f(x_1,\dots,x_n) \in [0,1]$ defined over an $n$-step random process $x = (x_1,\dots,x_n)$, an adversary who can override each of the $n$ blocks with \emph{even dependent} probability $p$ can increase the expected output by at least $\Omega(p \cdot \mathrm{Var}[f(x)])$.
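A minimal Python sketch of the block-tampering idea behind the general theorem, under toy assumptions: an 8-step Boolean process, a majority-style "bad" function $f$, and a greedy override rule that picks the block value maximizing the conditional expectation of $f$. The names `n`, `p`, `f`, `cond_exp`, and `run` are illustrative choices, not the paper's construction.

```python
import itertools
import random
import statistics

random.seed(0)

# Toy n-step Boolean process; the adversary may override each block with
# probability p, greedily steering the bounded function f toward 1.
n, p = 8, 0.2
f = lambda x: float(sum(x) > n // 2)   # "bad" event B: majority of bits are 1

def cond_exp(prefix):
    """Exact E[f(x) | first blocks = prefix], assuming the rest stay uniform."""
    rest = n - len(prefix)
    total = sum(f(list(prefix) + list(tail))
                for tail in itertools.product((0, 1), repeat=rest))
    return total / (2 ** rest)

def run(attack):
    """Sample one execution; if attack, tamper with probability p per block."""
    x = []
    for _ in range(n):
        bit = random.randint(0, 1)                  # honestly generated block
        if attack and random.random() < p:          # adversary gets to override
            bit = max((0, 1), key=lambda b: cond_exp(x + [b]))
        x.append(bit)
    return f(x)

trials = 2000
mu        = statistics.mean(run(False) for _ in range(trials))
mu_attack = statistics.mean(run(True) for _ in range(trials))
print(f"E[f] without attack ~ {mu:.3f}, under tampering ~ {mu_attack:.3f}")
```

In the multi-party setting of the abstract, an adversary controlling $k$ of $m$ parties with $(1-p)$-close data effectively gets to tamper with roughly a $p \cdot k/m$ fraction of the blocks, which is where the $\mu^{1-p \cdot k/m}$ bound comes from.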
Metadata
- Available format(s)
- Category
- Foundations
- Publication info
- Published elsewhere. Minor revision. ICML 2019
- Keywords
- Biasing, Coin-Tossing, Poisoning, Multi-party learning
- Contact author(s)
- mohammad @ virginia edu
- History
- 2021-11-04: revised
- 2018-09-20: received
- Short URL
- https://ia.cr/2018/854
- License
- CC BY
BibTeX
@misc{cryptoeprint:2018/854,
      author = {Saeed Mahloujifar and Mohammad Mahmoody and Ameer Mohammed},
      title = {Universal Multi-Party Poisoning Attacks},
      howpublished = {Cryptology {ePrint} Archive, Paper 2018/854},
      year = {2018},
      url = {https://eprint.iacr.org/2018/854}
}