
Paper 2017/950

Blockwise $p$-Tampering Attacks on Cryptographic Primitives, Extractors, and Learners

Saeed Mahloujifar and Mohammad Mahmoody

Abstract

Austrin, Chung, Mahmoody, Pass, and Seth (Crypto'14) studied the notion of bitwise $p$-tampering attacks on randomized algorithms, in which an efficient `virus' gets to control each bit of the randomness with independent probability $p$ in an online way. The work of Austrin et al. showed how to break certain `privacy primitives' (e.g., encryption, commitments, etc.) through bitwise $p$-tampering, by giving a bitwise $p$-tampering biasing attack that increases the average $E[f(U_n)]$ of any efficient function $f \colon \{0,1\}^n \to [-1,+1]$ by $\Omega(p \cdot Var[f(U_n)])$, where $Var[f(U_n)]$ is the variance of $f(U_n)$. In this work, we revisit and extend the bitwise tampering model of Austrin et al. to the blockwise setting, where blocks of randomness become tamperable with independent probability $p$. Our main result is an efficient blockwise $p$-tampering attack that biases the average $E[f(X)]$ of any efficient function $f$ mapping an arbitrary $X$ to $[-1,+1]$ by $\Omega(p \cdot Var[f(X)])$, regardless of how $X$ is partitioned into individually tamperable blocks $X=(X_1,\dots,X_n)$. Relying on previous works, our main biasing attack immediately implies efficient attacks against the privacy primitives as well as seedless multi-source extractors, in a model where the attacker gets to tamper with each block (or source) of the randomness with independent probability $p$. Further, we show how to increase the classification error of deterministic learners in the so-called `targeted poisoning' attack model under Valiant's adversarial noise. In this model, an attacker has a `target' test example $d$ in mind and wishes to increase the error of classifying $d$ while she gets to tamper with each training example with independent probability $p$ in an online way.
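
To make the tampering model concrete, below is a minimal toy simulation (in Python) of the bitwise special case: each bit of the randomness is independently tamperable with probability $p$, and an illustrative greedy tamperer, when in control of a bit, picks the value with the larger Monte-Carlo-estimated conditional expectation of $f$. This is only a sketch of the model, not the paper's actual attack or its efficiency guarantees; the test function $f$, sample counts, and parameters are arbitrary choices for illustration.

# Toy simulation of the bitwise p-tampering model (illustrative sketch only,
# NOT the paper's attack). When the tamperer controls a bit, it greedily picks
# the value whose estimated conditional expectation E[f | prefix] is larger.
import random

def estimate_conditional_mean(f, prefix, n, samples=200):
    # Estimate E[f(prefix || U)] by sampling the remaining n - len(prefix)
    # bits uniformly at random (sample count is an arbitrary choice).
    total = 0.0
    for _ in range(samples):
        suffix = [random.randint(0, 1) for _ in range(n - len(prefix))]
        total += f(prefix + suffix)
    return total / samples

def p_tampered_sample(f, n, p):
    # Generate one n-bit input; each bit is tamperable with independent
    # probability p, decided online as the bits are produced.
    x = []
    for _ in range(n):
        if random.random() < p:  # tamperer controls this bit
            mean0 = estimate_conditional_mean(f, x + [0], n)
            mean1 = estimate_conditional_mean(f, x + [1], n)
            x.append(0 if mean0 >= mean1 else 1)
        else:                    # bit stays honestly uniform
            x.append(random.randint(0, 1))
    return x

if __name__ == "__main__":
    n, p, trials = 10, 0.2, 1000
    # A simple majority-style test function f: {0,1}^n -> {-1,+1}.
    f = lambda bits: 1.0 if sum(bits) > n // 2 else -1.0
    honest = sum(f([random.randint(0, 1) for _ in range(n)])
                 for _ in range(trials)) / trials
    tampered = sum(f(p_tampered_sample(f, n, p)) for _ in range(trials)) / trials
    print(f"E[f] honest ~ {honest:+.3f}, under p-tampering ~ {tampered:+.3f}")

Running the script should show the tampered average of $f$ noticeably above the honest one, illustrating the kind of bias the paper's attack achieves (with an $\Omega(p \cdot Var[f])$ guarantee, and efficiently) in the general blockwise setting.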

Metadata
Available format(s)
PDF
Publication info
Published by the IACR in TCC 2017
Keywords
Tampering, Extractors, Adversarial Learning, Randomness
Contact author(s)
mahmoody @ gmail com
History
2018-11-27: last of 2 revisions
2017-09-27: received
Short URL
https://ia.cr/2017/950
License
Creative Commons Attribution
CC BY