Paper 2019/218
Computational Limitations in Robust Classification and Win-Win Results
Akshay Degwekar and Vinod Vaikuntanathan
Abstract
We continue the study of statistical/computational tradeoffs in learning robust classifiers, following the recent work of Bubeck, Lee, Price and Razenshteyn, who showed examples of classification tasks where (a) an efficient robust classifier exists, *in the small-perturbation regime*; (b) a non-robust classifier can be learned efficiently; but (c) it is computationally hard to learn a robust classifier, assuming the hardness of factoring large numbers. Indeed, the question of whether a robust classifier for their task exists in the large-perturbation regime seems related to important open questions in computational number theory.

In this work, we extend their work in three directions. First, we demonstrate classification tasks where computationally efficient robust classification is impossible, even when computationally unbounded robust classifiers exist. Here we rely on the hardness of decoding problems with preprocessing on codes and lattices.

Second, we show hard-to-robustly-learn classification tasks *in the large-perturbation regime*. Namely, we show that even though an efficient classifier that is very robust (namely, tolerant to large perturbations) exists, it is computationally hard to learn any non-trivial robust classifier. Our first task relies on the existence of one-way functions, a minimal assumption in cryptography, and the second on the hardness of the learning parity with noise problem. In the latter setting, not only does a non-robust classifier exist, but so does an efficient algorithm that generates fresh labeled samples given access to polynomially many training examples (termed generation by Kearns et al. (1994)).

Third, we show that any such counterexample implies the existence of cryptographic primitives such as one-way functions or even forms of public-key encryption. This leads us to a win-win scenario: either we can quickly learn an efficient robust classifier, or we can construct new instances of popular and useful cryptographic primitives.
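As an illustrative aside (not taken from the paper itself), the learning parity with noise (LPN) assumption used in the second direction concerns samples of the form (a, ⟨a, s⟩ ⊕ e) over GF(2), where a is uniformly random, s is a hidden secret, and e is Bernoulli noise. A minimal Python sketch of this sample distribution follows; the dimension and noise rate are arbitrary choices for illustration, not parameters from the paper.

```python
import numpy as np

def lpn_samples(secret, num_samples, noise_rate, rng):
    """Draw LPN samples (a, <a, s> XOR e) over GF(2).

    Illustrative sketch only: the dimension and noise rate below are
    arbitrary and are not the parameters used in the paper.
    """
    n = secret.shape[0]
    A = rng.integers(0, 2, size=(num_samples, n))            # uniformly random a's
    noise = (rng.random(num_samples) < noise_rate).astype(int)  # Bernoulli(noise_rate) errors
    labels = (A @ secret + noise) % 2                          # <a, s> + e mod 2
    return A, labels

rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=128)                               # hidden secret
A, b = lpn_samples(s, num_samples=1000, noise_rate=0.125, rng=rng)
```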
Metadata
- Category
- Foundations
- Publication info
- Preprint. MINOR revision.
- Keywords
- Robust Classification, Learning Parity with Noise
- Contact author(s)
- vinodv @ mit edu
- History
- 2019-02-27: received
- Short URL
- https://ia.cr/2019/218
- License
- CC BY
BibTeX
@misc{cryptoeprint:2019/218,
    author = {Akshay Degwekar and Vinod Vaikuntanathan},
    title = {Computational Limitations in Robust Classification and Win-Win Results},
    howpublished = {Cryptology {ePrint} Archive, Paper 2019/218},
    year = {2019},
    url = {https://eprint.iacr.org/2019/218}
}