
Paper 2021/201

DAUnTLeSS: Data Augmentation and Uniform Transformation for Learning with Scalability and Security

Hanshen Xiao and Srinivas Devadas

Abstract

We revisit private optimization and learning from an information processing view. The contributions of this paper are twofold. First, departing from the classic cryptographic framework of operation-by-operation obfuscation, we propose a novel private learning and inference framework based on either data-dependent or random transformations of the sample domain. Second, we propose a novel security analysis framework, termed probably approximately correct (PAC) inference resistance, which bridges the information loss in data processing with prior knowledge. Through data mixing, we develop an information-theoretic security amplifier with a foundation of PAC security. We study applications of this framework ranging from generalized linear regression models to modern learning techniques, such as deep learning. On the information-theoretic privacy side, we compare three privacy interpretations: ambiguity, statistical indistinguishability (Differential Privacy), and PAC inference resistance, and precisely describe the information leakage of our framework. We show the advantages of this new random-transform approach with respect to the underlying privacy guarantees, computational efficiency, and utility for fully connected neural networks.
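As an informal illustration of the sample-domain transformation and data-mixing idea summarized in the abstract, the sketch below (Python with NumPy) applies a secret random orthogonal map to the feature domain and then releases random convex mixtures of the transformed samples. This is a minimal sketch under assumptions: the orthogonal transform, the Dirichlet mixing weights, and the mixing parameter k are illustrative choices, not the authors' construction, which is specified in the paper itself.

    # Illustrative sketch only: the abstract does not give the exact construction,
    # so the transform below (a secret random orthogonal map plus convex sample
    # mixing) is an assumed stand-in conveying the general idea of transforming
    # and mixing samples before they reach an untrusted learner.
    import numpy as np

    rng = np.random.default_rng(0)

    def random_transform(X, rng):
        """Apply a secret random orthogonal map to the feature domain."""
        d = X.shape[1]
        # Random orthogonal matrix via QR decomposition (kept by the data owner).
        Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
        return X @ Q, Q

    def mix_samples(X, y, k, rng):
        """Replace each sample by a random convex combination of k raw samples."""
        n = X.shape[0]
        idx = rng.integers(0, n, size=(n, k))
        w = rng.dirichlet(np.ones(k), size=n)   # per-output mixing weights
        X_mix = np.einsum('nk,nkd->nd', w, X[idx])
        y_mix = np.einsum('nk,nk->n', w, y[idx])
        return X_mix, y_mix

    # Toy usage: obfuscate a regression dataset before releasing it for training.
    X = rng.standard_normal((100, 5))
    y = X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(100)
    X_t, Q = random_transform(X, rng)
    X_release, y_release = mix_samples(X_t, y, k=4, rng=rng)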

Metadata
Available format(s)
PDF
Category
Foundations
Publication info
Preprint. MINOR revision.
Keywords
Information-theoretical security, Private machine learning, Probably approximately correct inference, Differential Privacy
Contact author(s)
hsxiao @ mit edu, devadas @ mit edu
History
2021-05-25: revised
2021-02-24: received
Short URL
https://ia.cr/2021/201
License
Creative Commons Attribution
CC BY