We study applications of this framework ranging from generalized linear regression models to modern learning techniques such as deep learning. On the information-theoretic privacy side, we compare three privacy interpretations, namely ambiguity, statistical indistinguishability (differential privacy), and PAC inference resistance, and precisely characterize the information leakage of our framework. We show the advantages of this new random-transform approach with respect to the underlying privacy guarantees, computational efficiency, and utility for fully connected neural networks.
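As a rough illustration of the random-transform idea (our own sketch, not the paper's actual construction), the data owner can apply a secret random invertible linear map to the features before releasing them; an analyst can still fit a linear model on the transformed data, and predictions are unchanged because the transform composes with the learned weights. All names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y = X w_true + noise.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

# Secret random invertible transform R, kept by the data owner.
R = rng.normal(size=(d, d))

# Released view of the data: the raw features X are never exposed.
X_priv = X @ R

# The analyst fits ordinary least squares on the transformed data.
w_priv, *_ = np.linalg.lstsq(X_priv, y, rcond=None)

# Utility is preserved for this linear model: predicting with w_priv
# on transformed inputs equals predicting with R @ w_priv on raw inputs.
pred_priv = X_priv @ w_priv
pred_raw = X @ (R @ w_priv)
assert np.allclose(pred_priv, pred_raw)
```

This only hints at why utility can survive a random transform for linear models; the paper's treatment of deep networks and its leakage analysis are, of course, more involved.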
Category / Keywords: foundations / Information-theoretic security, Private machine learning, Probably approximately correct inference, Differential privacy
Date: received 24 Feb 2021
Contact author: hsxiao at mit edu, devadas at mit edu
Version: 20210224:213952
Short URL: ia.cr/2021/201