Paper 2017/715

Privacy-Preserving Deep Learning via Additively Homomorphic Encryption

Le Trieu Phong, Yoshinori Aono, Takuya Hayashi, Lihua Wang, and Shiho Moriai

Abstract

We build a privacy-preserving deep learning system in which many learning participants perform neural network-based deep learning over the combined dataset of all participants, without actually revealing their local data. To that end, we revisit the previous work by Shokri and Shmatikov (ACM CCS 2015) and point out that local data information may actually be leaked to an honest-but-curious server. We then fix that problem by building an enhanced system with the following properties: (1) no information is leaked to the server; and (2) accuracy is kept intact, compared with that of an ordinary deep learning system trained over the same combined dataset. Our system is a bridge between deep learning and cryptography: we utilise stochastic gradient descent (SGD) applied to neural networks, in combination with additively homomorphic encryption. We show that our usage of encryption adds tolerable overhead to the ordinary deep learning system.
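The key property the system relies on is that ciphertexts of an additively homomorphic scheme can be combined so that the server aggregates encrypted gradients without decrypting them. The sketch below illustrates this idea with a toy Paillier cryptosystem and a fixed-point encoding of gradients; the specific primes, scaling factor, and variable names are illustrative assumptions for the demo, not the paper's actual parameters or implementation.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic).
# Mersenne primes keep the demo deterministic and short; a real
# deployment would use freshly generated primes of >= 1536 bits.
P, Q = 2**127 - 1, 2**89 - 1
N = P * Q
N2 = N * N
LAM = (P - 1) * (Q - 1) // math.gcd(P - 1, Q - 1)  # lcm(p-1, q-1)
MU = pow(LAM, -1, N)                               # valid because g = N + 1

def encrypt(m: int) -> int:
    """Encrypt m in [0, N) as (1 + m*N) * r^N mod N^2."""
    while True:
        r = random.randrange(2, N)
        if math.gcd(r, N) == 1:
            break
    return (1 + m * N) * pow(r, N, N2) % N2

def decrypt(c: int) -> int:
    x = pow(c, LAM, N2)
    return (x - 1) // N * MU % N

SCALE = 10**6  # fixed-point precision for real-valued gradients (assumption)

def encode(g: float) -> int:
    return round(g * SCALE) % N  # negative values wrap to N - |m|

def decode(m: int) -> float:
    if m > N // 2:
        m -= N
    return m / SCALE

# Three participants each encrypt one gradient coordinate; the
# honest-but-curious server multiplies the ciphertexts, which adds
# the plaintexts, without ever seeing an individual gradient.
grads = [0.125, -0.5, 0.031]
ciphertexts = [encrypt(encode(g)) for g in grads]
aggregate = 1
for c in ciphertexts:
    aggregate = aggregate * c % N2
recovered = decode(decrypt(aggregate))
expected = sum(grads)
```

Because aggregation is a plain ciphertext product, the server needs no secret key material at all; only the participants, who hold the decryption key, can recover the summed gradient.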

Metadata
Available format(s)
PDF
Category
Applications
Publication info
Published elsewhere. Minor revision. IEEE Transactions on Information Forensics and Security
DOI
10.1109/TIFS.2017.2787987
Contact author(s)
phong@nict.go.jp
History
2018-01-04: last of 5 revisions
2017-07-27: received
Short URL
https://ia.cr/2017/715
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2017/715,
      author = {Le Trieu Phong and Yoshinori Aono and Takuya Hayashi and Lihua Wang and Shiho Moriai},
      title = {Privacy-Preserving Deep Learning via Additively Homomorphic Encryption},
      howpublished = {Cryptology {ePrint} Archive, Paper 2017/715},
      year = {2017},
      doi = {10.1109/TIFS.2017.2787987},
      url = {https://eprint.iacr.org/2017/715}
}