Paper 2023/1320

Practical Privacy-Preserving Machine Learning using Fully Homomorphic Encryption

Michael Brand, RMIT University
Gaëtan Pradel, Royal Holloway University of London, INCERT GIE
Abstract

Machine learning is a widely-used tool for analysing large datasets, but increasing public demand for privacy preservation and the corresponding introduction of privacy regulations have severely limited what data can be analysed, even when this analysis is for societal benefit. Homomorphic encryption, which allows computation on encrypted data, is a natural solution to this dilemma, allowing data to be analysed without sacrificing privacy. Because homomorphic encryption is computationally expensive, however, current solutions mainly restrict its use to inference rather than training. In this work, we present a practically viable approach to privacy-preserving machine learning training using fully homomorphic encryption. Our method achieves fast training speeds, taking less than 45 seconds to train a binary classifier over thousands of samples on a single mid-range computer, significantly outperforming state-of-the-art results.
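To make the core idea of "computation on encrypted data" concrete, the sketch below implements textbook Paillier encryption, a classic *additively* homomorphic scheme: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. This is only an illustration of the homomorphic principle with toy, insecure parameters; the paper itself uses a *fully* homomorphic scheme, which supports both addition and multiplication and is what makes encrypted training possible.

```python
# Toy additively homomorphic encryption (textbook Paillier), illustrating
# computation on encrypted data. Insecure demo parameters -- NOT the
# paper's scheme, which is fully homomorphic.
import math
import random

def keygen(p=10007, q=10009):          # small known primes, demo only
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                          # standard simple generator choice
    n2 = n * n
    # mu = L(g^lam mod n^2)^(-1) mod n, where L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (n, g), (lam, mu, n)

def encrypt(pk, m):
    n, g = pk
    n2 = n * n
    r = random.randrange(1, n)         # random blinding factor
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(sk, c):
    lam, mu, n = sk
    n2 = n * n
    return (pow(c, lam, n2) - 1) // n * mu % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 20), encrypt(pk, 25)
# Multiplying ciphertexts adds the underlying plaintexts:
c_sum = c1 * c2 % (pk[0] ** 2)
assert decrypt(sk, c_sum) == 45
```

A fully homomorphic scheme extends this property to arbitrary arithmetic circuits, at the computational cost the abstract describes, which is why efficient encrypted *training* (not just inference) is the paper's contribution.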

Metadata
Available format(s)
PDF
Category
Cryptographic protocols
Publication info
Preprint.
Keywords
Privacy, Fully Homomorphic Encryption, Machine Learning Training, Support Vector Machines
Contact author(s)
michael brand @ rmit edu au
gpradel @ incert lu
History
2023-09-08: approved
2023-09-05: received
Short URL
https://ia.cr/2023/1320
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2023/1320,
      author = {Michael Brand and Gaëtan Pradel},
      title = {Practical Privacy-Preserving Machine Learning using Fully Homomorphic Encryption},
      howpublished = {Cryptology {ePrint} Archive, Paper 2023/1320},
      year = {2023},
      url = {https://eprint.iacr.org/2023/1320}
}