Cryptology ePrint Archive: Report 2019/1334

Privacy-Preserving Distributed Machine Learning based on Secret Sharing

Ye Dong and Xiaojun Chen and Liyan Shen

Abstract: Machine learning has been widely applied in practice, for example in disease diagnosis and target detection. A good model commonly relies on massive training data collected from different sources. However, the collected data may expose sensitive information. To address this problem, researchers have proposed many methods that combine machine learning with privacy-protection technologies, such as secure multiparty computation (MPC), homomorphic encryption (HE), and differential privacy. Meanwhile, other researchers have proposed distributed machine learning, which allows clients to keep their data locally while training a model collaboratively. The first kind of method focuses on security, but its performance and accuracy remain to be improved; the second provides higher accuracy and better performance but weaker security: for instance, an adversary can launch membership attacks from the gradient updates sent in plaintext. In this paper, we combine secret sharing with distributed machine learning to achieve reliable performance, accuracy, and strong security. We design, implement, and evaluate a practical system for jointly learning an accurate model under semi-honest security and under servers-only malicious-adversary security, respectively. Our experiments show that our protocols also achieve the best overall performance.
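
To make the core idea concrete, below is a minimal sketch (not the paper's actual protocol) of additively secret-sharing a client's gradient update between two non-colluding servers, using fixed-point encoding into the ring Z_{2^64}. All function and variable names here are illustrative assumptions, not identifiers from the paper.

    import numpy as np

    RING = 2 ** 64          # ring modulus for additive shares
    SCALE = 2 ** 16         # fixed-point scaling factor for real-valued gradients

    def encode(grad):
        """Encode a float gradient vector as fixed-point ring elements."""
        return np.round(grad * SCALE).astype(np.int64).view(np.uint64)

    def decode(x):
        """Decode ring elements back to floats (top half of the ring is negative)."""
        return x.view(np.int64).astype(np.float64) / SCALE

    def share(grad, rng):
        """Split an encoded gradient into two additive shares: x = s0 + s1 mod 2^64."""
        x = encode(grad)
        s0 = rng.integers(0, RING, size=x.shape, dtype=np.uint64)
        s1 = x - s0              # uint64 arithmetic wraps modulo 2^64
        return s0, s1

    def reconstruct(s0, s1):
        """Recombine shares; each share alone is uniformly random and reveals nothing."""
        return decode(s0 + s1)

    rng = np.random.default_rng()
    g_a = np.array([0.25, -1.5, 3.0])    # client A's gradient update
    g_b = np.array([1.0, 0.5, -2.0])     # client B's gradient update
    a0, a1 = share(g_a, rng)
    b0, b1 = share(g_b, rng)
    # Each server aggregates the shares it holds; neither sees any gradient in the clear.
    print(reconstruct(a0 + b0, a1 + b1))  # -> [ 1.25 -1.    1.  ]

Because additive sharing is linear, the servers can aggregate many clients' gradient shares locally and only reconstruct the sum, so individual updates never appear in plaintext.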

Category / Keywords: applications / secret sharing, distributed machine learning, privacy-preserving

Date: received 19 Nov 2019

Contact author: 19950512dy at gmail com

Available format(s): PDF | BibTeX Citation

Note: This paper is for personal research only; for copyright information, please refer to Springer.

Version: 20191120:131356 (All versions of this report)

Short URL: ia.cr/2019/1334
