Cryptology ePrint Archive: Report 2017/502

DeepSecure: Scalable Provably-Secure Deep Learning

Bita Darvish Rouhani and M. Sadegh Riazi and Farinaz Koushanfar

Abstract: This paper proposes DeepSecure, a novel framework that enables scalable execution of state-of-the-art Deep Learning (DL) models in a privacy-preserving setting. DeepSecure targets scenarios in which neither of the involved parties, namely the cloud servers that hold the DL model parameters and the delegating clients who own the data, is willing to reveal its information. Our framework is the first to empower accurate and scalable DL analysis of data generated by distributed clients without sacrificing security for efficiency. The secure DL computation in DeepSecure is performed using Yao's Garbled Circuit (GC) protocol. We devise GC-optimized realizations of various components used in DL. Our optimized implementation achieves more than 58-fold higher throughput per sample compared with the best prior solution. In addition to our optimized GC realization, we introduce a set of novel low-overhead pre-processing techniques which further reduce the overall GC runtime in the context of deep learning. Extensive evaluations of various DL applications demonstrate up to two orders of magnitude additional runtime improvement achieved as a result of our pre-processing methodology. We also provide mechanisms to securely delegate GC computations to a third party in constrained embedded settings.
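To give a sense of the protocol the abstract refers to, below is a minimal, illustrative sketch of Yao's Garbled Circuit for a single AND gate. This is not DeepSecure's implementation: the function names, the SHA-256-based row encryption, and the zero-tag validity check are simplifications chosen for clarity, and a practical GC engine would add optimizations such as point-and-permute and free-XOR that the paper's GC-optimized realizations build upon.

```python
# Sketch of Yao's Garbled Circuit for one AND gate (illustrative only).
# The garbler assigns a random label to each wire value and encrypts the
# output label under each pair of input labels; the evaluator, holding
# exactly one label per input wire, can decrypt exactly one row.
import os
import hashlib
import random

LABEL = 16  # bytes per wire label
TAG = b"\x00" * 16  # validity tag appended before encryption

def pad(ka, kb):
    # Derive a 32-byte one-time pad from the two input labels.
    return hashlib.sha256(ka + kb).digest()

def garble_and():
    # One random label per possible value (0/1) of wires a, b, and c (output).
    a = [os.urandom(LABEL) for _ in range(2)]
    b = [os.urandom(LABEL) for _ in range(2)]
    c = [os.urandom(LABEL) for _ in range(2)]
    table = []
    for va in (0, 1):
        for vb in (0, 1):
            # Encrypt (output label || tag) under the pair of input labels.
            row = bytes(x ^ y for x, y in zip(c[va & vb] + TAG, pad(a[va], b[vb])))
            table.append(row)
    random.shuffle(table)  # hide which row corresponds to which inputs
    return a, b, c, table

def evaluate(ka, kb, table):
    # Try each row; only the matching one decrypts to a value ending in TAG.
    p = pad(ka, kb)
    for row in table:
        plain = bytes(x ^ y for x, y in zip(row, p))
        if plain[LABEL:] == TAG:
            return plain[:LABEL]
    raise ValueError("no row decrypted")
```

The evaluator learns only one output label per gate and never sees the underlying bit values, which is what allows an entire DL model to be evaluated without either party revealing its inputs.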

Category / Keywords: applications / Deep Learning, Secure Function Evaluation, Garbled Circuit, Content-Aware Data Pre-processing

Date: received 23 May 2017, last revised 1 Jun 2017

Contact author: bita at ucsd edu

Available format(s): PDF | BibTeX Citation

Version: 20170602:162643

