Cryptology ePrint Archive: Report 2017/1016

Differentially Private Access Patterns in Secure Computation

Sahar Mazloom and S. Dov Gordon

Abstract: We explore a new security model for secure computation on large datasets. We assume that two servers have been employed to compute on private data that was collected from many users, and, in order to improve the efficiency of their computation, we establish a new tradeoff with privacy. Specifically, instead of claiming that the servers learn nothing about the input values, we claim that what they do learn from the computation preserves the differential privacy of the input. Leveraging this relaxation of the security model allows us to build a protocol that leaks some information in the form of access patterns to memory, while also providing a formal bound on what is learned from the leakage.

We then demonstrate that this leakage is useful in a broad class of computations. We show that computations such as histograms, PageRank, and matrix factorization, which can be performed in common graph-parallel frameworks such as MapReduce or Pregel, benefit from our relaxation. We implement a protocol for securely executing graph-parallel computations, and evaluate its performance on the three computations above. We demonstrate marked improvement over prior implementations of these computations.
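To make the relaxation concrete, the sketch below illustrates the general technique of hiding access counts behind differentially private padding: each memory region's true access count is inflated with dummy accesses drawn from a shifted two-sided geometric (discrete Laplace) distribution, so an observer of the padded counts learns the true counts only up to differential privacy. This is a generic illustration of the idea, not the paper's protocol; all function names and the `shift` parameter are hypothetical, and truncating negative noise at zero means the guarantee is of the (epsilon, delta) flavor rather than pure epsilon-DP.

```python
import math
import random

def sample_geometric(alpha):
    # Geometric variable on {0, 1, 2, ...} with P[X = k] = (1 - alpha) * alpha^k,
    # sampled by inverting the CDF.
    u = random.random()
    return int(math.floor(math.log(u) / math.log(alpha)))

def dp_padded_counts(true_counts, epsilon, shift=20):
    """Pad each bucket's true access count with dummy accesses.

    Noise is a two-sided geometric (discrete Laplace) variable with
    parameter alpha = exp(-epsilon), shifted by `shift` so that it is
    almost always non-negative; negatives are clamped to zero so real
    accesses are never suppressed. The clamping is what introduces the
    small delta term in the privacy guarantee.
    """
    alpha = math.exp(-epsilon)
    padded = {}
    for bucket, count in true_counts.items():
        noise = sample_geometric(alpha) - sample_geometric(alpha)
        dummies = max(0, shift + noise)  # never drop real accesses
        padded[bucket] = count + dummies
    return padded
```

An observer who sees only the padded counts cannot distinguish (beyond the DP bound) whether any single record contributed one more or one fewer access to a given bucket, which is precisely the kind of bounded leakage the abstract describes.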

Category / Keywords: cryptographic protocols / secure computation, differential privacy

Date: received 12 Oct 2017, last revised 17 Oct 2017

Contact author: gordon at gmu edu

Available format(s): PDF

Version: 20171018:022119

Short URL: ia.cr/2017/1016
