Paper 2017/1016
Differentially Private Access Patterns in Secure Computation
Sahar Mazloom and S. Dov Gordon
Abstract
We explore a new security model for secure computation on large datasets. We assume that two servers have been employed to compute on private data that was collected from many users, and, in order to improve the efficiency of their computation, we establish a new tradeoff with privacy. Specifically, instead of claiming that the servers learn nothing about the input values, we claim that what they do learn from the computation preserves the differential privacy of the input. Leveraging this relaxation of the security model allows us to build a protocol that leaks some information in the form of access patterns to memory, while also providing a formal bound on what is learned from the leakage. We then demonstrate that this relaxation is useful in a broad class of computations. We show that computations such as histograms, PageRank, and matrix factorization, which can be performed in common graph-parallel frameworks such as MapReduce or Pregel, benefit from our relaxation. We implement a protocol for securely executing graph-parallel computations, and evaluate its performance on the three examples just mentioned. We demonstrate marked improvement over prior implementations for these computations.
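The central idea of bounding access-pattern leakage with differential privacy can be illustrated with a small sketch. The paper's actual mechanism and proof differ; the snippet below is only a hypothetical illustration of the common "dummy access" pattern, where each memory bucket's true access count is padded with non-negative noise (extra dummy reads) drawn from a shifted, truncated discrete Laplace distribution, so the counts the servers observe satisfy approximate differential privacy. The function name, parameters, and noise distribution are assumptions for illustration, not the paper's construction.

```python
import math
import random

def dp_access_counts(true_counts, epsilon, delta):
    """Illustrative sketch (not the paper's mechanism): pad each bucket's
    access count with dummy accesses so the observed pattern is
    (epsilon, delta)-differentially private.

    Noise is a discrete Laplace sample shifted by a cutoff chosen so that
    truncation to non-negative values occurs with probability <= delta
    (the standard truncated-Laplace argument)."""
    # Shift large enough that clipping negative noise is a delta-probability event.
    shift = int(math.ceil(math.log(1.0 / delta) / epsilon))
    noisy = []
    for c in true_counts:
        # Sample discrete Laplace noise via inverse-transform on a symmetric uniform.
        u = random.random() - 0.5
        noise = int(round(-math.copysign(1.0, u)
                          * math.log(1.0 - 2.0 * abs(u)) / epsilon))
        # Truncate at zero: we can add dummy accesses but never hide real ones.
        noisy.append(c + max(0, shift + noise))
    return noisy
```

Because the noise is one-sided after truncation, the servers only ever see an inflated access count per bucket; the cost of privacy is the expected number of dummy accesses, which grows as the privacy parameters tighten.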
Metadata
- Category
- Cryptographic protocols
- Publication info
- Published elsewhere. Minor revision. ACM CCS
- Keywords
- secure computation, differential privacy
- Contact author(s)
- gordon @ gmu edu
- History
- 2018-10-16: last of 4 revisions
- 2017-10-18: received
- Short URL
- https://ia.cr/2017/1016
- License
- CC BY
BibTeX

```
@misc{cryptoeprint:2017/1016,
  author       = {Sahar Mazloom and S. Dov Gordon},
  title        = {Differentially Private Access Patterns in Secure Computation},
  howpublished = {Cryptology {ePrint} Archive, Paper 2017/1016},
  year         = {2017},
  url          = {https://eprint.iacr.org/2017/1016}
}
```