
Paper 2018/1001

Illuminating the Dark or how to recover what should not be seen

Sergiu Carpov and Caroline Fontaine and Damien Ligier and Renaud Sirdey

Abstract

Functional encryption (FE) is a cryptographic primitive that allows partial decryption of ciphertexts, e.g., evaluating a function over encrypted inputs and obtaining the output in the clear. The downside of employing FE schemes is that some information about the input data ``leaks''. We call the information leakage of an FE scheme the maximal information one can gain about the input data from the clear-text output of the FE-evaluated function. FE schemes that are usable in practice support only limited functionalities, in particular linear or quadratic polynomial evaluation. As a first contribution of this work, we describe how to combine a quadratic FE scheme with a classification algorithm in order to perform classification over encrypted data. Compared to the direct use of FE for a linear or polynomial classifier, our method increases classification accuracy and/or decreases the number of FE secret keys needed. As a second contribution, we show how to estimate the information leakage of this classification use-case and how to compare it to an ideal information leakage. The ideal information leakage is the minimal leakage intrinsically required to achieve the use-case's goal (e.g., performing a classification task). We introduce a method for estimating both the real and the ideal information leakage based on machine learning techniques, in particular neural networks. We perform extensive experiments on the MNIST image classification and Census Income datasets. In the case of MNIST, we were able to reconstruct images that are close (both in terms of MSE distance and visually) to the original images. Knowledge of someone's handwriting style makes it easier to impersonate them, steal their identity, etc. On the second dataset, we were able to increase the accuracy of predicting input features (e.g., an individual's race) from the FE outputs available in the clear. The observed information leakage represents a major security flaw of FE-based classifiers, as it reveals sensitive information about individuals.
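
To illustrate the leakage-estimation idea described above, here is a minimal, hypothetical sketch (not the authors' code): a classifier's clear-text scores play the role of the FE-evaluated function's output, and an "inverse" neural network is trained to reconstruct the original input from those scores. For brevity it uses scikit-learn and its small digits dataset as a stand-in for MNIST; the paper's actual architectures, datasets, and FE scheme differ.

```python
# Illustrative sketch of leakage estimation via input reconstruction.
# Assumptions (not from the paper): scikit-learn's 8x8 digits dataset,
# a logistic-regression "classifier over encrypted data" stand-in, and
# an MLPRegressor as the reconstruction (leakage-estimating) model.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

X, y = load_digits(return_X_y=True)          # stand-in for MNIST images
X = X / 16.0                                  # scale pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1) What an FE decryptor would see in clear: the per-class scores of the
#    classifier evaluated on the (conceptually encrypted) input.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores_tr = clf.decision_function(X_tr)       # 10 clear values per image
scores_te = clf.decision_function(X_te)

# 2) Leakage estimation: train an inverse model mapping the 10 clear scores
#    back to the 64 pixels, then measure how well inputs are recovered (MSE).
inv = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=2000, random_state=0)
inv.fit(scores_tr, X_tr)
recon = inv.predict(scores_te)
print("reconstruction MSE:", mean_squared_error(X_te, recon))
```

The better the reconstruction (lower MSE, or visually recognizable digits), the more information the clear outputs leak about the encrypted inputs.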

Metadata
Available format(s)
PDF
Category
Public-key cryptography
Publication info
Preprint. MINOR revision.
Keywords
Functional encryption, Information leakage, Private classification
Contact author(s)
sergiu carpov @ cea fr
History
2019-06-25: last of 4 revisions
2018-10-22: received
Short URL
https://ia.cr/2018/1001
License
Creative Commons Attribution
CC BY