Paper 2022/360
Privacy-Preserving Contrastive Explanations with Local Foil Trees
Thijs Veugen, Bart Kamphorst, and Michiel Marcus
Abstract
We present the first algorithm that combines privacy-preserving technologies and state-of-the-art explainable AI to enable privacy-friendly explanations of black-box AI models. We provide a secure algorithm for contrastive explanations of black-box machine learning models that securely trains and uses local foil trees. Our work shows that the quality of these explanations can be upheld whilst ensuring the privacy of both the training data and the model itself.
Note: To be published in CSCML 2022
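To give a feel for the underlying explanation technique, below is a minimal, non-private sketch of the local foil tree idea (van der Waa et al.) that the paper's secure protocol builds on. The black-box model, the Gaussian perturbation scheme, and the choice of instance and foil class are illustrative assumptions, and the sketch simplifies the method by reporting only the split conditions on the explained instance's decision path rather than contrasting its leaf with the nearest foil leaf; the secure multi-party computation layer that is the paper's actual contribution is omitted entirely.

```python
# Minimal, non-private sketch of a local foil tree explanation.
# All concrete choices (model, dataset, perturbation scale) are illustrative.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in black-box classifier that we want to explain.
X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def foil_tree_explanation(x, foil_class, n_samples=2000, scale=0.5):
    """Explain why `black_box` did *not* predict `foil_class` for instance `x`."""
    # 1. Sample a local neighbourhood around the instance to explain.
    Z = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    # 2. Query the black box and relabel: foil class vs. everything else.
    labels = (black_box.predict(Z) == foil_class).astype(int)
    # 3. Train a small local decision tree on the one-vs-rest labels.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Z, labels)
    # 4. Walk the decision path of x; the split conditions it satisfies serve
    #    as (simplified) contrastive reasons it lands in a non-foil leaf.
    node_ids = tree.decision_path(x.reshape(1, -1)).indices
    feature, threshold = tree.tree_.feature, tree.tree_.threshold
    reasons = []
    for node in node_ids:
        if feature[node] >= 0:  # internal node (leaf nodes have feature == -2)
            op = "<=" if x[feature[node]] <= threshold[node] else ">"
            reasons.append(f"feature {feature[node]} {op} {threshold[node]:.2f}")
    return reasons

x = X[70]  # an instance near a class boundary, chosen for illustration
fact = black_box.predict(x.reshape(1, -1))[0]
print(f"fact class: {fact}; why not the foil class 2?")
for reason in foil_tree_explanation(x, foil_class=2):
    print(" -", reason)
```

In the paper's setting, steps 2 and 3 (querying the model and training the local tree) are carried out under secure multi-party computation, so that neither the training data nor the model is revealed while the explanation is produced.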
Metadata
- Category: Applications
- Publication info: Preprint. MINOR revision.
- Keywords: secure multi-party computation, explainable AI, decision tree
- Contact author(s): thijs.veugen@tno.nl
- History:
  - 2022-03-30: last of 3 revisions
  - 2022-03-18: received
- Short URL: https://ia.cr/2022/360
- License: CC BY
BibTeX
@misc{cryptoeprint:2022/360,
      author = {Thijs Veugen and Bart Kamphorst and Michiel Marcus},
      title = {Privacy-Preserving Contrastive Explanations with Local Foil Trees},
      howpublished = {Cryptology {ePrint} Archive, Paper 2022/360},
      year = {2022},
      url = {https://eprint.iacr.org/2022/360}
}