Paper 2024/980
FaultyGarble: Fault Attack on Secure Multiparty Neural Network Inference
Abstract
The success of deep learning across a variety of applications, including inference on edge devices, has heightened concerns about the privacy of users’ data and of deep learning models. Secure multiparty computation offers a remedy, and the number of such proposals, as well as their efficiency, has grown accordingly. Most secure inference protocols based on multiparty computation assume that the client does not deviate from the protocol and only passively attempts to extract information. Yet clients, driven by various incentives, can act maliciously, actively deviating from the protocol to disclose the deep learning model owner’s private information. Interestingly, although faults are well understood in the multiparty computation literature, fault attacks have not been explored. This paper introduces the first fault attack against secure inference implementations based on garbled circuits, a prime example of multiparty computation schemes. Specifically, laser fault injection coupled with a model-extraction attack is successfully mounted against existing solutions that have been assumed to be secure against active attacks. Notably, the number of queries required by the attack matches that of the best model-extraction attack mounted against secure inference engines in the semi-honest setting.
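For readers unfamiliar with garbling, the sketch below illustrates a single garbled AND gate, the basic building block of the secure inference engines targeted here. This is a minimal textbook-style construction, not the paper’s implementation; all function and variable names (`garble_and_gate`, `evaluate`, `LABEL_LEN`, `TAG_LEN`) are ours, and the trial-decryption tag is a simplification of real schemes.

```python
import hashlib
import os
import random

LABEL_LEN = 16   # bytes per wire label
TAG_LEN = 4      # zero tag marking a successful decryption

def H(a: bytes, b: bytes) -> bytes:
    """Hash two input labels into a one-time pad."""
    return hashlib.sha256(a + b).digest()[:LABEL_LEN + TAG_LEN]

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(x, y))

def garble_and_gate():
    """Garbler picks two random labels per wire (one per truth value)
    and encrypts the correct output label under each input-label pair."""
    A = [os.urandom(LABEL_LEN) for _ in range(2)]  # input wire a
    B = [os.urandom(LABEL_LEN) for _ in range(2)]  # input wire b
    C = [os.urandom(LABEL_LEN) for _ in range(2)]  # output wire
    table = [xor(H(A[a], B[b]), C[a & b] + b"\x00" * TAG_LEN)
             for a in (0, 1) for b in (0, 1)]
    random.shuffle(table)  # hide which row corresponds to which inputs
    return A, B, C, table

def evaluate(table, label_a, label_b):
    """Evaluator holds exactly one label per input wire and learns only
    the output label: the matching row decrypts to label || zero tag."""
    pad = H(label_a, label_b)
    for row in table:
        plain = xor(pad, row)
        if plain[LABEL_LEN:] == b"\x00" * TAG_LEN:
            return plain[:LABEL_LEN]
    raise ValueError("no table row decrypted cleanly")
```

Production schemes replace the trial decryption with point-and-permute bits and use far more efficient garbling (e.g. half-gates); the sketch only conveys why the evaluator learns one output label per gate without learning the plaintext truth values.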
Metadata
- Category
- Attacks and cryptanalysis
- Publication info
- Preprint.
- Keywords
- Multiparty Computation, Garbled Circuits, Malicious Adversary, Neural Network Inference, Laser Fault Attack
- Contact author(s)
- mhashemi@wpi.edu
- dmmehta2@wpi.edu
- krmitard@wpi.edu
- stajik@wpi.edu
- fganji@wpi.edu
- History
- 2024-09-05: revised
- 2024-06-18: received
- See all versions
- Short URL
- https://ia.cr/2024/980
- License
- CC BY-NC-ND
BibTeX
@misc{cryptoeprint:2024/980,
      author = {Mohammad Hashemi and Dev Mehta and Kyle Mitard and Shahin Tajik and Fatemeh Ganji},
      title = {{FaultyGarble}: Fault Attack on Secure Multiparty Neural Network Inference},
      howpublished = {Cryptology {ePrint} Archive, Paper 2024/980},
      year = {2024},
      url = {https://eprint.iacr.org/2024/980}
}