
Paper 2019/461

Experimental Evaluation of Deep Neural Network Resistance Against Fault Injection Attacks

Xiaolu Hou and Jakub Breier and Dirmanto Jap and Lei Ma and Shivam Bhasin and Yang Liu

Abstract

Deep learning is becoming the basis of decision-making systems in many application domains, such as autonomous vehicles and health systems, where the risk of misclassification can lead to serious consequences. It is therefore necessary to know to what extent Deep Neural Networks (DNNs) are robust against various types of adversarial conditions. In this paper, we experimentally evaluate DNNs implemented on an embedded device by using laser fault injection, a physical attack technique that is mostly used in the security and reliability communities to test the robustness of various systems. We show practical results on four activation functions: ReLU, softmax, sigmoid, and tanh. Our results point out the misclassification possibilities for DNNs achieved by injecting faults into the hidden layers of the network. We evaluate DNNs by using several different attack strategies to show which are the most efficient in terms of misclassification success rates. Protection techniques against these attacks are also presented. Outcomes of this work should be taken into account when deploying devices running DNNs in environments where a malicious attacker could tamper with environmental parameters that would bring the device into unstable conditions, resulting in faults.
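
To illustrate the kind of effect the abstract describes (a fault in a hidden-layer value flipping the predicted class), here is a minimal Python sketch. It is not the authors' laser setup or network: it simulates a simplified sign-bit-flip fault in software on a small random fully connected network, and all layer sizes, weights, and the fault model are assumptions made purely for demonstration.

```python
# Toy software simulation of a single-activation fault (stand-in for a
# physical laser-induced bit flip). The paper evaluates ReLU, softmax,
# sigmoid, and tanh; this sketch uses a ReLU hidden layer with a softmax output.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical tiny network: 8 inputs -> 16 hidden (ReLU) -> 4 classes (softmax).
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
W2, b2 = rng.normal(size=(4, 16)), rng.normal(size=4)
x = rng.normal(size=8)

def predict(fault_index=None):
    h = relu(W1 @ x + b1)
    if fault_index is not None:
        # Assumed fault model: flip the sign of one stored activation,
        # approximating a bit flip in its sign bit.
        h[fault_index] = -h[fault_index]
    return int(np.argmax(softmax(W2 @ h + b2)))

baseline = predict()
misclassifying = [i for i in range(16) if predict(fault_index=i) != baseline]
print(f"fault-free class: {baseline}")
print(f"hidden neurons whose faulted activation changes the class: {misclassifying}")
```

Running the sketch lists which single-activation faults would already change the output class of this toy model; the paper's experiments perform the analogous corruption physically, via laser fault injection on the embedded implementation.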

Metadata
Available format(s)
PDF
Category
Implementation
Publication info
Preprint. MINOR revision.
Keywords
fault attack, neural network, deep learning
Contact author(s)
jakub breier @ gmail com
History
2021-04-19: last of 2 revisions
2019-05-10: received
Short URL
https://ia.cr/2019/461
License
Creative Commons Attribution
CC BY