Paper 2022/1087

I Know What Your Layers Did: Layer-wise Explainability of Deep Learning Side-channel Analysis

Guilherme Perin, Leiden University
Sengim Karayalcin, Leiden University
Lichao Wu, Technical University of Darmstadt
Stjepan Picek, Radboud University Nijmegen
Abstract

Deep neural networks have proven effective for second-order profiling side-channel attacks, even in a black-box setting with no prior knowledge of masks or implementation details. While such attacks have been successful, no explanation has been offered of why a variety of deep neural networks can (or cannot) learn high-order leakages, or what their limitations are. In other words, we lack explainability regarding how neural network layers combine (or fail to combine) unknown and random secret shares, which is a necessary step to defeat, e.g., Boolean masking countermeasures. In this paper, we use information-theoretic metrics to explain the internal activities of deep neural network layers. We propose a novel methodology for the explainability of deep learning-based profiling side-channel analysis (denoted ExDL-SCA) to understand how secret masks are processed. Inspired by Information Bottleneck theory, our explainability methodology uses perceived information to explain and detect the different phenomena that occur in deep neural networks, such as fitting, compression, and generalization. We provide experimental results on masked AES datasets showing which relevant features deep neural networks use, and where in the networks relevant features are learned and irrelevant features are compressed. Using our method, evaluators can determine which secret masks the network exploits, allowing for more detailed feedback on the implementations. This paper opens new perspectives for understanding the role of different neural network layers in profiling side-channel attacks.
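For intuition, perceived information (PI) for a uniformly distributed AES key byte can be estimated from a model's predicted probabilities as PI ≈ log2(256) + (1/N) Σ_i log2 Pr_model[k_i | l_i]. The sketch below is a minimal Python illustration of the layer-wise idea, not the authors' implementation: it assumes the evaluator has already extracted per-layer activations and key-dependent labels, and it uses a simple logistic-regression probe as a stand-in estimator; the names layerwise_pi, layer_activations_train, and labels_train are hypothetical.

    # Minimal sketch: layer-wise perceived information via per-layer probes.
    # Assumptions: activations and integer labels are precomputed; the probe
    # classifier is an illustrative stand-in, not the paper's estimator.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def perceived_information(probs, labels, n_classes=256):
        """PI estimate: H(K) + mean log2 Pr_model[k_i | l_i], H(K) uniform."""
        p_true = np.clip(probs[np.arange(len(labels)), labels], 1e-12, 1.0)
        return np.log2(n_classes) + np.mean(np.log2(p_true))

    def layerwise_pi(layer_activations_train, labels_train,
                     layer_activations_val, labels_val, n_classes=256):
        """Fit a probe on each layer's activations and estimate PI on
        held-out traces. Rising PI across layers suggests the network has
        recombined the secret shares; PI near zero suggests it has not."""
        pi_per_layer = []
        for a_train, a_val in zip(layer_activations_train, layer_activations_val):
            probe = LogisticRegression(max_iter=1000).fit(a_train, labels_train)
            probs = probe.predict_proba(a_val)
            # predict_proba only emits columns for classes seen in training;
            # scatter them into a full n_classes-wide probability array.
            full = np.full((len(a_val), n_classes), 1e-12)
            full[:, probe.classes_] = probs
            pi_per_layer.append(perceived_information(full, labels_val, n_classes))
        return pi_per_layer

Plotting the returned values against layer depth gives the kind of layer-wise profile the abstract describes: layers where PI increases are learning relevant (share-combining) features, while layers where it plateaus or drops are compressing irrelevant ones.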

Metadata
Available format(s)
PDF
Category
Attacks and cryptanalysis
Publication info
Preprint.
Keywords
Side-channel Analysis, Deep learning, Perceived Information, Information Bottleneck Theory, Activation Patching
Contact author(s)
guilhermeperin7 @ gmail com
s karayalcin @ liacs leidenuniv nl
lichao wu @ tu-darmstadt de
stjepan picek @ ru nl
History
2024-10-04: last of 2 revisions
2022-08-21: received
Short URL
https://ia.cr/2022/1087
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2022/1087,
      author = {Guilherme Perin and Sengim Karayalcin and Lichao Wu and Stjepan Picek},
      title = {I Know What Your Layers Did: Layer-wise Explainability of Deep Learning Side-channel Analysis},
      howpublished = {Cryptology {ePrint} Archive, Paper 2022/1087},
      year = {2022},
      url = {https://eprint.iacr.org/2022/1087}
}