Paper 2022/1087

I Know What Your Layers Did: Layer-wise Explainability of Deep Learning Side-channel Analysis

Guilherme Perin, Leiden University
Lichao Wu, Delft University of Technology
Stjepan Picek, Radboud University Nijmegen
Abstract

Masked cryptographic implementations can be vulnerable to higher-order attacks. For instance, deep neural networks have proven effective for second-order profiling side-channel attacks even in a black-box setting (no prior knowledge of masks or implementation details). While such attacks have been successful, no explanations have been provided for why a variety of deep neural networks can (or cannot) learn higher-order leakages and what their limitations are. In other words, we lack explainability regarding how neural network layers combine (or fail to combine) the unknown and random secret shares, which is a necessary step to defeat, e.g., Boolean masking countermeasures. In this paper, we use information-theoretic metrics to explain the internal activities of deep neural network layers. We propose a novel methodology for the explainability of deep learning-based profiling side-channel analysis (denoted ExDL-SCA) to understand how secret masks are processed. Inspired by the Information Bottleneck theory, our explainability methodology uses perceived information to explain and detect the different phenomena that occur in deep neural networks, such as fitting, compression, and generalization. We provide experimental results on masked AES datasets showing where, what, and why deep neural networks learn relevant features from the input trace sets while compressing irrelevant ones, including noise. This paper opens new perspectives for understanding the role of different neural network layers in profiling side-channel attacks.
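For readers unfamiliar with the metric the abstract relies on, the sketch below illustrates how perceived information (PI) is commonly estimated in profiling side-channel analysis from a model's predicted label probabilities on a validation set. This is an illustration only, not the authors' implementation: the function name `perceived_information`, the uniform-label assumption, and the toy data are ours; the paper applies PI layer-wise, which is not reproduced here.

```python
import numpy as np

def perceived_information(probs, labels, num_classes=256):
    """Estimate perceived information (in bits) from predicted probabilities.

    probs  : (N, num_classes) array of model output probabilities p(y|x)
             for N validation traces.
    labels : (N,) array of true intermediate values (e.g., S-box outputs).

    Uses the common empirical estimate
        PI = H(Y) + (1/N) * sum_i log2 p(y_i | x_i),
    assuming the label distribution Y is uniform, so H(Y) = log2(num_classes).
    """
    eps = 1e-36  # guard against log2(0) for poorly calibrated probabilities
    p_true = probs[np.arange(len(labels)), labels]
    return np.log2(num_classes) + np.mean(np.log2(p_true + eps))

# Toy usage: a random (untrained) predictor yields a PI at or below 0 bits,
# while a model that has learned exploitable leakage yields a positive PI.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(256), size=1000)
labels = rng.integers(0, 256, size=1000)
print(perceived_information(probs, labels))
```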

Metadata
Available format(s)
PDF
Category
Attacks and cryptanalysis
Publication info
Preprint.
Keywords
Side-channel Analysis, Deep Learning, Masking, Explainability, Perceived Information, Information Bottleneck Theory
Contact author(s)
guilhermeperin7 @ gmail com
L Wu-4 @ tudelft nl
stjepan picek @ ru nl
History
2023-02-17: revised
2022-08-21: received
Short URL
https://ia.cr/2022/1087
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2022/1087,
      author = {Guilherme Perin and Lichao Wu and Stjepan Picek},
      title = {I Know What Your Layers Did: Layer-wise Explainability of Deep Learning Side-channel Analysis},
      howpublished = {Cryptology ePrint Archive, Paper 2022/1087},
      year = {2022},
      note = {\url{https://eprint.iacr.org/2022/1087}},
      url = {https://eprint.iacr.org/2022/1087}
}