Paper 2021/827

TransNet: Shift Invariant Transformer Network for Power Attack

Suvadeep Hajra, Sayandeep Saha, Manaar Alam, and Debdeep Mukhopadhyay


Masking and desynchronization of power traces are two widely used countermeasures against power attacks. Higher-order power attacks can break cryptographic implementations protected by masking countermeasures, but they require capturing long-distance dependencies: dependencies among distant Points-of-Interest (PoIs) along the time axis that jointly contribute to the information leakage. Desynchronization of power traces resists power attacks by randomly shifting the individual traces, thereby misaligning the PoIs across traces. Consequently, a successful attack against desynchronized traces must be invariant to the random shifts of the power traces, and an attack against implementations protected by both masking and desynchronization must be both shift-invariant and capable of capturing long-distance dependencies. Recently, the Transformer Network (TN) has been introduced in the natural language processing literature. TN is better than both Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) at capturing long-distance dependencies, and is thus a natural choice against masking countermeasures. Furthermore, a TN can be made shift-invariant, making it robust to trace desynchronization as well. In this work, we introduce a TN-based model, namely TransNet, for power attacks. Our experiments show that the proposed TransNet model successfully attacks implementations protected by both masking and desynchronization, even when it is trained only on synchronized traces. In particular, it brings the mean key rank below 1 using only 400 power traces when evaluated on the highly desynchronized ASCAD_desync100 dataset, even though it is trained on the ASCAD dataset, which has no trace desynchronization. Moreover, compared to other state-of-the-art deep learning models, our proposed model performs significantly better when the attack traces are highly desynchronized.
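The desynchronization countermeasure described above can be modeled, in its simplest form, as a random right-shift applied independently to each trace. The following minimal sketch (not from the paper; the function name and zero-padding choice are our own assumptions, and real countermeasures typically introduce jitter or random delays in hardware) illustrates how such misalignment of PoIs arises:

```python
import numpy as np

def desynchronize(traces, max_shift, rng=None):
    """Model of trace desynchronization: shift each power trace to the
    right by a random number of samples in [0, max_shift], zero-padding
    on the left so the trace length is unchanged.

    traces: 2-D array of shape (n_traces, n_samples).
    """
    rng = np.random.default_rng() if rng is None else rng
    out = np.zeros_like(traces)
    for i, trace in enumerate(traces):
        s = rng.integers(0, max_shift + 1)  # per-trace random shift
        out[i, s:] = trace[: trace.shape[0] - s]
    return out
```

Under this model, a dataset like ASCAD_desync100 would correspond to `max_shift = 100`: the same PoI lands at a different time index in every trace, which is why an attack on such traces must be shift-invariant.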

Publication info: Preprint. Minor revision.
Keywords: side channel analysis, masking countermeasure, transformer network
Contact author(s): suvadeep hajra @ gmail com
History: 2021-06-21: received
License: Creative Commons Attribution


@misc{cryptoeprint:2021/827,
      author = {Suvadeep Hajra and Sayandeep Saha and Manaar Alam and Debdeep Mukhopadhyay},
      title = {TransNet: Shift Invariant Transformer Network for Power Attack},
      howpublished = {Cryptology ePrint Archive, Paper 2021/827},
      year = {2021},
      note = {\url{https://eprint.iacr.org/2021/827}},
      url = {https://eprint.iacr.org/2021/827}
}