Cryptology ePrint Archive: Report 2021/827

TransNet: Shift Invariant Transformer Network for Power Attack

Suvadeep Hajra and Sayandeep Saha and Manaar Alam and Debdeep Mukhopadhyay

Abstract: Masking and desynchronization of power traces are two widely used countermeasures against power attacks. Higher-order power attacks are used to break cryptographic implementations protected by masking countermeasures. However, they require capturing long-distance dependencies, i.e., the dependencies among distant Points-of-Interest (PoIs) along the time axis that jointly contribute to the information leakage. Desynchronization of power traces provides resistance against power attacks by randomly shifting the individual traces, thus making the PoIs misaligned across traces. Consequently, a successful attack on desynchronized traces must be invariant to the random shifts of the power traces. A successful attack on cryptographic implementations protected by both masking and desynchronization countermeasures must therefore be both shift-invariant and capable of capturing long-distance dependencies. Recently, the Transformer Network (TN) has been introduced in the natural language processing literature. TN is better than both the Convolutional Neural Network (CNN) and the Recurrent Neural Network (RNN) at capturing long-distance dependencies, and is thus a natural choice against masking countermeasures. Furthermore, a TN can be made shift-invariant, making it robust to desynchronization of traces as well. In this work, we introduce a TN-based model, namely TransNet, for power attacks. Our experiments show that the proposed TransNet model successfully attacks implementations protected by both masking and desynchronization countermeasures, even when it is trained on only synchronized traces. In particular, it brings the mean key rank below 1 using only 400 power traces when evaluated on the highly desynchronized ASCAD_desync100 dataset, even when it is trained on the ASCAD dataset, which has no trace desynchronization. Moreover, compared to other state-of-the-art deep learning models, our proposed model performs significantly better when the attack traces are highly desynchronized.
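The shift-invariance property claimed above can be illustrated with a toy sketch (this is not the paper's TransNet architecture, only a minimal numpy demonstration of the underlying principle): self-attention without positional encodings is permutation-equivariant in the time dimension, so following it with global average pooling over time yields an output that is unchanged when the trace is circularly shifted. All weight matrices below are random placeholders, not trained parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pooled_self_attention(trace, Wq, Wk, Wv):
    """One self-attention layer (no positional encoding) + global average pooling.

    trace: (T, d) array — an embedded power trace with T time samples.
    Shifting the rows of `trace` permutes the attention outputs in the
    same way, so the mean over the time axis is shift-invariant.
    """
    Q, K, V = trace @ Wq, trace @ Wk, trace @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]), axis=-1)  # (T, T) attention
    return (A @ V).mean(axis=0)                          # pool over time

rng = np.random.default_rng(0)
T, d = 64, 8
trace = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

y_orig = pooled_self_attention(trace, Wq, Wk, Wv)
y_shift = pooled_self_attention(np.roll(trace, 17, axis=0), Wq, Wk, Wv)
assert np.allclose(y_orig, y_shift)  # same output despite a 17-sample shift
```

A CNN with a small receptive field, by contrast, is only shift-equivariant locally, and an RNN's hidden state depends on the absolute order of the samples, which is one intuition for why a TN handles desynchronized traces more gracefully.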

Category / Keywords: implementation / side channel analysis, masking countermeasure, transformer network

Date: received 16 Jun 2021

Contact author: suvadeep hajra at gmail com

Available format(s): PDF | BibTeX Citation

Version: 20210621:075024 (All versions of this report)

Short URL: ia.cr/2021/827

