Paper 2023/006
Exploring multi-task learning in the context of two masked AES implementations
Abstract
This paper investigates different ways of applying multi-task learning in the context of two masked AES implementations (via the ASCAD-r and ASCAD-v2 databases). Enabled by multi-task learning, we propose novel architectures that significantly increase the consistency and performance of deep neural networks in a context where the attacker cannot access the randomness of the countermeasures during profiling. Our work provides a wide range of experiments to compare the benefits of multi-task strategies against the current single-task state of the art. We show that multi-task learning significantly outperforms single-task models in all our experiments. Furthermore, such strategies achieve novel milestones against protected implementations: we propose a new best attack on ASCAD-r and ASCAD-v2, along with models that, for the first time, defeat all masks of the affine masking on ASCAD-v2.
Metadata
- Available format(s)
- PDF
- Category
- Attacks and cryptanalysis
- Publication info
- Preprint.
- Keywords
- Side Channel Attacks, Masking, Deep Learning, Multi-task Learning
- Contact author(s)
- thomas marquet @ aau at
- elisabeth oswald @ aau at
- History
- 2023-09-10: last of 3 revisions
- 2023-01-02: received
- See all versions
- Short URL
- https://ia.cr/2023/006
- License
- CC BY
BibTeX
@misc{cryptoeprint:2023/006,
      author = {Thomas Marquet and Elisabeth Oswald},
      title = {Exploring multi-task learning in the context of two masked AES implementations},
      howpublished = {Cryptology ePrint Archive, Paper 2023/006},
      year = {2023},
      note = {\url{https://eprint.iacr.org/2023/006}},
      url = {https://eprint.iacr.org/2023/006}
}