This page shows version 20210609:113909 of this paper.

Paper 2021/401

Output Prediction Attacks on Block Ciphers using Deep Learning

Hayato Kimura and Keita Emura and Takanori Isobe and Ryoma Ito and Kazuto Ogawa and Toshihiro Ohigashi

Abstract

Cryptanalysis of symmetric-key ciphers, e.g., linear/differential cryptanalysis, requires an adversary to know the internal structures of the target ciphers. On the other hand, deep learning-based cryptanalysis has attracted significant attention because the adversary is not assumed to have any knowledge of the target ciphers beyond their algorithm interfaces. Such cryptanalysis in a blackbox setting is extremely strong; thus, we must design symmetric-key ciphers that are secure against deep learning-based cryptanalysis. However, almost all previous attacks do not clarify which features or internal structures affect their success probabilities. Although Benamira et al. (Eurocrypt 2021) and Chen et al. (ePrint 2021) analyzed Gohr's results (CRYPTO 2019), they did not identify any characteristic specific to deep learning, i.e., one that affects the success probabilities of deep learning-based attacks but not those of linear/differential cryptanalysis. Therefore, it is difficult to employ the results of such cryptanalysis to design deep learning-resistant symmetric-key ciphers. In this paper, we focus on two toy SPN block ciphers (small PRESENT and small AES) and one toy Feistel block cipher (small TWINE) and propose deep learning-based output prediction attacks. Owing to their small internal structures, we can construct deep learning models by employing the maximum number of plaintext/ciphertext pairs, and we can precisely calculate the rounds in which full diffusion occurs. Specifically for the SPN block ciphers, we demonstrate the following: (1) our attacks work against a similar number of rounds to those attacked by linear/differential cryptanalysis, (2) our attacks realize output predictions (precisely, ciphertext prediction and plaintext recovery) that are much stronger than distinguishing attacks, and (3) swapping the order of components or replacing components affects the success probabilities of the proposed attacks.
It is particularly worth noting that this is a characteristic specific to deep learning, because such swapping/replacement does not affect the success probabilities of linear/differential cryptanalysis. We also examine whether the proposed attacks work on the Feistel block cipher. We expect that our results will be an important stepping stone in the design of deep learning-resistant symmetric-key ciphers.
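To illustrate the notion of "the rounds in which full diffusion occurs" mentioned above, here is a minimal Python sketch of a hypothetical 16-bit PRESENT-like toy SPN. It uses the real PRESENT S-box and a bit permutation scaled down in the PRESENT style, but it is an assumption-laden illustration, not the paper's exact small PRESENT specification. It empirically estimates the smallest number of rounds after which flipping any single input bit can affect every output bit.

```python
import random

# The 4-bit PRESENT S-box (the toy cipher construction around it is hypothetical).
SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]

def sub_layer(x):
    """Apply the 4-bit S-box to each nibble of a 16-bit state."""
    y = 0
    for i in range(4):
        y |= SBOX[(x >> (4 * i)) & 0xF] << (4 * i)
    return y

def perm_layer(x):
    """PRESENT-style bit permutation scaled to 16 bits: bit i -> 4*i mod 15, bit 15 fixed."""
    y = 0
    for i in range(16):
        j = (4 * i) % 15 if i != 15 else 15
        y |= ((x >> i) & 1) << j
    return y

def encrypt(pt, round_keys):
    """Key addition, substitution, permutation in each round."""
    x = pt
    for k in round_keys:
        x = perm_layer(sub_layer(x ^ k))
    return x

def full_diffusion_round(max_rounds=8, trials=200):
    """Smallest r such that, empirically, flipping any single input bit
    can change every one of the 16 output bits after r rounds."""
    for r in range(1, max_rounds + 1):
        ok = True
        for bit in range(16):
            touched = 0  # output bits ever seen to differ
            for _ in range(trials):
                keys = [random.getrandbits(16) for _ in range(r)]
                pt = random.getrandbits(16)
                touched |= encrypt(pt, keys) ^ encrypt(pt ^ (1 << bit), keys)
            if touched != 0xFFFF:
                ok = False
                break
        if ok:
            return r
    return None
```

With this toy construction, one round cannot achieve full diffusion (a single-bit flip stays within one S-box's influence), while the permutation spreads each S-box's output across all four S-boxes in the next round, so full diffusion is reached after very few rounds.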

Metadata
Available format(s)
PDF
Category
Secret-key cryptography
Publication info
Preprint. MINOR revision.
Keywords
Deep Learning, Block Cipher, SPN, Feistel
Contact author(s)
h_kimura @ star tokai-u jp,k-emura @ nict go jp,itorym @ nict go jp,takanori isobe @ ai u-hyogo ac jp,kaz_ogawa @ nict go jp,ohigashi @ tsc u-tokai ac jp
History
2022-12-26: last of 6 revisions
2021-03-27: received
Short URL
https://ia.cr/2021/401
License
Creative Commons Attribution
CC BY