Paper 2024/852
Breaking Indistinguishability with Transfer Learning: A First Look at SPECK32/64 Lightweight Block Ciphers
Abstract
In this research, we introduce MIND-Crypt, a novel attack framework that uses deep learning (DL) and transfer learning (TL) to challenge the indistinguishability of block ciphers, specifically the SPECK32/64 encryption algorithm in Cipher Block Chaining (CBC) mode, under a known-plaintext attack (KPA) setting. Our methodology involves training a DL model on ciphertexts of two messages encrypted with the same key. The selected messages have the same byte length and differ by only one bit at the binary level. The DL model employs a residual network architecture. For TL, we use the trained DL model as a feature extractor, and the extracted features are then used to train a shallow machine learning model, such as XGBoost. This dual strategy aims to distinguish the ciphertexts of the two encrypted messages, addressing challenges faced by traditional cryptanalysis. Our findings demonstrate that the DL model achieves an accuracy of approximately 99% under consistent cryptographic conditions (same key and number of rounds) with the SPECK32/64 cipher. However, performance degrades to random guessing (50%) when the model is tested on ciphertexts generated with different keys or different numbers of encryption rounds of SPECK32/64. To restore performance, the DL model requires retraining for the new keys or encryption rounds on large datasets ($10^{7}$ samples). To overcome this limitation, we apply TL, achieving an accuracy of about 53% with just 10,000 samples, which is better than random guessing. Further training with 580,000 samples increases accuracy to nearly 99%, a reduction in data requirements of over 94%. This shows that an attacker can use machine learning models to break indistinguishability by accessing pairs of plaintexts and their corresponding ciphertexts encrypted with the same key, without directly interacting with the communicating parties.
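To make the transfer-learning pipeline in the abstract concrete, here is a minimal, hedged sketch of the two-stage idea: a feature extractor applied to ciphertext bits, followed by a shallow classifier trained on the extracted features. Everything here is synthetic and illustrative: the "ciphertexts" are random bit vectors with an artificial bias injected into one class (real data would come from SPECK32/64 in CBC mode), the fixed random projection stands in for the trained residual network's penultimate layer, and a nearest-centroid rule stands in for XGBoost. None of these choices are from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 32-bit "ciphertext" vectors for two message classes.
# Real inputs would be SPECK32/64 CBC ciphertexts of the two chosen plaintexts.
n = 200
X0 = rng.integers(0, 2, size=(n, 32)).astype(float)
X1 = rng.integers(0, 2, size=(n, 32)).astype(float)
X1[:, :4] = 1.0  # inject an artificial statistical bias so the sketch has signal
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

def feature_extractor(x):
    """Placeholder for the trained residual network used as a feature extractor:
    a fixed random projection followed by a ReLU nonlinearity."""
    W = np.random.default_rng(42).normal(size=(32, 16))
    return np.maximum(x @ W, 0.0)

F = feature_extractor(X)

# Placeholder shallow classifier (nearest class centroid) standing in for XGBoost.
c0 = F[y == 0].mean(axis=0)
c1 = F[y == 1].mean(axis=0)
pred = (np.linalg.norm(F - c1, axis=1) < np.linalg.norm(F - c0, axis=1)).astype(int)
acc = (pred == y).mean()
print(f"distinguisher accuracy on synthetic data: {acc:.2f}")
```

On this toy data the two-stage distinguisher beats random guessing; in the paper's setting the gain comes from reusing the DL features so the shallow model needs far fewer samples than retraining the network from scratch.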
Metadata
- Category
- Attacks and cryptanalysis
- Publication info
- Preprint.
- Keywords
- Indistinguishability, Cryptanalysis, Machine Learning
- Contact author(s)
-
danijy @ tamu edu
kalyan @ tamu edu
nsaxena @ tamu edu
- History
- 2024-05-31: approved
- 2024-05-30: received
- Short URL
- https://ia.cr/2024/852
- License
-
CC BY
BibTeX
@misc{cryptoeprint:2024/852,
  author       = {Jimmy Dani and Kalyan Nakka and Nitesh Saxena},
  title        = {Breaking Indistinguishability with Transfer Learning: A First Look at {SPECK32}/64 Lightweight Block Ciphers},
  howpublished = {Cryptology {ePrint} Archive, Paper 2024/852},
  year         = {2024},
  url          = {https://eprint.iacr.org/2024/852}
}