Paper 2024/297
Accelerating Training and Enhancing Security Through Message Size Optimization in Symmetric Cryptography
Abstract
This research extends Abadi and Andersen's exploration of neural networks that use secret keys to protect information in multiagent systems. Focusing on enhancing confidentiality properties, we employ end-to-end adversarial training with neural networks Alice, Bob, and Eve. Unlike prior work limited to 64-bit messages, our study spans message sizes from 4 to 1024 bits, varying batch sizes and training steps. A novel aspect is training the decryption network Bob until its error approaches zero and examining how this stricter target affects the feasibility of the model. The results demonstrate the neural networks' adaptability and scalability in encryption and decryption across diverse scenarios, offering insights into their optimization potential for secure communication.
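For context, the adversarial objective from Abadi and Andersen's original formulation, which this work extends, can be sketched as follows. This is our reconstruction of the prior work; the notation ($P$, $K$, $C$, $N$, $d$) is supplied here for illustration and is not quoted from the abstract:

```latex
% Sketch of the Abadi-Andersen objectives this paper builds on (our
% reconstruction of the prior work, not the authors' exact notation).
% P: plaintext, K: shared key, C = A(\theta_A, P, K): Alice's ciphertext,
% N: message length in bits, d: per-example L1 distance over bits.
\[
  L_B(\theta_A, \theta_B) = \mathbb{E}_{P,K}\!\left[\, d\!\left(P,\; B(\theta_B, C, K)\right) \right],
  \qquad
  L_E(\theta_A, \theta_E) = \mathbb{E}_{P,K}\!\left[\, d\!\left(P,\; E(\theta_E, C)\right) \right]
\]
\[
  L_{A,B} = L_B + \frac{\left(N/2 - L_E\right)^2}{(N/2)^2}
\]
```

Intuitively, Alice and Bob jointly minimize Bob's reconstruction error while holding Eve near random guessing (about $N/2$ wrong bits); driving $L_B$ toward zero is the stricter training target this paper studies.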
Note: The widespread use of neural networks in recent years has greatly improved our ability to handle complex tasks. These networks have proven effective in applications ranging from generating synthetic images with Generative Adversarial Networks (GANs) to exploring symmetric cryptography. Building on the work of Martín Abadi and David G. Andersen, our research focuses on secure communication within a multiagent system, using an end-to-end adversarial training setup with neural networks named Alice, Bob, and Eve.

We aim to enhance confidentiality properties, crucial for secure communication, by employing flexible end-to-end adversarial training instead of traditional cryptographic constructions. This approach allows Alice and Bob to adapt dynamically to potential threats, in particular interception by the third network, Eve. Expanding on prior work, we conduct comprehensive experiments with message sizes ranging from 4 to 1024 bits, adjusting batch sizes and training steps to understand how the networks scale in mastering encryption and decryption under different conditions. A key aspect of our research is training Bob until its error approaches a minimal value, refining the learning process and sharpening the confidentiality goal. A training-loop sketch of this setup appears below.

The paper is organized into sections covering the introduction, model overview, experiments, results and analysis, testing, and conclusions. Each aspect of the study offers distinct benefits, from exploring diverse message sizes to dynamic adjustments that improve scalability and efficiency. By minimizing Bob's error, we aim to achieve the highest levels of confidentiality in secure communication. Overall, this research advances the understanding of neural network adaptability and optimization potential, paving the way for future work on secure information exchange.
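The following is a minimal, illustrative PyTorch sketch of the Alice/Bob/Eve adversarial training scheme described above. The architectures, learning rate, batch size, and step count are assumptions made for demonstration, not the authors' reported configuration:

```python
# Minimal sketch of Abadi-Andersen-style adversarial neural cryptography.
# All hyperparameters and layer choices here are illustrative assumptions.
import torch
import torch.nn as nn

N = 64       # message/key length in bits; the paper varies this from 4 to 1024
BATCH = 256  # batch size, one of the knobs the paper tunes

def make_net(in_bits, out_bits):
    # Simple fully connected stand-in; the original work combines a dense
    # layer with 1-D convolutions.
    return nn.Sequential(
        nn.Linear(in_bits, 4 * in_bits), nn.ReLU(),
        nn.Linear(4 * in_bits, out_bits), nn.Tanh(),  # bits encoded in [-1, 1]
    )

alice = make_net(2 * N, N)  # (plaintext, key) -> ciphertext
bob = make_net(2 * N, N)    # (ciphertext, key) -> plaintext estimate
eve = make_net(N, N)        # ciphertext only -> plaintext estimate

opt_ab = torch.optim.Adam([*alice.parameters(), *bob.parameters()], lr=1e-3)
opt_e = torch.optim.Adam(eve.parameters(), lr=1e-3)

def sample_bits(batch, bits):
    # Uniform random bit vectors encoded as -1/+1 floats.
    return torch.randint(0, 2, (batch, bits)).float() * 2 - 1

def bits_wrong(estimate, truth):
    # Halved L1 distance: a fully wrong bit counts as 1, so a random
    # guesser scores roughly N/2 wrong bits per message.
    return (estimate - truth).abs().sum(dim=1).mean() / 2

for step in range(5000):
    # --- Alice/Bob step: decrypt accurately while pushing Eve to chance ---
    p, k = sample_bits(BATCH, N), sample_bits(BATCH, N)
    c = alice(torch.cat([p, k], dim=1))
    bob_err = bits_wrong(bob(torch.cat([c, k], dim=1)), p)
    eve_err = bits_wrong(eve(c), p)  # gradients reach Alice through c
    loss_ab = bob_err + (N / 2 - eve_err) ** 2 / (N / 2) ** 2
    opt_ab.zero_grad(); loss_ab.backward(); opt_ab.step()

    # --- Eve step: train the adversary on fresh data to minimize its error ---
    p, k = sample_bits(BATCH, N), sample_bits(BATCH, N)
    c = alice(torch.cat([p, k], dim=1)).detach()  # freeze Alice for Eve's update
    loss_e = bits_wrong(eve(c), p)
    opt_e.zero_grad(); loss_e.backward(); opt_e.step()
```

Training Bob's error toward zero, as the paper proposes, corresponds to continuing the Alice/Bob updates until `bob_err` falls below a near-zero threshold while `eve_err` stays near N/2.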
Metadata
- Publication info: Preprint
- Keywords: Symmetric neural network, Alice, Bob, Eve, Cryptography, Adversarial neural cryptography
- Contact author(s):
  - abhisar sos 07 @ gmail com
  - madhavyadav4595 @ gmail com
  - gmishratech28 @ gmail com
- History
- 2024-02-23: approved
- 2024-02-21: received
- Short URL: https://ia.cr/2024/297
- License: CC BY-SA
BibTeX
@misc{cryptoeprint:2024/297,
  author = {ABHISAR and Madhav Yadav and Girish Mishra},
  title = {Accelerating Training and Enhancing Security Through Message Size Optimization in Symmetric Cryptography},
  howpublished = {Cryptology {ePrint} Archive, Paper 2024/297},
  year = {2024},
  url = {https://eprint.iacr.org/2024/297}
}