Paper 2023/1729

CompactTag: Minimizing Computation Overheads in Actively-Secure MPC for Deep Neural Networks

Yongqin Wang, University of Southern California
Pratik Sarkar, Supra Research
Nishat Koti, Indian Institute of Science Bangalore
Arpita Patra, Indian Institute of Science Bangalore
Murali Annavaram, University of Southern California
Abstract

Secure Multiparty Computation (MPC) protocols enable the secure evaluation of a circuit by several parties, even in the presence of an adversary who maliciously corrupts all but one of the parties. These MPC protocols are constructed in the well-known secret-sharing-based paradigm (SPDZ and SPD$\mathbb{Z}_{2^k}$), where security against a malicious adversary is ensured by computing Message Authentication Code (MAC) tags on the input shares and then evaluating the circuit on these shares together with their tags. However, this tag computation adds a significant runtime overhead, particularly for machine learning (ML) applications with computationally intensive linear layers, such as convolutions and fully connected layers. To alleviate this overhead, we introduce CompactTag, a lightweight algorithm for generating MAC tags that is tailored to the linear layers of ML. Linear-layer operations in ML, including convolutions, can be transformed into Toeplitz matrix multiplications. For the product of two matrices of dimensions $T_1 \times T_2$ and $T_2 \times T_3$, SPD$\mathbb{Z}_{2^k}$ requires $O(T_1 \cdot T_2 \cdot T_3)$ local multiplications for the tag computation. In contrast, CompactTag requires only $O(T_1 \cdot T_2 + T_1 \cdot T_3 + T_2 \cdot T_3)$ local multiplications, resulting in a substantial performance boost for various ML models. We empirically compared our protocol to the SPD$\mathbb{Z}_{2^k}$ protocol on a range of ML circuits, including training and inference for ResNet, Transformer, and VGG16. SPD$\mathbb{Z}_{2^k}$ dedicates around 30% of its online runtime to tag computation; CompactTag speeds up this bottleneck by up to 23×, yielding up to 1.47× speedups in total online-phase runtime across these ML workloads.
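The asymptotic gap above can be made concrete with a short sketch. The Python below (illustrative only, not the paper's actual protocol) shows two things: an im2col routine demonstrating the standard Toeplitz-style rewriting of a 2D convolution as a single matrix multiplication, and two counting functions comparing the local multiplications tag computation would need under the $O(T_1 \cdot T_2 \cdot T_3)$ baseline versus CompactTag's $O(T_1 \cdot T_2 + T_1 \cdot T_3 + T_2 \cdot T_3)$ bound. The function names and layer dimensions are assumptions chosen for illustration.

import numpy as np

def im2col(x: np.ndarray, kh: int, kw: int) -> np.ndarray:
    """Unroll every kh x kw patch of a 2D input into a row, so that a
    convolution becomes a single matrix product (the Toeplitz-style
    rewriting mentioned in the abstract)."""
    h, w = x.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    cols = np.empty((out_h * out_w, kh * kw), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            cols[i * out_w + j] = x[i : i + kh, j : j + kw].ravel()
    return cols

def baseline_tag_mults(t1: int, t2: int, t3: int) -> int:
    """Hypothetical baseline cost: tag shares are pushed through the full
    T1 x T2 by T2 x T3 product, one multiplication per (i, k, j) triple."""
    return t1 * t2 * t3

def compacttag_tag_mults(t1: int, t2: int, t3: int) -> int:
    """CompactTag's claimed cost: linear in the entry counts of the two
    input matrices and the output matrix."""
    return t1 * t2 + t1 * t3 + t2 * t3

if __name__ == "__main__":
    # Convolution as matrix multiplication: a 3x3 kernel over a 32x32 input.
    x = np.random.randint(0, 256, size=(32, 32))
    k = np.random.randint(-8, 8, size=(3, 3))
    conv_out = im2col(x, 3, 3) @ k.ravel()  # shape (30 * 30,)

    # Tag-computation cost for a fully connected layer: batch 128 with
    # 1024 inputs and 1024 outputs (sizes chosen purely for illustration).
    t1, t2, t3 = 128, 1024, 1024
    base = baseline_tag_mults(t1, t2, t3)
    compact = compacttag_tag_mults(t1, t2, t3)
    print(f"baseline:   {base:,} local multiplications")     # 134,217,728
    print(f"CompactTag: {compact:,} local multiplications")  # 1,310,720
    print(f"reduction:  {base / compact:.1f}x")              # ~102.4x

For large layers the linear term is dwarfed by the cubic one, which is consistent with the abstract's report that tag computation (roughly 30% of the online runtime in SPD$\mathbb{Z}_{2^k}$) is the component CompactTag accelerates by up to 23×.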

Metadata
Available format(s)
PDF
Category
Cryptographic protocols
Publication info
Preprint.
Keywords
Machine Learning, Secure Computation, Neural Networks, Dishonest Majority, PPML
Contact author(s)
yongqin@usc.edu
pratik93@bu.edu
kotis@iisc.ac.in
arpita@iisc.ac.in
annavara@usc.edu
History
2023-11-13: approved
2023-11-08: received
Short URL
https://ia.cr/2023/1729
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2023/1729,
      author = {Yongqin Wang and Pratik Sarkar and Nishat Koti and Arpita Patra and Murali Annavaram},
      title = {CompactTag: Minimizing Computation Overheads in Actively-Secure MPC for Deep Neural Networks},
      howpublished = {Cryptology ePrint Archive, Paper 2023/1729},
      year = {2023},
      note = {\url{https://eprint.iacr.org/2023/1729}},
      url = {https://eprint.iacr.org/2023/1729}
}