Paper 2023/1763

Secure Transformer Inference

Mu Yuan, University of Science and Technology of China
Lan Zhang, University of Science and Technology of China
Xiang-Yang Li, University of Science and Technology of China
Abstract

Security of model parameters and user data is critical for Transformer-based services such as ChatGPT. While recent advances in secure two-party protocols have addressed the security concerns of serving Transformer models, their adoption remains practically infeasible due to prohibitive cryptographic overhead. Drawing on insights from our hands-on experience developing two real-world Transformer-based services, we identify the two-party assumption itself as the inherent efficiency bottleneck. To overcome this limitation, we propose a novel three-party threat model. Within this framework, we design a semi-symmetric permutation-based protection scheme and present STIP, the first secure Transformer inference protocol with no loss of inference accuracy. Experiments on representative Transformer models in real systems show that STIP provides practical security and outperforms state-of-the-art secure two-party protocols in efficiency by factors in the millions.
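To illustrate the general idea behind permutation-based protection (a minimal sketch, not the paper's exact semi-symmetric construction; names such as `pi_in` and the single-layer setup are assumptions for illustration): for a linear layer y = xW + b, permuting the feature dimensions of the weights, and permuting the user's activations consistently, lets an untrusted party compute the layer without ever seeing unpermuted parameters or data.

```python
import numpy as np

# Minimal sketch of permutation-based protection for one linear layer
# y = x @ W + b. This is illustrative only; STIP's actual semi-symmetric
# scheme and three-party protocol are described in the paper.

rng = np.random.default_rng(0)
d_in, d_out, n_tokens = 8, 6, 4

W = rng.normal(size=(d_in, d_out))     # model developer's secret weights
b = rng.normal(size=(d_out,))
x = rng.normal(size=(n_tokens, d_in))  # user's secret input activations

# Random secret permutations of the input and output feature dimensions.
pi_in = rng.permutation(d_in)
pi_out = rng.permutation(d_out)

# The developer ships only permuted parameters to the compute party.
W_perm = W[pi_in][:, pi_out]
b_perm = b[pi_out]

# The user permutes activations with the matching input permutation.
x_perm = x[:, pi_in]

# The compute party evaluates the layer on permuted data only.
y_perm = x_perm @ W_perm + b_perm

# The result is the true output with its features permuted by pi_out,
# so a party holding pi_out can undo it; accuracy is unaffected because
# the arithmetic is exact (no approximation, unlike cryptographic 2PC).
y_true = x @ W + b
assert np.allclose(y_perm, y_true[:, pi_out])
```

Because the computation runs on plaintext (merely permuted) values, the per-layer cost matches ordinary inference, which is the source of the efficiency gap over cryptographic two-party protocols.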

Metadata
Available format(s)
PDF
Category
Applications
Publication info
Preprint.
Keywords
large language model, transformer, inference, secure protocol
Contact author(s)
ym0813@mail.ustc.edu.cn
zhanglan@ustc.edu.cn
xiangyangli@ustc.edu.cn
History
2024-05-08: revised
2023-11-15: received
Short URL
https://ia.cr/2023/1763
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2023/1763,
      author = {Mu Yuan and Lan Zhang and Xiang-Yang Li},
      title = {Secure Transformer Inference},
      howpublished = {Cryptology ePrint Archive, Paper 2023/1763},
      year = {2023},
      note = {\url{https://eprint.iacr.org/2023/1763}},
      url = {https://eprint.iacr.org/2023/1763}
}