Paper 2023/1763

Secure Transformer Inference

Mu Yuan, University of Science and Technology of China, Chinese University of Hong Kong
Lan Zhang, University of Science and Technology of China
Guoliang Xing, Chinese University of Hong Kong
Xiang-Yang Li, University of Science and Technology of China
Abstract

Security of model parameters and user data is critical for Transformer-based services such as ChatGPT. While recent strides in secure two-party protocols have addressed the security concerns of serving Transformer models, their adoption is practically infeasible due to prohibitive cryptographic overhead. Drawing on hands-on experience developing two real-world Transformer-based services, we identify the two-party assumption itself as the inherent efficiency bottleneck. To overcome this limitation, we propose a novel three-party threat model consisting of a model developer, a model server, and a data owner. Within this framework, we design a semi-symmetric permutation-based protection scheme and present STIP, the first secure Transformer inference protocol with no loss of inference accuracy. We analyze STIP's resistance to brute-force, known-plaintext, and social-engineering attacks, and prove an upper bound on privacy leakage using distance correlation. We also propose a method that integrates a trusted execution environment with STIP to make model parameters resistant to model-extraction and fine-tuning attacks. Experiments on six representative series of Transformer models with up to 70 billion parameters, run in real systems, show that STIP provides strong security without loss of accuracy. For auto-regressive token generation, STIP achieves 31.7 ms latency on the LLaMA2-7b model, a drastic reduction from the roughly five-minute latency of state-of-the-art two-party protocols.
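To illustrate why a permutation-based scheme can protect data without any accuracy loss, below is a minimal sketch of the underlying algebraic idea for a single linear layer. This is not the authors' STIP implementation; all variable names are hypothetical, and it only demonstrates that consistently permuting features and weights preserves the result y = xW exactly, while the server never sees the plaintext tensors.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out = 3, 8, 4  # toy batch size and layer dimensions

# Plaintext tensors (stand-ins for real activations and weights).
x = rng.normal(size=(n, d_in))      # data owner's activations
W = rng.normal(size=(d_in, d_out))  # model developer's weights

# Secret permutations shared by developer and data owner, hidden from the server.
pi = rng.permutation(d_in)      # permutes input features
sigma = rng.permutation(d_out)  # permutes output features

x_perm = x[:, pi]         # data owner submits column-permuted activations
W_perm = W[pi][:, sigma]  # developer ships row- and column-permuted weights

y_perm = x_perm @ W_perm          # server computes on permuted tensors only
y = y_perm[:, np.argsort(sigma)]  # data owner inverts the output permutation

assert np.allclose(y, x @ W)  # exact: permutation introduces no accuracy loss
```

Because a permutation merely reorders coordinates, the server's matrix multiplication is bit-exact with the plaintext computation; the paper's security analysis concerns how hard the hidden permutations are to recover, which this sketch does not address.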

Metadata
Available format(s)
-- withdrawn --
Category
Applications
Publication info
Preprint.
Keywords
large language model, transformer, inference, secure protocol
Contact author(s)
ym0813@mail.ustc.edu.cn
zhanglan@ustc.edu.cn
glxing@ie.cuhk.edu.hk
xiangyangli@ustc.edu.cn
History
2024-10-28: withdrawn
2023-11-15: received
Short URL
https://ia.cr/2023/1763
License
Creative Commons Attribution
CC BY