Paper 2023/1763

Secure Transformer Inference

Mu Yuan, University of Science and Technology of China
Lan Zhang, University of Science and Technology of China
Xiang-Yang Li, University of Science and Technology of China
Abstract

We present a three-party protocol that protects both the Transformer parameters and the user data during the inference phase. For each feed-forward inference pass, our protocol introduces only permutation operations on the input and output data at the user side. Our protocol, the Secure Transformer Inference Protocol (STIP), can be applied to real-world services such as ChatGPT.
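The user-side permutation idea can be illustrated with a minimal sketch. This is not the STIP construction itself (which is a three-party protocol over full Transformer inference); it only shows, under simplifying assumptions, how a model whose weights are permuted consistently with a secret permutation lets the user permute the input and invert the permutation on the output while recovering the plaintext result. The names `pi`, `W_perm`, etc. are illustrative, not from the paper.

```python
import numpy as np

# Illustrative sketch only: user-side permutation of input/output features.
# The real STIP protocol is three-party and covers full Transformer layers.

rng = np.random.default_rng(0)
d = 8                                  # feature dimension (illustrative)
pi = rng.permutation(d)                # secret random permutation
pi_inv = np.argsort(pi)                # its inverse

# A toy linear "model" whose weights are permuted consistently with pi,
# so inference on permuted inputs yields outputs permuted the same way.
W = rng.standard_normal((d, d))
W_perm = W[pi][:, pi]                  # rows and columns permuted by pi

x = rng.standard_normal(d)             # user input

# User side: permute the input before sending it out.
x_perm = x[pi]

# Server side: ordinary matrix multiply on permuted data,
# without learning x or the unpermuted weights W.
y_perm = x_perm @ W_perm

# User side: undo the permutation on the returned output.
y = y_perm[pi_inv]

# The recovered output matches plaintext inference.
assert np.allclose(y, x @ W)
```

The key invariant is that permuting both the weight rows and columns by the same secret permutation commutes with the matrix product, so the only extra work on the user side is indexing by `pi` and `pi_inv`.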

Metadata
Available format(s)
PDF
Category
Applications
Publication info
Preprint.
Keywords
large language model, transformer, inference, secure protocol
Contact author(s)
ym0813 @ mail ustc edu cn
zhanglan @ ustc edu cn
xiangyangli @ ustc edu cn
History
2023-11-17: approved
2023-11-15: received
Short URL
https://ia.cr/2023/1763
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2023/1763,
      author = {Mu Yuan and Lan Zhang and Xiang-Yang Li},
      title = {Secure Transformer Inference},
      howpublished = {Cryptology ePrint Archive, Paper 2023/1763},
      year = {2023},
      note = {\url{https://eprint.iacr.org/2023/1763}},
      url = {https://eprint.iacr.org/2023/1763}
}