Paper 2024/1132

A New PPML Paradigm for Quantized Models

Tianpei Lu, The State Key Laboratory of Blockchain and Data Security, Zhejiang University
Bingsheng Zhang, The State Key Laboratory of Blockchain and Data Security, Zhejiang University
Xiaoyuan Zhang, The State Key Laboratory of Blockchain and Data Security, Zhejiang University
Kui Ren, The State Key Laboratory of Blockchain and Data Security, Zhejiang University
Abstract

Model quantization has become a common practice in machine learning (ML) to improve efficiency and reduce computation and communication overhead. However, adopting quantization in privacy-preserving machine learning (PPML) remains challenging due to the complex internal structure of quantized operators, which leads to inefficient protocols under the existing PPML frameworks. In this work, we propose a new PPML paradigm that is tailor-made for and can benefit from quantized models. Our main observation is that a lookup table is oblivious to the internal structure of the function it encodes, which can be exploited to simplify quantized operator evaluation. We view the model inference process as a sequence of quantized operators, and each operator is implemented by a lookup table. We then develop an efficient private lookup table evaluation protocol whose online communication cost is only $\log n$, where $n$ is the size of the lookup table. On a single CPU core, our protocol can evaluate $2^{26}$ tables with 8-bit input and 8-bit output per second. The resulting PPML framework for quantized models offers extremely fast online performance. The experimental results demonstrate that our quantization strategy achieves substantial speedups over SOTA PPML solutions, improving the online performance by $40\sim 60 \times$ w.r.t. convolutional neural network (CNN) models, such as AlexNet, VGG16, and ResNet18, and by $10\sim 25 \times$ w.r.t. large language models (LLMs), such as GPT-2, GPT-Neo, and Llama2.
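As a plaintext illustration of the lookup-table view described above (not the paper's private evaluation protocol), the sketch below shows how an 8-bit quantized operator collapses into a 256-entry table: the table is built once from the dequantize-apply-requantize pipeline, and inference then replaces the operator's internal arithmetic with a single lookup. The function and quantization parameters (`build_lut`, `s_in`, `z_in`, etc.) are hypothetical choices for illustration only; in the paper each such table would be evaluated obliviously by the private lookup-table protocol, with only $\log n$ bits of online communication (here $n = 256$).

```python
# Illustrative sketch only: an 8-bit quantized operator (here a GELU with
# made-up scales/zero-points) tabulated as a 256-entry lookup table.
import math

def build_lut(f, s_in, z_in, s_out, z_out):
    """Tabulate q_out = quant(f(dequant(q_in))) for every 8-bit input code."""
    table = []
    for q_in in range(256):
        x = s_in * (q_in - z_in)               # dequantize the input code
        y = f(x)                               # apply the real-valued operator
        q_out = round(y / s_out) + z_out       # requantize the result
        table.append(min(255, max(0, q_out)))  # clamp to the 8-bit range
    return table

def gelu(x):
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical quantization parameters, for illustration only.
lut = build_lut(gelu, s_in=0.05, z_in=128, s_out=0.04, z_out=128)

q_in = 200            # an 8-bit activation code
q_out = lut[q_in]     # evaluating the operator is now one table lookup
```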

Note: Major revision, NDSS 2025

Metadata
Available format(s)
PDF
Category
Cryptographic protocols
Publication info
Published elsewhere. NDSS 2025
DOI
10.14722/ndss.2025.242872
Keywords
Model Quantization, PPML, MPC
Contact author(s)
lutianpei @ zju edu cn
bingsheng @ zju edu cn
zhangxiaoyuan @ zju edu cn
kuiren @ zju edu cn
History
2024-11-30: last of 2 revisions
2024-07-11: received
Short URL
https://ia.cr/2024/1132
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2024/1132,
      author = {Tianpei Lu and Bingsheng Zhang and Xiaoyuan Zhang and Kui Ren},
      title = {A New {PPML} Paradigm for Quantized Models},
      howpublished = {Cryptology {ePrint} Archive, Paper 2024/1132},
      year = {2024},
      doi = {10.14722/ndss.2025.242872},
      url = {https://eprint.iacr.org/2024/1132}
}