Paper 2024/703
An Efficient and Extensible Zero-knowledge Proof Framework for Neural Networks
Abstract
In recent years, cloud vendors have begun to offer paid data-analysis services through interfaces to their well-trained neural network models. However, customers lack tools to verify that the outcomes supplied by cloud vendors are correct inferences from particular models, in the face of lazy or malicious vendors. The cryptographic primitive called zero-knowledge proof (ZKP) addresses this problem: it makes the outcomes verifiable without leaking information about the models. Unfortunately, existing ZKP schemes for neural networks incur high computational overheads, especially for the non-linear layers. In this paper, we propose an efficient and extensible ZKP framework for neural networks that improves the performance of proofs for non-linear layers. Whereas previous works rely on bit decomposition, we convert complex non-linear relations into range and exponent relations, which significantly reduces the number of constraints required to prove non-linear layers. Moreover, we adopt a modular design that makes our framework compatible with more neural networks. Specifically, we propose two enhanced range and lookup proofs as basic building blocks; they efficiently prove the satisfaction of range and exponent relations. We then constrain the correct computation of primitive non-linear operations using a small number of range and exponent relations. Finally, we build our ZKP framework from the primitive operations up to entire neural networks, offering the flexibility to extend to various architectures. We implement our ZKPs for convolutional and transformer neural networks. The evaluation results show that our work achieves over $168.6\times$ (up to $477.2\times$) speedup for separated non-linear layers and a $41.4\times$ speedup for the entire ResNet-101 convolutional neural network, compared with the state-of-the-art work Mystique. In addition, our work can prove GPT-2, a transformer neural network with $117$ million parameters, in $287.1$ seconds, achieving a $35.7\times$ speedup over ZKML, a state-of-the-art work supporting transformer neural networks.
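To make the constraint-reduction idea concrete, the sketch below is a minimal Python check (outside any real proof system, and not the paper's actual constraint encoding) of how a non-linear operation such as ReLU can be enforced with a single multiplication constraint plus two range relations, instead of decomposing the input into bits. The bound `B` and the helper `relu_constraints_hold` are hypothetical names introduced here for illustration; activations are assumed to be quantized integers of bounded magnitude.

```python
# Minimal sketch (illustrative only): proving y = ReLU(x) with one
# multiplication constraint and two range relations, rather than a
# full bit decomposition of x. Values are assumed to be quantized
# integers with magnitude below a hypothetical bound B.

B = 2**16  # hypothetical range bound on quantized activations


def relu_constraints_hold(x: int, y: int) -> bool:
    """Check the relations a prover would have to satisfy for y = ReLU(x).

    1. y * (y - x) == 0   -- forces y to equal either x or 0;
    2. 0 <= y < B         -- if y == x, this forces x >= 0;
    3. 0 <= y - x < B     -- if y == 0, this forces x <= 0.

    Each range relation stands in for what would otherwise be a
    bit decomposition, so the constraint count drops sharply.
    """
    return y * (y - x) == 0 and 0 <= y < B and 0 <= y - x < B


# The relations accept exactly the correct witness:
assert relu_constraints_hold(5, 5)        # ReLU(5)  = 5
assert relu_constraints_hold(-3, 0)       # ReLU(-3) = 0
assert not relu_constraints_hold(-3, -3)  # wrong witness rejected
assert not relu_constraints_hold(5, 0)    # wrong witness rejected
```

In the framework itself, range relations of this kind are discharged by the enhanced range proof, while exponent relations (e.g., those arising in softmax) are discharged by the enhanced lookup proof; the sketch only illustrates why far fewer constraints are needed than with bit decomposition.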
Metadata
- Category: Applications
- Publication info: Preprint.
- Keywords: Zero-knowledge Proof; Neural Networks
- Contact author(s):
  - lutao2020@zju.edu.cn
  - whaoyu@zju.edu.cn
  - wenjiequ@u.nus.edu
  - zhwang@zju.edu.cn
  - jin-yehe@outlook.com
  - tianyangtao@u.nus.edu
  - chenwz@zju.edu.cn
  - jhzhang@nus.edu.sg
- History:
  - 2024-05-10: approved
  - 2024-05-07: received
- Short URL: https://ia.cr/2024/703
- License: CC BY
BibTeX
@misc{cryptoeprint:2024/703,
      author = {Tao Lu and Haoyu Wang and Wenjie Qu and Zonghui Wang and Jinye He and Tianyang Tao and Wenzhi Chen and Jiaheng Zhang},
      title = {An Efficient and Extensible Zero-knowledge Proof Framework for Neural Networks},
      howpublished = {Cryptology {ePrint} Archive, Paper 2024/703},
      year = {2024},
      url = {https://eprint.iacr.org/2024/703}
}