Paper 2024/1018
Sparsity-Aware Protocol for ZK-friendly ML Models: Shedding Lights on Practical ZKML
Abstract
As deep learning is widely adopted across various domains, ensuring the integrity of models has become increasingly crucial. Despite recent advances in Zero-Knowledge Machine Learning (ZKML) techniques, proving inference over large ML models remains prohibitively expensive. To enable practical ZKML, model simplification techniques such as pruning and quantization should be applied without hesitation. Contrary to conventional belief, recent developments in the ML space have demonstrated that these simplification techniques not only condense complex models into forms with sparse, low-bit weight matrices, but also maintain exceptionally high model accuracy that matches that of their unsimplified counterparts. While such transformed models appear inherently ZK-friendly, directly applying existing ZK proof frameworks to them still leads to suboptimal inference-proving performance. To make ZKML truly practical, a quantization- and pruning-aware ZKML framework is needed. In this paper, we propose SpaGKR, a novel sparsity-aware ZKML framework that is proven to surpass the capabilities of existing ZKML methods. SpaGKR is a general framework that applies to any computation structure in which sparsity arises. It is designed to be modular - all existing GKR-based ZKML frameworks can be seamlessly integrated with it to obtain remarkable compounding performance enhancements. We tailor SpaGKR specifically to the most commonly used neural network structure - the linear layer - and propose the SpaGKR-LS protocol, which achieves asymptotically optimal prover time. Notably, when applied to a special class of simplified models - ternary networks - SpaGKR-LS achieves further efficiency gains by additionally leveraging the low-bit nature of the model parameters.
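To give intuition for why such simplified models are ZK-friendly, consider the computation SpaGKR-LS targets: a pruned, ternary linear layer y = Wx. Below is a minimal Python sketch (an illustration of the underlying computation, not the protocol itself), assuming a hypothetical coordinate-list representation of the weight matrix; it shows how the work scales with the number of nonzero weights rather than the full matrix size, and how ternary weights in {-1, 0, +1} avoid multiplications entirely.

```python
# Illustrative sketch only, not the SpaGKR-LS protocol: evaluating a pruned,
# ternary linear layer y = W x from the nonzero entries of W. The
# coordinate-list format and function name here are hypothetical.

def ternary_matvec(nonzeros, x, n_rows):
    """Compute y = W x where W is given only by its nonzero entries.

    nonzeros: list of (row, col, w) triples with w in {-1, +1}; zero weights
    are omitted, so the loop below does O(nnz) work instead of the
    O(n_rows * n_cols) work a dense matrix-vector product would need.
    """
    y = [0] * n_rows
    for r, c, w in nonzeros:
        # Ternary weights need no multiplication: just add or subtract x[c].
        y[r] += x[c] if w == 1 else -x[c]
    return y

# Tiny usage example: a 4x4 weight matrix with only 3 nonzero entries.
nonzeros = [(0, 1, 1), (2, 3, -1), (3, 0, 1)]
x = [5, 7, 2, 9]
print(ternary_matvec(nonzeros, x, n_rows=4))  # -> [7, 0, -9, 5]
```

A sparsity-aware prover can organize its work around exactly these nonzero terms, which is the source of the asymptotic savings the abstract describes.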
Metadata
- Category: Cryptographic protocols
- Publication info: Preprint
- Keywords: zero knowledge proofs, zero knowledge machine learning, GKR
- Contact author(s): alan @ brevis network, victor @ brevis network, mdong @ brevis network
- History:
  - 2024-06-28: approved
  - 2024-06-24: received
- Short URL: https://ia.cr/2024/1018
- License: CC BY
BibTeX
@misc{cryptoeprint:2024/1018,
  author = {Alan Li and Qingkai Liang and Mo Dong},
  title = {Sparsity-Aware Protocol for {ZK}-friendly {ML} Models: Shedding Lights on Practical {ZKML}},
  howpublished = {Cryptology {ePrint} Archive, Paper 2024/1018},
  year = {2024},
  url = {https://eprint.iacr.org/2024/1018}
}