Paper 2024/162

Zero-Knowledge Proofs of Training for Deep Neural Networks

Kasra Abbaszadeh, University of Maryland, College Park
Christodoulos Pappas, Hong Kong University of Science and Technology
Jonathan Katz, Google (United States), University of Maryland, College Park
Dimitrios Papadopoulos, Hong Kong University of Science and Technology
Abstract

A zero-knowledge proof of training (zkPoT) enables a party to prove that they have correctly trained a committed model based on a committed dataset without revealing any additional information about the model or the dataset. An ideal zkPoT should offer provable security and privacy guarantees, succinct proof size and verifier runtime, and practical prover efficiency. In this work, we present Kaizen, a zkPoT targeted at deep neural networks (DNNs) that achieves all of these goals at once. Our construction enables a prover to iteratively train their model via (mini-batch) gradient descent, where the number of iterations need not be fixed in advance; at the end of each iteration, the prover generates a commitment to the trained model parameters, together with a succinct zkPoT attesting to the correctness of all iterations executed so far. The proof size and verifier time are independent of the number of iterations. Our construction relies on two building blocks. First, we propose an optimized GKR-style (sumcheck-based) proof system for the gradient-descent algorithm with concretely efficient prover cost; this allows the prover to generate a proof for each iteration. We then show how to recursively compose these proofs across multiple iterations to attain succinctness. Of independent interest, we propose a generic framework for efficient recursive composition of GKR-style proofs, along with aggregatable polynomial commitments. Benchmarks indicate that Kaizen can handle the training of complex models such as VGG-11 with 10~million parameters and batch size~$16$. The prover runtime is $15$~minutes per iteration, which is $\mathbf{24\times}$ faster than generic recursive proofs, with prover memory overhead $\mathbf{27\times}$ lower. The proof size is $1.63$~megabytes and the verifier runtime is only $130$~milliseconds, both independent of the number of iterations and the size of the dataset.
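To make the per-iteration workflow described above concrete, the following is a minimal structural sketch in Python of the prover loop: perform one (mini-batch) gradient-descent step, commit to the updated parameters, prove the step, and recursively fold that proof into the running proof. All helper names (commit, prove_iteration, fold, train_with_zkpot) are hypothetical placeholders introduced here for illustration, and the hash-based stand-ins replace the paper's aggregatable polynomial commitments and GKR-style/recursive proofs; this is not the paper's implementation or API.

# Structural sketch only: real commitments and proofs are replaced by hash stand-ins.
import hashlib
from dataclasses import dataclass

def commit(params):
    # Placeholder commitment: a hash of the serialized parameters. The paper uses
    # aggregatable polynomial commitments; this stand-in only fixes the interface.
    return hashlib.sha256(repr(params).encode()).hexdigest()

@dataclass
class Proof:
    # Stand-in for a succinct zkPoT; constant size, independent of iteration count.
    digest: str

def gradient_step(params, batch, lr=0.01):
    # One mini-batch gradient-descent step on a toy quadratic loss (p*x - target)^2.
    xs, targets = batch
    grads = [2 * (p * x - t) * x for p, x, t in zip(params, xs, targets)]
    return [p - lr * g for p, g in zip(params, grads)]

def prove_iteration(prev_commit, new_commit, data_commit):
    # Stand-in for the GKR-style (sumcheck-based) proof that one gradient-descent
    # iteration was performed correctly relative to the committed data.
    msg = f"{prev_commit}|{new_commit}|{data_commit}".encode()
    return Proof(hashlib.sha256(msg).hexdigest())

def fold(prev_proof, step_proof):
    # Stand-in for recursive composition: the new proof attests to all iterations
    # executed so far while remaining constant-size.
    prev = prev_proof.digest if prev_proof else ""
    return Proof(hashlib.sha256(f"{prev}|{step_proof.digest}".encode()).hexdigest())

def train_with_zkpot(params, batches, data_commit):
    proof = None
    for batch in batches:
        prev_commit = commit(params)
        params = gradient_step(params, batch)         # training step
        new_commit = commit(params)                   # commit to updated model
        step_proof = prove_iteration(prev_commit, new_commit, data_commit)
        proof = fold(proof, step_proof)               # recursive composition
    return params, commit(params), proof

if __name__ == "__main__":
    batches = [([1.0, 2.0], [0.5, 1.0])] * 3          # toy dataset, 3 iterations
    params, model_commit, proof = train_with_zkpot([0.0, 0.0], batches, commit("dataset"))
    print(model_commit, proof.digest)

Note how the number of iterations is not fixed in advance and the final proof object stays the same size regardless of how many batches were processed, mirroring the succinctness property claimed in the abstract.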

Note: This is an extended version of the paper appearing at CCS'24. Compared to the initial version, it reports an evaluation of the implementation of a tree-based recursion strategy, clarifies the linear-time prover, and reports revised R1CS size calculations.

Metadata
Available format(s)
PDF
Category
Cryptographic protocols
Publication info
Published elsewhere. Major revision. ACM CCS 2024
Keywords
zero-knowledge proofs, machine learning, deep neural networks, proof of training, incrementally verifiable computation
Contact author(s)
kasraz@umd.edu
cpappas@connect.ust.hk
jkatz2@gmail.com
dipapado@cse.ust.hk
History
2024-07-22: last of 5 revisions
2024-02-04: received
Short URL
https://ia.cr/2024/162
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2024/162,
      author = {Kasra Abbaszadeh and Christodoulos Pappas and Jonathan Katz and Dimitrios Papadopoulos},
      title = {Zero-Knowledge Proofs of Training for Deep Neural Networks},
      howpublished = {Cryptology ePrint Archive, Paper 2024/162},
      year = {2024},
      note = {\url{https://eprint.iacr.org/2024/162}},
      url = {https://eprint.iacr.org/2024/162}
}