Paper 2024/227

Adaptively Sound Zero-Knowledge SNARKs for UP

Surya Mathialagan, Massachusetts Institute of Technology
Spencer Peters, Cornell University
Vinod Vaikuntanathan, Massachusetts Institute of Technology
Abstract

We study succinct non-interactive arguments (SNARGs) and succinct non-interactive arguments of knowledge (SNARKs) for the class $\mathsf{UP}$ in the reusable designated-verifier model. $\mathsf{UP}$ is an expressive subclass of $\mathsf{NP}$ consisting of all $\mathsf{NP}$ languages where each instance has at most one witness; a designated-verifier SNARG (dvSNARG) is one where verifying a SNARG proof requires a private verification key; and such a dvSNARG is reusable if soundness holds even against a malicious prover with oracle access to the (private) verification algorithm. Our main results are as follows.

(1) A reusably and adaptively sound zero-knowledge (zk) dvSNARG for $\mathsf{UP}$, from subexponential LWE and evasive LWE (a relatively new but popular variant of LWE). Our SNARGs achieve very short proofs of length $(1 + o(1)) \cdot \lambda$ bits for $2^{-\lambda}$ soundness error.

(2) A generic transformation that lifts any "Sahai-Waters-like" (zk) SNARG to an adaptively sound (zk) SNARG, in the designated-verifier setting. In particular, this shows that the Sahai-Waters SNARG for $\mathsf{NP}$ is adaptively sound in the designated-verifier setting, assuming subexponential hardness of the underlying assumptions. The resulting SNARG proofs have length $(1 + o(1)) \cdot \lambda$ bits for $2^{-\lambda}$ soundness error. Our result sidesteps the Gentry-Wichs barrier for adaptive soundness by employing an exponential-time security reduction.

(3) A generic transformation, building on the work of Campanelli, Ganesh, Khoshakhlagh, and Siim (Asiacrypt 2023), that lifts any adaptively sound (zk) SNARG for $\mathsf{UP}$ to an adaptively sound (zk) SNARK for $\mathsf{UP}$, while preserving zero-knowledge. The resulting SNARK achieves the strong notion of black-box extraction. There are barriers to achieving such SNARKs for all of $\mathsf{NP}$ from falsifiable assumptions, so our restriction to $\mathsf{UP}$ is, in a sense, necessary.

Applying (3) to our SNARG for $\mathsf{UP}$ from evasive LWE (1), we obtain a reusably and adaptively sound designated-verifier zero-knowledge SNARK for $\mathsf{UP}$ from subexponential LWE and evasive LWE. Moreover, applying both (2) and (3) to the Sahai-Waters SNARG, we obtain the same result from LWE, subexponentially secure one-way functions, and subexponentially secure indistinguishability obfuscation. Both constructions have succinct proofs of size $\mathsf{poly}(\lambda)$. These are the first SNARK constructions (even in the designated-verifier setting) for a non-trivial subset of $\mathsf{NP}$ from (subexponentially) falsifiable assumptions.
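As a concrete reference point, the following minimal LaTeX sketch records one standard way to formalize a dvSNARG and the reusable adaptive-soundness experiment the abstract refers to. The algorithm names ($\mathsf{Gen}$, $\mathsf{Prove}$, $\mathsf{Verify}$) and the exact game formulation are our own illustration under common conventions, not definitions taken from the paper.

% Minimal sketch (our notation, not the paper's) of a designated-verifier
% SNARG and the reusable adaptive-soundness experiment described above.
\documentclass{article}
\usepackage{amsmath,amssymb}
\newcommand{\Gen}{\mathsf{Gen}}
\newcommand{\Prove}{\mathsf{Prove}}
\newcommand{\Verify}{\mathsf{Verify}}
\begin{document}

A dvSNARG for a language $L \in \mathsf{UP}$ with witness relation $R$
is a triple of algorithms $(\Gen, \Prove, \Verify)$:
\begin{align*}
  (\mathsf{crs}, \mathsf{vk}) &\gets \Gen(1^\lambda), \\
  \pi &\gets \Prove(\mathsf{crs}, x, w) \quad \text{for } (x, w) \in R, \\
  b &\gets \Verify(\mathsf{vk}, x, \pi),
\end{align*}
where $\mathsf{vk}$ is kept private by the verifier. \emph{Reusable
adaptive soundness} asks that for every efficient prover $P^*$ given
oracle access to $\Verify(\mathsf{vk}, \cdot, \cdot)$,
\[
  \Pr\big[(x, \pi) \gets (P^*)^{\Verify(\mathsf{vk},\cdot,\cdot)}(\mathsf{crs})
    \;:\; x \notin L \,\wedge\, \Verify(\mathsf{vk}, x, \pi) = 1\big]
  \le 2^{-\lambda}.
\]
Here $x$ may be chosen adaptively, after $\mathsf{crs}$ is seen.

\end{document}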

Metadata
Available format(s)
PDF
Publication info
Preprint.
Keywords
SNARG, SNARK, NIZK, UP
Contact author(s)
smathi@mit.edu
sp2473@cornell.edu
vinodv@mit.edu
History
2024-04-01: last of 2 revisions
2024-02-14: received
Short URL
https://ia.cr/2024/227
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2024/227,
      author = {Surya Mathialagan and Spencer Peters and Vinod Vaikuntanathan},
      title = {Adaptively Sound Zero-Knowledge SNARKs for UP},
      howpublished = {Cryptology ePrint Archive, Paper 2024/227},
      year = {2024},
      note = {\url{https://eprint.iacr.org/2024/227}},
      url = {https://eprint.iacr.org/2024/227}
}