Paper 2024/227
Adaptively Sound Zero-Knowledge SNARKs for UP
Abstract
We study succinct non-interactive arguments (SNARGs) and succinct non-interactive arguments of knowledge (SNARKs) for the class $\mathsf{UP}$ in the reusable designated-verifier model. $\mathsf{UP}$ is an expressive subclass of $\mathsf{NP}$ consisting of all $\mathsf{NP}$ languages where each instance has at most one witness; a designated-verifier SNARG (dvSNARG) is one where verification of the SNARG proof requires a private verification key; and such a dvSNARG is reusable if soundness holds even against a malicious prover with oracle access to the (private) verification algorithm. Our main results are as follows.

(1) A reusably and adaptively sound zero-knowledge (zk) dvSNARG for $\mathsf{UP}$, from subexponential LWE and evasive LWE (a relatively new but popular variant of LWE). Our SNARGs achieve very short proofs of length $(1 + o(1)) \cdot \lambda$ bits for $2^{-\lambda}$ soundness error.

(2) A generic transformation that lifts any ``Sahai-Waters-like'' (zk) SNARG to an adaptively sound (zk) SNARG, in the designated-verifier setting. In particular, this shows that the Sahai-Waters SNARG for $\mathsf{NP}$ is adaptively sound in the designated-verifier setting, assuming subexponential hardness of the underlying assumptions. The resulting SNARG proofs have length $(1 + o(1)) \cdot \lambda$ bits for $2^{-\lambda}$ soundness error. Our result sidesteps the Gentry-Wichs barrier for adaptive soundness by employing an exponential-time security reduction.

(3) A generic transformation, building on the work of Campanelli, Ganesh, et al., that lifts any adaptively sound (zk) SNARG for $\mathsf{UP}$ to an adaptively sound (zk) SNARK for $\mathsf{UP}$, while preserving zero-knowledge. The resulting SNARK achieves the strong notion of black-box extraction. There are barriers to achieving such SNARKs for all of $\mathsf{NP}$ from falsifiable assumptions, so our restriction to $\mathsf{UP}$ is, in a sense, necessary.
Applying (3) to our SNARG for $\mathsf{UP}$ from evasive LWE (1), we obtain a reusably and adaptively sound designated-verifier zero-knowledge SNARK for $\mathsf{UP}$ from subexponential LWE and evasive LWE. Moreover, applying both (2) and (3) to the Sahai-Waters SNARG, we obtain the same result from LWE, subexponentially secure one-way functions, and subexponentially secure indistinguishability obfuscation. Both constructions have succinct proofs of size $\mathsf{poly}(\lambda)$. These are the first SNARK constructions (even in the designated-verifier setting) for a nontrivial subset of $\mathsf{NP}$ from (subexponentially) falsifiable assumptions.
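As an illustrative sketch (not part of the paper), the defining property of $\mathsf{UP}$ — each instance has at most one witness — can be seen in a toy relation: discrete logarithm in a prime-order group. The parameters below are hypothetical toy values chosen only to make the uniqueness check explicit; any real instantiation would use cryptographically sized groups.

```python
# Toy UP relation (illustrative only, hypothetical parameters):
# instances are group elements y = g^x mod p, witnesses are exponents x.
# Because g has prime order q, the map x -> g^x is injective on [0, q),
# so every instance in the language has exactly one witness.
p, q, g = 1019, 509, 4  # p = 2q + 1, both prime; g generates the order-q subgroup

def witnesses(y):
    """Return all witnesses x in [0, q) with g^x = y (mod p)."""
    return [x for x in range(q) if pow(g, x, p) == y]

# An instance in the language has exactly one witness:
assert witnesses(pow(g, 123, p)) == [123]
# An instance outside the language has none:
assert witnesses(0) == []
```

A SNARK for this relation must convince the verifier that the prover knows the (unique) discrete log; the paper's black-box extractor exploits exactly this uniqueness, which has no analogue for general $\mathsf{NP}$ relations with many witnesses.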
Metadata
 Available format(s)
 Publication info
 Preprint.
 Keywords
 SNARG, SNARK, NIZK, UP
 Contact author(s)

smathi @ mit edu
sp2473 @ cornell edu
vinodv @ mit edu
 History
 20240401: last of 2 revisions
 20240214: received
 See all versions
 Short URL
 https://ia.cr/2024/227
 License

CC BY
BibTeX
@misc{cryptoeprint:2024/227,
      author = {Surya Mathialagan and Spencer Peters and Vinod Vaikuntanathan},
      title = {Adaptively Sound Zero-Knowledge {SNARKs} for {UP}},
      howpublished = {Cryptology ePrint Archive, Paper 2024/227},
      year = {2024},
      note = {\url{https://eprint.iacr.org/2024/227}},
      url = {https://eprint.iacr.org/2024/227}
}