Paper 2024/227
Adaptively Sound Zero-Knowledge SNARKs for UP
Surya Mathialagan, Massachusetts Institute of Technology
Spencer Peters, Cornell University
Vinod Vaikuntanathan, Massachusetts Institute of Technology
Abstract
We study succinct non-interactive arguments (SNARGs) and succinct non-interactive arguments of knowledge (SNARKs) for the class $\mathsf{UP}$ in the reusable designated verifier model. $\mathsf{UP}$ is an expressive subclass of $\mathsf{NP}$ consisting of all $\mathsf{NP}$ languages where each instance has at most one witness; a designated verifier SNARG (dvSNARG) is one where verification of the SNARG proof requires a private verification key; and such a dvSNARG is reusable if soundness holds even against a malicious prover with oracle access to the (private) verification algorithm.
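The unique-witness condition defining $\mathsf{UP}$ can be stated formally. A standard formulation (the verifier $V$ and witness-length bound $p$ below are the usual $\mathsf{NP}$ conventions, not notation from this paper) is:

```latex
% L is in UP iff there exist a polynomial-time verifier V and a
% polynomial p such that, for every instance x,
%   x \in L \iff \exists w \text{ with } |w| \le p(|x|)
%                \text{ and } V(x, w) = 1,
% and, in addition, each instance has at most one accepting witness:
\[
  \forall x\ \forall w, w' :\quad
  \bigl( V(x, w) = 1 \;\wedge\; V(x, w') = 1 \bigr)
  \;\Longrightarrow\; w = w'.
\]
```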
Our main results are as follows.
(1) A reusably and adaptively sound zero-knowledge (zk) dvSNARG for $\mathsf{UP}$, from subexponential LWE and evasive LWE (a relatively new but popular variant of LWE). Our SNARGs achieve very short proofs of length $(1 + o(1)) \cdot \lambda$ bits for $2^{-\lambda}$ soundness error.
(2) A generic transformation that lifts any ``Sahai-Waters-like'' (zk) SNARG to an adaptively sound (zk) SNARG, in the designated-verifier setting. In particular, this shows that the Sahai-Waters SNARG for $\mathsf{NP}$ is adaptively sound in the designated verifier setting, assuming subexponential hardness of the underlying assumptions. The resulting SNARG proofs have length $(1 + o(1)) \cdot \lambda$ bits for $2^{-\lambda}$ soundness error. Our result sidesteps the Gentry-Wichs barrier for adaptive soundness by employing an exponential-time security reduction.
(3) A generic transformation, building on the work of Campanelli, Ganesh, Khoshakhlagh and Siim (ASIACRYPT 2021), that lifts any adaptively sound (zk) SNARG for $\mathsf{UP}$ to an adaptively sound (zk) SNARK for $\mathsf{UP}$, while preserving zero-knowledge. The resulting SNARK achieves the strong notion of black-box extraction. There are barriers to achieving such SNARKs for all of $\mathsf{NP}$ from falsifiable assumptions, so our restriction to $\mathsf{UP}$ is, in a sense, necessary.
Applying (3) to our SNARG for $\mathsf{UP}$ from evasive LWE (1), we obtain a reusably and adaptively sound designated-verifier zero-knowledge SNARK for $\mathsf{UP}$ from subexponential LWE and evasive LWE. Moreover, applying both (2) and (3) to the Sahai-Waters SNARG, we obtain the same result from LWE, subexponentially secure one-way functions, and subexponentially secure indistinguishability obfuscation. Both constructions have succinct proofs of size $O(\lambda)$. These are the first SNARK constructions (even in the designated-verifier setting) for a non-trivial subset of $\mathsf{NP}$ from (sub-exponentially) falsifiable assumptions.