Non-Interactive Zero-Knowledge Proofs with Certified Deletion
We introduce the notion of non-interactive zero-knowledge (NIZK) proofs with certified deletion. Our notion enables the recipient of a quantum NIZK proof for a (quantumly hard) NP statement to delete the proof and collapse it into a classical deletion certificate. Once this certificate is successfully validated, we require the recipient of the proof to lose their ability to find accepting inputs to NIZK verification.
We formally define this notion and build several candidate constructions from standard cryptographic assumptions. In particular, we propose a primary construction from classical NIZK for NP and one-way functions, albeit with two limitations: (i) deletion certificates are only privately verifiable, and (ii) both prover and verifier are required to be quantum algorithms. We resolve these hurdles in two extensions that assume the quantum hardness of the learning with errors problem. The first one achieves publicly verifiable certificates, and the second one requires merely classical communication between classical provers and quantum verifiers.
Notions of Quantum Reductions and Impossibility of Statistical NIZK
Non-Interactive Zero-Knowledge Arguments (NIZKs) are cryptographic protocols that enable a prover to demonstrate the validity of an $\mathsf{NP}$ statement to a verifier with a single message, without revealing any additional information. The soundness and zero-knowledge properties of a NIZK correspond to security against a malicious prover and a malicious verifier, respectively. Statistical NIZKs (S-NIZKs) are a variant of NIZKs for which the zero-knowledge property is guaranteed to hold information-theoretically. Previous works have shown that S-NIZKs satisfying a weak version of soundness known as static soundness exist based on standard assumptions. However, the work of Pass (TCC 2013) showed that S-NIZKs with the stronger \emph{adaptive} soundness property are inherently challenging to obtain. The work proved that standard (black-box) proof techniques are insufficient to prove the security of an S-NIZK based on any standard (falsifiable) assumption. We extend this result to the setting where parties can perform quantum computations and communicate using quantum information, while the quantum security reduction is restricted to query the adversary classically. To this end, we adapt the well-known meta-reduction paradigm for showing impossibility results to the quantum setting. Additionally, we reinterpret our result using a new framework for studying quantum reductions, which we believe to be of independent interest.
The LaZer Library: Lattice-Based Zero Knowledge and Succinct Proofs for Quantum-Safe Privacy
The hardness of lattice problems offers one of the most promising security foundations for quantum-safe cryptography. Basic schemes for public key encryption and digital signatures are already close to standardization at NIST and several other standardization bodies, and the research frontier has moved on to building primitives with more advanced privacy features. At the core of many such primitives are zero-knowledge proofs. In recent years, zero-knowledge proofs for (and using) lattice relations have seen a dramatic jump in efficiency, and they currently provide arguably the shortest, and most computationally efficient, quantum-safe proofs for many scenarios. The main difficulty in using these proofs by non-experts (and experts!) is that they have a lot of moving parts and a lot of internal parameters depend on the particular instance that one is trying to prove.
Our main contribution is a library for zero-knowledge and succinct proofs which consists of efficient and flexible C code underneath a simple-to-use Python interface. Users without any background in lattice-based proofs should be able to specify the lattice relations and the norm bounds that they would like to prove, and the library will automatically create a proof system, complete with the intrinsic parameters, using either the succinct proofs of LaBRADOR (Beullens and Seiler, Crypto 2023) or the linear-size, though smaller for certain applications, proofs of Lyubashevsky et al. (Crypto 2022). The Python interface also allows for common operations used in lattice-based cryptography, which will enable users to write and prototype their full protocols within the syntactically simple Python environment.
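As a generic illustration of the kind of statement such a library is built to prove (our notation, not LaZer's actual API), a typical lattice relation with a norm bound is

\[ \text{ZK-PoK}\bigl\{\, s \;:\; A s = t \ \text{over}\ R_q \ \wedge\ \lVert s \rVert \le \beta \,\bigr\}, \]

where $R_q$ is a polynomial ring such as $\mathbb{Z}_q[X]/(X^d+1)$, the witness $s$ is short, and the proof reveals nothing about $s$ beyond the relation and the bound.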
We showcase some of the library’s usefulness by giving protocol implementations for blind signatures, anonymous credentials, the zero-knowledge proof needed in the recent Swoosh protocol (Gajland et al., Usenix 2024), proving knowledge of Kyber keys, and an aggregate signature scheme. Most of these are the most efficient known quantum-safe instantiations in terms of size, speed, and memory.
Single-Server Client Preprocessing PIR with Tight Space-Time Trade-off
This paper partially resolves the open problem of a tight trade-off between client storage and server time in the client preprocessing setting of private information retrieval (PIR). In the client preprocessing setting of PIR, the client is allowed to store some hints generated from the database in a preprocessing phase and use the hints to assist online queries. We construct a new single-server client preprocessing PIR scheme. For a database with $n$ entries of size $w$, our protocol uses $S=O((n/T) \cdot (\log n + w))$ bits of client storage and $T$ amortized server probes over $n/T$ queries, where $T$ is a tunable online time parameter. Our scheme matches (up to constant factors) an $S \cdot T = \Omega(nw)$ lower bound generalized from a recent work by Yeo (EUROCRYPT 2023) and a communication barrier generalized from Ishai, Shi, and Wichs (CRYPTO 2024).
From a technical standpoint, we present a novel organization of hints where each PIR query consumes a hint, and entries in the consumed hint are relocated to other hints. We then present a new data structure to track the hint relocations and use small-domain pseudorandom permutations to make the hint storage sublinear while maintaining efficient lookups in the hints.
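For readers new to the paradigm, here is a toy sketch of hint-based PIR in the style the paper builds on (illustrative only: real schemes make the subsets pseudorandom and query the server on the punctured set rather than recomputing locally, and this paper's contribution is precisely a more economical hint organization with relocation):

```python
import random

# Toy client-preprocessing PIR: each hint is the XOR of a random subset of
# database entries; answering a query consumes the hint covering the index.
def setup(db, num_hints, subset_size):
    hints = []
    for _ in range(num_hints):
        subset = set(random.sample(range(len(db)), subset_size))
        parity = 0
        for i in subset:
            parity ^= db[i]
        hints.append((subset, parity))
    return hints

def query(db, hints, target):
    for k, (subset, parity) in enumerate(hints):
        if target in subset:
            hints.pop(k)  # consumed; the paper instead relocates its entries
            # In a real protocol the server is asked for the punctured set
            # subset - {target}; here we recompute it locally for brevity.
            rest = 0
            for i in subset - {target}:
                rest ^= db[i]
            return parity ^ rest
    raise LookupError("no hint covers this index; re-run preprocessing")

db = [random.randrange(2**32) for _ in range(1024)]
hints = setup(db, num_hints=512, subset_size=64)
target = next(iter(hints[0][0]))  # pick an index some hint covers
assert query(db, hints, target) == db[target]
```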
KLaPoTi: An asymptotically efficient isogeny group action from 2-dimensional isogenies
We construct and implement an efficient post-quantum commutative cryptographic group action based on combining the SCALLOP framework for group actions from isogenies of oriented elliptic curves on one hand with the recent Clapoti method for polynomial-time evaluation of the CM group action on elliptic curves on the other.
We take advantage of the very attractive performance of $(2^e, 2^e)$-isogenies between products of elliptic curves in the theta coordinate system.
To successfully apply Clapoti in dimension $2$, it is required to resolve a particular quadratic diophantine norm equation, for which we employ a slight variant of the KLPT algorithm.
Our work marks the first practical instantiation of the CM group action for which both the setup as well as the online phase can be computed in (heuristic) polynomial time.
Khatam: Reducing the Communication Complexity of Code-Based SNARKs
We prove that Basefold (Crypto 2024) is secure in the $\textit{list decoding regime}$, within the double Johnson bound and with error probability $\frac{O(n)}{|F|}$. At the heart of this proof is a new, stronger statement for $\textit{correlated agreement}$, which roughly states that if a linear combination of vectors $\pi_L + r \pi_R$ agrees with a codeword at every element in $S \subset [n]$, then so do $\pi_L$ and $\pi_R$. Our result is purely combinatorial and therefore extends to any finite field and any linear code. As such, it can be applied to any coding-based multilinear Polynomial Commitment Scheme to reduce its communication complexity.
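Stated a little more explicitly (our paraphrase of the informal statement above, with the agreement parameter $\delta$ within the double Johnson bound): if for a set $S \subseteq [n]$ with $|S| \ge (1-\delta)n$

\[ \exists\, c \in C:\quad (\pi_L + r\,\pi_R)\big|_S = c\big|_S \quad \text{for random } r \in F, \]

then there exist codewords $c_L, c_R \in C$ with $\pi_L\big|_S = c_L\big|_S$ and $\pi_R\big|_S = c_R\big|_S$, i.e., $\pi_L$ and $\pi_R$ individually agree with codewords on the same set $S$.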
Zero-Knowledge Location Privacy via Accurate Floating-Point SNARKs
We introduce Zero-Knowledge Location Privacy (ZKLP), enabling users to prove to third parties that they are within a specified geographical region while not disclosing their exact location. ZKLP supports varying levels of granularity, allowing for customization depending on the use case. To realize ZKLP, we introduce the first set of Zero-Knowledge Proof (ZKP) circuits that are fully compliant with the IEEE 754 standard for floating-point arithmetic.
Our results demonstrate that our floating-point circuits amortize efficiently, requiring only $64$ constraints per multiplication for $2^{15}$ single-precision floating-point multiplications. We utilize our floating-point implementation to realize the ZKLP paradigm. In comparison to a baseline, we find that our optimized implementation has $15.9 \times$ fewer constraints utilizing single-precision floating-point values, and $12.2 \times$ fewer constraints when utilizing double-precision floating-point values. We demonstrate the practicality of ZKLP by building a protocol for privacy-preserving peer-to-peer proximity testing: Alice can test if she is close to Bob by receiving a single message, without either party revealing any other information about their location. In such a configuration, Bob can create a proof of (non-)proximity in $0.26 s$, whereas Alice can verify her distance to about $470$ peers per second.
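As a plain-Python sketch of the predicate such a proximity proof attests to (our simplification; the paper's circuits realize this kind of floating-point computation inside a SNARK):

```python
import math

# Haversine distance between two (lat, lon) points given in degrees; a
# typical floating-point kernel for proximity predicates (illustrative only).
def haversine_km(lat1, lon1, lat2, lon2):
    R = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def is_within(lat1, lon1, lat2, lon2, threshold_km):
    # The statement a proximity proof would certify without revealing inputs.
    return haversine_km(lat1, lon1, lat2, lon2) <= threshold_km

assert is_within(48.8566, 2.3522, 48.8606, 2.3376, threshold_km=2.0)
```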
Verifying Jolt zkVM Lookup Semantics
Lookups are a popular way to express repeated constraints in state-of-the-art SNARKs. This is especially the case for zero-knowledge virtual machines (zkVMs), which produce succinct proofs of correct execution for programs expressed as bytecode according to a specific instruction set architecture (ISA). The Jolt zkVM (Arun, Setty & Thaler, Eurocrypt 2024) for the RISC-V ISA employs Lasso (Setty, Thaler & Wahby, Eurocrypt 2024), an efficient lookup argument for massive structured tables, to prove correct execution of instructions. Internally, Lasso performs multiple lookups into smaller subtables, then combines the results.
We present an approach to formally verify Lasso-style lookup arguments against the semantics of instruction set architectures. We demonstrate our approach by formalizing and verifying all Jolt 32-bit instructions corresponding to the RISC-V base instruction set (RV32I) using the ACL2 theorem proving system. Our formal ACL2 model has undergone extensive validation against the Rust implementation of Jolt. Due to ACL2's bit-blasting, rewriting, and developer-friendly features, our formalization is highly automated.
Through formalization, we also discovered optimizations to the Jolt codebase, leading to improved efficiency without impacting correctness or soundness. In particular, we removed one unnecessary lookup each for four instructions, and reduced the sizes of three subtables by 87.5\%.
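To make the decompose-and-combine pattern concrete, here is a toy model (ours, not Jolt's code): a 32-bit bitwise AND answered by four lookups into a single 8-bit-by-8-bit subtable whose outputs are recombined by shifts.

```python
# Toy Lasso-style lookup: answer a 32-bit AND with four lookups into one
# 8-bit x 8-bit subtable, then concatenate the chunk results (illustrative).
CHUNK = 8
SUBTABLE = {(a, b): a & b for a in range(256) for b in range(256)}

def and32_via_subtables(x, y):
    result = 0
    for i in range(32 // CHUNK):
        a = (x >> (CHUNK * i)) & 0xFF
        b = (y >> (CHUNK * i)) & 0xFF
        result |= SUBTABLE[(a, b)] << (CHUNK * i)
    return result

x, y = 0xDEADBEEF, 0x0F0F0F0F
assert and32_via_subtables(x, y) == x & y
```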
Ideal Pseudorandom Codes
Pseudorandom codes are error-correcting codes with the property that no efficient adversary can distinguish encodings from uniformly random strings. They were recently introduced by Christ and Gunn [CRYPTO 2024] for the purpose of watermarking the outputs of randomized algorithms, such as generative AI models. Several constructions of pseudorandom codes have since been proposed, but none of them are robust to error channels that depend on previously seen codewords. This stronger kind of robustness is referred to as adaptive robustness, and it is important for meaningful applications to watermarking.
In this work, we show the following.
- Adaptive robustness: We show that the pseudorandom codes of Christ and Gunn are adaptively robust, resolving a conjecture posed by Cohen, Hoover, and Schoenbach [S&P 2025]. Our proof involves several new ingredients, combining ideas from both cryptography and coding theory and taking hints from the analysis of Boolean functions.
- Ideal security: We define an ideal pseudorandom code as one which is indistinguishable from the ideal functionality, capturing both the pseudorandomness and robustness properties in one simple definition. We show that any adaptively robust pseudorandom code for single-bit messages can be bootstrapped to build an ideal pseudorandom code with linear information rate, under no additional assumptions.
- CCA security: In the setting where the encoding key is made public, we define a CCA-secure pseudorandom code in analogy with CCA-secure encryption. We show that any adaptively robust public-key pseudorandom code for single-bit messages can be used to build a CCA-secure pseudorandom code with linear information rate, in the random oracle model.
Together with the result of Christ and Gunn, it follows that there exist ideal pseudorandom codes assuming the $2^{O(\sqrt{n})}$-hardness of LPN. This extends to CCA security in the random oracle model. These results immediately imply stronger robustness guarantees for generative AI watermarking schemes, such as the practical quality-preserving image watermarks of Gunn, Zhao, and Song (2024).
Cryptographically Secure Digital Consent
In the digital age, the concept of consent for online actions executed by third parties is crucial for maintaining trust and security in third-party services.
This work introduces the notion of cryptographically secure digital consent, which aims to replicate the traditional consent process in the online world.
We provide a flexible digital consent solution that accommodates different use cases and ensures the integrity of the consent process.
The proposed framework involves a client (referring to the user or their devices), an identity manager (which authenticates the client), and an agent (which executes the action upon receiving consent).
It supports various applications and ensures compatibility with existing identity managers.
We require the client to keep no more than a password. The design addresses several security and privacy challenges, including preventing offline dictionary attacks, ensuring non-repudiable consent, and preventing unauthorized actions by the agent.
Security is maintained even if either the identity manager or the agent is compromised, but not both.
Our notion of an identity manager is broad enough to include combinations of different authentication factors such as a password, a smartphone, a security device, biometrics, or an e-passport. We demonstrate applications for signing PDF documents, e-banking, and key recovery.
Pushing the QAM method for finding APN functions further
APN functions offer optimal resistance to differential attacks and are instrumental in the design of block ciphers in cryptography. While finding APN functions is very difficult in general, a promising way to construct APN functions is through symmetric matrices called Quadratic APN matrices (QAM). It is known that the search space for the QAM method can be reduced by means of orbit partitions induced by linear equivalences. This paper builds upon and improves these approaches in the case of homogeneous quadratic functions over $\mathbb{F}_{2^n}$ with coefficients in the subfield $\mathbb{F}_{2^m}$. We propose an innovative approach for computing orbit partitions in cases where the large search space makes this otherwise infeasible, enabling applications to the dimensions $(n,m)=(8,4)$ and $(n,m)=(9,3)$. We find and classify, up to CCZ-equivalence, all quadratic APN functions for the cases of $(n,m)=(8,2)$ and $(n,m)=(10,1)$, discovering a new APN function in dimension $8$. Also, we show that an exhaustive search for $(n,m) = (10,2)$ is infeasible for the QAM method using currently available means, following partial searches for this case.
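For reference, the property behind "optimal resistance to differential attacks" (standard definition, our phrasing): a function $F:\mathbb{F}_{2^n}\to\mathbb{F}_{2^n}$ is almost perfect nonlinear (APN) if

\[ \#\{\, x \in \mathbb{F}_{2^n} : F(x+a) + F(x) = b \,\} \;\le\; 2 \qquad \text{for all } a \in \mathbb{F}_{2^n}^{*},\ b \in \mathbb{F}_{2^n}, \]

which is the smallest value achievable in characteristic 2, since solutions come in pairs $\{x, x+a\}$.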
A Query Reconstruction Attack on the Chase-Shen Substring-Searchable Symmetric Encryption Scheme
Searchable symmetric encryption (SSE) enables queries over symmetrically encrypted databases. To achieve practical efficiency, SSE schemes incur a certain amount of leakage; however, this leads to the possibility of leakage cryptanalysis, i.e., cryptanalytic attacks that exploit the leakage from the target SSE scheme to subvert its data and query privacy guarantees. Leakage cryptanalysis has been widely studied in the context of SSE schemes supporting either keyword queries or range queries, often with devastating consequences. However, little or no attention has been paid to cryptanalysing substring-SSE schemes, i.e., SSE schemes supporting arbitrary substring queries over encrypted data. This is despite their relevance to many real-world applications, e.g., in the context of securely querying outsourced genomic databases. In particular, the first ever substring-SSE scheme, proposed nearly a decade ago by Chase and Shen (PoPETS '15), has not been cryptanalysed to date.
In this paper, we present the first leakage cryptanalysis of the substring-SSE scheme of Chase and Shen. We propose a novel inference-based query reconstruction attack that: (i) exploits a reduced version of the actual leakage profile of their scheme, and (ii) assumes a weaker attack model as compared to the one in which Chase and Shen originally claimed security. We implement our attack and experimentally validate its success rate and efficiency over two real-world datasets. Our attack achieves a high query reconstruction rate with practical efficiency, and scales smoothly to large datasets containing $100,000$ strings.
To the best of our knowledge, ours is the first and only query reconstruction attack on (and the first systematic leakage cryptanalysis of) any substring-SSE scheme to date.
Symmetric Encryption on a Quantum Computer
Classical symmetric encryption algorithms use $N$ bits of a shared secret key to transmit $N$ bits of a message over a one-way channel in an information theoretically secure manner. This paper proposes a hybrid quantum-classical symmetric cryptosystem that uses a quantum computer to generate the secret key. The algorithm leverages quantum circuits to encrypt a message using a one-time pad-type technique whilst requiring a shorter classical key. We show that for an $N$-qubit circuit, the maximum number of bits needed to specify a quantum circuit grows as $N^{3/2}$ while the maximum number of bits that the quantum circuit can encode grows as $N^2$. We do not utilise the full expressive capability of the quantum circuits as we focus on second order Pauli expectation values only. The potential exists to encode an exponential number of bits using higher orders of Pauli expectation values. Moreover, using a parameterised quantum circuit (PQC), we could further augment the amount of securely shared information by introducing a secret key dependence on some of the PQC parameters. The algorithm may be suitable for an early fault-tolerant quantum computer implementation as some degree of noise can be tolerated. Simulation results are presented along with experimental results on the 84-qubit Rigetti Ankaa-2 quantum computer.
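One way to see the $N^2$ scaling (a back-of-the-envelope count on our part, reading "second order" as two-qubit Pauli observables): each unordered pair of qubits carries one of the three non-identity Paulis $\{X, Y, Z\}$ on each qubit, giving

\[ 3^2 \binom{N}{2} \;=\; \frac{9N(N-1)}{2} \;=\; O(N^2) \]

expectation values available to encode message bits.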
Hybrid Zero-Knowledge from Garbled Circuits
We present techniques for constructing zero-knowledge argument systems from garbled circuits, extending the GC-to-ZK compiler by Jawurek, Kerschbaum, and Orlandi (ACM CCS 2013) and the GC-to-Σ compiler by Hazay and Venkitasubramaniam (J. Crypto, 2020) in the following directions:
- Our schemes are hybrid, commit-and-prove zero-knowledge argument systems that establish a connection between secrets embedded in algebraic commitments and a relation represented by a Boolean circuit.
- Our schemes incorporate diverse cross-domain secrets embedded within distinct algebraic commitments, simultaneously supporting Pedersen-like commitments and lattice-based commitments.
As an application, we develop circuit-represented compositions of Σ-protocols that support attractive access structures, such as weighted thresholds, that can be easily represented by a small circuit. For predicates $P_1, \ldots, P_n$ individually associated with a Σ-protocol, and a predicate $C$ represented by a Boolean circuit, we construct a Σ-protocol for proving $C(P_1, \ldots, P_n) = 1$. This result answers positively an open question posed by Abe et al. at TCC 2021.
Scutum: Temporal Verification for Cross-Rollup Bridges via Goal-Driven Reduction
Scalability remains a key challenge for blockchain adoption. Rollups—especially zero-knowledge (ZK) and optimistic rollups—address this by processing transactions off-chain while maintaining Ethereum’s security, thus reducing gas fees and improving speeds. Cross-rollup bridges like Orbiter Finance enable seamless asset transfers across various Layer 2 (L2) rollups and between L2 and Layer 1 (L1) chains. However, the increasing reliance on these bridges raises significant security concerns, as evidenced by major hacks like those of Poly Network and Nomad Bridge, resulting in losses of hundreds of millions of dollars. Traditional security analysis methods such as static analysis and fuzzing are inadequate for cross-rollup bridges due to their complex designs involving multiple entities, smart contracts, and zero-knowledge circuits. These systems require reasoning about temporal sequences of events across different entities, which exceeds the capabilities of conventional analyzers.
In this paper, we introduce a scalable verifier to systematically assess the security of cross-rollup bridges. Our approach features a comprehensive multi-model framework that captures both individual behaviors and complex interactions using temporal properties. To enhance scalability, we approximate temporal safety verification through reachability analysis of a graph representation of the contracts, leveraging advanced program analysis techniques. Additionally, we incorporate a conflict-driven refinement loop to eliminate false positives and improve precision. Our evaluation on mainstream cross-rollup bridges, including Orbiter Finance, uncovered multiple zero-day vulnerabilities, demonstrating the practical utility of our method. The tool also exhibited favorable runtime performance, enabling efficient analysis suitable for real-time or near-real-time applications.
Private Neural Network Training with Packed Secret Sharing
We present a novel approach for training neural networks that leverages the packed Shamir secret sharing scheme. For specific training protocols based on the Shamir scheme, we demonstrate how to realize the conversion between packed sharing and Shamir sharing without additional communication overhead. We begin by introducing a method to locally convert between Shamir sharings with secrets stored at different slots. Building upon this conversion, we achieve free conversion from packed sharing to Shamir sharing. We then show how to embed the conversion from Shamir sharing to packed sharing into the truncation used during the training process without incurring additional communication costs. With free conversion between packed sharing and Shamir sharing, we illustrate how to utilize the packed scheme to parallelize certain computational steps involved in neural network training. On this basis, we propose training protocols with information-theoretic security among general $n$ parties under the semi-honest model. The experimental results demonstrate that, compared to previous work in this domain, applying the packed scheme can effectively improve training efficiency. Specifically, when packing $4$ secrets into a single sharing, we observe a reduction of more than $20\%$ in communication overhead and an improvement of over $10\%$ in training speed under the WAN setting.
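A minimal sketch of the packed Shamir idea (toy field, parameters, and evaluation-point layout of our choosing, not the paper's protocol): $k$ secrets ride on one polynomial at $k$ reserved evaluation points, so a single share vector carries all of them.

```python
import random

# Toy packed Shamir sharing over a prime field (illustrative only).
P = 2**61 - 1  # Mersenne prime modulus

def interpolate(points, x):
    # Evaluate, at position x, the unique polynomial through `points` mod P.
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

def pack_share(secrets, t, n):
    # k secrets sit at reserved points -1..-k; t random anchors add privacy;
    # shares are evaluations at 1..n of a degree-(t+k-1) polynomial.
    anchors = [(-(i + 1) % P, s) for i, s in enumerate(secrets)]
    anchors += [(n + i + 1, random.randrange(P)) for i in range(t)]
    return [interpolate(anchors, x) for x in range(1, n + 1)]

def unpack(shares, idxs, k):
    points = [(i, shares[i - 1]) for i in idxs]
    return [interpolate(points, -(i + 1) % P) for i in range(k)]

secrets = [11, 22, 33, 44]              # k = 4 secrets in ONE sharing
shares = pack_share(secrets, t=2, n=7)  # polynomial degree t+k-1 = 5
assert unpack(shares, idxs=range(1, 7), k=4) == secrets  # 6 shares suffice
```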
How to Delete Without a Trace: Certified Deniability in a Quantum World
Is it possible to comprehensively destroy a piece of quantum information, so that nothing is left behind except the memory of that one had it at some point? For example, various works, most recently Morimae, Poremba, and Yamakawa (TQC '24), show how to construct a signature scheme with certified deletion where a user who deletes a signature on $m$ cannot later produce a signature for $m$. However, in all of the existing schemes, even after deletion the user is still able keep irrefutable evidence that $m$ was signed, and thus they do not fully capture the spirit of deletion.
In this work, we initiate the study of certified deniability in order to obtain a more comprehensive notion of deletion. Certified deniability uses a simulation-based security definition, ensuring that any information the user has kept after deletion could have been learned without being given the deleteable object to begin with; meaning that deletion leaves no trace behind! We define and construct two non-interactive primitives that satisfy certified deniability in the quantum random oracle model: signatures and non-interactive zero-knowledge arguments (NIZKs). As a consequence, for example, it is not possible to delete a signature/NIZK and later provide convincing evidence that it used to exist. Notably, our results utilize uniquely quantum phenomena to bypass Pass's (CRYPTO '03) celebrated result showing that deniable NIZKs are impossible even in the random oracle model.
Fast Two-party Threshold ECDSA with Proactive Security
We present a new construction of two-party, threshold ECDSA, building on a 2017 scheme of Lindell and improving his scheme in several ways.
ECDSA signing is notoriously hard to distribute securely, due to non-linearities in the signing function. Lindell's scheme uses Paillier encryption to encrypt one party's key share and handle these non-linearities homomorphically, while elegantly avoiding any expensive zero knowledge proofs over the Paillier group during the signing process. However, the scheme pushes that complexity into key generation. Moreover, avoiding ZK proofs about Paillier ciphertexts during signing comes with a steep price -- namely, the scheme requires a ``global abort'' when a malformed ciphertext is detected, after which an entirely new key must be generated.
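For context, the non-linearity is visible in the textbook signing equation (standard ECDSA notation, not specific to this paper):

\[ s \;=\; k^{-1}\bigl(H(m) + r \cdot x\bigr) \bmod q, \]

where $x$ is the secret key, $k$ the secret nonce, and $r$ the $X$-coordinate of $k \cdot G$ reduced mod $q$. Because $s$ involves both a modular inverse of $k$ and the product $r \cdot x$, parties holding shares of $k$ and $x$ cannot compute $s$ by local linear operations alone, which is why Lindell's scheme evaluates these terms homomorphically under Paillier encryption.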
We overcome all of these issues with a proactive Refresh procedure. Since the Paillier decryption key is part of the secret that must be proactively refreshed, our first improvement is to radically accelerate key generation by replacing one of Lindell's ZK proofs -- which requires 80 Paillier ciphertexts for statistical security $2^{-40}$ -- with a much faster "weak" proof that requires only 2 Paillier ciphertexts, and which proves a weaker statement about a Paillier ciphertext that we show is sufficient in the context of our scheme. Secondly, our more efficient key generation procedure also makes frequent proactive Refreshes practical. Finally, we show that adding noise to one party's key share suffices to avoid the need to reset the public verification key when certain bad behavior is detected. Instead, we prove that our Refresh procedure, performed after each detection, suffices for addressing the attack, allowing the system to continue functioning without disruption to applications that rely on the verification key.
Our scheme is also very efficient, competitive with the best constructions that do not provide proactive security, and state-of-the-art among the few results that do. Our optimizations to ECDSA key generation speed up runtime and improve bandwidth over Lindell's key generation by factors of 7 and 13, respectively. Our Key Generation protocol requires 20% less bandwidth than existing constructions, completes in only 3 protocol messages, and executes much faster than all but OT-based key generation. For ECDSA signing, our extra Refresh protocol does add a 10× latency and 5× bandwidth overhead compared to Lindell. However, this still fits in 150 ms runtime and about 5.4 KB of messages when run in our AWS cluster benchmark.
A Tight Analysis of GHOST Consistency
The GHOST protocol has been proposed as an improvement to the Nakamoto consensus mechanism that underlies Bitcoin. In contrast to the Nakamoto fork-choice rule, the GHOST rule selects a chain according to weights computed over subtrees rather than individual paths. This mechanism has been adopted by a variety of consensus protocols, and is a part of the currently deployed protocol supporting Ethereum.
We establish an exact characterization of the security region of the GHOST protocol, identifying the relationship between the rate of honest block production, the rate of adversarial block production, and network delays that guarantee that the protocol reaches consensus. In contrast to the closely related Nakamoto consensus protocol, we find that the region depends on the convention used by the protocol for tiebreaking; we establish tight results for both adversarial tiebreaking, in which ties are broken adversarially in order to frustrate consensus, and deterministic tiebreaking, in which ties between pairs of blocks are broken consistently throughout an execution. We provide explicit attacks for both conventions which stall consensus outside of the security region.
Our results conclude that the security region of GHOST can be strictly improved by incorporating a tiebreaking mechanism; in either case, however, the final region of security is inferior to the region of Nakamoto consensus.
Compiled Nonlocal Games from any Trapdoor Claw-Free Function
A recent work of Kalai et al. (STOC 2023) shows how to compile any multi-player nonlocal game into a protocol with a single computationally-bounded prover. Subsequent works have built on this to develop new cryptographic protocols, where a completely classical client can verify the validity of quantum computation done by a quantum server. Their compiler relies on the existence of quantum fully-homomorphic encryption.
In this work, we propose a new compiler for transforming nonlocal games into single-prover protocols.
Our compiler is based on the framework of measurement-based quantum computation.
It can be instantiated assuming the existence of \emph{any} trapdoor function that satisfies the claw-freeness property.
Leveraging results by Natarajan and Zhang (FOCS 2023) on compiled nonlocal games, our work implies the existence of new protocols to classically verify quantum computation from potentially weaker computational assumptions than previously known.
Classic McEliece Hardware Implementation with Enhanced Side-Channel and Fault Resistance
In this work, we propose the first hardware implementation of Classic McEliece protected with countermeasures against Side-Channel Attacks (SCA) and Fault Injection Attacks (FIA). Classic McEliece is one of the leading candidates for Key Encapsulation Mechanisms (KEMs) in the ongoing round 4 of the NIST standardization process for post-quantum cryptography. In particular, we implement a range of generic countermeasures against SCA and FIA, protecting vulnerable operations such as the additive Fast Fourier Transform (FFT) and Gaussian elimination, which have been targeted by prior SCA and FIA attacks. We also perform a detailed SCA evaluation demonstrating no leakage even with 100,000 traces (an improvement of more than 100× in the number of traces compared to the unprotected implementation). This comes at a modest total area overhead of between 4× and 7×, depending on the type of implemented SCA countermeasure. Furthermore, we present a thorough ASIC benchmark for the SCA- and FIA-protected Classic McEliece design.
OPTIMSM: FPGA hardware accelerator for Zero-Knowledge MSM
The Multi-Scalar Multiplication (MSM) is the main barrier to accelerating Zero-Knowledge applications. In recent years, hardware acceleration of this algorithm on both FPGA and GPU has become a popular research topic and the subject of a multi-million dollar prize competition (ZPrize). This work presents OPTIMSM: Optimized Processing Through Iterative Multi-Scalar Multiplication, a novel accelerator for the MSM algorithm over any Elliptic Curve (EC) that improves upon the Pippenger algorithm. A new iteration technique is introduced to decouple the required buckets from the window size, resulting in fewer EC computations for the same on-chip memory resources. Furthermore, we combine known optimizations from the literature for the first time to achieve additional latency improvements. Our enhanced MSM implementation significantly reduces computation time, achieving a speedup of up to $12.77\times$ compared to recent FPGA implementations. Specifically, for the BLS12-381 curve, we reduce the computation time for an MSM of size $2^{24}$ to 914 ms using a single compute unit on the U55C FPGA or to 231 ms using four U55C devices. These results indicate a substantial improvement in efficiency, paving the way for more scalable and efficient Zero-Knowledge proof systems.
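For intuition about what is being accelerated, here is the bucket (Pippenger) method in miniature, with integer addition standing in for elliptic-curve point addition (a toy sketch, not OPTIMSM's iteration technique):

```python
import random

# Toy Pippenger bucket method: compute sum(s_i * p_i) window by window.
# Integer addition stands in for EC point addition, so "doubling" is just
# total += total (illustrative only).
def msm_pippenger(scalars, points, c=4, bits=16):
    total = 0
    for w in reversed(range(0, bits, c)):       # most significant window first
        for _ in range(c):                      # shift accumulator: total *= 2^c
            total += total
        buckets = [0] * (1 << c)
        for s, p in zip(scalars, points):
            buckets[(s >> w) & ((1 << c) - 1)] += p
        running, window_sum = 0, 0              # sum_j j*buckets[j], cheaply
        for j in range(len(buckets) - 1, 0, -1):
            running += buckets[j]
            window_sum += running
        total += window_sum
    return total

scalars = [random.randrange(1 << 16) for _ in range(100)]
points = [random.randrange(1 << 30) for _ in range(100)]
assert msm_pippenger(scalars, points) == sum(s * p for s, p in zip(scalars, points))
```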
Cloning Games, Black Holes and Cryptography
The no-cloning principle has played a foundational role in quantum information and cryptography. Following a long-standing tradition of studying quantum mechanical phenomena through the lens of interactive games, Broadbent and Lord (TQC 2020) formalized cloning games in order to quantitatively capture no-cloning in the context of unclonable encryption schemes.
The conceptual contribution of this paper is the new, natural, notion of Haar cloning games together with two applications. In the area of black-hole physics, our game reveals that, in an idealized model of a black hole which features Haar random (or pseudorandom) scrambling dynamics, the information from infalling entangled qubits can only be recovered from either the interior or the exterior of the black hole---but never from both places at the same time. In the area of quantum cryptography, our game helps us construct succinct unclonable encryption schemes from the existence of pseudorandom unitaries, thereby, for the first time, bridging the gap between ``MicroCrypt'' and unclonable cryptography. The technical contribution of this work is a tight analysis of Haar cloning games which requires us to overcome many long-standing barriers in our understanding of cloning games:
1. Are there cloning games which admit no non-trivial winning strategies? Resolving this particular question turns out to be crucial for our application to black-hole physics. Existing works analyzing the $n$-qubit BB84 game and the subspace coset game only achieve the bounds of $2^{-0.228n}$ and $2^{-0.114n+o(n)}$, respectively, while the trivial adversarial strategy wins with probability $2^{-n}$. We show that the Haar cloning game is the hardest cloning game, by demonstrating a worst-case to average-case reduction for a large class of games which we refer to as oracular cloning games. We then show that the Haar cloning game admits no non-trivial winning strategies.
2. All existing works analyze $1\mapsto 2$ cloning games; can we prove bounds on $t\mapsto t+1$ games for large $t$? Such bounds are crucial in our application to unclonable cryptography. Unfortunately, the BB84 game is not even $2\mapsto 3$ secure, and the subspace coset game is not $t\mapsto t+1$ secure for a polynomially large $t$. We show that the Haar cloning game is $t\mapsto t+1$ secure provided that $t = o(\log n / \log \log n)$, and we conjecture that this holds for $t$ that is polynomially large (in $n$).
Answering these questions provably requires us to go beyond existing methods (Tomamichel, Fehr, Kaniewski and Wehner, New Journal of Physics 2013). In particular, we show a new technique for analyzing cloning games with respect to binary phase states through the lens of binary subtypes, and combine it with novel bounds on the operator norms of block-wise tensor products of matrices.
BrakingBase - a linear prover, poly-logarithmic verifier, field agnostic polynomial commitment scheme
We propose a Polynomial Commitment Scheme (PCS), called BrakingBase, which allows a prover to commit to multilinear (or univariate) polynomials with $n$ coefficients in $O(n)$ time. The evaluation protocol of BrakingBase operates with an $O(n)$ time-complexity for the prover, while the verifier time-complexity and proof-complexity are $O(\lambda \log^2 n)$, where $\lambda$ is the security parameter. Notably, BrakingBase is field-agnostic, meaning it can be instantiated over any field of sufficiently large size. Additionally, BrakingBase can be combined with the Polynomial Interactive Oracle Proof (PIOP) from Spartan (Crypto 2020) to yield a Succinct Non-interactive ARgument of Knowledge (SNARK) with a linear-time prover, as well as poly-logarithmic complexity for both the verifier runtime and the proof size. We obtain our PCS by combining the Brakedown and Basefold PCS. The commitment protocol of BrakingBase is similar to that of Brakedown. The evaluation protocol of BrakingBase improves upon Brakedown’s verifier work by reducing it through multiple instances of the sum-check protocol. Basefold PCS is employed to commit to and later evaluate the multilinear extension (MLE) of the witnesses involved in the sum-check protocol at random points. This includes the MLE corresponding to the parity-check matrix of the linear-time encodable code used in Brakedown. We show that this matrix is sparse and use the Spark compiler from Spartan to evaluate its multilinear extension at a random point. We implement BrakingBase and compare its performance to Brakedown and Basefold over a 128 bit prime field.
Constructing Dembowski–Ostrom permutation polynomials from upper triangular matrices
We establish a one-to-one correspondence between Dembowski-Ostrom (DO) polynomials and upper triangular matrices. Based on this correspondence, we give a bijection between DO permutation polynomials and a special class of upper triangular matrices, and construct a new batch of DO permutation polynomials. To the best of our knowledge, almost all other known DO permutation polynomials are defined over finite fields $\mathbb{F}_{2^n}$ where $n$ contains odd factors (see Table 1). However, there are no restrictions on $n$ in our results, and in particular the case of $n=2^m$ has not been studied in the literature. For example, we provide a simple necessary and sufficient condition to determine when $\gamma\, Tr(\theta_{i}x)Tr(\theta_{j}x) + x$ is a DO permutation polynomial. In addition, when the upper triangular matrix degenerates into a diagonal matrix and the elements on the main diagonal form a basis of $\mathbb{F}_{q^{n}}$ over $\mathbb{F}_{q}$, this diagonal matrix corresponds to all linearized permutation polynomials. In summary, we construct several new DO permutation polynomials, and our results can be viewed as an extension of linearized permutation polynomials.
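For context, under one common convention a DO polynomial over $\mathbb{F}_{2^n}$ has the shape

\[ D(x) \;=\; \sum_{0 \le i < j \le n-1} a_{ij}\, x^{2^i + 2^j}, \qquad a_{ij} \in \mathbb{F}_{2^n}, \]

so the coefficient array $(a_{ij})$ is naturally indexed like an upper triangular matrix, which is the starting point of the correspondence above.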
A Composability Treatment of Bitcoin's Transaction Ledger with Variable Difficulty
As the first proof-of-work (PoW) permissionless blockchain, Bitcoin aims at maintaining a decentralized yet consistent transaction ledger as protocol participants (“miners”) join and leave as they please. This is achieved by means of a subtle PoW difficulty adjustment mechanism that adapts to the perceived block generation rate, and important steps have been taken in previous work to provide a rigorous analysis of the conditions (such as bounds on dynamic participation) that are sufficient for Bitcoin’s security properties to be ascertained.
Such existing analysis, however, is property-based, and as such only guarantees security when the protocol is run $\textbf{in isolation}$. In this paper we present the first (to our knowledge) simulation-based analysis of the Bitcoin ledger in the dynamic setting where it operates, and show that the protocol abstraction known as the Bitcoin backbone protocol emulates, under certain participation restrictions, Bitcoin’s intended specification. Our formulation and analysis extend the existing Universally Composable treatment for the fixed-difficulty setting, and develop techniques that might be of broader applicability, in particular to other composable formulations of blockchain protocols that rely on difficulty adjustment.
Anonymous Public-Key Quantum Money and Quantum Voting
Quantum information allows us to build quantum money schemes, where a bank can issue banknotes in the form of authenticatable quantum states that cannot be cloned or counterfeited: a user in possession of $k$ banknotes cannot produce $k+1$ banknotes. Similar to paper banknotes, in existing quantum money schemes a banknote consists of an unclonable quantum state and a classical serial number, signed by the bank. Thus, they lack one of the most fundamental properties cryptographers look for in a currency scheme: privacy. In this work, we first further develop the formal definitions of privacy for quantum money schemes. Then, we construct the first public-key quantum money schemes that satisfy these security notions. Namely,
• Assuming existence of indistinguishability obfuscation and hardness of Learning with Errors, we construct a public-key quantum money scheme with anonymity against users and traceability by authorities.
Since it is a policy choice whether authorities should be able to track banknotes or not, we also construct an untraceable money scheme, where no one (not even the authorities) can track banknotes.
• Assuming existence of indistinguishability obfuscation and hardness of Learning with Errors, we construct a public-key quantum money scheme with untraceability.
Further, we show that the no-cloning principle, a result of quantum mechanics, allows us to construct schemes, with security guarantees that are classically impossible, for a seemingly unrelated application: voting!
• Assuming existence of indistinguishability obfuscation and hardness of Learning with Errors, we construct a universally verifiable quantum voting scheme with classical votes.
Finally, as a technical tool, we introduce the notion of publicly rerandomizable encryption with strong correctness, where no adversary is able to produce a malicious ciphertext and a malicious random tape such that the ciphertext before and after rerandomization (with the malicious tape) decrypts to different values! We believe this might be of independent interest.
• Assuming the (quantum) hardness of Learning with Errors, we construct a (post-quantum) classical publicly rerandomizable encryption scheme with strong correctness.
SCIF: Privacy-Preserving Statistics Collection with Input Validation and Full Security
Secure aggregation is the distributed task of securely computing a sum of values (or a vector of values) held by a set of parties, revealing only the output (i.e., the sum) of the computation. Existing protocols, such as Prio (NSDI ’17), Prio+ (SCN ’22), Elsa (S&P ’23), and Whisper (S&P ’24), support secure aggregation with input validation to ensure inputs belong to a specified domain. However, when malicious servers are present, these protocols primarily guarantee privacy but not input validity. Also, malicious server(s) can cause the protocol to abort. We introduce SCIF, a novel multi-server secure aggregation protocol with input validation that remains secure even in the presence of malicious actors, provided fewer than one-third of the servers are malicious. Our protocol overcomes previous limitations by providing two key properties: (1) guaranteed output delivery, ensuring malicious parties cannot prevent the protocol from completing, and (2) guaranteed input inclusion, ensuring no malicious party can prevent an honest party’s input from being included in the computation. Together, these guarantees provide strong resilience against denial-of-service attacks. Moreover, SCIF offers these guarantees without increasing client costs over Prio and keeps server costs moderate. We present a robust end-to-end implementation of SCIF and demonstrate the ease with which it can be instrumented by integrating it in a simulated Tor network for privacy-preserving measurement.
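For orientation, here is the base functionality with all validation and robustness stripped away (a toy sketch of additive-sharing aggregation, not SCIF's actual protocol):

```python
import random

P = 2**31 - 1  # toy modulus

def share(value, num_servers):
    # Additive secret sharing: the shares sum to `value` mod P.
    parts = [random.randrange(P) for _ in range(num_servers - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

def aggregate(client_values, num_servers=3):
    per_server = [0] * num_servers            # each server's running share-sum
    for v in client_values:
        for idx, sh in enumerate(share(v, num_servers)):
            per_server[idx] = (per_server[idx] + sh) % P
    # Servers publish only their share-sums; their total is the aggregate.
    return sum(per_server) % P

values = [5, 17, 42, 1]
assert aggregate(values) == sum(values)
```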
On the Power of Oblivious State Preparation
We put forth Oblivious State Preparation (OSP) as a cryptographic primitive that unifies techniques developed in the context of a quantum server interacting with a classical client. OSP allows a classical polynomial-time sender to input a choice of one out of two public observables, and a quantum polynomial-time receiver to recover an eigenstate of the corresponding observable -- while keeping the sender's choice hidden from any malicious receiver.
We obtain the following results:
- The existence of (plain) trapdoor claw-free functions implies OSP, and the existence of dual-mode trapdoor claw-free functions implies round-optimal (two-round) OSP.
- OSP implies the existence of proofs of quantumness, test of a qubit, blind classical delegation of quantum computation, and classical verification of quantum computation.
- Two-round OSP implies quantum money with classical communication, classically-verifiable position verification, and (additionally assuming classical FHE with log-depth decryption) quantum FHE.
Thus, the OSP abstraction helps separate the cryptographic layer from the information-theoretic layer when building cryptosystems across classical and quantum participants. Indeed, several of the aforementioned applications were previously only known via tailored LWE-based constructions, whereas our OSP-based constructions yield new results from a wider variety of assumptions, including hard problems on cryptographic group actions.
Finally, towards understanding the minimal hardness assumptions required to realize OSP, we prove the following:
- OSP implies oblivious transfer between one classical and one quantum party.
- Two-round OSP implies public-key encryption with classical keys and ciphertexts.
In particular, these results help to ``explain'' the use of public-key cryptography in the known approaches to establishing a ``classical leash'' on a quantum server. For example, combined with a result of Austrin et al. (CRYPTO 22), we conclude that perfectly-correct OSP cannot exist unconditionally in the (quantum) random oracle model.
VCVio: A Formally Verified Forking Lemma and Fiat-Shamir Transform, via a Flexible and Expressive Oracle Representation
As cryptographic protocols continue to become more complex and specialized, their security proofs have grown more complex as well, making manual verification of their correctness more difficult. Formal verification via proof assistants has become a popular approach to solving this, by allowing researchers to write security proofs that can be verified correct by a computer.
In this paper we present a new framework of this kind for verifying security proofs, taking a foundational approach to representing and reasoning about protocols. We implement our framework in the Lean programming language, and give a number of security proofs to demonstrate that our system is both powerful and usable, with comparable automation to similar systems.
Our framework is especially focused on reasoning about and manipulating oracle access, and we demonstrate the usefulness of this approach by implementing both a general forking lemma and a version of the Fiat-Shamir transform for sigma protocols. As a simple case study we then instantiate these to an implementation of a Schnorr-like signature scheme.
SoK: On the Physical Security of UOV-based Signature Schemes
Multivariate cryptography currently centres mostly around UOV-based signature schemes: all multivariate round 2 candidates in the selection process for additional digital signatures by NIST are either UOV itself or close variations of it: MAYO, QR-UOV, SNOVA, and UOV. Schemes which were previously in the focus of the multivariate research community but are broken by now, like Rainbow and LUOV, are also based on UOV. Both UOV and the schemes based on it have been frequently analyzed regarding their physical security in the course of the NIST process. However, a comprehensive analysis regarding the physical security of UOV-based signature schemes is missing.
In this work, we want to bridge this gap and create a comprehensive overview of physical attacks on UOV and its variants from the second round of NIST’s selection process for additional post-quantum signature schemes, which just started. First, we collect all existing side-channel and fault attacks on UOV-based schemes and transfer them to the current UOV specification. Since UOV was subject to significant changes over the past few years, e.g., adaptions to the expanded secret key, some attacks need to be reassessed. Next, we introduce new physical attacks in order to obtain an overview as complete as possible. We then show how all these attacks would translate to MAYO, QR-UOV, and SNOVA. To improve the resistance of UOV-based signature schemes towards physical attacks, we discuss and introduce dedicated countermeasures. As a related result, we observe that certain implementation decisions, like key compression techniques and randomization choices, also have a large impact on the physical security, in particular on the effectiveness of the considered fault attacks. Finally, we provide implementations of UOV and MAYO for the ARM Cortex-M4 architecture that feature first-order masking and protection against selected fault attacks. We benchmark the resulting overhead on a NUCLEO-L4R5ZI board and validate our approach by performing a TVLA on original and protected subroutines, yielding significantly smaller t-values for the latter.
Improved ML-DSA Hardware Implementation With First Order Masking Countermeasure
We present a protected hardware implementation of the Module-Lattice-Based Digital Signature Standard (ML-DSA). ML-DSA is an extension of Dilithium 3.1, which is the winner of the Post Quantum Cryptography (PQC) competition in the digital signature category. The proposed design is based on the existing high-performance Dilithium 3.1 design. We implemented existing Dilithium masking gadgets in hardware, which had previously been implemented only in software. The masking gadgets are integrated with the unprotected ML-DSA design, and functional verification of the complete design is performed with the Known Answer Tests (KATs) generated from the ML-DSA reference software. We also present practical power side-channel attack experimental results by implementing the masking gadgets on a standard side-channel evaluation FPGA board and collecting up to 1 million power traces. The proposed protected design has an overhead of 1.127× in LUTs, 1.2× in flip-flops, and 378× in execution time compared to the unprotected design. The experimental results show that it resists side-channel attacks.
Attacking Automotive RKE Security: How Smart are your ‘Smart’ Keys?
Remote Keyless Entry (RKE) systems are ubiquitous in modern day automobiles, providing convenience for vehicle owners - occasionally at the cost of security. Most automobile companies have proprietary implementations of RKE; these are sometimes built on insecure algorithms and authentication mechanisms. This paper presents a comprehensive study conducted on the RKE systems of multiple cars from four automobile manufacturers not previously explored.
Specifically, we analyze the design, implementation, and security levels of 7 different cars manufactured by Honda, Maruti-Suzuki, Toyota, and Mahindra. We also do a deep dive into the RKE system of a particular Honda model. We evaluate the susceptibility of these systems to known vulnerabilities (such as RollJam and RollBack attacks). This is accomplished using a novel tool, ‘Puck-Py’, that helps analyze RKE protocols. Our tool automates several aspects of the protocol analysis process, reducing time and logistical constraints in RKE research; we provide standardized protocols to execute various attacks using our Puck-Py tool. We find that, despite having had a long period of time to fix security issues, several popular automobiles remain susceptible to attacks, including the basic RollJam attack.
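To make the replay surface concrete, here is a toy rolling-code receiver (our model, not any vendor's implementation). Receivers accept any counter within a look-ahead window, so a code that was jammed and captured before the car ever saw it stays valid, which is exactly what RollJam exploits:

```python
# Toy rolling-code receiver with a look-ahead window (illustrative only;
# real fobs encrypt the counter, but the window logic is the weak point).
class Receiver:
    def __init__(self, window=16):
        self.counter = 0          # last accepted counter value
        self.window = window

    def try_unlock(self, code):
        if self.counter < code <= self.counter + self.window:
            self.counter = code   # roll forward
            return True
        return False              # stale or too far ahead

rx = Receiver()
assert not rx.try_unlock(0)       # naive replay of an old code fails
# RollJam: jam both owner presses so the car sees neither, capture them,
# replay the first to open the door, and bank the second for later use.
press1, press2 = 1, 2
assert rx.try_unlock(press1)      # replayed first capture: accepted
assert rx.try_unlock(press2)      # banked second capture: still valid
```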
Succinct Randomized Encodings from Non-compact Functional Encryption, Faster and Simpler
Succinct randomized encodings allow encoding the input $x$ of a time-$t$ uniform computation $M(x)$ in sub-linear time $o(t)$. The resulting encoding $\tilde{x}$ allows recovering the result of the computation $M(x)$, but hides any other information about $x$. Such encodings are known to have powerful applications such as reducing communication in MPC, bootstrapping advanced encryption schemes, and constructing time-lock puzzles.
Until not long ago, the only known constructions were based on indistinguishability obfuscation, and in particular they were not based on standard post-quantum assumptions. In terms of efficiency, these constructions' encoding time is $\rm{polylog}(t)$, essentially the best one can hope for. Recently, a new construction was presented based on Circular Learning with Errors, an assumption similar to the one used in fully-homomorphic encryption schemes, and which is widely considered to be post-quantum resistant. However, its encoding efficiency significantly falls behind that of obfuscation-based schemes and is $\approx \sqrt{t} \cdot s$, where $s$ is the space of the computation.
We construct, under the same assumption, succinct randomized encodings with encoding time $\approx t^{\varepsilon} \cdot s$ for an arbitrarily small constant $\varepsilon < 1$. Our construction is relatively simple, generic, and relies on any non-compact single-key functional encryption that satisfies a natural {\em efficiency preservation} property.
SophOMR: Improved Oblivious Message Retrieval from SIMD-Aware Homomorphic Compression
Privacy-preserving blockchains and private messaging services that ensure receiver-privacy face a significant UX challenge: each client must scan every payload posted on the public bulletin board individually to avoid missing messages intended for them. Oblivious Message Retrieval (OMR) addresses this issue by securely outsourcing this expensive scanning process to a service provider using Homomorphic Encryption (HE).
In this work, we propose a new OMR scheme that substantially improves upon the previous state-of-the-art, PerfOMR (USENIX Security'24). Our implementation demonstrates reductions of 3.3x in runtime, 2.2x in digest size, and 1.5x in key size, in a scenario with 65536 payloads (each 612 bytes), of which up to 50 are pertinent.
At the core of these improvements is a new homomorphic compression mechanism, where ciphertexts of length proportional to the number of total payloads are compressed into a digest whose length is proportional to the upper bound on the number of pertinent payloads. Unlike previous approaches, our scheme fully exploits the native homomorphic SIMD structure of the underlying HE scheme, significantly enhancing efficiency. In the setting described above, our compression scheme achieves 7.4x speedup compared to PerfOMR.
Revisiting Leakage-Resilient MACs and Succinctly-Committing AEAD: More Applications of Pseudo-Random Injections
Pseudo-Random Injections (PRIs) have had several applications in symmetric-key cryptography, such as in the idealization of Authenticated Encryption with Associated Data (AEAD) schemes, building robust AEAD, and, recently, in converting a committing AEAD scheme into a succinctly committing AEAD scheme. In Crypto 2024, Bellare and Hoang showed that if an AEAD scheme is already committing, it can be transformed into a succinctly committed scheme by encrypting part of the plaintext using a PRI. In this paper, we revisit the applications of PRIs in building Message Authentication Codes (MACs) and AEAD schemes.
First, we look at some of the properties and definitions of PRIs, such as collision resistance and unforgeability when used as a MAC with small plaintext space, under different leakage models. Next, we show how they can be combined with collision-resistant hash functions to build a MAC for long plaintexts, offering flexible security depending on how the PRI and equality check are implemented. If both the PRI and equality check are leak-free, the MAC provides almost optimal security, but the security degrades only a little if the equality check is merely leakage-resilient (rather than leak-free). If the equality check has unbounded leakage, the security drops to a baseline level rather than being completely insecure. Next, we show how to use PRIs to build a succinctly committing online AEAD scheme, dubbed scoAE, from scratch that achieves succinct CMT4 security, privacy, and Ciphertext Integrity with Misuse and Leakage (CIML2) security. Last but not least, we show how to build a succinct nonce Misuse-Resistant (MRAE) AEAD scheme, dubbed scMRAE. The construction combines the SIV paradigm with PRI-based encryption (e.g., the Encode-then-Encipher (EtE) framework).
Batching Adaptively-Sound SNARGs for NP
A succinct non-interactive argument (SNARG) for NP allows a prover to convince a verifier that an NP statement $x$ is true with a proof whose size is sublinear in the length of the traditional NP witness. Moreover, a SNARG is adaptively sound if the adversary can choose the statement it wants to prove after seeing the scheme parameters. Very recently, Waters and Wu (STOC 2024) showed how to construct adaptively-sound SNARGs for NP in the plain model from falsifiable assumptions (specifically, sub-exponentially-secure indistinguishability obfuscation, sub-exponentially-secure one-way functions, and polynomial hardness of discrete log).
We consider the batch setting where the prover wants to prove a collection of $T$ statements $x_1, \ldots, x_T$ and its goal is to construct a proof whose size is sublinear in both the size of a single witness and the number of instances $T$. In this setting, existing constructions either require the size of the public parameters to scale linearly with $T$ (and thus, can only support an a priori bounded number of instances), or only provide non-adaptive soundness, or have proof size that scales linearly with the size of a single NP witness. In this work, we give two approaches for batching adaptively-sound SNARGs for NP, and in particular, show that under the same set of assumptions as those underlying the Waters-Wu adaptively-sound SNARG, we can obtain an adaptively-sound SNARG for batch NP where the size of the proof is $\mathsf{poly}(\lambda)$ and the size of the CRS is $\mathsf{poly}(\lambda + |C|)$, where $\lambda$ is a security parameter and $|C|$ is the size of the circuit that computes the associated NP relation.
Our first approach builds directly on top of the Waters-Wu construction and relies on indistinguishability obfuscation and a homomorphic re-randomizable one-way function. Our second approach shows how to combine ideas from the Waters-Wu SNARG with the chaining-based approach by Garg, Sheridan, Waters, and Wu (TCC 2022) to obtain a SNARG for batch NP.
Pseudorandom Function-like States from Common Haar Unitary
Recent active studies have demonstrated that cryptography without one-way functions (OWFs) could be possible in the quantum world. Many fundamental primitives that are natural quantum analogs of OWFs or pseudorandom generators (PRGs) have been introduced, and their mutual relations and applications have been studied. Among them, pseudorandom function-like state generators (PRFSGs) [Ananth, Qian, and Yuen, Crypto 2022] are among the most important primitives. PRFSGs are a natural quantum analogue of pseudorandom functions (PRFs), and imply many applications such as IND-CPA secret-key encryption (SKE) and EUF-CMA message authentication codes (MACs). However, the only known constructions of (many-query-secure) PRFSGs are from OWFs or pseudorandom unitaries (PRUs).
In this paper, we construct classically-accessible adaptively secure PRFSGs in the invertible quantum Haar random oracle (QHRO) model introduced in [Chen and Movassagh, Quantum]. The invertible QHRO model is an idealized model where any party can access a public single Haar random unitary and its inverse, which can be considered a quantum analog of the random oracle model. Our PRFSG constructions resemble the classical Even-Mansour encryption based on a single permutation, and are secure against any unbounded polynomial number of queries to the oracle and construction. To our knowledge, this is the first application in the invertible QHRO model without any assumption or conjecture. The previous best construction in an idealized model is PRFSGs secure up to o(λ/ log λ) queries in the common Haar state model [Ananth, Gulati, and Lin, TCC 2024].
We develop new techniques on Haar random unitaries to prove the selective and adaptive security of our PRFSGs. For selective security, we introduce a new formula, which we call the Haar twirl approximation formula. For adaptive security, we show the unitary reprogramming lemma and the unitary resampling lemma. These are of independent interest and may have many further applications. In particular, by using the approximation formula, we give an alternative proof of the non-adaptive security of the PFC ensemble [Metger, Poremba, Sinha, and Yuen, FOCS 2024] as an additional result.
Finally, we prove that our construction is neither a PRU nor a quantum-accessible non-adaptive PRFSG by presenting quantum polynomial-time attacks. Our attacks are based on a generalization of the hidden subgroup problem in which the relevant function outputs quantum states.
Linear Proximity Gap for Reed-Solomon Codes within the 1.5 Johnson Bound
We establish a linear proximity gap for Reed-Solomon (RS) codes within the one-and-a-half Johnson bound. Specifically, we investigate the proximity gap for RS codes, revealing that any affine subspace is either entirely $\delta$-close to an RS code or nearly all its members are $\delta$-far from it. When $\delta$ is within the one-and-a-half Johnson bound, we prove an upper bound on the number of members (in the affine subspace) that are $\delta$-close to the RS code for the latter case. Our bound is linear in the length of codewords. In comparison, Ben-Sasson, Carmon, Ishai, Kopparty and Saraf [FOCS 2020] prove a linear bound when $\delta$ is within the unique decoding bound and a quadratic bound when $\delta$ is within the Johnson bound. Note that when the rate of the RS code is smaller than 0.23, the one-and-a-half Johnson bound is larger than the unique decoding bound.
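In schematic form (our paraphrase of the statement above), a proximity gap with bound $B$ for the code $V \subseteq \mathbb{F}^n$ says that for every affine subspace $U$,
$$\text{either}\quad \Delta(u, V) \le \delta \ \text{ for all } u \in U, \qquad\text{or}\qquad \big|\{\, u \in U : \Delta(u, V) \le \delta \,\}\big| \le B,$$
where $\Delta$ denotes relative Hamming distance; the contribution here is that $B$ can be taken linear in the codeword length for all $\delta$ up to the one-and-a-half Johnson bound.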
Proximity gaps for Reed-Solomon (RS) codes have implications in various RS code-based protocols. In many cases, a stronger property than individual distance—known as correlated agreement—is required, i.e., functions in the affine subspace are not only $\delta$-close to an RS code, but also agree on the same evaluation domain. Our results support this stronger property.
Foundations of Adaptor Signatures
Adaptor signatures extend the functionality of regular signatures through the computation of pre-signatures on messages for statements of NP relations. Pre-signatures are publicly verifiable; they simultaneously hide and commit to a signature of an underlying signature scheme on that message. Anybody possessing a corresponding witness for the statement can adapt the pre-signature to obtain the "regular" signature. Adaptor signatures have found numerous applications for conditional payments in blockchain systems, like payment channels (CCS'20, CCS'21), private coin mixing (CCS'22, SP'23), and oracle-based payments (NDSS'23).
In our work, we revisit the state of the security of adaptor signatures and their constructions. In particular, our two main contributions are:
- Security Gaps and Definitions: We review the widely-used security model of adaptor signatures due to Aumayr et al. (ASIACRYPT'21) and identify gaps in their definitions that render known protocols for private coin-mixing and oracle-based payments insecure. We give simple counterexamples of adaptor signatures that are secure w.r.t. their definitions but result in insecure instantiations of these protocols. To fill these gaps, we identify a minimal set of modular definitions that align with these practical applications.
- Secure Constructions: Despite their popularity, all known constructions either (1) are derived from identification schemes via the Fiat-Shamir transform in the random oracle model, or (2) require modifications to the underlying signature verification algorithm, thus making the construction useless in the setting of cryptocurrencies. More concerningly, all known constructions were proven secure w.r.t. the insufficient definitions of Aumayr et al., leaving us with no provably secure adaptor signature scheme to use in applications.
In this work, we first salvage all current applications by proving the security of the widely-used Schnorr adaptor signatures under our proposed definitions. We then provide several new constructions, including the first adaptor signature schemes for Camenisch-Lysyanskaya (CL), Boneh-Boyen-Shacham (BBS+), and Waters signatures, all of which are proven secure in the standard model. Our new constructions rely on a new abstraction of digital signatures, called dichotomic signatures, which covers the essential properties we need to build adaptor signatures. Proving the security of all constructions (including identification-based schemes) relies on a novel non-black-box proof technique. Both our digital signature abstraction and the proof technique could be of independent interest to the community.
Breaking BASS
We provide several attacks on the BASS signature scheme introduced by Grigoriev, Ilmer, Ovchinnikov and Shpilrain in 2023. We lay out a trivial forgery attack which generates signatures passing the scheme's probabilistic signature verification with high probability. Generating these forgeries is faster than generating signatures honestly. Moreover, we describe a key-only attack which allows us to recover an equivalent private key from a signer's public key. The time complexity of this recovery is asymptotically the same as that of signing messages.
An Unstoppable Ideal Functionality for Signatures and a Modular Analysis of the Dolev-Strong Broadcast
Many foundational results in the literature of consensus follow the Dolev-Yao model (FOCS '81), which treats digital signatures as ideal objects with perfect correctness and unforgeability. However, no work has yet formalized an ideal signature scheme that is both suitable for this methodology and possible to instantiate, or a composition theorem that ensures security when instantiating it cryptographically.
The Universal Composition (UC) framework would ensure composition if we could specify an ideal functionality for signatures and prove it UC-realizable. Unfortunately, all signature functionalities heretofore proposed are problematic when used to construct higher-level protocols: either the functionality internally computes a computationally secure signature, and therefore higher-level protocols must rely upon computational assumptions, or else the functionality introduces a new attack surface that does not exist when the functionality is realized. As a consequence, no consensus protocol has ever been analyzed in a modular way using existing ideal signature functionalities.
We propose a new unstoppable ideal functionality for signatures that is UC-realized exactly by the set of standard EUF-CMA signature schemes that are consistent and linear time. No adversary can prevent honest parties from obtaining perfectly ideal signature services from our functionality. We showcase its usefulness by presenting the first modular analysis of the Dolev-Strong broadcast protocol (SICOMP '83) in the UC framework. Our result can be interpreted as a step toward a sound realization of the Dolev-Yao methodology.
Encrypted RAM Delegation: Applications to Rate-1 Extractable Arguments, Homomorphic NIZKs, MPC, and more
In this paper we introduce the notion of encrypted RAM delegation. In an encrypted RAM delegation scheme, the prover creates a succinct proof for a pair of input strings $x_\mathsf{pb}$ and $x_\mathsf{pr}$, where $x_\mathsf{pb}$ corresponds to a large \emph{public} input and $x_\mathsf{pr}$ is a \emph{private} input. A verifier can check the correctness of the computation of a RAM machine $\mathcal{M}$ on $(x_\mathsf{pb}, x_\mathsf{pr})$, given only the proof $\pi$ and $x_\mathsf{pb}$.
We design encrypted RAM delegation schemes from a variety of standard assumptions, such as DDH, LWE, or $k$-linear. We prove a strong knowledge-soundness guarantee for our schemes, as well as a special input-hiding property ensuring that $\pi$ does not leak anything about $x_\mathsf{pr}$.
We follow this by describing multiple applications of encrypted RAM delegation. First, we show how to design a rate-1 non-interactive zero-knowledge (NIZK) argument system with a straight-line extractor. Despite over 30 years of research, the only known construction in the literature for rate-1 NIZKs from standard assumptions relied on fully homomorphic encryption. Thus, we provide the first rate-1 NIZK scheme based purely on DDH or $k$-linear assumptions.
Next, we also design fully-homomorphic NIZKs from encrypted RAM delegation. The only prior solution crucially relied on algebraic properties of pairing-based NIZKs, and was thus only known from the decision linear assumption. We provide the first fully-homomorphic NIZK system from LWE (thus post-quantum secure) and from DDH-hard groups.
We also provide a communication-complexity-preserving compiler for a wide class of semi-malicious multiparty computation (MPC) protocols to obtain fully malicious MPC protocols. This is the first such compiler for a wide class of MPC protocols, as comparable compilers in prior works relied on strong non-falsifiable assumptions such as zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs). Moreover, we also show many other applications to composable zero-knowledge batch arguments, succinct delegation of committed programs, and fully context-hiding multi-key multi-hop homomorphic signatures.
Smoothing Parameter and Shortest Vector Problem on Random Lattices
Lattice problems have many applications in various domains of computer science. There is currently a gap in the understanding of these problems with respect to their worst-case complexity and their average-case behaviour.
For instance, the Shortest Vector problem (SVP) on an n-dimensional lattice has worst-case complexity $2^{n+o(n)}$ \cite{ADRS15}. However, in practice, people rely on heuristic (unproven) sieving algorithms of time complexity $2^{0.292n+o(n)}$ \cite{BeckerDGL16} to assess the security of lattice-based cryptography schemes. Those heuristic algorithms are experimentally verified for lattices used in cryptography, which are usually random in some way.
In this paper, we try to bridge the gap between worst-case and heuristic algorithms. Using the formalism of random real lattices developed by Siegel, we show a tighter upper bound on an important lattice parameter called the smoothing parameter that applies to almost all random lattices. This allows us to obtain a $2^{n/2+o(n)}$ time algorithm for an approximation version of the SVP on random lattices with a small constant approximation factor.
Quantum Chosen-Cipher Attack on Camellia
The Feistel structure is a fundamental component of symmetric cryptographic algorithms and has been studied extensively in the classical setting. However, research on specific symmetric algorithms built on the Feistel structure remains relatively scarce in the quantum setting. Building on a novel 4-round distinguisher proposed by Ito et al. for the Feistel structure in the quantum chosen-ciphertext attack (qCCA) setting, this paper introduces a 5-round distinguisher for Camellia, whose efficacy has been empirically validated. Furthermore, this paper combines Grover's algorithm with Simon's algorithm, utilizing an analysis of Camellia's key-scheduling characteristics to construct a 9-round key recovery attack on the Camellia algorithm. The time complexity for acquiring the correct key bits is $2^{61.5}$, and the attack requires 531 qubits. This is the first chosen-ciphertext attack on Camellia in the Q2 model.
Siniel: Distributed Privacy-Preserving zkSNARK
Zero-knowledge Succinct Non-interactive Argument of Knowledge (zkSNARK) is a powerful cryptographic primitive, in which a prover convinces a verifier that a given statement is true without leaking any additional information. However, existing zkSNARKs suffer from high computation overhead in the proof generation. This limits the applications of zkSNARKs, such as private payments, private smart contracts, and anonymous credentials. Private delegation has become a prominent way to accelerate proof generation.
In this work, we propose Siniel, an efficient private delegation framework for zkSNARKs constructed from polynomial interactive oracle proofs (PIOPs) and polynomial commitment schemes (PCSs). Our protocol allows a computationally limited prover (a.k.a. delegator) to delegate its expensive prover computation to several workers without leaking any information about the private witness. Most importantly, compared with the recent work EOS (USENIX'23), the state-of-the-art zkSNARK prover delegation framework, a prover in Siniel need not engage in the MPC protocol after sending its shares of the private witness. This means that a Siniel prover can outsource the entire computation to the workers.
We compare Siniel with EOS and show significant performance advantages of the former. The experimental results show that, under low-bandwidth conditions (10MBps), Siniel saves delegators about 16% of the time compared to EOS, whereas under high-bandwidth conditions (1000MBps), it saves about 80%.
ColliderScript: Covenants in Bitcoin via 160-bit hash collisions
We introduce a method for enforcing covenants on Bitcoin outputs without requiring any changes to Bitcoin, by designing a hash-collision-based equivalence check which bridges Bitcoin's limited Big Script to Bitcoin's Small Script. This allows us to evaluate the signature of the spending transaction (available only to Big Script) in Small Script. As Small Script enables arbitrary computations, we can introspect into the spending transaction and enforce covenants on it.
Our approach leverages finding collisions in 160-bit hash functions, namely SHA-1 and RIPEMD-160. By the birthday bound, this should cost $\sim2^{80}$ work. Each spend of our covenant costs $\sim2^{86}$ hash queries and $\sim2^{56}$ bytes of space. For security, we rely on an assumption regarding the hardness of finding a $3$-way collision (with short random inputs) in $160$-bit hash functions, arguing that if the assumption holds, breaking covenant enforcement requires $\sim2^{110}$ hash queries. To put this in perspective, the work to spend our covenant is $\sim33$ hours of the Bitcoin mining network, whereas breaking our covenant requires $\sim 450,000$ years of the Bitcoin mining network.
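For orientation (our arithmetic, not the paper's), these figures track the standard generic bounds for an $n$-bit hash: a $2$-way collision costs about $2^{n/2}$ queries and a generic $k$-way collision about $2^{n(k-1)/k}$ queries, so for $n = 160$,
$$\sqrt{2^{160}} = 2^{80} \quad (k=2), \qquad 2^{2\cdot 160/3} \approx 2^{106.7} \quad (k=3),$$
which is consistent with the $\sim2^{110}$ figure quoted above for the constrained (short random inputs) $3$-way variant.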
We believe there are multiple directions of future work that can significantly improve these numbers.
Evaluating covenants and our equivalence check requires performing many operations in Small Script, which must take no more than $4$ megabytes in total size, as Bitcoin does not allow transactions greater than $4$ megabytes. We only provide rough estimates of the transaction size because, as of this writing, no Small Script implementations of the required hash functions, SHA-1 and RIPEMD-160, have been written.
Investigation of the Optimal Linear Characteristics of BAKSHEESH (Full Version)
This paper aims to provide a more comprehensive understanding of the optimal linear characteristics of BAKSHEESH. First, an explicit formula for the absolute correlation of the $R$-round optimal linear characteristic of BAKSHEESH is proposed for $R \geqslant 12$. By examining the linear characteristics of BAKSHEESH with three active S-boxes per round, we derive some properties of the three active S-boxes in each round. Furthermore, we demonstrate that there is only one 1-round iterative linear characteristic with three active S-boxes. Since the 1-round linear characteristic is unique, it must be included in any $R$-round ($R \geqslant 12$) linear characteristic of BAKSHEESH with three active S-boxes per round. Finally, we confirm that the total number of $R$-round optimal linear characteristics of BAKSHEESH is $3072$ for $R \geqslant 12$. All of these characteristics are generated from the 1-round iterative linear characteristic.
Privacy-Preserving Multi-Party Search via Homomorphic Encryption with Constant Multiplicative Depth
We propose a privacy-preserving multiparty search protocol using threshold-level homomorphic encryption, which we prove correct and secure against honest-but-curious adversaries. Unlike existing approaches, our protocol maintains a constant circuit depth. This feature enhances its suitability for practical applications involving dynamic underlying databases.
Consensus Under Adversary Majority Done Right
A spectre is haunting consensus protocols—the spectre of adversary majority. The literature is inconclusive, with possibilities and impossibilities abounding. Dolev and Strong in 1983 showed an early possibility for up to 99% adversaries. Yet, we have known impossibility results for adversaries above 1/2 in synchrony, and above 1/3 in partial synchrony. What gives? It is high time that we pinpoint the culprit of this confusion: the critical role of the modeling details of clients. Are the clients sleepy or always-on? Are they silent or communicating? Can validators be sleepy too? We systematize models for consensus across four dimensions (sleepy/always-on clients, silent/communicating clients, sleepy/always-on validators, and synchrony/partial-synchrony), some of which are new, and tightly characterize the achievable safety and liveness resiliences with matching possibilities and impossibilities for each of the sixteen models. To this end, we unify folklore and earlier results, and fill gaps left in the literature with new protocols and impossibility theorems.
Quantum One-Time Protection of any Randomized Algorithm
The meteoric rise in power and popularity of machine learning models dependent on valuable training data has reignited a basic tension between the power of running a program locally and the risk of exposing details of that program to the user. At the same time, fundamental properties of quantum states offer new solutions to data and program security that can require strikingly few quantum resources to exploit, and offer advantages outside of mere computational run time. In this work, we demonstrate such a solution with quantum one-time tokens.
A quantum one-time token is a quantum state that permits a certain program to be evaluated exactly once. One-time security guarantees, roughly, that the token cannot be used to evaluate the program more than once. We propose a scheme for building quantum one-time tokens for any randomized classical program, including generative AI models. We prove that the scheme satisfies an interesting definition of one-time security, in a black-box model, as long as the outputs of the classical algorithm have high enough min-entropy.
Importantly, the classical program being protected does not need to be implemented coherently on a quantum computer. In fact, the size and complexity of the quantum one-time token are independent of the program being protected, and additional quantum resources serve only to increase the security of the protocol. Due to this flexibility in adjusting the security, we believe that our proposal is parsimonious enough to serve as a promising candidate for a near-term useful demonstration of quantum computing in either the NISQ or early fault-tolerant regime.
FLock: Robust and Privacy-Preserving Federated Learning based on Practical Blockchain State Channels
\textit{Federated Learning} (FL) is a distributed machine learning paradigm that allows multiple clients to train models collaboratively without sharing local data. Numerous works have explored security and privacy protection in FL, as well as its integration with blockchain technology. However, existing FL works still face critical issues. i) It is difficult to achieve \textit{poisoning robustness} and \textit{data privacy} while ensuring high \textit{model accuracy}. Malicious clients can launch \textit{poisoning attacks} that degrade the global model. Besides, aggregators can infer private data from the gradients, causing \textit{privacy leakages}. Existing privacy-preserving poisoning-defense FL solutions suffer from decreased model accuracy and high computational overhead. ii) Blockchain-assisted FL records iterative gradient updates on-chain to prevent model tampering, yet existing schemes are not compatible with practical blockchains and incur high costs for maintaining the gradients on-chain. Besides, incentives are overlooked, where unfair reward distribution hinders the sustainable development of the FL community. In this work, we propose FLock, a robust and privacy-preserving FL scheme based on practical blockchain state channels. First, we propose a lightweight secure \textit{Multi-party Computation} (MPC)-friendly robust aggregation method through quantization, median, and Hamming distance, which can resist poisoning attacks by up to $<50\%$ malicious clients. Besides, we propose communication-efficient Shamir's secret sharing-based MPC protocols to protect data privacy with high model accuracy. Second, we utilize blockchain off-chain state channels to achieve immutable model records and incentive distribution. FLock achieves cost-effective compatibility with practical cryptocurrency platforms, e.g., Ethereum, along with fair incentives, by merging the secure aggregation into a multi-party state channel. In addition, a pipelined \textit{Byzantine Fault-Tolerant} (BFT) consensus is integrated where each aggregator can reconstruct the final aggregated results. Lastly, we implement FLock, and the evaluation results demonstrate that FLock enhances robustness and privacy while maintaining efficiency and high model accuracy. Even with 25 aggregators and 100 clients, FLock can complete one secure aggregation for ResNet in $2$ minutes over a WAN. FLock successfully implements secure aggregation with such a large number of aggregators, thereby enhancing the fault tolerance of the aggregation.
Isogeny interpolation and the computation of isogenies from higher dimensional representations
The Supersingular Isogeny Diffie-Hellman (SIDH) scheme is a public key cryptosystem that was submitted to the National Institute of Standards and Technology's competition for the standardization of post-quantum cryptography protocols. The private key in SIDH consists of an isogeny whose degree is a prime power. In July 2022, Castryck and Decru discovered an attack that completely breaks the scheme by recovering Bob's secret key, using isogenies between higher dimensional abelian varieties to interpolate and reconstruct the isogenies comprising the SIDH private key. The original attack applies in theory to any prime power degree, but the implementation accompanying the original attack required one of the SIDH keys involved in a key exchange to have degree equal to a power of $2$. An implementation of the power of $3$ case was published subsequently by Decru and Kunzweiler. However, despite the passage of several years, nobody has published any implementations for prime powers other than $2$ or $3$, and for good reason --- the necessary higher dimensional isogeny computations rapidly become more complicated as the base prime increases. In this paper, we provide for the first time a fully general isogeny interpolation implementation that works for any choice of base prime, and provide timing benchmarks for various combinations of SIDH base prime pairs. We remark that the technique of isogeny interpolation now has constructive applications as well as destructive applications, and that our methods may open the door to increased flexibility in constructing isogeny-based digital signatures and cryptosystems.
How Fast Does the Inverse Walk Approximate a Random Permutation?
For a finite field $\mathbb{F}$ of size $n$, the (patched) inverse permutation $\operatorname{INV}: \mathbb{F} \to \mathbb{F}$ computes the inverse of $x$ over $\mathbb{F}$ when $x\neq 0$ and outputs $0$ when $x=0$, and the $\operatorname{ARK}_K$ (for AddRoundKey) permutation adds a fixed constant $K$ to its input, i.e.,
$$\operatorname{INV}(x) = x^{n-2} \hspace{.1in} \mbox{and} \hspace{.1in} \operatorname{ARK}_K(x) = x + K \;.$$
We study the process of alternately applying the $\operatorname{INV}$ permutation followed by a random affine permutation $\operatorname{ARK}_K$, which is a random walk over the alternating (or symmetric) group that we call the inverse walk.
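For concreteness, here is a minimal Python sketch of the walk as defined above, over a small prime field standing in for $\mathbb{F}$; the prime and round count are illustrative choices, not parameters from the paper.

```python
import random

p = 257  # small prime standing in for the field size n; illustrative only

def inv(x: int) -> int:
    # Patched inverse: INV(x) = x^(p-2) mod p, so INV(0) = 0 and INV(x) = x^(-1) otherwise.
    return pow(x, p - 2, p)

def ark(x: int, k: int) -> int:
    # AddRoundKey: ARK_K(x) = x + K.
    return (x + k) % p

def inverse_walk(x: int, keys: list) -> int:
    # Alternately apply INV and ARK_K, one round per key.
    for k in keys:
        x = ark(inv(x), k)
    return x

r = 16  # illustrative; the theorem below concerns how large r must be
keys = [random.randrange(p) for _ in range(r)]
perm = [inverse_walk(x, keys) for x in range(p)]
assert sorted(perm) == list(range(p))  # each fixed key sequence yields a permutation
```

Each choice of round keys fixes one permutation of the field; the question studied here is how many rounds are needed before this permutation is statistically close to a uniformly random (even) one.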
We show both lower and upper bounds on the number of rounds it takes for this process to approximate a random permutation over $\mathbb{F}$. We show that the inverse walk over the field of size $n$ with $$r = \Theta\left(n\log^2 n + n\log n\log \frac{1}{\epsilon}\right)$$ rounds generates a permutation that is $\epsilon$-close (in variation distance) to a uniformly random even permutation (i.e., a permutation from the alternating group $A_{n}$). This is tight, up to logarithmic factors.
Our result answers an open question from the work of Liu, Pelecanos, Tessaro and Vaikuntanathan (CRYPTO 2023) by providing a missing piece in their proof of $t$-wise independence of (a variant of) AES. It also constitutes a significant improvement on a result of Carlitz (Proc. American Mathematical Society, 1953) who showed a reachability result: namely, that every even permutation can be generated eventually by composing $\operatorname{INV}$ and $\operatorname{ARK}$. We show a tight convergence result, namely a tight quantitative bound on the number of rounds to reach a random (even) permutation.
How Much Public Randomness Do Modern Consensus Protocols Need?
Modern blockchain-based consensus protocols aim for efficiency (i.e., low communication and round complexity) while maintaining security against adaptive adversaries. These goals are usually achieved using a public randomness beacon to select roles for each participant. We examine to what extent this randomness is necessary. Specifically, we provide tight bounds on the amount of entropy a Byzantine Agreement protocol must consume from a beacon in order to enjoy efficiency and adaptive security.
We first establish that no consensus protocol can simultaneously be efficient, be adaptively secure, and use $O(\log n)$ bits of beacon entropy. We then show this bound is tight and, in fact, a trilemma by presenting three consensus protocols that achieve any two of these three properties.
On the Jordan-Gauss graphs and new multivariate public keys
We suggest two families of multivariate public keys defined over an arbitrary finite commutative ring \(K\) with unity. The first family has a quadratic multivariate public rule; it is an obfuscation of a previously defined cryptosystem given in terms of the well-known algebraic graphs \(D(n, K)\) with partition sets isomorphic to \(K^n\). The second family of cryptosystems combines an Eulerian transformation of \(K[x_1, x_2, \ldots, x_n]\), sending each variable \(x_i\) to a monomial term, with the quadratic encryption map of the first cryptosystem. The resulting map has unbounded degree and density \(O(n^4)\), like a cubic multivariate map. The space of plaintexts of the second cryptosystem is the variety \((K^*)^n\) and the space of ciphertexts is the affine space \(K^n\).
Towards Explainable Side-Channel Leakage: Unveiling the Secrets of Microarchitecture
We explore the use of microbenchmarks, small assembly code snippets, to detect microarchitectural side-channel leakage in CPU implementations. Specifically, we investigate the effectiveness of microbenchmarks in diagnosing the predisposition to side-channel leaks in two commonly used RISC-V cores: Picorv32 and Ibex. We propose a new framework that involves diagnosing side-channel leaks, identifying leakage points, and constructing leakage profiles to understand the underlying causes. We apply our framework to several realistic case studies that test our framework for explaining side-channel leaks and showcase the subtle interaction of data via order-reducing leaks.
Discrete Gaussian sampling for BKZ-reduced bases
Discrete Gaussian sampling on lattices is a fundamental problem in lattice-based cryptography. In this paper, we revisit the Markov chain Monte Carlo (MCMC)-based Metropolis-Hastings-Klein (MHK) algorithm proposed by Wang and Ling, and study its complexity under the Geometric Series Assumption (GSA) when the given basis is BKZ-reduced. We give experimental evidence that the GSA is accurate in this context, and we give a very simple approximate formula for the complexity of the sampler that is accurate over a large range of parameters and easily computable. We apply our results to the dual attack on LWE of [Pouly and Shen 2024] and significantly improve the complexity estimates of the attack. Finally, we provide some results of independent interest on the Gaussian mass of random $q$-ary lattices.
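For reference, the GSA (our phrasing of the standard heuristic, due to Schnorr) posits that the Gram-Schmidt norms of a reduced basis $(\mathbf{b}_1, \ldots, \mathbf{b}_n)$ decay geometrically:
$$\|\mathbf{b}_i^*\| = \alpha^{\,i-1} \cdot \|\mathbf{b}_1\|, \qquad 1 \le i \le n,$$
for some constant $0 < \alpha < 1$ determined by the reduction quality (for BKZ, by the blocksize).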
Revisiting subgroup membership testing on pairing-friendly curves via the Tate pairing
In 2023, Koshelev proposed an efficient method for subgroup membership testing on a list of non-pairing-friendly curves via the Tate pairing. In fact, this method can also be applied to certain pairing-friendly curves, such as the BLS and BW13 families, at a cost of two small Tate pairings. In this paper, we revisit Koshelev's method to enhance its efficiency for these curve families. First, we present explicit formulas for computing the two small Tate pairings. Compared to the original formulas, the new versions offer shorter Miller iterations and reduced storage requirements. Second, we provide a high-speed software implementation on a 64-bit processor. Our results demonstrate that the new method is up to $62.0\%$ and $22.4\%$ faster than the state-of-the-art on the BW13-310 and BLS24-315 curves, respectively, while being $14.1\%$ slower on BLS12-381. When precomputation is utilized, our method achieves speed improvements of up to $34.8\%$, $110.6\%$, and $63.9\%$ on the BLS12-381, BW13-310, and BLS24-315 curves, respectively.
Stealth and Beyond: Attribute-Driven Accountability in Bitcoin Transactions
Bitcoin enables decentralized, pseudonymous transactions, but balancing privacy with accountability remains a challenge. This paper introduces a novel dual accountability mechanism that enforces both sender and recipient compliance in Bitcoin transactions. Senders are restricted to spending Unspent Transaction Outputs (UTXOs) that meet specific criteria, while recipients must satisfy legal and ethical requirements before receiving funds. We enhance stealth addresses by integrating compliance attributes, preserving privacy while ensuring policy adherence. Our solution introduces a new cryptographic primitive, Identity-Based Matchmaking Signatures (IB-MSS), which supports streamlined auditing. Our approach is fully compatible with existing Bitcoin infrastructure and does not require changes to the core protocol, preserving both privacy and decentralization while enabling transaction auditing and compliance.
Advanced Transparency System
There are many situations in which users need to verify that their information is correctly retained by servers. At the same time, servers need to maintain transparency logs. Many algorithms have been designed to address this problem. For example, Certificate Transparency (CT) helps track certificates issued by Certificate Authorities (CAs), while CONIKS aims to provide key transparency for end users. However, these algorithms often suffer from either high append time or imbalanced inclusion-proof and consistency-proof costs. To find an optimal solution, we construct two different but similar authenticated data structures tailored to two different lookup protocols. We propose ATS (Advanced Transparency System), which uses only linear storage cost to reduce append time and balances the time costs for both servers and users. For the value-lookup problem, the system allows servers to append user information in constant time and enables radical-level inclusion and consistency proofs. For the key transparency problem, the system requires logarithmic time for the append operation and offers acceptable inclusion and consistency proofs.
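As background on the objects being balanced here, the sketch below shows a standard Merkle-tree inclusion proof of the kind used by CT-style transparency logs; it illustrates what an inclusion proof is and is not the paper's ATS data structure.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    # All levels, leaves first; assumes len(leaves) is a power of two to keep
    # the sketch short. The 0x00/0x01 prefixes domain-separate leaves from
    # internal nodes, as in RFC 6962.
    levels = [[h(b"\x00" + leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(b"\x01" + prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels

def inclusion_proof(levels, index):
    # The proof is the sibling hash at every level: log(n) hashes in total.
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify(root, leaf, index, path):
    node = h(b"\x00" + leaf)
    for sibling in path:
        node = h(b"\x01" + (node + sibling if index % 2 == 0 else sibling + node))
        index //= 2
    return node == root

leaves = [b"entry-%d" % i for i in range(8)]
levels = build_tree(leaves)
root = levels[-1][0]
assert verify(root, leaves[5], 5, inclusion_proof(levels, 5))
```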
An Efficient and Secure Boolean Function Evaluation Protocol
Boolean functions play an important role in designing and analyzing many cryptographic systems, such as block ciphers, stream ciphers, and hash functions, due to their unique cryptographic properties such as nonlinearity, correlation immunity, and algebraic properties. The secure evaluation of Boolean functions, or Secure Boolean Evaluation (SBE), is an important area of research. SBE allows parties to jointly compute Boolean functions without exposing their private inputs, and finds applications in privacy-preserving protocols and secure multi-party computation. In this manuscript, we present an efficient and generic two-party protocol (namely $\textsf{BooleanEval}$) for the secure evaluation of Boolean functions, utilizing a 1-out-of-2 Oblivious Transfer (OT) as a building block. $\textsf{BooleanEval}$ employs only XOR operations as its core computational step, making it lightweight and fast. Unlike other lightweight state-of-the-art SBE designs, $\textsf{BooleanEval}$ avoids additional cryptographic primitives, such as hash functions and commitment schemes, to reduce the computational overhead.
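The abstract does not spell out the protocol itself; as background, the sketch below shows the classical GMW-style way an OT evaluates an AND gate on XOR-shared bits (XOR gates are free, computed locally on shares). This is an illustration of the general technique, not $\textsf{BooleanEval}$, and the OT is idealized as a local function.

```python
import secrets

def ideal_ot4(messages, choice):
    # Idealized 1-out-of-4 OT: the receiver learns messages[choice] and
    # nothing else; 1-out-of-4 OT is itself buildable from 1-out-of-2 OTs.
    return messages[choice]

def gmw_and(x0, y0, x1, y1):
    # P0 holds shares (x0, y0), P1 holds (x1, y1), with x = x0 ^ x1 and
    # y = y0 ^ y1. Returns XOR shares (z0, z1) of x & y.
    r = secrets.randbits(1)  # P0's fresh output mask
    # P0's OT table, indexed by P1's possible share pair (a, b):
    table = [r ^ ((x0 ^ a) & (y0 ^ b)) for a in (0, 1) for b in (0, 1)]
    z1 = ideal_ot4(table, 2 * x1 + y1)  # P1 selects with its actual shares
    return r, z1

# Exhaustive correctness check: the XOR of the output shares equals x & y.
for x in (0, 1):
    for y in (0, 1):
        x0, y0 = secrets.randbits(1), secrets.randbits(1)
        z0, z1 = gmw_and(x0, y0, x ^ x0, y ^ y0)
        assert z0 ^ z1 == x & y
```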
Black-Box Timed Commitments from Time-Lock Puzzles
A Timed Commitment (TC) with time parameter $t$ is hiding for time at most $t$, that is, commitments can be force-opened by any third party within time $t$. In addition to various cryptographic assumptions, the security of all known TC schemes relies on the sequentiality assumption of repeated squarings in hidden-order groups. The repeated squaring assumption is therefore a security bottleneck.
In this work, we give a black-box construction of TCs from any time-lock puzzle (TLP) by additionally relying on one-way permutations and collision-resistant hashing.
Currently, TLPs are known from (a) the specific repeated squaring assumption, (b) the general (necessary) assumption on the existence of worst-case non-parallelizing languages and indistinguishability obfuscation, and (c) any iteratively sequential function and the hardness of the circular small-secret LWE problem. The latter admits a plausibly post-quantum secure instantiation.
Hence, thanks to the generality of our transform, we get i) the first TC whose timed security is based on the existence of non-parallelizing languages and ii) the first TC that is plausibly post-quantum secure.
We first define quasi publicly-verifiable TLPs (QPV-TLPs) and construct them from any standard TLP in a black-box manner without relying on any additional assumptions. Then, we devise a black-box commit-and-prove system to transform any QPV-TLP into a TC.
A General Quantum Duality for Representations of Groups with Applications to Quantum Money, Lightning, and Fire
Aaronson, Atia, and Susskind [Aaronson et al., 2020] established that efficiently mapping between quantum states $|\psi\rangle$ and $|\phi\rangle$ is computationally equivalent to distinguishing their superpositions $\frac{1}{\sqrt{2}}(|\psi\rangle + |\phi\rangle)$ and $\frac{1}{\sqrt{2}}(|\psi\rangle - |\phi\rangle)$. We generalize this insight into a broader duality principle in quantum computation, wherein manipulating quantum states in one basis is equivalent to extracting their value in a complementary basis. In its most general form, this duality principle states that for a given group, the ability to implement a unitary representation of the group is computationally equivalent to the ability to perform a Fourier subspace extraction from the invariant subspaces corresponding to its irreducible representations.
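Concretely, one direction of this equivalence is immediate (our two-line rendering of the Aaronson-Atia-Susskind observation): if a unitary $U$ swaps the two states, then
$$U|\psi\rangle = |\phi\rangle,\ U|\phi\rangle = |\psi\rangle \;\Longrightarrow\; U\left(\tfrac{1}{\sqrt{2}}(|\psi\rangle \pm |\phi\rangle)\right) = \pm\,\tfrac{1}{\sqrt{2}}(|\psi\rangle \pm |\phi\rangle),$$
so the two superpositions are $\pm 1$ eigenvectors of $U$, and a controlled application of $U$ followed by a Hadamard-basis measurement of the control qubit distinguishes them.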
Building on our duality principle, we present the following applications:
* Quantum money, which captures quantum states that are verifiable but unclonable, and its stronger variant, quantum lightning, have long resisted constructions based on concrete cryptographic assumptions. While (public-key) quantum money has been constructed from indistinguishability obfuscation (iO)—an assumption widely considered too strong—quantum lightning has not been constructed from any such assumptions, with previous attempts based on assumptions that were later broken. We present the first construction of quantum lightning with a rigorous security proof, grounded in a plausible and well-founded cryptographic assumption. We extend Zhandry's construction from Abelian group actions [Zhandry, 2024] to non-Abelian group actions, and eliminate Zhandry's reliance on a black-box model for justifying security. Instead, we prove a direct reduction to a computational assumption—the pre-action security of cryptographic group actions. We show how these group actions can be realized with various instantiations, including with the group actions of the symmetric group implicit in the McEliece cryptosystem.
* We provide an alternative quantum money and lightning construction from one-way homomorphisms, showing that security holds under specific conditions on the homomorphism. Notably, our scheme exhibits the remarkable property that four distinct security notions—quantum lightning security, security against both worst-case cloning and average-case cloning, and security against preparing a specific canonical state—are all equivalent.
* Quantum fire captures the notion of a samplable distribution on quantum states that are efficiently clonable, but not efficiently telegraphable, meaning they cannot be efficiently encoded as classical information. These states can be spread like fire, provided they are kept alive quantumly and do not decohere.
The only previously known construction relied on a unitary quantum oracle, whereas we present the first candidate construction of quantum fire in the plain model.
Fine-Grained Non-Interactive Key-Exchange without Idealized Assumptions
In this paper, we study multi-party non-interactive key exchange (NIKE) in the fine-grained setting. More precisely, we propose three multi-party NIKE schemes in three computation models, namely, the bounded parallel-time, bounded time, and bounded storage models. Their security is based on a very mild assumption (e.g., NC1 ⊊ ⊕L/poly) or even holds without any complexity assumption. This improves the recent work of Afshar, Couteau, Mahmoody, and Sadeghi (EUROCRYPT 2023), which requires idealized assumptions, such as random oracles or generic groups.
Additionally, we show that all our constructions satisfy a natural desirable property that we refer to as extendability, and we give generic transformations from extendable multi-party NIKE to multi-party identity-based NIKE in the fine-grained setting.
PriSrv: Privacy-Enhanced and Highly Usable Service Discovery in Wireless Communications
Service discovery is essential in wireless communications. However, existing service discovery protocols provide no or very limited privacy protection for service providers and clients, and they often leak sensitive information (e.g., service type, client’s identity and mobility pattern), which leads to various network-based attacks (e.g., spoofing, man-in-the-middle, identification and tracking). In this paper, we propose a private service discovery protocol, called PriSrv, which allows a service provider and a client to respectively specify a fine-grained authentication policy that the other party must satisfy before a connection is established. PriSrv consists of a private service broadcast phase and an anonymous mutual authentication phase with bilateral control, where the private information of both parties is hidden beyond the fact that a mutual match to the respective authentication policy occurred. As a core component of PriSrv, we introduce the notion of anonymous credential-based matchmaking encryption (ACME), which exerts dual-layer matching in one step to simultaneously achieve bilateral flexible policy control, selective attribute disclosure and multi-show unlinkability. As a building block of ACME, we design a fast anonymous credential (FAC) scheme to provide constant size credentials and efficient show/verification mechanisms, which is suitable for privacy-enhanced and highly usable service discovery in wireless networks. We present a concrete PriSrv protocol that is interoperable with popular wireless communication protocols, such as Wi-Fi Extensible Authentication Protocol (EAP), mDNS, BLE and Airdrop, to offer privacy-enhanced protection. We present formal security proof of our protocol and evaluate its performance on multiple hardware platforms: desktop, laptop, mobile phone and Raspberry Pi. PriSrv accomplishes private discovery and secure connection in less than 0.973 s on the first three platforms, and in less than 2.712 s on Raspberry Pi 4B. We also implement PriSrv into IEEE 802.1X in the real network to demonstrate its practicality.
Is Periodic Pseudo-randomization Sufficient for Beacon Privacy?
In this paper, we investigate whether the privacy mechanism of periodically changing the pseudorandom identities of Bluetooth Low Energy (BLE) beacons is sufficient to ensure privacy.
We consider a new natural privacy notion for BLE broadcasting beacons which we call ``Timed-sequence-indistinguishability'' of beacons. This new privacy definition is stronger than the well-known indistinguishability, since it considers not just the advertisements' content, but also the advertisements' broadcasting times which are observable in the physical world.
We then prove that beacons with periodically changing pseudorandom identities do not achieve timed-sequence-indistinguishability. We do this by presenting a novel privacy attack against BLE beacons, which we call the ``Timer Manipulation Attack.'' This new time-based privacy attack can be executed by merely inserting or reinserting the beacon's battery at the adversary's chosen time. We performed this attack against an actually deployed beacon.
To mitigate the ``Timer Manipulation Attack'' and other attacks associated with periodic signaling, we propose a new countermeasure involving quasi-periodic randomized scheduling of identity changes. We prove that our countermeasure ensures timed-sequence indistinguishability for beacons, thereby enhancing the beacon's privacy. Additionally, we show how to integrate this countermeasure in the attacked system while essentially preserving its feasibility and utility, which is crucial for practical industrial adoption.
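As a toy illustration of the flavor of this countermeasure (our instantiation; the paper's quasi-periodic scheduler may differ), identity changes can be scheduled with fresh per-interval jitter so that the battery (re)insertion time no longer pins down future change times:

```python
import random

# All constants are illustrative; the paper's scheduler may differ.
PERIOD = 900.0   # nominal seconds between identity changes
JITTER = 300.0   # per-interval random jitter bound

def next_change_delay() -> float:
    # Fresh jitter for every interval: change times stay roughly periodic,
    # but can no longer be predicted exactly from the battery-insertion time.
    return PERIOD + random.uniform(-JITTER, JITTER)
```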
New results in Share Conversion, with applications to evolving access structures
We say there is a share conversion from a secret sharing scheme $\Pi$ to another scheme $\Pi'$ implementing the same access structure if each party can locally apply a deterministic function to their share to transform any valid secret sharing under $\Pi$ into a valid (but not necessarily random) secret sharing under $\Pi'$ of the same secret. If such a conversion exists, we say that $\Pi\ge\Pi'$. This notion was introduced by Cramer et al. (TCC'05), who proved in particular that for any access structure (AS), any linear secret sharing scheme over a given field $\mathbb{F}$ has a conversion from a CNF scheme and is convertible to a DNF scheme.
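To make the definition concrete, here is a small Python sketch (our illustration, in the spirit of the Cramer et al. conversion) of a local share conversion from the CNF scheme to Shamir's scheme for the 2-out-of-3 threshold access structure:

```python
import random

p = 101  # prime field; the threshold and field are our illustrative choices

# CNF sharing for the 2-out-of-3 threshold structure: the maximal unqualified
# sets are {1}, {2}, {3}. The secret is additively split, s = r1 + r2 + r3,
# and party i receives every r_j with j != i.
def cnf_share(s):
    r = {1: random.randrange(p), 2: random.randrange(p)}
    r[3] = (s - r[1] - r[2]) % p
    return {i: {j: r[j] for j in (1, 2, 3) if j != i} for i in (1, 2, 3)}

# Local conversion: party i outputs F(i), where F(x) = sum_j r_j * f_j(x) and
# f_j is the degree-1 polynomial with f_j(0) = 1 and f_j(j) = 0, namely
# f_j(x) = (j - x) / j. Since f_i(i) = 0, party i needs only the r_j it
# already holds, so the map is local and deterministic, and F(0) = s.
def convert(i, share_i):
    f = lambda j, x: ((j - x) * pow(j, p - 2, p)) % p
    return sum(r_j * f(j, i) for j, r_j in share_i.items()) % p

s = 42
shares = cnf_share(s)
shamir = {i: convert(i, shares[i]) for i in (1, 2, 3)}

# Sanity check: any two Shamir shares interpolate the degree-1 polynomial F
# at x = 0 and recover the secret.
i, j = 1, 3
lag_i = (j * pow(j - i, p - 2, p)) % p   # Lagrange coefficient of i at 0
lag_j = (-i * pow(j - i, p - 2, p)) % p  # Lagrange coefficient of j at 0
assert (shamir[i] * lag_i + shamir[j] * lag_j) % p == s
```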
In this work, we initiate a systematic study of convertibility between secret sharing schemes, and present a number of results with implications for the understanding of the convertibility landscape.
- In the context of linear schemes, we present two key theorems providing necessary conditions for convertibility, proved using linear-algebraic tools. These have several implications, such as the fact that Shamir's secret sharing scheme can be neither maximal nor minimal. Another implication is that for a broad class of access structures, a linear scheme in which some party has sufficiently small share complexity may not be minimal.
- Our second key result is a necessary condition for convertibility to CNF from a broad class of (not necessarily linear) schemes. This result is proved via information-theoretic techniques and implies non-maximality for schemes with share complexity smaller than that of CNF.
We also provide a condition which is both necessary and sufficient for the existence of a share conversion to some linear scheme. The condition is stated as a system of linear equations, such that a conversion exists iff a solution to the linear system exists. We note that the impossibility results for linear schemes may be viewed as identifying a subset of contradicting equations in the system.
Another contribution of our paper is defining and studying share conversion for evolving secret sharing schemes. In such schemes, recently introduced by Komargodski et al. (IEEE ToIT'18), the number of parties is not bounded a priori, and every party receives a share as it arrives, which never changes thereafter. Our impossibility results have implications for the evolving setting as well. Interestingly, unlike in the standard setting, there is no maximum or minimum in a broad class of evolving schemes, even without any restriction on the share size.
Finally, we show that, in general, there is no conversion between additive schemes over different fields; however, by relaxing to statistical security, it may be possible to create convertible schemes.
ABE for Circuits with $\mathsf{poly}(\lambda)$-sized Keys from LWE
We present a key-policy attribute-based encryption (ABE) scheme for circuits based on the Learning With Errors (LWE) assumption whose key size is independent of the circuit depth. Our result constitutes the first improvement for ABE for circuits from LWE in almost a decade, given by Gorbunov, Vaikuntanathan, and Wee (STOC 2013) and Boneh et al. (EUROCRYPT 2014) -- we reduce the key size in the latter from $\mathsf{poly}(\mbox{depth},\lambda)$ to $\mathsf{poly}(\lambda)$. The starting point of our construction is a recent ABE scheme of Li, Lin, and Luo (TCC 2022), which achieves $\mathsf{poly}(\lambda)$ key size but requires pairings and generic bilinear groups in addition to LWE; we introduce new lattice techniques to eliminate the additional requirements.
Ciphertext-Policy ABE from Inner-Product FE
The enormous potential of Attribute-Based Encryption (ABE) in the context of IoT has driven researchers to propose pairing-free ABE schemes that are suitable for resource-constrained devices. Unfortunately, many of these schemes turned out to be insecure. This fact seems to reinforce the point of view of some authors according to which instantiating an Identity-Based Encryption (IBE) in plain Decision Diffie-Hellman (DDH) groups is impossible. In this paper, we provide a generic Ciphertext-Policy ABE (CP-ABE) scheme for AND-gate access structures with hidden access policies, built from Inner-Product Functional Encryption (IPFE). We also propose an instantiation of that generic CP-ABE scheme from the DDH assumption. From our generic CP-ABE scheme we derive an IBE scheme by introducing the concept of Clustered Identity-Based Encryption (CIBE). Our schemes show that it is indeed possible to construct practical and secure IBE and ABE schemes based on the classical DDH assumption.
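As background on how an AND-gate policy can be expressed through a single inner product, the sketch below shows one standard randomized encoding in which $\langle x, y\rangle = 0$ exactly when the attributes match the policy on all non-wildcard positions; this illustrates the IPFE-to-ABE connection and is not necessarily the paper's construction.

```python
import random

p = 2**61 - 1  # a prime modulus; all parameter choices here are illustrative

def encode_policy(policy):
    # policy: list of 0/1 for fixed positions, None for wildcard ('*').
    # Fresh randomizers r_i hide the policy; r_i = 0 at wildcard positions.
    r = [random.randrange(1, p) if w is not None else 0 for w in policy]
    last = -sum(ri * w for ri, w in zip(r, policy) if w is not None) % p
    return r + [last]

def encode_attrs(attrs):
    # attrs: list of 0/1; the trailing 1 pairs with the policy's constant term.
    return attrs + [1]

def matches(attrs, policy):
    # <x, y> = sum over fixed i of r_i * (s_i - w_i) mod p: zero iff the
    # attributes agree with the policy on every fixed position (w.h.p.).
    x, y = encode_attrs(attrs), encode_policy(policy)
    return sum(a * b for a, b in zip(x, y)) % p == 0

assert matches([1, 0, 1], [1, None, 1])      # wildcard in the middle
assert not matches([1, 1, 1], [1, None, 0])  # mismatch at a fixed position
```

In a KSW-style predicate encryption, decryption would succeed exactly when this inner product is zero, letting the IPFE evaluate the policy without revealing it.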
Construction of quadratic APN functions with coefficients in $\mathbb{F}_2$ in dimensions $10$ and $11$
Yu et al. described an algorithm for conducting computational searches for quadratic APN functions over the finite field $\mathbb{F}_{2^n}$, and used this algorithm to give a classification of all quadratic APN functions with coefficients in $\mathbb{F}_{2}$ for dimensions $n$ up to 9. In this paper, we speed up the running time of that algorithm by a factor of approximately $\frac{n \times 2^n}{n^3}$. Based on this result, we give a complete classification of all quadratic APN functions over $\mathbb{F}_{2^{10}}$ with coefficients in $\mathbb{F}_{2}$. We also perform some partial computations for quadratic APN functions over $\mathbb{F}_{2^{11}}$ with coefficients in $\mathbb{F}_{2}$, and conjecture that they form 6 CCZ-inequivalent classes, which also correspond to known APN functions.
Masking Gaussian Elimination at Arbitrary Order, with Application to Multivariate- and Code-Based PQC
Digital signature schemes based on multivariate- and code-based hard problems are promising alternatives for lattice-based signature schemes, due to their smaller signature size. Hence, several candidates in the ongoing additional standardization for quantum secure digital signature (DS) schemes by the National Institute of Standards and Technology (NIST) rely on such alternate hard problems. Gaussian Elimination (GE) is a critical component in the signing procedure of these schemes. In this paper, we provide a masking scheme for GE with back substitution to defend against first- and higher-order attacks. To the best of our knowledge, this work is the first to analyze and propose masking techniques for multivariate- or code-based DS algorithms.
We propose a masked algorithm for transforming a system of linear equations into row-echelon form. This is realized by introducing techniques for efficiently making leading (pivot) elements one while avoiding costly conversions between Boolean and multiplicative masking at all orders. We also propose a technique for efficient masked back substitution, which eventually enables a secure unmasking of the public output. We evaluate the overhead of our countermeasure for several post-quantum candidates and their different security levels at first-, second-, and third-order, including UOV, MAYO, SNOVA, QR-UOV, and MQ-Sign. Notably, the operational cost of first-, second-, and third-order masked GE is 2.3$\times$ higher, and the randomness cost is 1.2$\times$ higher in MAYO compared to UOV for security levels III and V. In contrast, these costs are similar in UOV and MAYO for one version of level I. We also show detailed performance results for masked GE implementations for all three security versions of UOV on the Arm Cortex-M4 and compare them with unmasked results. Our first-order implementations targeting UOV parameters have overheads of factor 6.5$\times$, 5.9$\times$, and 5.7$\times$ compared to the unprotected implementation for NIST security level I, III, and V.
An efficient collision attack on Castryck-Decru-Smith’s hash function
In 2020, Castryck-Decru-Smith constructed a hash function, using the (2,2)-isogeny graph of superspecial principally polarized abelian surfaces. In their construction, the initial surface was chosen from vertices very "close" to the square of a supersingular elliptic curve with a known endomorphism ring.
In this paper, we introduce an algorithm for detecting a collision on their hash function. Under some heuristic assumptions, the time and space complexities of our algorithm are estimated to be $\widetilde{O}(p^{3/10})$, which is smaller than the complexity $\widetilde{O}(p^{3/2})$ that the authors claimed to be necessary to detect a collision, where $p$ is the characteristic of the base field. In the particular case where $p$ has a special form, both the time and space complexities of our algorithm are polynomial in $\log{p}$. We implemented our algorithm in Magma, and succeeded in detecting a collision in 17 hours (using 64 parallel computations) under a parameter setting which the authors had claimed to be 384-bit secure.
zkMarket: Privacy-preserving Digital Data Trade System via Blockchain
In this paper, we introduce zkMarket, a privacy-preserving fair trade system on the blockchain. zkMarket addresses the challenges of transaction privacy and computational efficiency. To ensure transaction privacy, zkMarket is built upon an anonymous transfer protocol. By combining encryption with zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs), it enables the seller and the buyer to trade fairly. Furthermore, by encrypting the decryption key, we make the data registration process more concise and improve the seller's proving time by leveraging commit-and-prove SNARKs (CP-SNARKs) and our novel pseudorandom generator, the matrix-formed PRG (MatPRG).
Our evaluation demonstrates that zkMarket significantly reduces the computational overhead associated with traditional blockchain solutions while maintaining robust security and privacy. The seller can register 1MB of data in 3.2 seconds, while the buyer can generate the trade transaction in 0.2 seconds, and the seller can finalize the trade in 0.4 seconds.
PANTHER: Private Approximate Nearest Neighbor Search in the Single Server Setting
Approximate nearest neighbor search (ANNS), also known as vector search, is an important building block for various applications, such as databases, biometrics, and machine learning. In this work, we are interested in the private ANNS problem, where the client wants to learn (and can only learn) the ANNS results without revealing the query to the server. Previous private ANNS works either suffer from high communication cost (Chen et al., USENIX Security 2020) or work under a weaker security assumption of two non-colluding servers (Servan-Schreiber et al., SP 2022). We present Panther, an efficient private ANNS framework in the single-server setting. Panther achieves its high performance via several novel co-designs of private information retrieval (PIR), secret sharing, garbled circuits, and homomorphic encryption. We conducted extensive experiments with Panther on four public datasets; the results show that Panther can answer an ANNS query on 10 million points in 23 seconds with 318 MB of communication, which is more than 6× faster and 18× more compact than Chen et al.
Universal Adaptor Signatures from Blackbox Multi-Party Computation
Adaptor signatures (AS) extend the functionality of traditional digital signatures by enabling the generation of a pre-signature tied to an instance of a hard NP relation, which can later be turned (adapted) into a full signature upon revealing a corresponding witness. The recent work of Liu et al. [ASIACRYPT 2024] devised a generic AS scheme that can be used for any NP relation---which we here refer to as a universal adaptor signature (UAS) scheme---from any one-way function. However, this generic construction relies on a Karp reduction to the Hamiltonian cycle problem, which adds significant overhead and hinders practical applicability.
In this work, we present an alternative approach to constructing universal adaptor signature schemes, relying on the multi-party-computation-in-the-head (MPCitH) paradigm. This overcomes the reliance on the costly Karp reduction while inheriting the core property of MPCitH---which makes it an invaluable tool in efficient cryptographic protocols---namely, that the construction is black-box with respect to the underlying cryptographic primitive (while remaining non-black-box in the relation being proven). Our framework simplifies the design of UAS and enhances its applicability across a wide range of decentralized applications, such as blockchain and privacy-preserving systems. Our results demonstrate that MPCitH-based UAS schemes offer strong security guarantees, making them a promising tool in the design of real-world cryptographic protocols.
Byte-wise equal property of ARADI
ARADI is a low-latency block cipher proposed by the NSA (National Security Agency) in 2024 for memory encryption. Bellini et al. experimentally demonstrated that in specific cubes of 5-round ARADI, the cube sums are byte-wise equal, for example, to 0x9d9dc5c5. This paper modifies the MILP-based division property algorithm to prove this property and observes that the rotation amount of 8 in ARADI causes cancellations of monomials, allowing us to extend the byte-wise equal property up to 8 rounds. As a result, we obtain distinguishers for rounds 6 and 7 with data complexities of $2^{77}$ and $2^{112}$, respectively, lower than those of previous methods.
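To make the distinguisher concrete, the sketch below shows how a cube-sum test of this kind is evaluated: XOR the cipher outputs over all assignments of the cube bits and test the result for the byte-wise equal pattern. The 32-bit toy cipher is a stand-in (it is not ARADI and need not exhibit the property); only the testing procedure is the point.

```python
# Evaluate a cube sum and test the byte-wise equal pattern (e.g. 0x9d9dc5c5,
# whose bytes satisfy b0 == b1 and b2 == b3). Toy cipher, illustrative only.
from itertools import product

def toy_cipher(x, key, rounds=5):            # 32-bit add/rotate toy rounds
    for r in range(rounds):
        x = (x + key[r % len(key)]) & 0xFFFFFFFF
        x = ((x << 8) | (x >> 24)) & 0xFFFFFFFF   # rotation by 8
    return x

def cube_sum(cube_bits, key, base=0):
    s = 0
    for bits in product([0, 1], repeat=len(cube_bits)):
        x = base
        for b, pos in zip(bits, cube_bits):
            x |= b << pos
        s ^= toy_cipher(x, key)              # XOR over the whole cube
    return s

def bytewise_equal(v):
    b = v.to_bytes(4, "big")
    return b[0] == b[1] and b[2] == b[3]

s = cube_sum(range(10), key=[0x1234, 0xBEEF])
print(hex(s), bytewise_equal(s))
```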
PRIME: Differentially Private Distributed Mean Estimation with Malicious Security
Distributed mean estimation (DME) is a fundamental and important task as it serves as a subroutine in convex optimization, aggregate statistics, and, more generally, federated learning. The inputs for distributed mean estimation (DME) are provided by clients (such as mobile devices), and these inputs often contain sensitive information. Thus, protecting privacy and mitigating the influence of malicious adversaries are critical concerns in DME. A surge of recent works has focused on building multiparty computation (MPC) based protocols tailored for the task of secure aggregation. However, MPC fails to directly address these two issues: (i) the potential manipulation of input by adversaries, and (ii) the leakage of information from the underlying function. This paper presents a novel approach that addresses both these issues. We propose a secure aggregation protocol with a robustness guarantee, effectively protecting the system from "faulty" inputs introduced by malicious clients. Our protocol further ensures differential privacy, so that the underlying function will not leak significant information about individuals.
Notably, this work represents the first comprehensive effort to combine robustness and differential privacy guarantees in the context of DME. In particular, we capture the security of the protocol via a notion of "usefulness" combined with differential privacy inspired by the work of Mironov et al. (CRYPTO 2009) and formally analyze this security guarantee for our protocol.
Improved Attacks for SNOVA by Exploiting Stability under a Group Action
SNOVA is a post-quantum digital signature scheme based on multivariate polynomials. It is a first-round candidate in an ongoing NIST standardization process for post-quantum signatures, where it stands out for its efficiency and compactness. Since its initial submission, there have been several improvements to its security analysis, covering both key recovery and forgery attacks. All these works reduce to solving a structured system of quadratic polynomials, which we refer to as a SNOVA system.
In this work, we propose a polynomial-solving algorithm tailored to SNOVA systems, which exploits the stability of the system under the action of a commutative group of matrices. This new algorithm reduces the complexity of solving SNOVA systems compared to generic ones. We show how to adapt the reconciliation and direct attacks to benefit from the new algorithm. Consequently, we improve the reconciliation attack for all SNOVA parameter sets, with speedup factors ranging between $2^3$ and $2^{22}$. Our algorithm also reduces the complexity of the direct attack for several parameter sets. It is particularly effective for the parameters that give the best performance for SNOVA $(l=4)$, which were not taken below NIST's security threshold by previous attacks. Our attack brings these parameter sets $(l=4)$ below that threshold, with speedup factors between $2^{33}$ and $2^{52}$ over the state of the art.
A Closer Look at Falcon
Falcon is a winner of NIST's six-year post-quantum cryptography standardisation competition. Based on the celebrated full-domain-hash framework of Gentry, Peikert and Vaikuntanathan (GPV) (STOC'08), Falcon leverages NTRU lattices to achieve the most compact signatures among lattice-based schemes.
Its security hinges on a Rényi divergence-based argument for Gaussian samplers, a core element of the scheme. However, the GPV proof, which uses statistical distance to argue closeness of distributions, fails when applied naively to Falcon due to parameter choices resulting in statistical distances as large as $2^{-34}$. Additional implementation-driven deviations from the GPV framework further invalidate the original proof, leaving Falcon without a security proof despite its selection for standardisation.
This work takes a closer look at Falcon and demonstrates that introducing a few minor, conservative modifications allows for the first formal proof of the scheme in the random oracle model. At the heart of our analysis lies an adaptation of the GPV framework to work with the Rényi divergence, along with an optimised method for parameter selection under this measure. Furthermore, we obtain a provable version of the GPV framework over NTRU rings. Both these tools may be of independent interest.
Unfortunately, our analysis shows that, despite our modifications to Falcon-512 and Falcon-1024, we do not achieve strong unforgeability for either scheme. For plain unforgeability, we are able to show that our modified Falcon-512 barely satisfies the claimed 120-bit security target, and for Falcon-1024 we confirm the claimed security level. As such, we recommend revisiting Falcon and its parameters.
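As a rough illustration of the measure driving the analysis, the snippet below numerically evaluates the Rényi divergence between an ideal one-dimensional discrete Gaussian and a slightly mis-parameterized one; the order $a$, widths, and tail cut are illustrative values, not Falcon's.

```python
# Rényi divergence R_a(P||Q) = ( sum_x P(x) * (P(x)/Q(x))^(a-1) )^(1/(a-1))
# between two one-dimensional discrete Gaussians. Parameters illustrative.
import math

def dgauss(sigma, tail=120):
    xs = range(-tail, tail + 1)
    w = [math.exp(-x * x / (2 * sigma ** 2)) for x in xs]
    Z = sum(w)
    return {x: wi / Z for x, wi in zip(xs, w)}

def renyi(P, Q, a):
    # written with the ratio inside the power to avoid underflow
    s = sum(P[x] * (P[x] / Q[x]) ** (a - 1) for x in P)
    return s ** (1.0 / (a - 1))

P = dgauss(sigma=10.0)           # ideal sampler
Q = dgauss(sigma=10.0 * 1.001)   # slightly mis-parameterized implementation
print(renyi(P, Q, a=512))        # close to 1 even when the statistical
                                 # distance would be too lossy to use
```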
Push-Button Verification for BitVM Implementations
Bitcoin, while being the most prominent blockchain with the largest market capitalization, suffers from scalability and throughput limitations that impede the development of ecosystem projects like Bitcoin Decentralized Finance (BTCFi). Recent advancements in BitVM propose a promising Layer 2 (L2) solution to enhance Bitcoin's scalability by enabling complex computations off-chain with on-chain verification. However, Bitcoin's constrained programming environment—characterized by its non-Turing-complete Script language lacking loops and recursion, and strict block size limits—makes developing complex applications labor-intensive, error-prone, and necessitates manual partitioning of scripts. Under this complex programming model, subtle mistakes could lead to irreversible damage in a trustless environment like Bitcoin. Ensuring the correctness and security of such programs becomes paramount.
To address these challenges, we introduce the first formal verification tool for BitVM implementations. Our approach involves designing a register-based, higher-level domain-specific language (DSL) that abstracts away complex stack operations, allowing developers to reason about program correctness more effectively while preserving the semantics of the original Bitcoin Script. We present a formal computational model capturing the semantics of BitVM execution and Bitcoin Script, providing a foundation for rigorous verification. To efficiently handle large programs and complex constraints arising from unrolled computations that simulate loops, we summarize repetitive "loop-style" computations using loop invariant predicates in our DSL. We leverage a counterexample-guided inductive synthesis (CEGIS) procedure to lift low-level Bitcoin Script into our DSL, facilitating efficient verification without sacrificing accuracy. Evaluated on 98 benchmarks from BitVM's SNARK verifier, our tool successfully verifies 94% of cases within seconds, demonstrating its effectiveness in enhancing the security and reliability of BitVM.
ECPM Cryptanalysis Resource Estimation
Elliptic Curve Point Multiplication (ECPM) is a key component of the Elliptic Curve Cryptography (ECC) protocol hierarchy. However, the specific estimation of resources required for this process remains underexplored despite its significance in the cryptanalysis of ECC algorithms, particularly binary ECC over GF$(2^m)$. Given the extensive use of ECC algorithms in various security protocols and devices, such an examination is essential for gaining insight into their cryptanalysis, specifically in terms of precise resource estimates, which serve as a solid basis for further work on solving the Elliptic Curve Discrete Logarithm Problem. Expanding on several significant prior works, we estimate, in what we refer to as ECPM cryptanalysis, the quantum resources (qubits, gates, and circuit depth) required by integrating point addition (PA) and point doubling (PD) into the ECPM scheme, culminating in a Shor's-algorithm-based binary ECC cryptanalysis circuit. Focusing on optimizing depth, we elaborate on and implement the most efficient PD circuit and incorporate optimized Karatsuba multiplication and FLT-based inversion algorithms for the PA and PD operations. Compared to the latest PA-only circuits, our preliminary results showcase significant resource optimization for various ECPM implementations, including single-step ECPM, ECPM with combined or selective PA/PD utilization, and total-step ECPM ($2n$ PD + $2$ PA).
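For readers who want the PA/PD accounting made explicit, the toy sketch below counts the point additions and point doublings consumed by textbook left-to-right double-and-add; a prime-field curve is used for brevity, whereas the paper targets binary curves over GF$(2^m)$, and all parameters are illustrative.

```python
# Count PA/PD operations in classical double-and-add ECPM on a toy curve
# y^2 = x^3 + 2x + 3 over GF(97); (3, 6) lies on the curve.
p, a = 97, 2
O = None                                   # point at infinity

def pd(P):                                 # point doubling
    if P is O or P[1] == 0:
        return O
    x, y = P
    lam = (3 * x * x + a) * pow(2 * y, -1, p) % p
    x3 = (lam * lam - 2 * x) % p
    return (x3, (lam * (x - x3) - y) % p)

def pa(P, Q):                              # point addition, with edge cases
    if P is O: return Q
    if Q is O: return P
    if P == Q: return pd(P)
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2: return O
    lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ecpm(k, G):                            # left-to-right double-and-add
    R, n_pd, n_pa = G, 0, 0
    for bit in bin(k)[3:]:                 # skip '0b' and the leading 1
        R, n_pd = pd(R), n_pd + 1
        if bit == "1":
            R, n_pa = pa(R, G), n_pa + 1
    return R, n_pd, n_pa

R, n_pd, n_pa = ecpm(22, (3, 6))
print(R, f"{n_pd} PD, {n_pa} PA")          # a k-bit scalar: ~k PD, ~k/2 PA
```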
Critical Round in Multi-Round Proofs: Compositions and Transformation to Trapdoor Commitments
In many multi-round public-coin interactive proof systems, challenges in different rounds serve different roles, but a formulation that actively utilizes this aspect has not been studied extensively. In this paper, we propose new notions called critical-round special honest verifier zero-knowledge and critical-round special soundness. Our notions are simple, intuitive, easy to apply, and capture several practical multi-round proof protocols including, but not limited to, those from the MPC-in-the-Head paradigm.
We demonstrate the usefulness of these notions with two fundamental applications where three-round protocols are known to be useful, but multi-round ones generally fail. First, we show that critical-round proofs yield trapdoor commitment schemes. This result also enables the instantiation of post-quantum secure adaptor signatures and threshold ring signatures from MPCitH, resolving open questions in (Haque and Scafuro, PKC 2020) and in (Liu et al., ASIACRYPT 2024). Second, we show that critical-round proofs can be securely composed using the Cramer-Schoenmakers-Damgård method. This solves an open question posed by Abe et al. in CRYPTO 2024.
Overall, these results shed new light on the potential of multi-round proofs in both theoretical and practical cryptographic protocol design.
Compact and Tightly Secure (Anonymous) IBE from Module LWE in the QROM
We present a new compact and tightly secure (anonymous) identity-based encryption (IBE) scheme based on structured lattices. This is the first IBE scheme that is (asymptotically) as compact as the most practical NTRU-based schemes and tightly secure under the module learning with errors (MLWE) assumption, a standard lattice assumption, in the (quantum) random oracle model. In particular, our IBE scheme is the most compact lattice-based scheme (aside from NTRU-based schemes). We design our IBE scheme by instantiating the framework of Gentry, Peikert, and Vaikuntanathan (STOC'08) with the compact trapdoor proposed by Yu, Jia, and Wang (CRYPTO'23). The tightness of our IBE scheme is achieved by extending the proof technique of Katsumata et al. (ASIACRYPT'18, JoC'21) to the Hermite normal form setting. To achieve this, we develop some new results on module lattices that may be of independent interest.
Fully Homomorphic Encryption with Efficient Public Verification
We present an efficient Publicly Verifiable Fully Homomorphic Encryption scheme that, along with being able to evaluate arbitrary boolean circuits over ciphertexts, also generates a succinct proof of correct homomorphic computation. Our scheme is based on FHEW proposed by Ducas and Micciancio (Eurocrypt'15), and we incorporate the GINX homomorphic accumulator (Eurocrypt'16) for improved bootstrapping efficiency. In order to generate the proof efficiently, we generalize the widely used Rank-1 Constraint System (R1CS) to the ring setting and obtain Ring R1CS, to natively express homomorphic computation in FHEW.
In particular, we develop techniques to efficiently express in our Ring R1CS the "non-arithmetic" operations, such as gadget decomposition and modulus switching, used in the FHEW construction. We further construct a SNARG for Ring R1CS instances by translating a Ring R1CS instance into a sum-check protocol over polynomials and then compiling it into a succinct non-interactive proof using the lattice-based polynomial commitment scheme of Cini, Malavolta, Nguyen, and Wee (Crypto'24). Putting these together, our publicly verifiable FHE scheme relies on standard hardness assumptions about lattice problems and generates a succinct proof of the homomorphic computation of a circuit $C$ in time $O(|C|^2\cdot poly(\lambda))$ and of size $O(\log^2{|C|}\cdot poly(\lambda))$. Moreover, our scheme achieves the recently proposed IND-SA (indistinguishability under semi-active attack) security of Walter (ePrint 2024/1207), which exactly captures client data privacy when a homomorphic computation can be verified.
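As a reminder of the base relation being generalized, the snippet below checks satisfaction of a plain R1CS instance $(Az)\circ(Bz)=Cz$; in Ring R1CS the matrix and witness entries live in a polynomial ring rather than $\mathbb{Z}_q$. The toy modulus and constraint are illustrative.

```python
# Check R1CS satisfaction: (A z) * (B z) == C z elementwise, mod q.
# The single constraint below encodes x * x = y with witness z = (1, x, y).
import numpy as np

q = 97
z = np.array([1, 5, 25]) % q                # x = 5, y = 25
A = np.array([[0, 1, 0]])                   # A z = x
B = np.array([[0, 1, 0]])                   # B z = x
C = np.array([[0, 0, 1]])                   # C z = y

def r1cs_satisfied(A, B, C, z, q):
    return np.array_equal((A @ z) * (B @ z) % q, (C @ z) % q)

print(r1cs_satisfied(A, B, C, z, q))        # True: 5 * 5 = 25
```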
Quantum Black-Box Separations: Succinct Non-Interactive Arguments from Falsifiable Assumptions
In their seminal work, Gentry and Wichs (STOC'11) established an impossibility result for the task of constructing an adaptively-sound SNARG via black-box reduction from a falsifiable assumption.
An exciting set of recent SNARG constructions demonstrated that, if one adopts a weaker but still quite meaningful notion of adaptive soundness, then this impossibility no longer holds (Waters-Wu, Waters-Zhandry, Mathialagan-Peters-Vaikuntanathan, ePrint'24). These fascinating new results raise an intriguing possibility: is there a way to remove this slight weakening of adaptive soundness, thereby completely circumventing the Gentry-Wichs impossibility?
A natural route to closing this gap would be to use a quantum black-box reduction, i.e., a reduction that can query the SNARG adversary on superpositions of inputs. This would take advantage of the fact that Gentry-Wichs only consider classical reductions. In this work, we show that this approach cannot succeed. Specifically, we extend the Gentry-Wichs impossibility result to quantum black-box reductions, and thereby establish an important limit on the power of such reductions.
Homomorphic Matrix Operations under Bicyclic Encoding
Homomorphically encrypted matrix operations are extensively used in various privacy-preserving applications. Consequently, reducing the cost of encrypted matrix operations is a crucial topic on which numerous studies have been conducted. In this paper, we introduce a novel matrix encoding method, named bicyclic encoding, under which we propose two new algorithms, BMM-I and BMM-II, for encrypted matrix multiplication. BMM-II outperforms the state-of-the-art algorithms in theory, while BMM-I, combined with the segmented strategy, performs well in practice, particularly for matrices of high dimension. Another noteworthy advantage of bicyclic encoding is that it allows transposing an encrypted matrix entirely for free. A comprehensive experimental study based on our proof-of-concept implementation shows that each algorithm introduced in this paper outperforms existing algorithms in specific scenarios, achieving speedups ranging from 2x to 38x.
Resilience-Optimal Lightweight High-threshold Asynchronous Verifiable Secret Sharing
Shoup and Smart (SS24) recently introduced a lightweight asynchronous verifiable secret sharing (AVSS) protocol with optimal resilience directly from cryptographic hash functions (JoC 2024), offering plausible quantum resilience and computational efficiency. However, SS24 AVSS only achieves standard secrecy to keep the secret confidential against $n/3$ corrupted parties \textit{if no honest party publishes its share}. In contrast, from ``heavyweight'' public-key cryptography, one can realize so-called \textit{high-threshold} asynchronous verifiable secret sharing (HAVSS), with a stronger \textit{high-threshold} secrecy to tolerate $n/3$ corrupted parties and additional leaked shares from $n/3$ honest parties. This raises the following question: can we bridge the remaining gap to design an efficient HAVSS using only lightweight cryptography?
We answer the question in the affirmative by presenting a lightweight HAVSS with optimal resilience. When executed across $n$ parties to share a secret, it attains a worst-case communication complexity of $\widetilde{O}(\lambda n^3)$ (where $\lambda$ is the cryptographic security parameter) and realizes high-threshold secrecy, tolerating a fully asynchronous adversary that can control $t= \lfloor \frac{n-1}{3} \rfloor$ malicious parties and also learn $t$ additional secret shares from some honest parties.
The (worst-case) communication complexity of our lightweight HAVSS protocol matches that of SS24 AVSS---the state-of-the-art lightweight AVSS without high-threshold secrecy.
Notably, our design is a direct and concretely efficient reduction to hash functions in the random oracle model, without extra setup assumptions like CRS/PKI or heavy intermediate steps like hash-based zk-STARK.
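To see what the "high-threshold" target buys, the plain (non-verifiable) Shamir sketch below shares a secret with a degree-$2t$ polynomial: $t$ corrupted shares plus $t$ leaked honest shares still fall short of the $2t+1$ points needed to interpolate. The hash-based verifiability and asynchrony are the paper's contribution and are not modeled here.

```python
# Degree-2t Shamir sharing: any 2t shares reveal nothing, 2t+1 reconstruct.
import random

q = 2**61 - 1                                  # prime field

def share(secret, deg, n):
    coeffs = [secret] + [random.randrange(q) for _ in range(deg)]
    return [(i, sum(c * pow(i, e, q) for e, c in enumerate(coeffs)) % q)
            for i in range(1, n + 1)]

def reconstruct(points):                       # Lagrange interpolation at 0
    s = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % q
                den = den * (xi - xj) % q
        s = (s + yi * num * pow(den, q - 2, q)) % q
    return s

n, t = 10, 3
shares = share(1234567, deg=2 * t, n=n)        # degree-2t polynomial
assert reconstruct(shares[: 2 * t + 1]) == 1234567
# any 2t = 6 shares are consistent with every possible secret
```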
Somewhat Homomorphic Encryption from Linear Homomorphism and Sparse LPN
We construct somewhat homomorphic encryption schemes from the learning sparse parities with noise (sparse LPN) problem, along with an assumption that implies linearly homomorphic encryption (e.g., the decisional Diffie-Hellman or decisional composite residuosity assumptions). Our resulting schemes support an a-priori bounded number of homomorphic operations: $O(\log \lambda/\log \log \lambda)$ multiplications followed by poly($\lambda$) additions, where $\lambda \in \mathbb{N}$ is a security parameter. These schemes have compact ciphertexts: after homomorphic evaluation, the bit-length of each ciphertext is a fixed polynomial in the security parameter $\lambda$, independent of the number of homomorphic operations applied to it. This gives the first somewhat homomorphic encryption schemes that can evaluate the class of bounded-degree polynomials with a bounded number of monomials without relying on lattice assumptions or bilinear maps.
Much like in the Gentry-Sahai-Waters fully homomorphic encryption scheme, ciphertexts in our scheme are matrices, homomorphic addition is matrix addition, and homomorphic multiplication is matrix multiplication. Moreover, when encrypting many messages at once and performing many homomorphic evaluations at once, the bit-length of ciphertexts in some of our schemes (before and after homomorphic evaluation) can be arbitrarily close to the bit-length of the plaintexts. The main limitation of our schemes is that they require a large evaluation key, whose size scales with the complexity of the homomorphic computation performed, though this key can be re-used across any polynomial number of encryptions and evaluations.
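The matrix structure can be made concrete with a noise-free linear-algebra toy: ciphertexts of the form $C_i = m_i I + N_i$ with $N_i s = 0$ act on the secret vector $s$ with eigenvalues that add under matrix addition and multiply under matrix multiplication. Real schemes replace the exact annihilation with (sparse-LPN-style) noise; everything below is illustrative.

```python
# Noise-free GSW-flavored demo: C_i = m_i * I + N_i with N_i @ s = 0, so
# (C1 + C2) @ s = (m1 + m2) s  and  (C1 @ C2) @ s = (m1 * m2) s.
import numpy as np

rng = np.random.default_rng(0)
d = 4
s = rng.integers(1, 10, size=d).astype(float)
proj = np.eye(d) - np.outer(s, s) / (s @ s)   # projector with proj @ s = 0

def encrypt(m):
    N = rng.standard_normal((d, d)) @ proj    # guarantees N @ s = 0
    return m * np.eye(d) + N

C1, C2 = encrypt(3.0), encrypt(5.0)
assert np.allclose((C1 + C2) @ s, 8.0 * s)    # homomorphic addition
assert np.allclose((C1 @ C2) @ s, 15.0 * s)   # homomorphic multiplication
```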
A Forgery Attack on a Code-based Signature Scheme
With the advent of quantum computers, the security of cryptographic primitives, including digital signature schemes, is threatened. To deal with this issue, signature schemes designed to resist such computers, known as post-quantum signature schemes, have been introduced. One group of these schemes is based on hard problems from coding theory and is called code-based cryptography. Several code-based signature schemes are inspired by the McEliece encryption scheme, using three matrices (a non-singular matrix, a parity-check matrix, and a permutation matrix) as the only components of the private key, and their product as the public key. In this paper, we focus on the analysis of a class of such signature schemes. To this end, we first prove that the linear relationships between the columns of the parity-check/generator matrix appear in the public key matrix, and by exploiting this feature we mount a forgery attack on one of the signature schemes of this class as evidence. The complexity of this attack is $O(n^4)$.
A comprehensive analysis of Regev's quantum algorithm
Public key cryptography can be based on integer factorization and the discrete logarithm problem (DLP), applicable in multiplicative groups and elliptic curves. Regev's recent quantum algorithm was initially designed for factorization and was later extended to the DLP in the multiplicative group. In this article, we further extend the algorithm to address the DLP for elliptic curves. Notably, based on celebrated conjectures in number theory, Regev's algorithm is asymptotically faster than Shor's algorithm for elliptic curves. Our analysis covers all cases where Regev's algorithm can be applied. We examine the general framework of Regev's algorithm and offer a geometric description of its parameters. This preliminary analysis enables us to certify the success of the algorithm on a particular instance before running it. In the case of integer factorization, we demonstrate that there exists an infinite family of RSA moduli for which the algorithm always fails. On the other hand, when the parameters align with the Gaussian heuristics, we prove that Regev's algorithm succeeds. By noting that the algorithm naturally adapts to the multidimensional DLP, we prove that it succeeds for a certain range of parameters.
On the Sample Complexity of Linear Code Equivalence for all Code Rates
In parallel with the standardization of lattice-based cryptosystems, the research community in post-quantum cryptography has focused on non-lattice-based hard problems for constructing public-key cryptographic primitives. The Linear Code Equivalence (LCE) Problem has gained attention regarding its practical applications and cryptanalysis.
Recent advancements, including the LESS signature scheme and its candidacy in the NIST standardization for additional signatures, supported LCE as a foundation for post-quantum cryptographic primitives. However, recent cryptanalytic results have revealed vulnerabilities in LCE-based constructions when multiple related public keys are available for one specific code rate. In this work, we generalize the proposed attacks to cover all code rates. We show that the complexity of recovering the private key from multiple public keys is significantly reduced for any code rate scenario. Thus, we advise against constructing specific cryptographic primitives using LCE.
$\mathsf{Graphiti}$: Secure Graph Computation Made More Scalable
Privacy-preserving graph analysis allows performing computations on graphs that store sensitive information while ensuring all the information about the topology of the graph, as well as data associated with the nodes and edges, remains hidden. The current work addresses this problem by designing a highly scalable framework, $\mathsf{Graphiti}$, that allows securely realising any graph algorithm. $\mathsf{Graphiti}$ relies on the technique of secure multiparty computation (MPC) to design a generic framework that improves over the state-of-the-art framework of GraphSC by Araki et al. (CCS'21). The key technical contribution is that $\mathsf{Graphiti}$ has round complexity independent of the graph size, which in turn allows attaining the desired scalability. Specifically, this is achieved by (i) decoupling the $\mathsf{Scatter}$ primitive of GraphSC into separate operations of $\mathsf{Propagate}$ and $\mathsf{ApplyE}$, (ii) designing a novel constant-round approach to realise $\mathsf{Propagate}$, as well as (iii) designing a novel constant-round approach to realise the $\mathsf{Gather}$ primitive of GraphSC by leveraging the linearity of the aggregation operation. We benchmark the performance of $\mathsf{Graphiti}$ for the application of contact tracing via BFS for 10 hops and observe that it takes less than 2 minutes when computing over a graph of size $10^7$. Concretely, it improves over the state of the art by up to a factor of $1034\times$ in online run time. Similar to GraphSC by Araki et al., since $\mathsf{Graphiti}$ relies on a secure protocol for shuffle, we additionally design a shuffle protocol secure against a semi-honest adversary in the two-party setting with a helper. Given the versatility of the shuffle protocol, the designed solution is of independent interest. Hence, we also benchmark the performance of the designed shuffle, where we observe improvements of up to $1.83\times$ in online run time when considering an input vector of size $10^7$, in comparison to the state of the art in the considered setting.
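The Propagate/ApplyE/Gather decomposition can be previewed in the clear: each BFS hop copies node state onto edges, does edge-local work, and linearly aggregates incoming messages back into node state. The sketch below is plaintext only; the constant-round MPC realizations of each phase are the paper's contribution and are not shown.

```python
# Plaintext message-passing BFS in the Scatter/Gather style: Propagate puts
# node state on edges, Gather aggregates linearly (here a sum clipped to 1).
def bfs_hops(n, edges, sources, hops):
    reached = [1 if v in sources else 0 for v in range(n)]
    for _ in range(hops):
        msgs = [(v, reached[u]) for u, v in edges]   # Propagate + ApplyE
        incoming = [0] * n
        for v, m in msgs:                            # Gather: linear
            incoming[v] += m                         # aggregation
        reached = [min(1, r + i) for r, i in zip(reached, incoming)]
    return reached

edges = [(0, 1), (1, 2), (2, 3), (1, 4)]
print(bfs_hops(5, edges, sources={0}, hops=2))       # [1, 1, 1, 0, 1]
```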
Exponential sums in linear cryptanalysis
It is shown how bounds on exponential sums derived from modern algebraic geometry, and $\ell$-adic cohomology specifically, can be used to upper bound the absolute correlations of linear approximations for cryptographic constructions of low algebraic degree. This is illustrated by applying results of Deligne, Denef and Loeser, and Rojas-León to obtain correlation bounds for a generalization of the Butterfly construction, three-round Feistel ciphers, and a generalization of the Flystel construction. For each of these constructions, bounds obtained using other methods are significantly weaker. In the case of the Flystel construction, our bounds resolve a conjecture by the designers.
Correlation bounds of this type are relevant for the development of security arguments against linear cryptanalysis, especially in the weak-key setting or for primitives that do not involve a key. Since the methods used in this paper are applicable to constructions defined over arbitrary finite fields, the results are also relevant for arithmetization-oriented primitives such as Anemoi, which uses S-boxes based on the Flystel construction.
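For concreteness, the snippet below computes the correlation of a linear approximation $\langle a,x\rangle \oplus \langle b,S(x)\rangle$ for a textbook 4-bit S-box; these are exactly the absolute correlations that such bounds control, though the constructions in the paper live over larger fields.

```python
# Correlation of the linear approximation a.x XOR b.S(x) for a 4-bit S-box:
# corr = 2 * Pr[a.x == b.S(x)] - 1, computed by exhaustive enumeration.
def parity(v):
    return bin(v).count("1") & 1

def correlation(sbox, a, b, n=4):
    agree = sum(1 for x in range(2 ** n)
                if parity(a & x) == parity(b & sbox[x]))
    return 2 * agree / 2 ** n - 1

S = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
     0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]   # textbook 4-bit S-box
print(max(abs(correlation(S, a, b))
          for a in range(1, 16) for b in range(1, 16)))
```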
PQNTRU: Acceleration of NTRU-based Schemes via Customized Post-Quantum Processor
Post-quantum cryptography (PQC) has rapidly evolved in response to the emergence of quantum computers, with the US National Institute of Standards and Technology (NIST) selecting four finalist algorithms for PQC standardization in 2022, including the Falcon digital signature scheme. The latest round of digital signature proposals introduced Hawk; both Falcon and Hawk are based on the NTRU lattice, offering compact signatures and fast generation and verification suitable for deployment on resource-constrained Internet-of-Things (IoT) devices. Despite the popularity of CRYSTALS-Dilithium and CRYSTALS-Kyber, research on NTRU-based schemes has been limited due to their complex algorithms and operations. Falcon's and Hawk's performance remains constrained by the lack of parallel execution in crucial operations like the Number Theoretic Transform (NTT) and the Fast Fourier Transform (FFT), with data dependencies being a significant bottleneck. This paper enhances the NTRU-based schemes Falcon and Hawk through hardware/software co-design on a customized Single-Instruction-Multiple-Data (SIMD) processor, proposing new SIMD hardware units and instructions to expedite these schemes, along with software optimizations to boost performance. Our NTT optimization includes a novel layer-merging technique for SIMD architectures to reduce memory accesses, and the use of modular algorithms (signed Montgomery and improved Plantard) targets various modulus data widths to enhance performance. We explore applying layer merging to accelerate fixed-point FFT at the SIMD instruction level and devise a dual-issue parser to streamline assembly code organization and maximize dual-issue utilization. A System-on-Chip (SoC) architecture is devised to improve the practical applicability of the processor in real-world scenarios. Evaluation on 28 nm technology and an FPGA platform shows that our design and optimizations can increase the performance of Hawk signature generation and verification by over 7 times.
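The "layers" in question can be seen in a radix-2 iterative NTT, where each pass of the outer loop is one butterfly layer; the merging technique fuses adjacent layers into a single sweep over memory to cut loads and stores. The sketch below uses a toy modulus and length and shows only the unmerged baseline.

```python
# Radix-2 iterative NTT over GF(17) with n = 8; each outer-loop pass is one
# butterfly layer (the unit that layer merging fuses pairwise).
q, n = 17, 8
w = pow(3, (q - 1) // n, q)          # 3 generates GF(17)*, so w has order 8

def bit_reverse(a):
    out, j, m = a[:], 0, len(a)
    for i in range(1, m):
        bit = m >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            out[i], out[j] = out[j], out[i]
    return out

def ntt(a):
    a = bit_reverse(a)
    length = 2
    while length <= n:               # one pass = one butterfly layer
        wl = pow(w, n // length, q)
        for s in range(0, n, length):
            x = 1
            for k in range(s, s + length // 2):
                u, v = a[k], a[k + length // 2] * x % q
                a[k], a[k + length // 2] = (u + v) % q, (u - v) % q
                x = x * wl % q
        length *= 2
    return a

a = [1, 2, 3, 4, 5, 6, 7, 8]
assert ntt(a) == [sum(a[j] * pow(w, j * k, q) for j in range(n)) % q
                  for k in range(n)]   # matches the naive transform
```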
HTCNN: High-Throughput Batch CNN Inference with Homomorphic Encryption for Edge Computing
Homomorphic Encryption (HE) technology allows for processing encrypted data, breaking through data isolation barriers and providing a promising solution for privacy-preserving computation. The integration of HE technology into Convolutional Neural Network (CNN) inference shows potential for addressing privacy issues in identity verification, medical imaging diagnosis, and various other applications. The CKKS HE algorithm stands out as a popular option for homomorphic CNN inference due to its capability to handle real-number computations. However, challenges such as computational delays and resource overhead present significant obstacles to the practical implementation of homomorphic CNN inference, largely due to the complex nature of HE operations. In addition, current methods for speeding up homomorphic CNN inference primarily address individual images or large batches of input images, lacking a solution for efficiently processing a moderate number of input images, which is more suitable for edge computing applications. In response to these challenges, we introduce a novel leveled homomorphic CNN inference scheme aimed at reducing latency and improving throughput using the CKKS scheme. Our inference strategy maps multiple inputs to a set of ciphertexts by exploiting the sliding-window properties of convolutions to utilize CKKS's inherent Single-Instruction-Multiple-Data (SIMD) capability. To mitigate the delay associated with homomorphic CNN inference, we introduce optimization techniques, including mask-weight merging, rotation multiplexing, stride convolution segmentation, and folding rotations. The efficacy of our homomorphic inference scheme is demonstrated through evaluations on the MNIST and CIFAR-10 datasets. Specifically, results on the MNIST dataset with a single CPU thread show that inference for 163 images can be completed in 10.4 seconds with an accuracy of 98.9%, a 6.9$\times$ throughput improvement over state-of-the-art works. Comparative analysis with existing methodologies highlights the superior performance of our proposed inference scheme in terms of latency, throughput, communication overhead, and memory utilization.
DEEP Commitments and Their Applications
This note studies a method of committing to a polynomial in a way that allows executions of low degree tests such as FRI to be batched and even deferred. In particular, it achieves (unlimited-depth) aggregation for STARKs.
Offline-Online Indifferentiability of Cryptographic Systems
The indifferentiability framework has become a standard methodology that enables us to study the security of cryptographic constructions in idealized models of computation. Unfortunately, while indifferentiability provides strong guarantees whenever the security of a construction is captured by a ``single-stage'' security game, it may generally provide no meaningful guarantees when the security is captured by a ``multi-stage'' one. In particular, the indifferentiability framework does not capture offline-online games, where the adversary can perform an extensive offline computation to later speed up the online phase. Such security games are extremely common, both in practice and in theory. Over the past decade, there have been numerous attempts to meaningfully extend the indifferentiability framework to offline-online games; however, they have all ultimately met with little success.
In this work, our contribution is threefold. First, we propose an extension of the classical indifferentiability framework, which we refer to as *offline-online indifferentiability*, that applies in the context of attackers with an expensive offline phase (à la Ghoshal and Tessaro, CRYPTO '23). Second, we show that our notion lends itself to a natural and meaningful composition theorem for offline-online security games. Lastly, as our main technical contribution, we analyze the offline-online indifferentiability of two classical variants of the Merkle-Damgård hashing mechanism, one where the key is fed only to the first block in the chain and the other where the key is fed to each block in the chain. For both constructions, we prove a *tight* bound on their offline-online indifferentiability (i.e., an upper bound and an attack that matches it). Notably, our bound for the second variant shows that the construction satisfies *optimal* offline-online indifferentiability.
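The two keyed Merkle-Damgård variants in question are easy to state in code; the sketch below instantiates the compression function with SHA-256 purely for illustration (the analysis treats it as ideal), and the key/IV handling details are simplified.

```python
# Two keyed Merkle-Damgard variants: key fed only to the first block vs.
# key fed to every block. SHA-256 stands in for an ideal compression function.
import hashlib

def h(chain, block):                       # idealized compression function
    return hashlib.sha256(chain + block).digest()

def md_key_first(key, blocks, iv=b"\x00" * 32):
    state = h(iv, key)                     # key enters only the first block
    for b in blocks:
        state = h(state, b)
    return state

def md_key_every(key, blocks, iv=b"\x00" * 32):
    state = iv
    for b in blocks:
        state = h(state, key + b)          # key enters every block
    return state

msg = [b"block-1", b"block-2"]
print(md_key_first(b"k", msg).hex()[:16], md_key_every(b"k", msg).hex()[:16])
```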
Robust Double Auctions for Resource Allocation
In a zero-knowledge proof market, there are two sides. On one side are bidders, each with a proof of some size and a private value for having that proof computed. On the other side are distributors (also called sellers), who have compute available to process the bidders' proofs and incur a private, size-dependent cost for doing so. More broadly, this setting applies to any online resource allocation where bidders desire a certain amount of a resource and distributors can provide it. In this work, we study how to devise double auctions for this setting that are truthful for users, weakly group-strategy-proof, weakly budget-balanced, computationally efficient, and achieve a good approximation of the maximum welfare attainable by the set of bids. We denote such auctions as $\textit{robust}$.
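As a point of reference (not the paper's mechanism), McAfee's classical trade-reduction double auction achieves truthfulness and weak budget balance by sacrificing at most one efficient trade; a sketch:

```python
# McAfee-style trade reduction: find the efficient trade count k, then either
# trade all k pairs at a single market-clearing price or drop the k-th trade
# and price buyers at b_k and sellers at a_k (weak budget balance).
def mcafee(bids, asks):
    bids = sorted(bids, reverse=True)            # buyers' values, high first
    asks = sorted(asks)                          # sellers' costs, low first
    k = 0
    while k < min(len(bids), len(asks)) and bids[k] >= asks[k]:
        k += 1                                   # k = efficient trade count
    if k == 0:
        return 0, None, None
    p = (bids[k] + asks[k]) / 2 if k < min(len(bids), len(asks)) else None
    if p is not None and asks[k - 1] <= p <= bids[k - 1]:
        return k, p, p                           # all k trades at price p
    return k - 1, bids[k - 1], asks[k - 1]       # trade reduction

print(mcafee([10, 8, 6, 2], [1, 3, 7, 9]))       # (2, 6.5, 6.5)
```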
Revisiting “Improving the security of multi-party quantum key agreement with five-qubit Brown states”
In 2018, Cai et al. proposed a multi-party quantum key agreement protocol with five-qubit Brown states and argued for its security. However, Elhadad, Ahmed, et al. found that the scheme cannot resist a collusion attack launched by legitimate participants. They suggested a modification and declared that their improved version resists this type of attack. Nevertheless, our analysis shows that the collusion attack is still possible. We therefore propose a straightforward modification that prevents the attack. Our analysis concludes that the modification meets the security requirements and resists the collusion attack, which is essential for a quantum key agreement scheme.