Papers updated in last 31 days (354 results)
Thunderbolt: A Formally Verified Protocol for Off-Chain Bitcoin Transfers
We present Bitcoin Thunderbolt, a novel off-chain protocol for asynchronous, secure transfer of Bitcoin UTXOs between uncoordinated users. Unlike prior solutions such as payment channels or the Lightning Network, Bitcoin Thunderbolt requires no prior trust, direct interaction, or continuous connectivity between sender and receiver. At its core, Bitcoin Thunderbolt employs a Byzantine fault-tolerant committee to manage threshold Schnorr signatures, enabling secure ownership delegation and on-chain finalization.
Our design supports recursive, off-chain UTXO transfers using tweakable, verifiable signature components. The protocol tolerates up to $f$ malicious nodes in a $3f+1$ committee and ensures correctness, consistency, and one-time spendability under asynchronous network conditions.
We formally verify Bitcoin Thunderbolt’s key security properties, namely unforgeability, ownership soundness, and liveness, using the Tamarin prover. Our results demonstrate that Thunderbolt provides robust, scalable, and non-interactive off-chain Bitcoin transfers, significantly expanding the practical utility of Bitcoin for decentralized applications.
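A minimal sketch, in Python, of the kind of additively tweaked Schnorr key that the ``tweakable, verifiable signature components'' above suggest; the toy group, the names, and the delegation flow are illustrative assumptions, not the Thunderbolt protocol itself:
```python
import hashlib, secrets

# Toy Schnorr group (illustrative sizes only): p = 2q + 1, g generates
# the order-q subgroup of quadratic residues mod p.
p, q, g = 2039, 1019, 4

def keygen():
    x = secrets.randbelow(q)          # secret key
    return x, pow(g, x, p)            # (x, y = g^x)

def tweak_pub(y, t):
    # Anyone can publicly tweak a key: y' = y * g^t = g^(x + t).
    return (y * pow(g, t, p)) % p

def sign(x, msg):
    k = secrets.randbelow(q)
    R = pow(g, k, p)
    c = int.from_bytes(hashlib.sha256(f"{R}|{msg}".encode()).digest(), "big") % q
    return R, (k + c * x) % q

def verify(y, msg, sig):
    R, s = sig
    c = int.from_bytes(hashlib.sha256(f"{R}|{msg}".encode()).digest(), "big") % q
    return pow(g, s, p) == (R * pow(y, c, p)) % p

x, y = keygen()
t = secrets.randbelow(q)              # hypothetical delegation tweak
assert verify(tweak_pub(y, t), "utxo-transfer", sign((x + t) % q, "utxo-transfer"))
```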
New Approaches for Estimating the Bias of Differential-Linear Distinguishers (Full Version)
Differential-linear cryptanalysis was introduced by Langford and Hellman in 1994 and has been extensively studied since then. In 2019, Bar-On et al. presented the Differential-Linear Connectivity Table (DLCT), which connects the differential part and the linear part, so that an attacked cipher is divided into three subciphers: the differential part, the DLCT part, and the linear part.
In this paper, we first present an accurate mathematical formula which establishes a relation between differential-linear and truncated differential cryptanalysis. Using the formula, the bias estimate of a differential-linear distinguisher can be converted into the probability calculations of a series of truncated differentials. Then, we propose a novel and natural concept, the TDT, which can be used to accelerate the calculation of the probabilities of truncated differentials. Based on the formula and the TDT, we propose two novel approaches for estimating the bias of a differential-linear distinguisher. We demonstrate the accuracy and efficiency of our new approaches by applying them to 5 symmetric-key primitives: Ascon, Serpent, KNOT, AES, and CLEFIA. For Ascon and Serpent, we update the best known differential-linear distinguishers. For KNOT, AES, and CLEFIA, we give, for the first time, the theoretical differential-linear biases for different rounds.
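For orientation, the classical independence-based heuristic (which the exact formula above refines) estimates the bias of a differential-linear distinguisher over $E = E_2 \circ E_1$, built from a differential $\Delta \to \Delta'$ over $E_1$ with probability $p$ and a linear approximation over $E_2$ with bias $q$, as
\[
\varepsilon \;\approx\; 2\,p\,q^{2}.
\]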
UTRA: Universal Token Reusability Attack and Verifiable Delegatable Order-Revealing Encryption
As datasets grow, users increasingly rely on cloud services for data storage and processing. Consequently, concerns regarding data protection and the practical use of encrypted data have emerged as significant challenges. One promising solution is order-revealing encryption (ORE), which enables efficient operations on encrypted numerical data. To support distributed environments with different users, delegatable ORE (DORE) extends this functionality to multi-client settings, enabling order comparisons between ciphertexts encrypted under different secret keys. However, Hahn et al. presented a token forgery attack against DORE under a specific threat model and introduced the secure DORE (SEDORE) scheme as a countermeasure. Despite this enhancement, we claim that SEDORE remains vulnerable under the same threat model.
In this paper, we present a novel Universal Token Reusability Attack, which exposes a critical vulnerability in SEDORE with the identical threat model. To mitigate this, we introduce the concept of verifiable delegatable order-revealing encryption (VDORE), along with a formal definition of token unforgeability. Building on this, we design a new scheme, Token Unforgeable DORE ($\mathsf{TUDORE}$), which ensures token unforgeability.
Moreover, $\mathsf{TUDORE}$ achieves 1.5× faster token generation than SEDORE with enhanced security.
Private SCT Auditing, Revisited
In order for a client to securely connect to a server on the web, the client must trust certificate authorities (CAs) only to issue certificates to the legitimate operator of the server. If a certificate is mis-issued, it is possible for an attacker to impersonate the server to the client. The goal of Certificate Transparency (CT) is to log every certificate issued in a manner that allows anyone to audit the logs for mis-issuance. A client can even audit a CT log itself, but this would leak sensitive browsing data to the log operator. As a result, client-side audits are rare in practice.
In this work, we revisit private CT auditing from a real-world perspective. Our study is motivated by recent changes to the CT ecosystem and advancements in Private Information Retrieval (PIR). First, we find that checking for inclusion of Signed Certificate Timestamps (SCTs) in a log — the audit performed by clients — is now possible with PIR in under a second and under 100kb of communication, with minor adjustments to the protocol that have been proposed previously. Our results also show how to scale audits by using existing batching techniques and the algebraic structure of the PIR protocols, in particular to obtain certificate hashes included in the log.
Since PIR protocols are more performant with smaller databases, we also suggest a number of strategies to lower the size of the SCT database for audits. Our key observation is that the web will likely transition to a new model for certificate issuance. While this transition is primarily motivated by the need to adapt the PKI to larger, post-quantum signature schemes, it also removes the need for SCT audits in most cases. We present the first estimates of how this transition may impact SCT auditing, based on data gathered from public CT logs. We find that large scale deployment of the new issuance model may reduce the number of SCT audits needed by a factor of 1,000, making PIR-based auditing practical to deploy.
Leap: A Fast, Lattice-based OPRF With Application to Private Set Intersection
Oblivious pseudorandom functions (OPRFs) are an important primitive in privacy-preserving cryptographic protocols. The growing interest in OPRFs, both in theory and practice, has led to the development of numerous constructions and variations. However, most of these constructions rely on classical assumptions. Potential future quantum attacks may limit the practicality of those OPRFs for real-world applications.
To close this gap, we introduce Leap, a novel OPRF based on heuristic lattice assumptions. Fundamentally, Leap builds upon the Spring [BBL+15] pseudorandom function (PRF), which relies on the learning with rounding assumption, and integrates techniques from multi-party computation, specifically Oblivious Transfer (OT) and Oblivious Linear Evaluation (OLE). With this combination of oblivious protocols, we construct an OPRF that evaluates in less than a millisecond on a modern computer.
Efficiency-wise, our prototype implementation achieves computation times of just 11 microseconds for the client and 750 microseconds for the server, excluding some base OT preprocessing overhead. Moreover, Leap requires an online communication cost of 23 kB per evaluation, where the client only has to send around 380 bytes online. To demonstrate the practical applicability of Leap, we present an efficient private set intersection (PSI) protocol built on top of Leap. This application highlights the potential for the integration of Leap into various privacy-preserving applications: We can compute an unbalanced set intersection with set sizes of 2^24 and 2^15 in under a minute of online time and just over two minutes overall.
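As a plaintext mock of where an OPRF slots into PSI: in the sketch below the oblivious evaluation is replaced by a direct PRF call, and HMAC stands in for Leap's lattice PRF; all names and values are illustrative, not the paper's protocol.
```python
import hmac, hashlib

def F(k, x):                          # stand-in PRF; Leap uses the lattice-based Spring PRF
    return hmac.new(k, x.encode(), hashlib.sha256).digest()

k = b"server-oprf-key"                # held by the server only
server_set = {"alice@example.com", "bob@example.com"}
client_set = {"bob@example.com", "carol@example.com"}

server_digests = {F(k, s) for s in server_set}      # sent to the client
client_digests = {F(k, c): c for c in client_set}   # obtained via the OPRF in reality
print({c for d, c in client_digests.items() if d in server_digests})
# -> {'bob@example.com'}
```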
SNAIL: Verifiable Computation within 30% of Native Speed
SNAIL (Succinct, Non-interactive, Alon-compressed, Instant argument for Layered circuits) turns any depth-\(d\) arithmetic circuit into a non-interactive argument whose prover runs within a factor \(1 + c(d,k,n)\) of plain circuit execution, where \(c(d,k,n) = \frac{3\,(k+n+1)}{k\,d + n + 1}\). For the representative choice \(k = n = 4\) and \(24 \le d \le 32\) this means only 21–28% overhead.
Core idea:
A constant-round zerocheck based on a difference-driven Alon decomposition compresses all \(d\) layers into one multivariate identity. Fiat–Shamir in the random-oracle model yields a transparent argument whose hot loop uses only field additions and multiplications (no MSMs, FFTs, or Merkle trees).
Cost summary: with \(k = n = 4\) and \(24 \le d \le 32\)
\[
T_{\text{prov}}(S,d)=\Bigl(1+\frac{6.75}{d}\Bigr)S,\qquad
|\pi|=(d^{2}+2d)(4d+1),\qquad
\varepsilon_{\text{sound}}=\frac{4nd}{|\mathbb F|}.
\]
A 2.6-billion-gate trace at \(d = 32\) is proven in \(\approx 40\;\text{ms}\) on an RTX-4090 and verified on a smartphone in \(< 1\;\text{ms}\).
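A quick reader-side sanity check of the quoted 21–28% overhead range, using both the exact factor \(c(d,k,n)\) and the simplified \(6.75/d\) form from the cost summary (not the authors' code):
```python
# Evaluate the prover-overhead factor for k = n = 4 at a few depths.
def c(d, k=4, n=4):
    return 3 * (k + n + 1) / (k * d + n + 1)

for d in (24, 28, 32):
    print(d, f"exact {c(d):.1%}", f"simplified {6.75 / d:.1%}")
# d=24: exact ~26.7%, simplified ~28.1%
# d=32: exact ~20.3%, simplified ~21.1%
```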
Width scalability:
Proof size depends only on depth, so widening to billions of parallel gates leaves the verifier unchanged—ideal for ultra-wide workloads such as live-audited VM traces.
Security:
The six-round interactive core is statistically sound; the Fiat–Shamir variant is computationally sound in the random-oracle model and needs no trusted setup.
Implications:
SNAIL shows that a prover can run near real-time on commodity GPUs while emitting proofs small enough for mobile light clients, enabling applications such as live-streamed provable gaming, pay-per-cycle cloud nodes, and always-on verifiable CPUs.
Efficient Key Recovery via Correlation Power Analysis on Scloud⁺
Scloud$^{+}$ is a next-generation post-quantum key encapsulation mechanism that combines unstructured-LWE and a ternary key encoding technique to achieve efficient lattice cryptographic operations while eliminating traditional ring structure constraints. Despite its rigorously formalized theoretical security, its practical deployment faces side-channel threats, notably Correlation Power Analysis (CPA) attacks. This paper systematically investigates the physical security of its core ciphertext-key matrix multiplication module by proposing a CPA framework that integrates instruction-level timing analysis. A SoST (Sum of Squared T-values) model, inspired by multi-group Welch's t-test, is used to analyze the Hamming weight leakage during ciphertext loading. At the same time, dynamic sampling windows, combined with processor pipeline modeling, are employed to pinpoint critical leakage intervals. Exploiting the characteristics of ternary keys, an iterative recovery strategy is devised: following a predefined scan order, the candidate set $\{-1, 0, 1\}$ and partial intermediate sums are used to construct a Hamming weight model for hypothesized leakage vectors. Pearson correlation analysis and trace-count stabilization are applied within each dynamic sampling window to determine the optimal estimate for each key element. Experiments targeting 4800 key elements, illustrated through a detailed analysis of the first 32 elements, demonstrate high recovery accuracy with no more than 15 traces per element, indicating high efficiency and stability that can extend to the full key reconstruction. To thwart such CPA attacks, we have further designed and implemented a first‐order arithmetic masking countermeasure that splits the original ternary secret key into two subkeys, thereby expanding the attacker's key hypothesis space and significantly enhancing side‐channel resilience. Our results demonstrate that Scloud$^{+}$ remains vulnerable to side‐channel exploits at the implementation level, highlighting the urgent need to integrate appropriate protections into its standardization process.
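A minimal, self-contained sketch of the correlation step described above, recovering one ternary key element from simulated Hamming-weight leakage; the leakage model and trace simulation are illustrative assumptions, not the paper's measurement setup:
```python
import numpy as np

rng = np.random.default_rng(1)

def hw(v):                                    # Hamming weight of a 16-bit value
    return bin(v & 0xFFFF).count("1")

true_k = -1                                   # secret key element in {-1, 0, 1}
cts = rng.integers(0, 2**16, size=15)         # 15 traces, matching the paper's budget
traces = np.array([hw(true_k * int(c)) for c in cts], float)
traces += rng.normal(0, 0.5, traces.shape)    # measurement noise

def score(k):
    # Signed Pearson correlation; the correct hypothesis correlates positively.
    hyp = np.array([hw(k * int(c)) for c in cts], float)
    return -1.0 if hyp.std() == 0 else float(np.corrcoef(hyp, traces)[0, 1])

print("recovered:", max((-1, 0, 1), key=score))   # -> -1
```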
Efficient 3PC for Binary Circuits with Application to Maliciously-Secure DNN Inference
In this work, we focus on maliciously secure 3PC for binary circuits with honest majority. While the state-of-the-art protocol (Boyle et al. CCS 2019) has already achieved the same amortized communication as the best-known semi-honest protocol (Araki et al. CCS 2016), it suffers from a large computation overhead: compared with the best-known implementation result (Furukawa et al. Eurocrypt 2017), which requires $9\times$ the communication cost of Araki et al., the protocol by Boyle et al. is around $4.5\times$ slower.
In this paper, we design a maliciously secure 3PC protocol that matches the same communication as Araki et al. with comparable concrete efficiency as Furukawa et al. To obtain our result, we manage to apply the distributed zero-knowledge proofs (Boneh et al. Crypto 2019) for verifying computations over $\mathbb{Z}_2$ by using \emph{prime} fields and explore the algebraic structure of prime fields to make the computation of our protocol friendly for native CPU computation.
Experiment results show that our protocol is around $3.5\times$ faster for AES circuits than Boyle et al. We also applied our protocol to the binary part (e.g. comparison and truncation) of secure deep neural network inference, and results show that we could reduce the time cost of achieving malicious security in the binary part by more than $67\%$.
Besides our main contribution, we also find a hidden security issue in many of the current probabilistic truncation protocols, which may be of independent interest.
MicroNova: Folding-based arguments with efficient (on-chain) verification
We describe the design and implementation of MicroNova, a folding-based recursive argument for producing proofs of incremental computations of the form $y = F^{(\ell)}(x)$, where $F$ is a possibly non-deterministic computation (encoded using a constraint system such as R1CS), $x$ is the initial input, $y$ is the output, and $\ell > 0$. The proof of an $\ell$-step computation is produced step-by-step such that neither the proof size nor the time to verify it depends on $\ell$. The proof at the final iteration is then compressed, to achieve further succinctness in terms of proof size and verification time. Compared to prior folding-based arguments, a distinguishing aspect of MicroNova is the concrete efficiency of the verifier—even in a resource-constrained environment such as Ethereum’s blockchain. In particular, the compressed proof consists of $O(\log{N})$ group elements and it can be verified with $O(\log{N})$ group scalar multiplications and two pairing operations, where $N$ is the number of constraints for a single invocation of $F$. MicroNova requires a universal trusted setup and can employ any existing setup material created for the popular KZG univariate polynomial commitment scheme. Finally, we implement and experimentally evaluate MicroNova. We find that MicroNova’s proofs can be efficiently verified on the Ethereum blockchain with $\approx$2.2M gas. Furthermore, MicroNova’s prover incurs minimal overheads atop its baseline Nova’s prover.
Provably Secure Butterfly Key Expansion from the CRYSTALS Post-Quantum Schemes
Key blinding produces pseudonymous digital identities by rerandomizing the public keys of a digital signature scheme. It provides privacy in decentralized networks. Current key blinding schemes are based on the discrete log assumption. Eaton, Stebila and Stracovsky (LATINCRYPT 2021) proposed the first post-quantum key blinding schemes from lattice assumptions. However, the large public keys and the lack of QROM security mean they are not ready to replace existing solutions. We present a general framework to build post-quantum signature schemes with key blinding based on the MPC-in-the-Head paradigm. This results in schemes that rely on well-studied symmetric cryptographic primitives and admit short public keys. We prove generic security results in the quantum random oracle model (QROM).
We instantiate our framework with the recent AES-based Helium signature scheme (Kales and Zaverucha, 2022) to obtain an efficient post-quantum key blinding scheme with small keys. Both Helium and the aforementioned lattice-based key blinding schemes were only proven secure in the ROM. This makes our results the first QROM proof of Helium and the first fully quantum-safe public key blinding scheme.
Extended Diffie-Hellman Encryption for Secure and Efficient Real-Time Beacon Notifications
Every computing paradigm involving communication requires new security protocols employing cryptography. For example, the Internet gave rise to TLS/SSL, and Mobile Computing gave rise to End to End Encryption protocols. In this paper, we address an emerging IoT paradigm involving beacons attached to things and security protocols associated with this new configuration.
Specifically, we address the ``beacon notification problem,'' a critical IoT paradigm aimed at providing secure and efficient real-time notifications from beacons to their owners. Since the beacon notification problem has not yet been formally defined, we begin by inspecting natural requirements based on the operational setting and establishing correctness, security, and privacy definitions through the use of cryptographic games.
To resolve this problem, we propose a novel cryptographic tool we call XDHIES, which is a considerable extension of available Diffie-Hellman encryption schemes. We then show a new notification protocol built upon XDHIES and we prove that this cryptographic protocol is secure and private, and that it successfully meets all of the problem's requirements above.
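A toy sketch of the DHIES pattern that XDHIES extends (ephemeral Diffie-Hellman, a KDF, one-time-pad encryption, and a MAC); the group, KDF, and cipher choices are illustrative stand-ins, not the XDHIES construction:
```python
import hashlib, hmac, secrets

p, q, g = 2039, 1019, 4                       # toy Schnorr group, illustrative only

def kdf(shared, n):
    return hashlib.shake_256(str(shared).encode()).digest(n)

def encrypt(y, msg):                          # y = g^x is the receiver's public key
    r = secrets.randbelow(q)
    u = pow(g, r, p)                          # ephemeral public value
    k = kdf(pow(y, r, p), 32 + len(msg))
    k_mac, pad = k[:32], k[32:]
    ct = bytes(m ^ s for m, s in zip(msg, pad))
    return u, ct, hmac.new(k_mac, ct, hashlib.sha256).digest()

def decrypt(x, c):
    u, ct, tag = c
    k = kdf(pow(u, x, p), 32 + len(ct))
    k_mac, pad = k[:32], k[32:]
    assert hmac.compare_digest(tag, hmac.new(k_mac, ct, hashlib.sha256).digest())
    return bytes(m ^ s for m, s in zip(ct, pad))

x = secrets.randbelow(q); y = pow(g, x, p)
assert decrypt(x, encrypt(y, b"beacon ping")) == b"beacon ping"
```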
SoK: On the Security Goals of Key Transparency Systems
Key Transparency (KT) systems have emerged as a critical technology for adding verifiability to the distribution of public keys used in end-to-end encrypted messaging services. Despite substantial academic interest, increased industry adoption, and IETF standardization efforts, KT systems lack a holistic and formalized security model, limiting their resilience to practical threats and constraining future development.
In this paper, we survey the existing KT literature and present the first cryptographically sound formalization of KT as an ideal functionality. Our work clarifies the underlying assumptions, defines core security properties, and highlights potential vulnerabilities in deployed KT systems. We prove in the Universal Composability framework that our concrete protocol achieves KT security as defined by our formalism. Our KT protocol builds on the latest trends in KT design, guided by the formalization.
SoK: 6 Years of Neural Differential Cryptanalysis
At CRYPTO 2019, A. Gohr introduced Neural Differential Cryptanalysis and used deep learning to improve the state-of-the-art cryptanalysis of 11-round SPECK32. As of February 2025, according to Google Scholar, Gohr’s article has been cited 229 times. The variety of targeted cryptographic primitives, techniques, settings, and evaluation methodologies that appear in these follow-up works warrants a careful systematization of knowledge, which we provide in this paper. More specifically, we propose a taxonomy of these 229 publications and systematically review the 66 papers focusing on neural differential distinguishers, pointing out promising directions. We then highlight future challenges in the field, particularly the need for improved comparability of neural distinguishers and advancements in scaling.
AsyRand: asynchronous distributed randomness beacon with reconfiguration
Distributed randomness beacon protocols, which continuously generate publicly verifiable randomness values, are crucial for many applications. Recently, there have been many approaches, such as Hydrand (S\&P'20), SPURT (S\&P'22), OptRand (NDSS'23) and GRandLine (CCS'24), based on publicly verifiable secret sharing (PVSS) to implement beacon protocols. However, two key challenges remain unresolved: asynchrony and reconfiguration. In this paper, we propose the $\mathsf{AsyRand}$ beacon protocol to address these challenges. We incorporate a producer-consumer model to decouple the distribution and reconstruction of PVSS secrets. Parties continuously produce and distribute new PVSS commitments, which are the encrypted shares and the proofs. Meanwhile, all parties store received commitments using first-in-first-out queues and collectively consume each commitment to recover the corresponding secret for beacon generation. To achieve asynchronous consensus, we employ reliable broadcast for distribution and apply $t$-validated asynchronous Byzantine agreement for reconstruction. To achieve reconfiguration, honest parties can collectively remove a faulty party if its queue remains empty for an extended duration, and a new party can join the system using reliable broadcast. We also introduce a novel PVSS scheme based on the Sigma protocol and the Fiat-Shamir heuristic, which is of independent interest. Consequently, $\mathsf{AsyRand}$ maintains state-of-the-art complexity with $O(n^2)$ communication complexity, $O(n)$ computation complexity, and $O(n)$ verification complexity while achieving asynchrony and reconfiguration. Experimental results highlight the performance of $\mathsf{AsyRand}$ compared to existing works.
Reduce and Prange: Revisiting Prange's ISD for Solving LPN/RSD over Large Fields
The Learning Parity with Noise (LPN) problem and its regular noise variant are fundamental to many cryptographic primitives. While recent proposals extend these problems to larger fields to enhance cryptographic applications, the security implications of LPN over large fields remain understudied. This gap has facilitated more effective attacks, potentially compromising the security of LPN-based primitives over large fields.
In this paper, we present an improved algorithm for solving LPN over large fields. Our method enhances the classical Gaussian elimination attack, specifically Prange's information set decoding algorithm, by iteratively applying Gaussian elimination to reduced-size matrices rather than the full-size matrix.
We call this the ``Reduce and Prange (RP)'' algorithm and demonstrate its effectiveness for both LPN and its variant with regular noise over large finite fields (e.g., $\mathbb{F}_{2^{128}}$). Building on the RP algorithm, we develop a modified algebraic algorithm for LPN with regular noise over $\mathbb{F}_q$ with $q \geq 2$, extending the work of Briaud and Øygard (Eurocrypt'23). Furthermore, we demonstrate that Scalable Multiparty Garbling (CCS'23) fails to meet their claimed security levels.
For LPN over large fields (e.g., 128-bit field size), our algorithm reduces bit-security by up to 9 bits relative to Liu et al. (Eurocrypt'24). In the case of LPN with regular noise, our RP algorithm surpasses both Liu et al. and Briaud and Øygard (Eurocrypt'23) under specific parameter settings, achieving up to 16-bit reduction in security.
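A toy single-matrix Prange iteration over $\mathbb{F}_2$ for syndrome decoding, the building block that ``Reduce and Prange'' applies repeatedly to reduced matrices; the parameters and code organization below are illustrative, not the paper's algorithm:
```python
import numpy as np

rng = np.random.default_rng(7)

def prange(H, s, t, iters=2000):
    # Find e with H e^T = s and weight(e) <= t, hoping the error falls
    # entirely inside a randomly chosen set of r = n - k pivot columns.
    r, n = H.shape
    for _ in range(iters):
        perm = rng.permutation(n)
        A = np.concatenate([H[:, perm], s[:, None]], axis=1) % 2
        piv, ok = n - r, True
        for i in range(r):                     # Gauss-Jordan on last r columns
            hits = np.nonzero(A[i:, piv + i])[0]
            if hits.size == 0:
                ok = False
                break
            A[[i, i + hits[0]]] = A[[i + hits[0], i]]
            for j in range(r):
                if j != i and A[j, piv + i]:
                    A[j] ^= A[i]
        if not ok:
            continue
        e_perm = np.zeros(n, dtype=int)
        e_perm[piv:] = A[:, n]                 # guess: error confined to pivots
        if e_perm.sum() <= t:
            e = np.zeros(n, dtype=int)
            e[perm] = e_perm
            return e
    return None

n, r, t = 24, 12, 3                            # tiny illustrative instance
H = rng.integers(0, 2, size=(r, n))
e_true = np.zeros(n, dtype=int)
e_true[rng.choice(n, t, replace=False)] = 1
s = H @ e_true % 2
e = prange(H, s, t)
assert e is not None and ((H @ e) % 2 == s).all()
```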
One More Motivation to Use Evaluation Tools, This Time for Hardware Multiplicative Masking of AES
Safeguarding cryptographic implementations against the increasing threat of Side-Channel Analysis (SCA) attacks is essential. Masking, a countermeasure that randomizes intermediate values, is a cornerstone of such defenses. In particular, SCA-secure implementation of AES, the most widely used encryption standard, can employ Boolean masking as well as multiplicative masking due to its underlying Galois field operations. However, multiplicative masking is susceptible to vulnerabilities, including the zero-value problem, which was identified right after the introduction of multiplicative masking. At CHES 2018, De Meyer et al. proposed a hardware-based approach to manage these challenges and implemented multiplicative masking for AES, incorporating a Kronecker delta function and randomness optimization.
In this work, we evaluate their design using the PROLEAD evaluation tool under the glitch- and transition-extended probing model. Our findings reveal a critical vulnerability in their first- and second-order implementation of the Kronecker delta function, stemming from the employed randomness optimization. This leakage compromises the security of their presented masked AES Sbox. After pinpointing the source of such a leakage, we propose an alternative randomness optimization for the first-order design to address this issue, and demonstrate its effectiveness through rigorous evaluations by means of PROLEAD.
Towards Optimal Parallel Broadcast under a Dishonest Majority
The parallel broadcast (PBC) problem generalizes the classic Byzantine broadcast problem to the setting where all $n$ nodes broadcast a message and deliver $O(n)$ messages. PBC arises naturally in many settings including multi-party computation. The state-of-the-art PBC protocol, TrustedPBC, due to Tsimos, Loss, and Papamanthou (CRYPTO 2022), is secure against an adaptive adversary assuming $f < (1 - \epsilon)n$, where $f$ is the number of Byzantine failures and $\epsilon \in (0,1)$. TrustedPBC focuses on single-bit inputs and achieves $\tilde{O}(n^2 \kappa^4)$ communication and $O(\kappa\log n)$ rounds.
In this work, we propose three PBC protocols for $L$-bit messages, for any size $L$, that significantly improve TrustedPBC. First, we propose a new extension protocol that uses a $\kappa$-bit PBC as a black box and achieves i) communication complexity of $O(L n^2 + n^3\kappa+\mathcal{P}(\kappa))$, where $\mathcal{P}(\kappa)$ is the communication complexity of the $\kappa$-bit PBC, and ii) round complexity same as the $\kappa$-bit PBC. By comparison, the state-of-the-art extension protocol for regular broadcast (Nayak et al., DISC 2020) incurs $O(n)$ additional rounds of communication. Next, we propose a protocol that is secure against a static adversary, for $\kappa$-bit messages with $O(n^2 \kappa^{1+K} + n\kappa^3 + \kappa^4)$ communication and $O(\kappa)$ round complexity, where $K$ is an arbitrarily small constant such that $0<K<1$. Finally, we propose an adaptively-secure protocol for $\kappa$-bit messages with $\tilde{O}(n^2\kappa^2 + n\kappa^3)$ communication overhead and $O(\kappa \log{n})$ round complexity. Notably, our latter two protocols are $\tilde{O}(\kappa^{2 - K})$ and $O(\kappa^2)$ times more communication-efficient, respectively, than the state-of-the-art protocols while achieving the same round complexity.
Discrete Gaussians Modulo Sub-Lattices: New Leftover Hash Lemmas for Discrete Gaussians
The Leftover Hash Lemma (LHL) is a powerful tool for extracting randomness from an entropic distribution, with numerous applications in cryptography. LHLs for discrete Gaussians have been explored in both integer settings by Gentry et al. (GPV, STOC'08) and algebraic ring settings by Lyubashevsky et al. (LPR, Eurocrypt'13). However, the existing LHLs for discrete Gaussians have two main limitations: they require the Gaussian parameter to be larger than certain smoothing parameters, and they cannot handle cases where fixed and arbitrary information is leaked.
In this work, we present new LHLs for discrete Gaussians in both integer and ring settings. Our results show that the Gaussian parameter can be improved by a factor of $\omega(\sqrt{\log\lambda}) $ and $ O(\sqrt{N}) $ compared to the regularity lemmas of GPV and LPR, respectively, under similar parameter choices such as the dimension and ring. Furthermore, our new LHLs can be applied to leaked discrete Gaussians, and the result can be used to establish asymptotic hardness of the extended MLWE assumptions, addressing an open question in recent works by Lyubashevsky et al. (LNP, Crypto'22). Our central techniques involve new fine-grained analyses of the min-entropy in discrete Gaussians modulo sublattices and should be of interest.
Quantum pseudoresources imply cryptography
While one-way functions (OWFs) serve as the minimal assumption for computational cryptography in the classical setting, in quantum cryptography we have even weaker cryptographic assumptions, such as pseudorandom states and EFI pairs, among others. Moreover, the minimal assumption for computational quantum cryptography remains an open question. Recently, it has been shown that pseudoentanglement is necessary for the existence of quantum cryptography (Goulão and Elkouss 2024), but no cryptographic construction has been built from it.
In this work, we study the cryptographic usefulness of quantum pseudoresources—a pair of families of quantum states that exhibit a gap in their resource content yet remain computationally indistinguishable. We show that quantum pseudoresources imply a variant of EFI pairs, which we call EPFI pairs, and that these are equivalent to quantum commitments and thus EFI pairs. Our results suggest that, just as randomness is fundamental to classical cryptography, quantum resources may play a similarly crucial role in the quantum setting.
Finally, we focus on the specific case of entanglement, analyzing different definitions of pseudoentanglement and their implications for constructing EPFI pairs. Moreover, we propose a new cryptographic functionality that is intrinsically dependent on entanglement as a resource.
Boomy: Batch Opening Of Multivariate polYnomial commitment
We present Boomy, a multivariate polynomial commitment scheme enabling proofs of evaluation at multiple points, i.e., batch opening. Boomy is the natural extension of two popular protocols: the univariate polynomial commitment scheme of Kate, Zaverucha and Goldberg~\cite{AC:KatZavGol10} and its multivariate counterpart from Papamanthou, Shi and Tamassia~\cite{papamanthou2013signatures}. Our construction is proven secure under the selective security model. In this paper, we present Boomy's complexity and the applications on which it can have a significant impact. In fact, Boomy is perfectly suited to tackling blockchain data availability problems, shrinking existing challenges. We also present special lower-complexity cases that occur frequently in practical situations.
ABLE: Optimizing Mixed Arithmetic and Boolean Garbled Circuit
Privacy and security have become critical priorities in many scenarios. Privacy-preserving computation (PPC) is a powerful solution that allows functions to be computed directly on encrypted data. Garbled circuit (GC) is a key PPC technology that enables secure, confidential computing. GC comes in two forms: Boolean GC supports all operations by expressing functions as logic circuits; arithmetic GC is a newer technique to efficiently compute a set of arithmetic operations like addition and multiplication. Mixed GC combines both Boolean and arithmetic GC, in an attempt to optimize performance by computing each function in its most natural form. However, this introduces additional costly conversions between the two forms and it remains unclear if and when the efficiency gains of arithmetic GC outweigh these conversion costs.
In this paper, we present Arithmetic Boolean Logic Exchange for Garbled Circuit (ABLE), the first real implementation of mixed GC. ABLE profiles the performance of Boolean and arithmetic GC operations along with their conversions. We assess not only communication but also computation latency, a crucial factor in evaluating the end-to-end runtime of GC. Based on these insights, we propose a method to determine whether it is more efficient to use general Boolean GC or mixed GC for a given application. Rather than implementing both approaches to identify the better solution for each unique case, our method enables users to select the most suitable GC realization early in the design process. This method evaluates whether the benefits of transitioning operations from Boolean to arithmetic GC offset the associated conversion costs. We apply this approach to a neural network task as a case study. We propose an optimization to reduce the number of primes in arithmetic GC, cutting communication and compute latency by up to 14.1% and 15.7%, respectively. Additionally, we optimize mixed GC conversions with a row reduction technique, achieving a 48.6% reduction in garbled table size for bit-decomposition and a 50% reduction for bit-composition operations. Our code is open-sourced for community use.
A bound on the quantum value of all compiled nonlocal games
A cryptographic compiler introduced by Kalai, Lombardi, Vaikuntanathan, and Yang (STOC’23) converts any nonlocal game into an interactive protocol with a single computationally bounded prover. Although the compiler is known to be sound in the case of classical provers and complete in the quantum case, quantum soundness has so far only been established for special classes of games.
In this work, we establish a quantum soundness result for all compiled two-player nonlocal games. In particular, we prove that the quantum commuting operator value of the underlying nonlocal game is an upper bound on the quantum value of the compiled game. Our result employs techniques from operator algebras in a computational and cryptographic setting to establish information-theoretic objects in the asymptotic limit of the security parameter. It further relies on a sequential characterization of quantum commuting operator correlations, which may be of independent interest.
The Sponge is Quantum Indifferentiable
The sponge is a cryptographic construction that turns a public permutation into a hash function. When instantiated with the Keccak permutation, the sponge forms the NIST SHA-3 standard. SHA-3 is a core component of most post-quantum public-key cryptography schemes slated for worldwide adoption.
While one can consider many security properties for the sponge, the ultimate one is \emph{indifferentiability from a random oracle}, or simply \emph{indifferentiability}. The sponge was proved indifferentiable against classical adversaries by Bertoni et al. in 2008. Despite significant efforts in the years since, little is known about sponge security against quantum adversaries, even for simple properties like preimage or collision resistance beyond a single round. This is primarily due to the lack of a satisfactory quantum analog of the lazy sampling technique for permutations.
In this work, we develop a specialized technique that overcomes this barrier in the case of the sponge. We prove that the sponge is in fact indifferentiable from a random oracle against quantum adversaries. Our result establishes that the domain extension technique behind SHA-3 is secure in the post-quantum setting. Our indifferentiability bound for the sponge is a loose $O(\mathsf{poly}(q) 2^{-\min(r, c)/4})$, but we also give bounds on preimage and collision resistance that are tighter.
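A minimal sponge skeleton showing the absorb/squeeze structure that the indifferentiability result concerns; the ``permutation'' below is a non-bijective hash stand-in and the padding is simplified, so this is purely structural illustration, not Keccak:
```python
import hashlib

RATE, CAP = 8, 24            # r = 8 bytes, c = 24 bytes, state = 32 bytes

def perm(state):             # stand-in for the public permutation (NOT bijective)
    return hashlib.sha256(state).digest()

def sponge(msg, outlen):
    msg += b"\x01" + b"\x00" * (-(len(msg) + 1) % RATE)   # simplified padding
    state = bytes(RATE + CAP)
    for i in range(0, len(msg), RATE):       # absorb: XOR block into the rate part
        block = msg[i:i + RATE] + bytes(CAP)
        state = perm(bytes(a ^ b for a, b in zip(state, block)))
    out = b""
    while len(out) < outlen:                 # squeeze: read the rate part
        out += state[:RATE]
        state = perm(state)
    return out[:outlen]

print(sponge(b"hello", 16).hex())
```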
Limits on the Power of Indistinguishability Obfuscation and Functional Encryption
Recent breakthroughs in cryptography have positioned indistinguishability obfuscation as a ``central hub'' for almost all known cryptographic tasks, and as an extremely powerful building block for new cryptographic tasks resolving long-standing and foundational open problems. However, constructions based on indistinguishability obfuscation almost always rely on non-black-box techniques, and thus the extent to which it can be used as a building block has been completely unexplored so far.
We present a framework for proving meaningful negative results on the power of indistinguishability obfuscation. By considering indistinguishability obfuscation for oracle-aided circuits, we capture the common techniques that have been used so far in constructions based on indistinguishability obfuscation. These include, in particular, non-black-box techniques such as the punctured programming approach of Sahai and Waters (STOC '14) and its variants, as well as sub-exponential security assumptions.
Within our framework we prove the first negative results on the power of indistinguishability obfuscation and of the tightly related notion of functional encryption. Our results are as follows:
-- There is no fully black-box construction of a collision-resistant function family from an indistinguishability obfuscator for oracle-aided circuits.
-- There is no fully black-box construction of a key-agreement protocol with perfect completeness from a private-key functional encryption scheme for oracle-aided circuits.
Specifically, we prove that any such potential constructions must suffer from an exponential security loss, and thus our results cannot be circumvented using sub-exponential security assumptions. Our framework captures constructions that may rely on a wide variety of primitives in a non-black-box manner (e.g., obfuscating or generating a functional key for a function that uses the evaluation circuit of a puncturable pseudorandom function), and we only assume that the underlying indistinguishability obfuscator or functional encryption scheme themselves are used in a black-box manner.
Random Number Generation from Pulsars
Pulsars exhibit signals with precise inter-arrival times that are on the order of milliseconds to seconds, depending on the individual pulsar. There are subtle variations in the timing of pulsar signals. We show that these variations can serve as a natural entropy source for the creation of Random Number Generators (RNGs). We also explore the effects of using randomness extractors to increase the entropy of random bits extracted from pulsar timing data. To evaluate the quality of the Pulsar RNG, we model its entropy as a $k$-source and use well-known cryptographic results to show its closeness to a theoretically ideal uniformly random source. To remain consistent with prior work, we also show that the Pulsar RNG passes well-known statistical tests such as the NIST test suite.
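A sketch of the described pipeline on simulated timing data: keep the low-order jitter bits as a $k$-source and condition them with a hash-based extractor. The pulsar period, jitter model, and hash-as-extractor choice are illustrative assumptions; no real pulsar data is used.
```python
import hashlib, random

period_ns = 1_337_000_000                      # hypothetical pulsar, ~1.337 s period
arrivals = [i * period_ns + random.gauss(0, 5_000) for i in range(512)]

jitter = [int(a) % 256 for a in arrivals]      # keep only low-order timing bits
raw = bytes(jitter)                            # entropic but biased raw material
seed = hashlib.sha256(raw).digest()            # extractor stand-in (hash as conditioner)
print(seed.hex())
```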
Tetris! Traceable Extendable Threshold Ring Signatures and More
Traceable ring signatures enhance ring signatures by adding an accountability layer. Specifically, if a party signs two different messages within the protocol, their identity is revealed. Another desirable feature is $\textit{extendability}$. In particular, $\textit{extendable threshold}$ ring signatures (ETRS) allow already finalized signatures to be $\textit{non-interactively}$ updated by enlarging the ring or the set of signers.
Combining traceability and extendability in a single scheme is unexplored and would offer a new tool for privacy-preserving voting schemes in scenarios where the voters are not known in advance.
In this paper, we show how to reconcile both properties by introducing and constructing a new cryptographic primitive called Tetris.
Notably, our Tetris construction simultaneously achieves strong anonymity and linear-size signatures, which is the main technical challenge in existing techniques.
To solve this challenge, we develop a new approach to traceability that leads to several conceptual and technical contributions. Among those, we introduce and construct, based on Groth-Sahai proofs, $\textit{extendable}$ shuffle arguments that can be $\textit{non-interactively}$ updated by several provers.
On the Linear Distinguishing Attack against ZUC-256 Stream Cipher
At FSE 2020, a linear distinguishing attack was presented against the ZUC-256 stream cipher, based on the $32$-bit word, with a data/time complexity of about $2^{236.38}$. In this paper, we re-evaluate the complexity of this attack and discuss the applicability of such a distinguishing attack in 5G application scenarios, where each keystream frame is limited to $20000$ bits, and up to $2^{32}$ bits. To assure a high success probability close to $1$, it is shown that the precise time complexity of the distinguishing attack is $2^{253.93}$ basic operations with a data complexity of $2^{241.38}$ bits of keystream, which is far beyond the keystream length limit in 5G application settings in the single-frame setting. Besides, we also consider the multiple-frame scenario, where a long keystream could be formed by concatenating many short keystream frames generated from different (Key, IV) pairs. We show that even in such a strong model of distinguishing attacks, the reported bias will not exist in 5G application scenarios and the linear distinguishing attack will not work, because the long linear combination relation derived from the polynomial multiple of the LFSR in ZUC-256 over $\mbox{GF}(2^{31}-1)$ does not hold across keystream frames generated from different (Key, IV) pairs; this has been verified in experiments. It is concluded that the ZUC-256 stream cipher offers the full $256$-bit security in 5G application scenarios.
An Addendum to the ZUC-256 Stream Cipher
ZUC-256 is a stream cipher, together with AES-256 and SNOW-V, proposed as a core primitive in the future set of 3GPP confidentiality and integrity algorithms for upcoming 5G applications offering 256-bit security.
While the original initialization scheme of ZUC-256 can work with a 256-bit key and an IV of length up to 184 bits, we describe a new initialization scheme for ZUC-256 that supports an IV of exactly 128 bits in this paper. Compared to the original initialization scheme, this new key/IV setup algorithm avoids dividing the key/IV bytes and provides a simple and natural-looking initialization scheme for ZUC-256.
GKR for Boolean Circuits with Sub-linear RAM Operations
Succinct Non-Interactive Arguments of Knowledge (SNARKs) provide a powerful cryptographic framework enabling short, quickly verifiable proofs for computational statements.
Existing SNARKs primarily target computations represented as arithmetic circuits. However, they become inefficient when handling binary operations, as each gate operates on only a few bits while still incurring the cost of multiple field operations per gate. This inefficiency stems, in part, from their inability to capture the word-level operations that are typical in real-world programs. To better reflect this computational pattern, we shift our attention to data-parallel Boolean circuits, which serve as a natural abstraction of word-level operations by allowing parallel manipulation of multiple bits. To precisely characterize the prover's overheads in our scheme, we adopt the word RAM model, which aligns with the nature of modern programming languages. Under this model, we propose a novel approach to achieve SNARKs with only sub-linear prover overhead for proving Boolean circuits.
Specifically, we present an optimized GKR protocol for Boolean circuits that captures word-level operations. To achieve this, we pack multiple bits into a single univariate polynomial and exploit the binary nature of circuit values to enable precomputation that accelerates the sumcheck process. This optimization leads to a highly efficient prover requiring only sub-linear RAM operations. Furthermore, we introduce a sub-linear polynomial commitment scheme designed specifically for binary polynomials, which ensures efficient commitments with minimal computational overhead.
Comprehensive evaluations reveal that our scheme achieves both theoretical efficiency and substantial practical performance gains. For instance, in proving randomly generated Boolean circuits with $2^{30}$ gates, proof generation with our optimized GKR protocol completes in just 5.38 seconds, yielding a $223\times$ speedup over LogUp (Haböck, ePrint 2022), the most efficient known scheme for lookup arguments. Furthermore, in an end-to-end benchmark over the Keccak-$f$ task, our scheme achieves a $106\times$ speedup compared to Binius (Diamond et al., EUROCRYPT 2025), the state-of-the-art work for boolean circuits.
Private Information Retrieval based on Homomorphic Encryption, Revisited
Private information retrieval (PIR) enables a client to retrieve data from a server while preserving the confidentiality of the client's query. When PIR is instantiated with fully homomorphic encryption (FHE), the protocol becomes non-interactive, requiring only a query-answer exchange, and it achieves asymptotically optimal communication and computation complexity. Although several FHE-based PIR protocols have been practically implemented with the desired properties, there has been little detailed comparison among them. As a result, it remains unclear which protocol is most efficient in practice with respect to various aspects such as performance and scalability.
In this paper, we revisit existing protocols by categorizing them into two different structures in order to analyze the advantages and disadvantages of each class in detail, with a focus on practical implementations. Furthermore, we introduce and compare various homomorphic algorithms that can be utilized for query optimization, discussing the strengths and limitations of each. Finally, we identify the most efficient protocol in terms of computational cost and memory usage as a function of database size. Additionally, we address common misconceptions that may lead to inefficient choices in real-world deployment scenarios and offer the best solutions.
Consequently, our analysis and experimental results demonstrate that the less-explored design achieves a 90% reduction in communication cost and an 8× decrease in computational overhead compared to the other one, challenging the common misconception.
Tailorable codes for lattice-based KEMs with applications to compact ML-KEM instantiations
Compared to elliptic curve cryptography, a primary drawback of lattice-based schemes is the larger size of their public keys and ciphertexts. A common procedure for compressing these objects consists essentially of dropping some of their least significant bits. Albeit effective for compression, there is a limit to the number of bits to be dropped before we get a noticeable decryption failure rate (DFR), which is a security concern. To address this issue, this paper presents a family of error-correction codes that, by allowing an increased number of dropped bits while preserving a negligible DFR, can be used for both ciphertext and public-key compression in modern lattice-based schemes. To showcase the impact and practicality of our proposal, we use the highly optimized ML-KEM, a post-quantum lattice-based scheme recently standardized by NIST. We provide detailed procedures for tailoring our codes to ML-KEM's specific noise distributions, and show how to analyze the DFR without independence assumptions on the noise coefficients. Among our results, we achieve between 4% and 8% ciphertext compression for ML-KEM. Alternatively, we obtain 8% shorter public keys compared to the current standard. We also present isochronous implementations of the decoding procedure, achieving negligible performance impact in the full ML-KEM decapsulation even when considering optimized implementations for AVX2, Cortex-M4, and Cortex-A53.
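For reference, the standard ML-KEM (FIPS 203) coefficient rounding implements the ``dropping least significant bits'' compression discussed above; the codes proposed in the paper allow more bits to be dropped at a fixed DFR. A small worked example (the input value and $d$ choices are illustrative):
```python
q = 3329                                     # ML-KEM modulus (FIPS 203)

def compress(x, d):
    # round(2^d * x / q) mod 2^d  (round-half-up; q is odd)
    return ((x << d) + q // 2) // q % (1 << d)

def decompress(y, d):
    # round(q * y / 2^d)
    return (y * q + (1 << (d - 1))) >> d

x = 1234
for d in (10, 4):
    rt = decompress(compress(x, d), d)
    print(d, rt, min((rt - x) % q, (x - rt) % q))
# d=10 -> round-trip error 1; d=4 -> error 14: dropping more bits costs accuracy
```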
Securing Nested Attestation of Confidential Serverless Computing without Intra-Enclave Isolation
Confidential Computing-as-a-Service has gained significant attention in recent years, driven by rapid advances in Trusted Execution Environment (TEE) technology. Among various architectures, confidential serverless computing has emerged as a promising model. A common approach to designing confidential serverless computing involves decoupling the client workload from the initial enclave image and dynamically provisioning the workload at runtime. This enables both offloading the costly enclave bootstrapping and maintaining a fixed reference measurement for attestation. This decoupling necessitates nested attestation, where the client’s workload is attested via a custom attestation module embedded in a platform-attested enclave established at boot time. The challenge in designing nested attestation, however, is to distinguish fresh enclaves from the used ones to prevent enclave reuse. Specifically, a previously used enclave may be compromised and its attestation module tampered with. If the enclave is reused for another workload, it could bypass runtime attestation, allowing unverified code to execute.
In this paper, we present a novel approach to securing nested attestation for confidential serverless computing on Intel Software Guard Extensions (Intel SGX). Unlike prior works, our approach does not rely on intra-enclave isolation techniques to sandbox the client’s workload. Instead, we leverage the Key Separation and Sharing (KSS) feature of Intel SGX to track and prevent enclave reuse based on its immutable IDs. We develop a prototype system for confidential serverless computing for Python workloads, incorporating our nested attestation scheme, and present an empirical evaluation of its performance.
We believe our scheme unveils a unique yet previously overlooked role of KSS---post-compromise identification, with broader implications beyond serverless computing. As a concrete example, we demonstrate how KSS can be leveraged to implement an enclave-binding TPM on Intel SGX, which is of independent interest.
Public-Key Quantum Fire and Key-Fire From Classical Oracles
Quantum fire was recently formalized by Bostanci, Nehoran and Zhandry (STOC’25). This notion considers a distribution of quantum states that can be efficiently cloned, but cannot be converted into a classical string. Previously, the work of Nehoran and Zhandry (ITCS’24) showed how to construct quantum fire relative to an inefficient unitary oracle. Later, the work of Bostanci, Nehoran and Zhandry gave a candidate construction based on group action assumptions and proved the correctness of their scheme; however, even in the classical oracle model they only conjectured the security, and no security proof was given.
In this work, we give the first construction of public-key quantum fire relative to a classical oracle, and prove its security unconditionally. This gives the first classical oracle separation between the two fundamental principles of quantum mechanics that are equivalent in the information-theoretic setting: no-cloning and no-telegraphing.
Going further, we introduce a stronger notion called quantum key-fire where the clonable fire states can be used to run a functionality (such as a signing or decryption key), and prove a secure construction relative to a classical oracle. As an application of this notion, we get the first public-key encryption scheme whose secret key is clonable but satisfies unbounded leakage-resilience (Cakan, Goyal, Liu-Zhang, Ribeiro [TCC’24]), relative to a classical oracle. Unbounded leakage-resilience is closely related to, and can be seen as a generalization of the notion of no-telegraphing.
For all of our constructions, the oracles can be made efficient (i.e., polynomial time), assuming the existence of post-quantum one-way functions.
Side-Channel Analysis Revisited and Evaluated
Side-channel analysis recovers a secret by exploiting the key-dependent characteristics of the leakages. Practical techniques, such as Distance-of-Means analysis (DoM), Kolmogorov-Smirnov analysis (KSA) and Cramér-von Mises analysis (CvMA), provide valuable insights about the secret from the indirect perspectives of statistical moments and the cumulative distribution function (CDF) respectively, circumventing the direct and costly estimation of leakage probability densities and therefore enabling wider applicability in practice. Though both perspectives are informative, their relationship in the context of side-channel analysis remains unclear. In other words, the fundamental questions of ``which one is better?'' and ``why and under what circumstances?'' remain open problems. In this paper, we introduce the probability-probability (PP) plot in statistics as a common framework for explaining the mathematical foundations of CDF-based techniques, which facilitates an intuitive understanding of different variant strategies. Then, inspired by the growth pattern of the PP curve, we propose a novel distinguisher based on the famous Mann-Kendall test, where measurements are managed with ordinality and nominality. This goodness-of-fit test checks whether a key-dependent binary sequence originates from a random binomial distribution, by efficiently searching potential label clusters. Finally, we explore the symmetry and dual counterpart of the CDF in mathematics, introducing the quantile-quantile (QQ) plot, and develop an interesting technique based on the inverse cumulative distribution function (ICDF). We present a general discussion of its bridging role, regarding detail capture as well as signal-to-noise ratio (SNR). On this basis, we establish the relationships among moment-based, ICDF-based, and CDF-based techniques, which additionally allows for bounding the security level of the CDF-based techniques using well-established metrics originally proposed for evaluating the traditional moment-based family. Experiments across various settings validate our theoretical findings and demonstrate the effectiveness of the two proposed distinguishers.
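A compact illustration of a CDF-based distinguisher in the KSA family discussed above (not the paper's Mann-Kendall construction): partition simulated traces by a key-dependent predicted bit and rank key guesses by SciPy's two-sample Kolmogorov-Smirnov statistic. The S-box, leakage model, and traces are simulated assumptions:
```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
SBOX = rng.permutation(256)                    # stand-in S-box

true_k = 0x3A
pts = rng.integers(0, 256, 5000)
leak = (SBOX[pts ^ true_k] & 1) + rng.normal(0, 1.0, pts.size)  # LSB leaks

def score(k_guess):
    # Split traces by the predicted bit; a correct guess separates the CDFs.
    bit = SBOX[pts ^ k_guess] & 1
    return ks_2samp(leak[bit == 0], leak[bit == 1]).statistic

print(hex(max(range(256), key=score)))         # -> 0x3a
```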
On the BUFF Security of ECDSA with Key Recovery
In the usual syntax of digital signatures, the verification algorithm takes a verification key in addition to a signature and a message, whereas in ECDSA with key recovery, which is used in Ethereum, no verification key is input to the verification algorithm. Instead, a verification key is recovered from a signature and a message. In this paper, we explore the BUFF security of ECDSA with key recovery (KR-ECDSA), where BUFF stands for Beyond UnForgeability Features (Cremers et al., IEEE S&P 2021). As a result, we show that KR-ECDSA provides BUFF security, except for weak non-resignability (wNR). We highlight that the verification algorithm of KR-ECDSA takes an Ethereum address addr as input, which is defined as the rightmost 160 bits of the Keccak-256 hash of the corresponding ECDSA verification key, and checks that the hash value of the recovered verification key is equal to addr. Our security analysis shows that this procedure is mandatory to provide BUFF security. We also discuss whether wNR is mandatory in Ethereum or not. To clarify that the above equality check is mandatory to provide BUFF security in KR-ECDSA, we show that the original ECDSA does not provide any BUFF security. As a by-product of the analysis, we show that one of our BUFF attacks also works against Aumayr et al.'s ECDSA-based adaptor signature scheme (ASIACRYPT 2021) and Qin et al.'s blind adaptor signature scheme (IEEE S&P 2023), which is based on Aumayr et al.'s scheme. We emphasize that the attack is positioned outside of their security models.
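For reference, the key-recovery step that distinguishes KR-ECDSA from plain ECDSA (and underlies Ethereum's ecrecover): given a signature $(r, s)$ on a message hash $z$, the verifier reconstructs the curve point $R$ whose $x$-coordinate is $r$ (a recovery bit selects the $y$-coordinate) and computes
\[
Q \;=\; r^{-1}\,(s\,R - z\,G),
\]
where $G$ is the base point; the address check then compares the rightmost 160 bits of the Keccak-256 hash of $Q$ against addr.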
Privacy and Security in Distributed Data Markets
Data markets play a pivotal role in modern industries by facilitating the exchange of data for predictive modeling, targeted marketing, and research. However, as data becomes a valuable commodity, privacy and security concerns have grown, particularly regarding the personal information of individuals. This tutorial explores privacy and security issues when integrating different data sources in data market platforms. As motivation for the importance of enforcing privacy requirements, we discuss attacks on data markets focusing on membership inference and reconstruction attacks. We also discuss security vulnerabilities in decentralized data marketplaces, including adversarial manipulations by buyers or sellers. We provide an overview of privacy and security mechanisms designed to mitigate these risks. In order to enforce the least amount of trust for buyers and sellers, we focus on distributed protocols. Finally, we conclude with opportunities for future research on understanding and mitigating privacy and security concerns in distributed data markets.
Time-Space Tradeoffs of Truncation with Preprocessing
Truncation of cryptographic outputs is a technique that was recently introduced in Baldimtsi et al. [BCCK22]. The general idea is to try out many inputs to some cryptographic algorithm until the output (e.g., a public key or some hash value) falls into some sparse set and thus can be compressed: by trying out an expected $2^k$ different inputs one will find an output that starts with $k$ zeros.
Using such truncation one can, for example, save substantial gas fees on blockchains where storing values is very expensive. While [BCCK22] show that truncation preserves the security of the underlying primitive, they only consider a setting without preprocessing. In this work we show that lower bounds on the time-space tradeoff for inverting random functions and permutations also hold with truncation, except for parameter ranges where the bound fails to hold for ``trivial'' reasons.
Concretely, it's known that any algorithm that inverts a random function or permutation with range $N$ making $T$ queries and using $S$ bits of auxiliary input must satisfy $S\cdot T\ge N\log N$.
This lower bound no longer holds in the truncated setting, where one must only invert a challenge from a range of size $N/2^k$: now one can simply save the replies to all $N/2^k$ challenges, which requires $S=\log N\cdot N/2^k$ bits and allows inverting with $T=1$ query.
We show that with truncation, whenever $S$ is somewhat smaller than the $\log N\cdot N /2^k$ bits required to store the entire truncated function table, the known $S\cdot T\ge N\log N$ lower bound applies.
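Spelling out the ``trivial'' regime: storing the replies to all $N/2^k$ truncated challenges gives
\[
S_{\mathrm{triv}}=\frac{N}{2^{k}}\log N,\qquad T_{\mathrm{triv}}=1,\qquad
S_{\mathrm{triv}}\cdot T_{\mathrm{triv}}=\frac{N\log N}{2^{k}}\;\ll\; N\log N,
\]
so the classical bound can only be expected once $S$ falls below this table-storage threshold, which is exactly the regime the result above covers.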
One-Step Schnorr Threshold Identification
Threshold zero-knowledge protocols have not been widely adopted, presumably due to the associated network overhead, complicated certification processes, and thus limited interoperability. In this work, we propose $\mathsf{OSST}$, a Schnorr-based threshold identification scheme that is both non-interactive and non-reliant on the public shares. Given a $(n, t)$-shared secret $x$, the proposed protocol allows any $t^* \ge t$ (but no fewer) shareholders to collectively prove that their secret keys combine to $x$ in the sense of interpolation, without revealing any information beyond that. On the one side, the provers do not need to engage in multi-party computation, sending their packets to the verifier asynchronously. On the other side, the verification operation involves the combined public key $y \equiv g^x$ alone, meaning that the verifier does not need to have validated and registered the individual member identities. The protocol can be cheaply adopted in permissionless and dynamic environments, where no certification processes or infrastructure support are available, and can be easily integrated with existing verifiers by means of a pure software extension. No publicly trusted setup is required beyond the assumption that $x$ has been distributed by means of Shamir's secret sharing (or an equivalent distribution scheme) and that the public counterpart has been advertised correctly; in particular, the protocol is intended to be secure against impersonation attacks in the plain public-key model. We provide evidence for this by giving a formal security proof in the random oracle model under the one-more discrete-logarithm hardness ($\mathsf{OMDL}$) assumption.
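A minimal sketch of the idea underlying such a scheme (not the paper's exact $\mathsf{OSST}$ protocol): each shareholder contributes a Schnorr-style response weighted by its Lagrange coefficient, and the verifier checks the combined response against $y = g^x$ alone. Toy, insecure parameters.

    import hashlib, random

    p, q, g = 23, 11, 2                 # toy group: g has order q modulo p
    t, n = 2, 3
    x = 7                               # the shared secret
    a1 = random.randrange(q)
    shares = {i: (x + a1 * i) % q for i in range(1, n + 1)}   # Shamir, degree t-1
    y = pow(g, x, p)                    # the only public value the verifier needs

    S = [1, 3]                          # any t* >= t shareholders
    def lagrange(i):                    # Lagrange coefficient at 0 for the set S
        lam = 1
        for j in S:
            if j != i:
                lam = lam * j * pow(j - i, -1, q) % q
        return lam

    r = {i: random.randrange(q) for i in S}
    U = 1
    for i in S:
        U = U * pow(g, r[i], p) % p     # product of the provers' commitments
    c = int.from_bytes(hashlib.sha256(f"{y},{U}".encode()).digest(), "big") % q
    s = sum(r[i] + c * lagrange(i) * shares[i] for i in S) % q

    assert pow(g, s, p) == U * pow(y, c, p) % p   # verification uses y alone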
Mastic: Private Weighted Heavy-Hitters and Attribute-Based Metrics
Insight into user experience and behavior is critical to the success of large software systems and web services. Gaining such insights, while preserving user privacy, is a significant challenge. Recent advancements in multi-party computation have made it practical to securely compute aggregates over secret shared data. Two such protocols have emerged as candidates for standardization at the IETF: Prio (NSDI 2017) for general-purpose statistics; and Poplar (IEEE S&P 2021) for heavy hitters, where the goal is to compute the most popular inputs held by users without learning the inputs themselves. While each of these protocols is well-suited to certain applications, there remain a number of use cases identified by IETF for which neither Prio nor Poplar is practical.
We introduce Mastic, a protocol for the following functionality: each of a large number of clients holds an input (e.g., a URL) and its corresponding weight (e.g., page load time); for a given candidate input (or prefix), a small number of non-colluding servers wish to securely aggregate the weights of clients that hold that input (or some input with that prefix), without learning the weights or which client holds which input. This functionality makes two new classes of applications possible. The first is a natural generalization of heavy hitters we call weighted heavy-hitters. The second is an enhancement of Prio-style metrics we call attribute-based metrics in which aggregates are grouped by hierarchical user attributes (e.g., their geographic location or software version). We demonstrate Mastic's practicality for these applications with a real-world example of each. We also compare our protocol with Prio and Poplar on a wide area network. Overall, we report over one order of magnitude performance improvement over Poplar for plain heavy-hitters and $1.5-2\times$ improvement over Prio for attribute-based metrics.
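A plaintext reference for the weighted heavy-hitters functionality that Mastic computes securely (the names below are illustrative, not from the paper's implementation):

    def prefix_weight(reports, prefix):
        # Sum the weights of all client inputs sharing the candidate prefix.
        return sum(w for url, w in reports if url.startswith(prefix))

    reports = [("example.com/a", 120.0), ("example.com/b", 80.0), ("other.net/x", 50.0)]
    assert prefix_weight(reports, "example.com/") == 200.0
    # In the real protocol the servers learn this aggregate without seeing
    # the individual inputs or weights.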
Gold OPRF: Post-Quantum Oblivious Power-Residue PRF
We propose plausible post-quantum (PQ) oblivious pseudorandom functions (OPRFs) based on the Power-Residue PRF (Damgård CRYPTO’88), a generalization of the Legendre PRF. For security parameter $\lambda$, we consider the PRF $\mathsf{Gold}_k(x)$ that maps an integer $x$ modulo a public prime $p = 2^\lambda\cdot g + 1$ to the element $(k + x)^g \bmod p$, where $g$ is public and $\log g \approx 2\lambda$.
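A toy evaluation of $\mathsf{Gold}$ at illustrative, insecure sizes (the paper takes $\lambda = 128$ and $\log g \approx 2\lambda$):

    def is_prime(m: int) -> bool:       # trial division suffices at toy sizes
        if m < 2: return False
        d = 2
        while d * d <= m:
            if m % d == 0: return False
            d += 1
        return True

    lam = 8
    g = 1 << (2 * lam)                  # log g ~ 2*lam, as in the paper
    while not is_prime((1 << lam) * g + 1):
        g += 1                          # search for prime p = 2^lam * g + 1
    p = (1 << lam) * g + 1

    def gold(k: int, x: int) -> int:
        return pow((k + x) % p, g, p)   # Gold_k(x) = (k + x)^g mod p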
At the core of our constructions are efficient novel methods for evaluating $\mathsf{Gold}$ within two-party computation ($\mathsf{2PC}\text{-}\mathsf{Gold}$), achieving different security requirements. Here, the server $\mathcal{P}_s$ holds the PRF key $k$ whereas the client $\mathcal{P}_c$ holds the PRF input $x$, and they jointly evaluate $\mathsf{Gold}$ in 2PC. $\mathsf{2PC}\text{-}\mathsf{Gold}$ uses standard Vector Oblivious Linear Evaluation (VOLE) correlations and is information-theoretic and constant-round in the (V)OLE-hybrid model. We show:
• For a semi-honest $\mathcal{P}_s$ and a malicious $\mathcal{P}_c$: a $\mathsf{2PC}\text{-}\mathsf{Gold}$ that just uses a single (V)OLE correlation, and has a communication complexity of $3$ field elements ($2$ field elements if we only require a uniformly sampled key) and a computational complexity of $\mathcal{O}(\lambda)$ field operations. We refer to this as half-malicious security.
• For malicious $\mathcal{P}_s$ and $\mathcal{P}_c$: a $\mathsf{2PC}\text{-}\mathsf{Gold}$ that just uses $\frac{\lambda}{4} + \mathcal{O}(1)$ VOLE correlations, and has a communication complexity of $\frac{\lambda}{4} + \mathcal{O}(1)$ field elements and a computational complexity of $\mathcal{O}(\lambda)$ field operations.
These constructions support additional features and extensions, e.g., batched evaluations with better amortized costs where $\mathcal{P}_c$ repeatedly evaluates the PRF under the same key.
Furthermore, we extend $\mathsf{2PC}\text{-}\mathsf{Gold}$ to Verifiable OPRFs and use the methodology from Beullens et al. (ePrint’24) to obtain strong OPRF security in the universally composable setting.
All the protocols are efficient in practice. We implemented $\mathsf{2PC}\text{-}\mathsf{Gold}$—with (PQ) VOLEs—and benchmarked them. For example, our half-malicious (resp. malicious) $n$-batched PQ OPRFs incur about $100$B (resp. $1.9$KB) of amortized communication for $\lambda = 128$.
Dot-Product Proofs and Their Applications
A dot-product proof (DPP) is a simple probabilistic proof system in which the input statement $\mathbf{x}$ and the proof $\boldsymbol{\pi}$ are vectors over a finite field $\mathbb{F}$, and the proof is verified by making a single dot-product query $\langle \mathbf{q},(\mathbf{x} \| \boldsymbol{\pi}) \rangle$ jointly to $\mathbf{x}$ and $\boldsymbol{\pi}$. A DPP can be viewed as a 1-query fully linear PCP. We study the feasibility and efficiency of DPPs, obtaining the following results:
- Small-field DPP. For any finite field $\mathbb{F}$ and Boolean circuit $C$ of size $S$, there is a DPP for proving that there exists $\mathbf{w}$ such that $C(\mathbf{x}, \mathbf{w})=1$ with a proof $\boldsymbol{\pi}$ of length $S\cdot\mathsf{poly}(|\mathbb{F}|)$ and soundness error $\varepsilon=O(1 / \sqrt{|\mathbb{F}|})$. We show this error to be asymptotically optimal. In particular, and in contrast to the best known PCPs, there exist strictly linear-length DPPs over constant-size fields.
- Large-field DPP. If $|\mathbb{F}|\ge\mathsf{poly}(S/\varepsilon)$, there is a similar DPP with soundness error $\varepsilon$ and proof length $O(S)$ (in field elements).
The above results do not rely on the PCP theorem and their proofs are considerably simpler. We apply our DPP constructions toward two kinds of applications.
- Hardness of approximation. We obtain a simple proof for the NP-hardness of approximating MAXLIN (with dense instances) over any finite field $\mathbb{F}$ up to some constant factor $c>1$, independent of $\mathbb{F}$. Unlike previous PCP-based proofs, our proof yields exponential-time hardness under the exponential time hypothesis (ETH).
- Succinct arguments. We improve the concrete efficiency of succinct interactive arguments in the generic group model using input-independent preprocessing. In particular, the communication is comparable to sending two group elements and the verifier's computation is dominated by a single group exponentiation. We also show how to use DPPs together with linear-only encryption to construct succinct commit-and-prove arguments.
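A degenerate toy of the single-query model above, for the special case of a public linear claim $A\mathbf{x} = \mathbf{b}$ with an empty proof: the verifier samples $\mathbf{r}$, forms $\mathbf{q} = \mathbf{r}^\top A$, and makes one dot-product query, so a false claim survives with probability $1/|\mathbb{F}|$. The paper's DPPs handle NP statements by appending a proof $\boldsymbol{\pi}$ to $\mathbf{x}$.

    import random

    F = 10007                                         # a small prime field

    def dot(u, v): return sum(a * b for a, b in zip(u, v)) % F

    def verify_linear(A, b, x):
        r = [random.randrange(F) for _ in b]          # verifier randomness
        q = [dot(r, col) for col in zip(*A)]          # q = r^T A
        return dot(q, x) == dot(r, b)                 # the single dot-product query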
On pairing-friendly 2-cycles and SNARK-friendly 2-chains of elliptic curves containing a curve from a prime-order family
Cryptographic protocols such as zk-SNARKs use 2-cycles of elliptic curves for efficiency, often relying on pairing computations. However, 2-cycles of pairing-friendly curves are hard to find, and the only known cases consist of an MNT4 and an MNT6 curve. In this work, we prove that a 2-cycle containing an MNT3, Freeman, or BN curve cannot be pairing-friendly. Thus we cannot hope to find new pairing-friendly 2-cycles using the current methods.
Furthermore, we show that there are no SNARK-friendly 2-chains of elliptic curves from combinations of MNT, Freeman and BN curves of reasonable size, except for (MNT4, MNT6).
Breaking BASS
We provide several attacks on the BASS signature scheme introduced by Grigoriev, Ilmer, Ovchinnikov and Shpilrain in 2023. We lay out a trivial forgery attack which generates signatures passing the scheme's probabilistic signature verification with high probability. Generating these forgeries is faster than generating signatures honestly. Moreover, we describe a key-only attack which allows us to recover an equivalent private key from a signer's public key. The time complexity of this recovery is asymptotically the same as that of signing messages.
An Optimized Instantiation of Post-Quantum MQTT protocol on 8-bit AVR Sensor Nodes
Since the selection of the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography (PQC) standardization algorithms, research on integrating PQC into security protocols such as TLS/SSL, IPSec, and DNSSEC has been actively pursued. However, PQC migration for Internet of Things (IoT) communication protocols remains largely unexplored. Embedded devices in IoT environments have limited computational power and memory, making it crucial to optimize PQC algorithms for efficient computation and minimal memory usage when deploying them on low-spec IoT devices. In this paper, we introduce KEM-MQTT, a lightweight and efficient Key Encapsulation Mechanism (KEM) for the Message Queuing Telemetry Transport (MQTT) protocol, widely used in IoT environments. Our approach applies the NIST KEM algorithm CRYSTALS-Kyber (Kyber) while leveraging MQTT’s characteristics and sensor node constraints. To enhance efficiency, we address certificate verification issues and adopt KEMTLS to eliminate the need for post-quantum digital signature algorithms (PQC-DSA) in mutual authentication. As a result, KEM-MQTT retains its lightweight properties while maintaining the security guarantees of TLS 1.3. We identify inefficiencies in existing Kyber implementations on 8-bit AVR microcontrollers (MCUs), which are highly resource-constrained. To address this, we propose novel implementation techniques that optimize Kyber for AVR, focusing on high-speed execution, reduced memory consumption, and secure implementation, including Signed LookUp-Table (LUT) Reduction. Our optimized Kyber achieves performance gains of 81%, 75%, and 85% in the KeyGen, Encaps, and Decaps processes, respectively, compared to the reference implementation. With approximately 3 KB of stack usage, our Kyber implementation surpasses all state-of-the-art Elliptic Curve Diffie-Hellman (ECDH) implementations. Finally, in KEM-MQTT using Kyber-512, an 8-bit AVR device completes the handshake preparation process in 4.32 seconds, excluding the physical transmission and reception times.
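A highly simplified sketch of the KEMTLS-style flow described above, in which encapsulation to the broker's static KEM key replaces a handshake signature. The kem_* functions are insecure placeholders standing in for Kyber, not a real library API:

    import hashlib, os

    def kem_keygen():            # placeholder: returns (public key, secret key)
        sk = os.urandom(32); return hashlib.sha256(sk).digest(), sk

    def kem_encaps(pk):          # placeholder: returns (ciphertext, shared secret)
        ss = os.urandom(32); return ss + pk, hashlib.sha256(ss + pk).digest()

    def kem_decaps(sk, ct):      # placeholder: recomputes the shared secret
        return hashlib.sha256(ct[:32] + hashlib.sha256(sk).digest()).digest()

    # The broker publishes a static KEM public key out of band.
    srv_pk, srv_sk = kem_keygen()
    # The sensor node encapsulates to it; possession of the shared secret
    # implicitly authenticates the broker, so no PQC signature is needed.
    ct, ss_client = kem_encaps(srv_pk)
    ss_server = kem_decaps(srv_sk, ct)
    assert ss_client == ss_server   # both sides derive the MQTT session key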
Learning from Functionality Outputs: Private Join and Compute in the Real World
Private Join and Compute (PJC) is a two-party protocol recently proposed by Google for various use cases, including ad conversion (Asiacrypt 2021), and it generalizes their deployed private set intersection sum (PSI-SUM) protocol (EuroS&P 2020).
PJC allows two parties, each holding a key-value database, to privately evaluate the inner product of the values whose keys lie in the intersection. While the functionality output is not typically considered in the security model of the MPC literature, it may pose real-world privacy risks, thus raising concerns about the potential deployment of protocols like PJC.
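The functionality output in the clear, as a reference point for the attacks below: an inner product taken over the key intersection.

    def pjc_output(db_a: dict, db_b: dict) -> int:
        return sum(db_a[k] * db_b[k] for k in db_a.keys() & db_b.keys())

    assert pjc_output({"alice": 2, "bob": 3}, {"bob": 10, "carol": 5}) == 30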
In this work, we analyze the risks associated with the PJC functionality output. We consider an adversary that is a participating party of PJC and describe four practical attacks that break the other party's input privacy, and which are able to recover both membership of keys in the intersection and their associated values. Our attacks consider the privacy threats associated with deployment and highlight the need to include the functionality output as part of the MPC security model.
Towards Lightweight CKKS: On Client Cost Efficiency
The large key size for fully homomorphic encryption (FHE) requires substantial costs to generate and transmit the keys. This has been problematic for FHE clients who want to delegate the computation, as they often have limited power. A recent work, Lee-Lee-Kim-No [Asiacrypt 2023], partly solved this problem by suggesting a hierarchical key management system. However, the overall key size was still several gigabytes for real-world applications, and it is barely satisfactory for mobile phones and IoT devices.
In this work, we propose new key management systems, KG+ and BTS+, which reduce the client's cost for FHE on top of Lee-Lee-Kim-No. The KG+ system significantly reduces the key size without any compromise in the efficiency of homomorphic computation compared to Lee-Lee-Kim-No. The BTS+ system further reduces the key size, while compromising only the granularity of the homomorphic computation.
In our new systems, the client generates and sends ``transmission keys'' with size-optimal parameters, and the server generates ``evaluation keys'' with computation-optimal parameters. For this purpose, we introduce a new ring-switching technique for keys to bridge keys with different parameters.
Using the new ring-switching technique, a client can transmit the transmission keys in extension rings that can generate FHE keys in the computation-efficient subring. By decoupling the rings of FHE keys during transmission and computation, we significantly reduce the communication cost for transferring FHE keys.
We provide concrete CKKS FHE parameters for which the client's keys are $325$--$609$ MB using KG+ and $285$ MB using BTS+. Note that all parameters generate keys for CKKS with ring degree $2^{16}$, which is a conventional choice for CKKS applications to privacy-preserving machine learning. These are $3.09$--$4.37$ and $3.51$--$9.30$ times smaller than Lee-Lee-Kim-No, respectively. For real-world applications, the server requires more evaluation keys for faster homomorphic computation. For secure ResNet-20 inference, the parameters for KG+ and BTS+ result in client key sizes of $325$--$609$ MB and $285$ MB, respectively, which are $3.95$--$5.73\times$ and $4.53$--$12.25\times$ smaller than Lee-Lee-Kim-No.
Packed Sumcheck over Fields of Small Characteristic with Application to Verifiable FHE
Verifiable computation over encrypted data is gaining increasing attention, and using SNARKs to provide proofs for FHE operations has emerged as a promising approach. However, the mismatch between FHE's typically small prime fields and SNARKs' larger field requirements creates challenges for verifiable FHE. In this work, we construct a packed sumcheck algorithm specifically designed for small fields. This approach leverages folding and repetition techniques to maintain security without field expansion, with all operations performed on the base domain. For a domain $\mathbb{F}_p$ requiring $k$-fold expansion, our sumcheck protocol operates with $(\log k + l)$ variables, where each sumcheck statement consists of $d$ multiplied multilinear polynomial statements. The prover can complete the computation in $O(kd^2 + k^{1.807}d) \cdot 2^l$ modular multiplications over $\mathbb{F}_p$.
By exploiting the highly repetitive computational structure in bit-wise FHE bootstrapping operations, we decompose the process into a series of vector operations. Building upon the packed sumcheck technique along with the Brakedown (CRYPTO 2023) and Binius (EUROCRYPT 2025) commitment schemes, we construct an efficient proof system for these vector operations, ultimately yielding a proof system for bit-wise FHE. Our system achieves linear prover time while performing all computations on the base field, resulting in significant improvements in prover efficiency.
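For orientation, a minimal sketch of the standard (unpacked) sumcheck interaction for a multilinear polynomial given by its evaluation table over $\{0,1\}^v$; the packed, small-characteristic protocol above builds on this basic loop:

    import random

    F = 2**31 - 1        # a toy prime field, not the small-characteristic setting

    def sumcheck(table, v):
        claim = sum(table) % F
        for _ in range(v):
            half = len(table) // 2
            g0 = sum(table[:half]) % F      # g(0): first variable fixed to 0
            g1 = sum(table[half:]) % F      # g(1): first variable fixed to 1
            assert (g0 + g1) % F == claim   # verifier's per-round check
            r = random.randrange(F)         # verifier's challenge
            # fold: f(r, x') = (1 - r) * f(0, x') + r * f(1, x')
            table = [((1 - r) * a + r * b) % F
                     for a, b in zip(table[:half], table[half:])]
            claim = ((1 - r) * g0 + r * g1) % F
        return claim    # = f(r_1, ..., r_v), checked against an oracle for f

    sumcheck([random.randrange(F) for _ in range(8)], 3)   # v = 3 variables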
SNARKs for Stateful Computations on Authenticated Data
We present a new generalization of (zk-)SNARKs specifically designed for the application domain of safety-critical control systems. These need to be protected against adversarial tampering as well as non-malicious but unintended system failures due to random faults in components. Our SNARKs combine two additional features at the same time. Besides the verification of correct computation, they also allow, first, the verification of input data authenticity. Specifically, a verifier can confirm that the input to the computation originated from a trusted source. Second, our SNARKs support verification of stateful computations across multiple rounds, ensuring that the output of the current round correctly depends on the internal state of the previous round. Our focus is on concrete practicality, so we abstain from arithmetizing hash functions or signatures in our SNARKs. Rather, we modify the internals of an existing SNARK to extend its functionality. We implement and benchmark our new SNARKs in a sample scenario of a real-time high-integrity flight control system.
With our construction, prover runtime improves significantly over the baseline by a factor of 90. Verification time increases by 36%, but is less than comparable approaches that do not arithmetize hash functions or signatures.
Succinct Homomorphic MACs from Groups and Applications
Homomorphic message authentication codes (HMACs) allow users to authenticate data using a shared secret key, while supporting computation over authenticated data. Given data $(m_1, \ldots, m_n)$ and their tags $(\sigma_1, \ldots, \sigma_n)$, anyone can evaluate a circuit $C$ on the data and tags to produce a succinct tag authenticating the output $C(m_1, \ldots, m_n)$. Importantly, tags remain succinct -- of size polynomial in the security parameter $\lambda$ -- regardless of the size of $C$. This work introduces an enhanced variant of HMACs called algebraic HMAC (aHMAC), in which all tags (input and output) take the form $\Delta \cdot m + K$, as in standard information-theoretic MACs.
We construct an aHMAC from group-based assumptions, including variants of the DDH and DCR assumptions, and use it to obtain group-based constructions of several cryptographic primitives:
- Succinct CDS for circuits. For any $P: [N]^k \to [N]$ represented by a circuit, we obtain a Conditional Disclosure of Secrets protocol with poly$(\lambda, k, \log N )$ communication.
- Succinct PSM for simple programs. For any $P:[N]^k \to [N]$ represented by a truth-table or shallow branching program, we obtain a Private Simultaneous Messages protocol or a garbling scheme with poly$(\lambda, k, \log N )$ communication.
- Constrained PRFs for circuits. We obtain the first group-based constrained pseudorandom functions for general circuits, improving over a previous construction for NC1 circuits.
Prior to our work, these applications could only be obtained from lattice assumptions or indistinguishability obfuscation.
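A toy of the tag form $\Delta \cdot m + K$ and its additive homomorphism (the aHMAC of the paper additionally keeps output tags succinct for general circuits):

    import random

    F = 2**61 - 1
    Delta = random.randrange(F)                  # global MAC key

    def tag(m, K): return (Delta * m + K) % F    # K is a per-message key

    K1, K2 = random.randrange(F), random.randrange(F)
    m1, m2 = 12, 34
    s1, s2 = tag(m1, K1), tag(m2, K2)
    # Anyone can add tags; a verifier holding Delta and K1 + K2 re-checks.
    assert (s1 + s2) % F == tag((m1 + m2) % F, (K1 + K2) % F)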
Practical Zero-Knowledge PIOP for Maliciously Secure Multiparty Homomorphic Encryption
Homomorphic encryption (HE) is a foundational technology in privacy-enhancing cryptography, enabling computation over encrypted data. Recently, generalized HE primitives designed for multi-party applications, such as multi-party HE (MPHE), have garnered significant research interest. While constructing secure multi-party protocols from MPHE in the semi-honest model is straightforward, achieving malicious security remains challenging as it requires zero-knowledge arguments of knowledge (ZKAoKs) for MPHE ciphertexts and public keys.
In this work, we design practical ZKAoKs for MPHE that validate the well-formedness of public keys and ciphertexts. Specifically, we develop our ZKAoKs within the polynomial interactive oracle proof (PIOP) framework. To achieve this, we introduce novel optimization techniques that seamlessly integrate constraints for MPHE into the PIOP framework, enabling the design of PIOPs for validating all types of MPHE public keys, including relinearization and automorphism keys. To the best of our knowledge, our construction is the first ZKAoK for MPHE that validates automorphism keys.
We instantiate our PIOP using a lattice-based polynomial commitment scheme (PCS). When compared with the previous state-of-the-art construction, PELTA (CCS 2023), our implementation achieves a 5.4x reduction in proof size, a 111x speed-up in proof generation, and a 768x improvement in verification time for validating the encryption key. In addition to the encryption key, we provide benchmark results for all types of ZKAoKs required for MPHE, presenting the first concrete performance results in compiling passively secure MPHE-based protocols into maliciously secure ones.
The Hardness of Learning Quantum Circuits and its Cryptographic Applications
We show that concrete hardness assumptions about learning or cloning the output state of a random quantum circuit can be used as the foundation for secure quantum cryptography. In particular, under these assumptions we construct secure one-way state generators (OWSGs), digital signature schemes, quantum bit commitments, and private key encryption schemes. We also discuss evidence for these hardness assumptions by analyzing the best-known quantum learning algorithms, as well as proving black-box lower bounds for cloning and learning given state preparation oracles.
Our random circuit-based constructions provide concrete instantiations of quantum cryptographic primitives whose security does not depend on the existence of one-way functions. The use of random circuits in our constructions also opens the door to NISQ-friendly quantum cryptography. We discuss noise-tolerant versions of our OWSG and digital signature constructions, which could potentially be implemented on noisy quantum computers connected by a quantum network. On the other hand, they are still secure against noiseless quantum adversaries, raising the intriguing possibility of a useful implementation of an end-to-end cryptographic protocol on near-term quantum computers. Finally, our explorations suggest that the rich interconnections between learning theory and cryptography in classical theoretical computer science also extend to the quantum setting.
On the Estonian Internet Voting System, IVXV, SoK and Suggestions
The Estonian i-voting experience is probably the richest to analyze: the country has been a pioneer in digitizing both the government and the private sector since 2001, and hence in digital voting since 2005, yet there are still complaints, criticisms, and remarks to consider about the IVXV system. In this paper, we introduce a Systematization of Knowledge of the Estonian IVXV i-voting system and propose some added security enhancements. The presented SoK includes applications implemented by election observers in the 2023 & 2024 elections, which, to our knowledge, have never been mentioned or analyzed in academia before. The paper also updates the general knowledge about an extra right given to auditors (but not observers) in the June 2024 European election, recent improvements, and recent complaints. Finally, we discuss the current system status in the 2024 EP elections, propose our own suggestions for some remaining vulnerabilities, and then raise the inevitable question of the approaching quantum threat.
Shark: Actively Secure Inference using Function Secret Sharing
We consider the problem of actively secure two-party machine-learning inference in the preprocessing model, where the parties obtain (input-independent) correlated randomness in an offline phase that they can then use to run an efficient protocol in the (input-dependent) online phase. In this setting, the state-of-the-art is the work of Escudero et al. (Crypto 2020); unfortunately, that protocol requires a large amount of correlated randomness, extensive communication, and many rounds of interaction, which leads to poor performance.
In this work, we show protocols for this setting based on function secret sharing (FSS) that beat the state-of-the-art in all parameters: they use less correlated randomness and fewer rounds, and require lower communication and computation. We achieve this in part by allowing for a mix of boolean and arithmetic values in FSS-based protocols (something not done in prior work), as well as by relying on “interactive FSS,” a generalization of FSS we introduce. To demonstrate the effectiveness of our approach we build SHARK, the first FSS-based system for actively secure inference, which outperforms the state-of-the-art by up to 2300×.
MatriGear: Accelerating Authenticated Matrix Triple Generation with Scalable Prime Fields via Optimized HE Packing
The SPDZ protocol family is a popular choice for secure multi-party computation (MPC) in a dishonest majority setting with active adversaries. Over the past decade, a series of studies have focused on improving its offline phase, where special additive shares called authenticated triples are generated. However, to accommodate recent demands for matrix operations in secure machine learning and big integer arithmetic in distributed RSA key generation, updates to the offline phase are required.
In this work, we propose a new protocol for the SPDZ offline phase, MatriGear, which improves upon the previous state-of-the-art construction, TopGear (Baum et al., SAC '19), and its variant for matrix triples (Chen et al., Asiacrypt '20). Our protocol aims to achieve a speedup in matrix triple generation and support for larger prime fields up to 4096 bits in size. To achieve this, we devise a variant of the BFV scheme and a new homomorphic matrix multiplication algorithm optimized for our purpose.
As a result, our protocol achieves about 3.6x speedup for generating scalar triples in a 1024-bit prime field and about 34x speedup for generating 128x128 matrix triples. In addition, we reduce the size of evaluation keys from 27.4 GB to 0.22 GB and the communication cost for MAC key generation from 816 MB to 16.6 MB.
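For orientation, a plain (unauthenticated) Beaver-triple multiplication in a two-party additive sharing; the offline phase above produces the authenticated, matrix-valued analogue of these triples, with the SPDZ-style MACs omitted here:

    import random

    F = 2**61 - 1
    def share(v):                                # split v into two additive shares
        s0 = random.randrange(F); return s0, (v - s0) % F

    a, b = random.randrange(F), random.randrange(F)
    c = a * b % F                                # the triple: c = a * b
    a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(c)

    x, y = 123, 456                              # secret inputs, also shared
    x0, x1 = share(x); y0, y1 = share(y)

    # The parties open d = x - a and e = y - b; this reveals nothing about x, y.
    d = (x0 - a0 + x1 - a1) % F
    e = (y0 - b0 + y1 - b1) % F
    # Local shares of x * y = c + d*b + e*a + d*e (the public d*e added once).
    z0 = (c0 + d * b0 + e * a0 + d * e) % F
    z1 = (c1 + d * b1 + e * a1) % F
    assert (z0 + z1) % F == x * y % F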
Arbigraph: Verifiable Turing-Complete Execution Delegation
Dependence on online infrastructure is rapidly growing as services like online payments and insurance replace traditional options, while others, like social networks, offer new capabilities. The centralized service operators wield unilateral authority over user conflicts, content moderation, and access to essential services. In the context of payments, blockchains provide a decentralized alternative. They also enable decentralized execution of stateful programs called smart contracts. But those lack the contextual understanding and interpretative capabilities that would enable reasoning about complex scenarios. Advancements in machine learning (ML) are raising interest in actually-smart contracts, but blockchain computation constraints prohibit direct ML inference execution. While many projects deploy computation delegation mechanisms, they lack Turing-completeness, prohibit parallel computation, or suffer from high overhead.
We present Arbigraph, a blockchain-based execution delegation protocol. Like previous optimistic solutions, the parties submit their computation results, allowing a smart contract to arbitrate in case of dispute. But Arbigraph employs a novel dual-graph data structure and takes advantage of the nature of the dispute process to achieve Turing completeness, constant-time memory access, and parallel execution. We formalize the problem and show that Arbigraph guarantees completeness, soundness, and progress.
Experiments on LLM inference as well as matrix multiplication, which is at the core of ML inference, demonstrate that parallelization speedup grows linearly with matrix dimensions. We demonstrate Arbigraph's practical cost with a deployment on the Avalanche blockchain. Arbigraph thus enables decentralized, context-aware decision-making and unlocks unprecedented use cases for blockchains.
Signature-Free Atomic Broadcast with Optimal $O(n^2)$ Messages and $O(1)$ Expected Time
Byzantine atomic broadcast (ABC) is at the heart of permissioned blockchains and various multi-party computation protocols. We resolve a long-standing open problem in ABC, presenting the first information-theoretic (IT) and signature-free asynchronous ABC protocol that achieves optimal $O(n^2)$ messages and $O(1)$ expected time. Our ABC protocol adopts a new design, relying on a reduction from---perhaps surprisingly---a somewhat neglected primitive called multivalued Byzantine agreement (MBA).
Updatable Signature with Public Tokens
The Updatable Signature (US) allows valid signatures to be updated by an update token without accessing the newly generated signing key. Cini et al. (PKC'21) formally defined this primitive and gave several constructions. However, their security model requires the secrecy of the update token, which is only applicable in some specific scenarios, such as software verification in a trusted App Store. In Web3, information is usually shared via a public blockchain, and decentralized private computation is expensive. In addition, in their schemes the same token updates both the signing key and all signatures, so an adversarial signature generated by an adversary might also be updated. Therefore, this work explores the (im)possibility of constructing an Updatable Signature with public tokens (USpt), whose tokens are signature-dependent. Specifically, we define the updatable signature with public tokens and present its security model. Then, we present a concrete USpt scheme based on the Boneh–Lynn–Shacham signature. This variant requires the signer to maintain a dataset of its signed messages or their hashes, which is acceptable in our target applications.
Exploring Key-Recovery-Friendly Differential Distinguishers for SM4 and Their Performance in Differential Attacks (Full Version)
In this paper, we focus on SM4, a widely used and standardized Chinese block cipher. After revisiting the previously proposed optimal 19-round differential characteristic, we observe that its applicability in differential attacks is limited by a reduced pre-sieving probability, causing the time complexity to exceed that of brute force. To overcome this issue, we employ an automated search approach to identify more promising optimal 19-round differential characteristics. By translating key properties relevant to key recovery into Boolean expressions, we uncover three structural properties common to all optimal 19-round characteristics. While these properties dictate the overall probability of the resulting 19-round distinguishers, their varying pre-sieving probabilities influence their practical effectiveness in differential attacks. Using Boolean encodings, we identify four representative key-recovery-friendly differential characteristics. We then conduct an in-depth analysis of one such characteristic and demonstrate that, when evaluated under both the hypothesis testing paradigm and the key ranking paradigm, the proposed attack requires slightly more data than existing 23-round attacks. Nonetheless, it achieves lower time and memory complexities and ensures a higher success probability, offering a valuable new avenue for differential cryptanalysis of SM4. We believe our findings enhance the understanding of SM4's differential structure and provide a solid foundation for future research on advanced key-recovery techniques that leverage these newly identified structural properties and differential characteristics.
RevoLUT: Rust Efficient Versatile Oblivious Look-Up-Tables
In this paper we present RevoLUT, a library implemented in Rust that reimagines the use of Look-Up-Tables (LUTs) beyond their conventional role in function encoding, as commonly used in TFHE's programmable bootstrapping. Instead, RevoLUT leverages LUTs as first-class objects, enabling efficient oblivious operations such as array access, element sorting, and permutation directly within the table. This approach supports oblivious algorithms, providing a secure, privacy-preserving solution for handling sensitive data in various applications.
A non-comparison oblivious sort and its application to private k-NN
In this paper, we introduce an adaptation of the counting sort algorithm that leverages the data obliviousness of the algorithm to enable the sorting of encrypted data using Fully Homomorphic Encryption (FHE). Our approach represents the first known sorting algorithm for encrypted data that does not rely on comparisons. The implementation takes advantage of some basic operations on TFHE's Look-Up-Tables (LUT). We have integrated these operations into RevoLUT, a comprehensive open-source library built on tfhe-rs. We demonstrate the effectiveness of our Blind Counting Sort algorithm by developing a top-k selection algorithm and applying it to privacy-preserving k-Nearest Neighbors classification. This proves to be approximately 4 times faster than state-of-the-art methods.
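A plaintext skeleton of counting sort, whose comparison-free, fixed loop structure is what makes it amenable to FHE; in the blind version, the per-element increments and the final expansion become homomorphic LUT operations:

    def counting_sort(xs, domain):
        counts = [0] * domain
        for v in xs:               # one pass; blind LUT updates under FHE
            counts[v] += 1
        out = []
        for v in range(domain):    # iteration order is data-independent
            out.extend([v] * counts[v])
        return out

    assert counting_sort([3, 1, 3, 0], 4) == [0, 1, 3, 3]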
Building a BBB Pseudorandom Permutation using Lai-Massey Networks
In spite of being a popular technique for designing block ciphers, Lai-Massey networks have received considerably less attention from a security analysis point of view than Feistel networks and Substitution-Permutation networks. In this paper we study the beyond-birthday-bound (BBB) security of Lai-Massey networks with independent random round functions against chosen-plaintext adversaries. Concretely, we show that five rounds are necessary and sufficient to achieve BBB security.
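For reference, one Lai-Massey round over XOR in generic form (a structural sketch, not a specific cipher): both halves absorb $F(L \oplus R)$, and an orthomorphism $\sigma$ on the left half prevents the difference $L \oplus R$ from being invariant across rounds.

    def sigma(x: int, w: int) -> int:
        # A standard orthomorphism on w-bit values: (hi, lo) -> (lo, hi ^ lo).
        h = w // 2
        hi, lo = x >> h, x & ((1 << h) - 1)
        return (lo << h) | (hi ^ lo)

    def lai_massey_round(L: int, R: int, F, w: int):
        T = F(L ^ R)               # the round function sees only the difference
        return sigma(L ^ T, w), R ^ T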
LOHEN: Layer-wise Optimizations for Neural Network Inferences over Encrypted Data with High Performance or Accuracy
Fully Homomorphic Encryption (FHE) presents unique challenges in programming due to the contrast between traditional and FHE language paradigms. A key challenge is selecting ciphertext configurations (CCs) to achieve the desired level of security, performance, and accuracy simultaneously. Finding the design point satisfying the goal is often labor-intensive (probably impossible), for which reason previous works settle for a reasonable CC that brings acceptable performance. When FHE is applied to neural networks (NNs), we have observed that the distinct layered architecture of NN models opens the door to a performance improvement by using layer-wise CCs, because a globally chosen CC may not be the best possible CC for every layer individually. This paper introduces LOHEN, a technique crafted to attain high performance of NN inference by enabling efficient use of layer-wise CCs. Empowered with a cryptographic gadget that allows switching between arbitrary CCs, LOHEN allocates layer-wise CCs for individual layers tailored to their structural properties, while minimizing the increased overhead incurred by CC switching with its capability to replace costly FHE operations. LOHEN can also be engineered to attain higher accuracy, yet deliver higher performance compared to state-of-the-art studies, by additionally adopting multi-scheme techniques in a layer-wise manner. Moreover, developers using LOHEN are given the capability of customizing the selection policy to adjust the desired levels of performance and accuracy, subject to their demands. Our evaluation shows that LOHEN improves NN inference performance in both of these cases when compared to the state-of-the-art. When used to improve CKKS-only inference, LOHEN improves the NN inference performance of various NNs by 1.08--2.88x. LOHEN also improves the performance of mixed-scheme NN inference by 1.34--1.75x without accuracy loss. These two results, along with other empirical analyses, advocate that LOHEN can widely help improve the performance of NN inference over FHE.
Juggernaut: Efficient Crypto-Agnostic Byzantine Agreement
It is well known that a trusted setup allows one to solve the Byzantine agreement problem in the presence of $t<n/2$ corruptions, bypassing the setup-free $t<n/3$ barrier. Alas, the overwhelming majority of protocols in the literature have the caveat that their security crucially hinges on the security of the cryptography and setup, to the point where if the cryptography is broken, even a single corrupted party can violate the security of the protocol. Thus these protocols provide higher corruption resilience ($n/2$ instead of $n/3$) for the price of increased assumptions. Is this trade-off necessary?
We further the study of crypto-agnostic Byzantine agreement among $n$ parties that answers this question in the negative. Specifically, let $t_s$ and $t_i$ denote two parameters such that (1) $2t_i + t_s < n$, and (2) $t_i \leq t_s < n/2$. Crypto-agnostic Byzantine agreement ensures agreement among honest parties if (1) the adversary is computationally bounded and corrupts up to $t_s$ parties, or (2) the adversary is computationally unbounded and corrupts up to $t_i$ parties, and is moreover given all secrets of all parties established during the setup. We propose a compiler that transforms any pair of resilience-optimal Byzantine agreement protocols in the authenticated and information-theoretic setting into one that is crypto-agnostic. Our compiler has several attractive qualities, including using only $O(\lambda n^2)$ bits over the two underlying Byzantine agreement protocols, and preserving round and communication complexity in the authenticated setting. In particular, our results improve the state-of-the-art in bit complexity by at least two factors of $n$ and provide either early stopping (deterministic) or expected constant round complexity (randomized). We therefore provide fallback security for authenticated Byzantine agreement for free for $t_i \leq n/4$.
Threshold FHE with Efficient Asynchronous Decryption
A Threshold Fully Homomorphic Encryption (ThFHE) scheme enables the generation of a global public key and secret key shares for multiple parties, allowing any threshold of these parties to collaboratively decrypt a ciphertext without revealing their individual secret keys. By leveraging the homomorphic properties of FHE, this scheme supports the distributed computation of arbitrary functions across multiple parties. As distributed execution of cryptographic tasks becomes popular, the demand for ThFHE schemes grows accordingly. We identify three major challenges with existing solutions. (i) They often make unrealistic assumptions about the network model, assuming the threshold of parties participating in decryption is known a priori, available throughout multiple communication rounds, and consistent between parties. (ii) They incur a super-linear overhead on the underlying FHE public parameters. Both issues pose challenges to scaling with the number of parties. (iii) They require heavyweight Zero-Knowledge Proofs (ZKPs) during decryption, thereby introducing a significant computational overhead in order to tolerate malicious behavior.
In this work, we introduce a ThFHE scheme that addresses the above three challenges simultaneously and is designed to scale with the number of parties $N$. Our scheme operates within the well-established asynchronous communication model. At the same time, upon decryption, the ciphertext only incurs a linear $\frac{3}{4}N + t$ additive overhead on the ciphertext modulus size. Additionally, when allowed to rely on non-post-quantum-secure additively homomorphic encryption schemes, we provide a method with an $O(1)$ overhead, independent of $N$. Lastly, we propose a preprocessing technique that allows the parties to batch and preprocess all necessary ZKPs in an offline phase, before the encrypted inputs and evaluation circuit are determined. In turn, this enables the system to effectively manage traffic spikes by exploiting idle periods to perform the ZKPs.
We build on a ring-based FHE scheme, specifically using the BGV scheme for clarity and concreteness. Nonetheless, the techniques also apply to the BFV, CKKS, and TFHE schemes.
Accelerating Hash-Based Polynomial Commitment Schemes with Linear Prover Time
Zero-knowledge proofs (ZKPs) are cryptographic protocols that enable one party to prove the validity of a statement without revealing any information beyond its truth. A central building block in many ZKPs is the polynomial commitment scheme (PCS), where constructions with \textit{linear-time provers} are especially attractive. Two such examples are Brakedown and its extension Orion, which enable linear-time and quantum-resistant proving by leveraging linear-time encodable Spielman codes. However, these PCS operate over large datasets, creating significant computational bottlenecks. For example, committing to and proving a degree-$2^{28}$ polynomial requires around 1.1 GB of data while taking 463 seconds on a high-end server CPU.
This work addresses the performance bottleneck in Orion-like PCS by optimizing their most critical operations: Spielman encoding and Merkle commitments. These operations involve gigabytes of data and suffer from random off-chip memory access patterns that drastically reduce off-chip bandwidth. We resolve this issue and introduce \textit{inverted expander graphs} to eliminate random writes and reduce off-chip memory accesses by over 50\%. Additionally, we propose an on-the-fly graph sampling method that avoids streaming large auxiliary data by generating expander graphs dynamically on-chip. We also provide a formal security proof for our proposed graph transformation. Beyond encoding, we accelerate Merkle Tree construction over large data sets through a scalable multi-pass SHA3 pipeline. Finally, we reutilize existing hardware components used in commitment to accelerate the so-called proximity and consistency checks during proof generation.
Building upon these concepts, we present the first hardware architecture for PCS -- with linear prover time -- on a Xilinx Alveo U280 FPGA. In addition, we discuss the practical challenges of manually partitioning, placing, and routing our large-scale architecture to efficiently map it to the multi-SLR and HBM-equipped FPGA. The final implementation achieves a speedup of two orders of magnitude for full proof generation, covering commitment and proving steps. When combined with Virgo as an outer CP-SNARK protocol, our accelerator reduces end-to-end latency by up to 3.85$\times$ --- close to the theoretical maximum of 3.9$\times$.
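A minimal software reference for the Merkle commitment being accelerated: a binary hash tree over data chunks using SHA3-256 (the accelerator pipelines these hash calls over multiple passes):

    import hashlib

    def sha3(b: bytes) -> bytes:
        return hashlib.sha3_256(b).digest()

    def merkle_root(leaves):
        level = [sha3(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:                # duplicate the last node if odd
                level.append(level[-1])
            level = [sha3(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]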
Fast Plaintext-Ciphertext Matrix Multiplication from Additively Homomorphic Encryption
Plaintext-ciphertext matrix multiplication (PC-MM) is an indispensable tool in privacy-preserving computations such as secure machine learning and encrypted signal processing. While there are many established algorithms for plaintext-plaintext matrix multiplication, efficiently computing plaintext-ciphertext (and ciphertext-ciphertext) matrix multiplication is an active area of research which has received a lot of attention. Recent literature has explored various techniques for privacy-preserving matrix multiplication using fully homomorphic encryption (FHE) schemes with ciphertext packing and Single Instruction Multiple Data (SIMD) processing. On the other hand, there has not been any attempt to speed up PC-MM using unpacked additively homomorphic encryption (AHE) schemes beyond the schoolbook method and Strassen's algorithm for matrix multiplication. In this work, we propose an efficient PC-MM from unpacked AHE, which applies Cussen's compression-reconstruction algorithm for plaintext-plaintext matrix multiplication in the encrypted setting. We experimentally validate our proposed technique using a concrete instantiation with the additively homomorphic elliptic curve ElGamal encryption scheme and its software implementation on a Raspberry Pi 5 edge computing platform. Our proposed approach achieves up to an order of magnitude speedup compared to state-of-the-art for large matrices with relatively small element bit-widths. Extensive measurement results demonstrate that our fast PC-MM is an excellent candidate for efficient privacy-preserving computation even in resource-constrained environments.
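The call pattern of PC-MM over an unpacked additively homomorphic scheme: each output entry needs only ciphertext additions and plaintext-scalar multiplications. The MockAHE class is an insecure stand-in for, e.g., additively homomorphic elliptic curve ElGamal, and the loop is schoolbook, not the paper's compression-reconstruction algorithm:

    class MockAHE:                    # placeholder only, no security whatsoever
        def enc(self, m): return ["ct", m]
        def add(self, c1, c2): return ["ct", c1[1] + c2[1]]
        def scalar_mul(self, k, c): return ["ct", k * c[1]]
        def dec(self, c): return c[1]

    def pc_matmul(A, ct_B, ahe):      # A is plaintext, ct_B entry-wise encrypted
        n, m, p = len(A), len(ct_B), len(ct_B[0])
        out = []
        for i in range(n):
            row = []
            for j in range(p):
                acc = ahe.enc(0)
                for k in range(m):    # homomorphic inner product of row i, column j
                    acc = ahe.add(acc, ahe.scalar_mul(A[i][k], ct_B[k][j]))
                row.append(acc)
            out.append(row)
        return out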
REGKYC: Supporting Privacy and Compliance Enforcement for KYC in Blockchains
Know Your Customer (KYC) is a core component of the Anti-Money Laundering (AML) framework, designed to prevent illicit activities within financial systems. However, enforcing KYC and AML on blockchains remains challenging due to difficulties in establishing accountability and preserving user privacy. This study proposes REGKYC, a privacy-preserving Attribute-Based Access Control (ABAC) framework that balances user privacy with externally mandated KYC and AML requirements. REGKYC leverages a structured ABAC model to support the flexible verification of KYC attributes and the enforcement of compliance policies, providing benefits to multiple stakeholders. First, it enables legitimate users to meet compliance requirements while preserving the privacy of their on-chain activities. Second, it empowers Crypto-asset Service Providers (CASPs) to tailor compliance policies to operational needs, ensuring adaptability to evolving regulations. Finally, it enhances regulatory accountability by enabling authorized deanonymization of malicious actors. We hope this work inspires future research to harmonize user privacy and regulatory compliance in blockchain systems.
Efficient Permutation Correlations and Batched Random Access for Two-Party Computation
In this work we formalize the notion of a two-party permutation correlation $(A, B), (C, \pi)$ s.t. $\pi(A)=B+C$ for a random permutation $\pi$ of $n$ elements and vectors $A,B,C\in \mathbb{F}^n$. This correlation can be viewed as an abstraction and generalization of the Chase et al. (Asiacrypt 2020) share translation protocol. We give a systematization of knowledge for how such a permutation correlation can be derandomized to allow the parties to perform a wide range of oblivious permutations of secret-shared data. This systematization immediately enables the translation of various popular honest-majority protocols to be efficiently instantiated in the two-party setting, e.g. collaborative filtering, sorting, database joins, graph algorithms, and many more.
We give two novel protocols for efficiently generating a random permutation correlation. The first uses MPC-friendly PRFs to generate a correlation of $n$ elements, each of size $\ell=\log|\mathbb{F}|$ bits, with $O(n\ell)$ bit-OTs, time, communication, and only 3 rounds including setup. Similar asymptotics previously required relatively expensive public-key cryptography, e.g. Paillier or LWE. Our protocol implementation for $n=2^{20},\ell=128$ requires just 7 seconds and $\sim2\ell n$ OTs of communication, a respective $40\times$ and $1.1\times$ improvement on the LWE solution of Juvekar et al. (CCS 2018). The second protocol is based on pseudo-random correlation generators and achieves an overhead that is sublinear in the string length $\ell$, i.e. the communication and number of OTs is $O(n\log \ell)$. The overhead of the latter protocol has larger hidden constants, and therefore it is more efficient only when long strings are permuted, e.g. in graph algorithms.
Finally, we present a suite of highly efficient protocols based on permutations for performing various batched random access operations. These include the ability to extract a hidden subset of a secret-shared list. More generally, we give ORAM-like protocols for obliviously reading and writing from a list in a batched manner. We argue that this suite of batched random access protocols should be a first class primitive in the MPC practitioner's toolbox.
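A dealer-generated instance of the correlation defined above, as a sanity check of the invariant $\pi(A) = B + C$:

    import random

    F, n = 2**61 - 1, 8
    pi = list(range(n)); random.shuffle(pi)            # pi held by party 1
    A = [random.randrange(F) for _ in range(n)]        # held by party 0
    C = [random.randrange(F) for _ in range(n)]        # held by party 1
    B = [(A[pi[i]] - C[i]) % F for i in range(n)]      # held by party 0

    assert all(A[pi[i]] == (B[i] + C[i]) % F for i in range(n))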
Security of the Ascon Authenticated Encryption Mode in the Presence of Quantum Adversaries
We examine the post-quantum security of the Ascon authenticated encryption (AE) mode. In spite of comprehensive research on Ascon's classical security, the potential impact of quantum adversaries on Ascon has not yet been explored much. We investigate the generic security of the Ascon AE mode in the setting where the adversary owns a quantum computer to improve its attack, while the adversarial encryption or decryption queries are still classical. In this so-called Q1 model, Ascon achieves security up to approximately $\min\{2^{c/3},2^{k/2}\}$ evaluations, where $c$ is the capacity, $k$ the key size, and the adversary is block-wise adaptive but restricted to one forgery attempt. Our technique is based on applying the semi-classical one-way to hiding (O2H) lemma, and on tailoring the puncture set to the Ascon mode.
Additionally, we discuss different parameter choices for Ascon and compare our results to generic quantum attacks, such as Grover-based key search and state recovery.
FINAL bootstrap acceleration on FPGA using DSP-free constant-multiplier NTTs
This work showcases Quatorze-bis, a state-of-the-art Number Theoretic Transform circuit for TFHE-like cryptosystems on FPGAs. It contains a novel design for modular multiplication by a constant modulo a constant modulus. This design does not require any DSP units or any dedicated multiplier unit, nor does it require extra logic when compared to state-of-the-art modular multipliers. Furthermore, we present an implementation of a constant-multiplier Number Theoretic Transform design for TFHE-like schemes. Lastly, we use this Number Theoretic Transform design to implement a FINAL hardware accelerator for the AMD Alveo U55C, which improves the throughput metric of TFHE-like cryptosystems on FPGAs by a factor of 9.28x over Li et al.'s NFP CHES 2024 accelerator and by 10-25% over the absolute state-of-the-art design FPT, while using one third of FPT's DSPs.
Proofs of Useful Work from Arbitrary Matrix Multiplication
We revisit the longstanding open problem of implementing Nakamoto's proof-of-work (PoW) consensus based on a real-world computational task $T(x)$ (as opposed to artificial random hashing), in a truly permissionless setting where the miner itself chooses the input $x$. The challenge in designing such a Proof-of-Useful-Work (PoUW) protocol is using the native computation of $T(x)$ to produce a PoW certificate with prescribed hardness and with negligible computational overhead over the worst-case complexity of $T(\cdot)$ -- this ensures malicious miners cannot "game the system" by fooling the verifier into accepting with higher probability compared to honest miners (while using similar computational resources). Indeed, obtaining a PoUW with an $O(1)$-factor overhead is trivial for any task $T$, but also useless.
Our main result is a PoUW for the task of Matrix Multiplication $\mathsf{MatMul}(A,B)$ of arbitrary matrices with $1+o(1)$ multiplicative overhead compared to naïve $\mathsf{MatMul}$ (even in the presence of Fast Matrix Multiplication-style algorithms, which are currently impractical). We conjecture that our protocol has optimal security in the sense that a malicious prover cannot obtain any significant advantage over an honest prover. This conjecture is based on reducing the hardness of our protocol to the task of solving a batch of low-rank random linear equations, which is of independent interest.
Since $\mathsf{MatMul}$s are the bottleneck of AI compute as well as countless industry-scale applications, this primitive suggests a concrete design of a new L1 base-layer protocol, which nearly eliminates the energy waste of Bitcoin mining -- allowing GPU consumers to reduce their AI training and inference costs by "re-using" it for blockchain consensus, in exchange for block rewards (2-for-1). This blockchain is currently under construction.
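As a classical point of reference for verifying $\mathsf{MatMul}$ through random linear equations (this is Freivalds' check, not the paper's PoUW protocol): a claimed product $C = AB$ is tested with one random vector in $O(n^2)$ time.

    import random

    def freivalds(A, B, C, p=2**61 - 1):
        # Always accepts a correct product; a wrong C passes with prob <= 1/p.
        n = len(A)
        r = [random.randrange(p) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) % p for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) % p for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) % p for i in range(n)]
        return ABr == Cr              # O(n^2) work instead of O(n^3)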
Keyed-Verification Anonymous Credentials with Highly Efficient Partial Disclosure
An anonymous credential (AC) system with partial disclosure allows users to prove possession of a credential issued by an issuer while selectively disclosing a subset of their attributes to a verifier in a privacy-preserving manner. In keyed-verification AC (KVAC) systems, the issuer and verifier share a secret key. Existing KVAC schemes rely on computationally expensive zero-knowledge proofs during credential presentation, with the presentation size growing linearly with the number of attributes. In this work, we propose two highly efficient KVAC constructions that eliminate the need for zero-knowledge proofs during the credential presentation and achieve constant-size presentations.
Our first construction adapts the approach of Fuchsbauer et al. (JoC'19), which achieved constant-size credential presentation in a publicly verifiable setting using their proposed structure-preserving signatures on equivalence classes (SPS-EQ) and set commitment schemes, to the KVAC setting. We introduce structure-preserving message authentication codes on equivalence classes (SP-MAC-EQ) and designated-verifier set commitments (DVSC), resulting in a KVAC system with constant-size credentials (2 group elements) and presentations (5 group elements). To avoid the bilinear groups and pairing operations required by SP-MAC-EQ, our second construction uses a homomorphic MAC with a simplified DVSC. While this sacrifices constant-size credentials ($n+2$ group elements, where $n$ is the number of attributes), it retains constant-size presentations (2 group elements) in a pairingless setting.
We formally prove the security of both constructions and provide open-source implementation results demonstrating their practicality. We extensively benchmarked our KVAC protocols and, additionally, benchmarked the efficiency of our SP-MAC-EQ scheme against the original SPS-EQ scheme, showcasing significant performance improvements.
Publicly Verifiable Generalized Secret Sharing Schemes and Their Applications
Generalized secret sharing (GSS) enables flexible access control in distributed systems by allowing secrets to be shared across arbitrary monotone access structures. However, its adoption in transparent and trustless environments is hindered by its reliance on trusted participants and secure communication channels. This reliance restricts GSS's ability to provide flexible control in the presence of adversaries. In this paper, we propose publicly verifiable generalized secret sharing (PVGSS), a scheme that allows distributing a secret using generalized access structures while enabling non-interactive public verifiability of participant honesty. The PVGSS scheme offers significant potential to advance the fields of blockchain and applied cryptography, such as attribute-based cryptosystems, fine-grained MPC, and on-chain secret escrow. Intuitively, we first build two GSS schemes by leveraging recursive Shamir secret sharing and a linear secret sharing scheme. Then, we encrypt the GSS shares and generate the corresponding non-interactive zero-knowledge proofs. Further, we innovatively adopt the GSS reconstruction algorithm to prove that all the encrypted shares are bound to the dealer's secret with only $O(n)$ verification complexity. To exemplify the practical applicability, we implement a decentralized exchange (DEX) protocol, where fairness and accountable arbitration are considered. Our benchmarks on the BN128 curve demonstrate the computational efficiency of PVGSS schemes, while Ethereum gas cost analysis confirms the viability of the DEX implementation.
Bitcoin-Enhanced Proof-of-Stake Security: Possibilities and Impossibilities
Bitcoin is the most secure blockchain in the world, supported by the immense hash power of its Proof-of-Work miners. Proof-of-Stake chains are energy-efficient and have fast finality, but face several security issues: susceptibility to non-slashable long-range safety attacks, low liveness resilience, and difficulty bootstrapping from low token valuation. We show that these security issues are inherent in any PoS chain without an external trusted source, and propose a new protocol, Babylon, where an off-the-shelf PoS protocol checkpoints onto Bitcoin to resolve these issues. An impossibility result justifies the optimality of Babylon. A use case of Babylon is to reduce the stake withdrawal delay: our experimental results show that this delay can be reduced from weeks in existing PoS chains to less than 5 hours using Babylon, at a transaction cost of less than 10K USD per annum for posting the checkpoints onto Bitcoin.
Combining Outputs of a Random Permutation: New Constructions and Tight Security Bounds by Fourier Analysis
We consider constructions that combine outputs of a single permutation $\pi:\{0,1\}^n \rightarrow \{0,1\}^n$ using a public function. These are popular constructions for achieving security beyond the birthday bound when implementing a pseudorandom function using a block cipher (i.e., a pseudorandom permutation). One of the best-known constructions (denoted SXoP$[2,n]$) XORs the outputs of 2 domain-separated calls to $\pi$.
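For concreteness, here is a toy sketch of SXoP$[2,n]$ with a small random permutation standing in for $\pi$; using the low input bit as the domain separator is one common convention, assumed for this sketch.

```python
import random

n = 8                            # toy block size in bits
rng = random.Random(1)
perm = list(range(1 << n))
rng.shuffle(perm)                # stands in for a uniformly random permutation pi

def sxop2(x):
    # XOR of two domain-separated calls to pi; the low input bit is the
    # domain separator (one common convention, assumed here)
    return perm[(x << 1) | 0] ^ perm[(x << 1) | 1]

outputs = [sxop2(x) for x in range(1 << (n - 1))]
# Unlike pi itself, the combined construction can collide, behaving like a
# random function rather than a permutation:
print(len(outputs), len(set(outputs)))
```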
Modeling $\pi$ as a uniformly chosen permutation, several previous works proved a tight information-theoretic indistinguishability bound for SXoP$[2,n]$ of about $q/2^{n}$, where $q$ is the number of queries. However, tight bounds are unknown for the generalized variant (denoted SXoP$[r,n]$) which XORs the outputs of $r \geq 2$ domain-separated calls to a uniform permutation.
In this paper, we obtain two results. Our first result improves the known bounds for SXoP$[r,n]$ for all (constant) $r \geq 3$ (assuming $q \leq O(2^n/r)$ is not too large) in both the single-user and multi-user settings. In particular, for $r=3$, our bound is about $\sqrt{u}q_{\max}/2^{2.5n}$ (where $u$ is the number of users and $q_{\max}$ is the maximal number of queries per user), improving the best-known previous result by a factor of at least $2^n$.
For odd $r$, our bounds are tight for $q > 2^{n/2}$, as they match known attacks. For even $r$, we prove that our single-user bounds are tight by providing matching attacks.
Our second and main result is divided into two parts. First, we devise a family of constructions that output $n$ bits by efficiently combining outputs of 2 calls to a permutation on $\{0,1\}^n$, and achieve multi-user security of about $\sqrt{u} q_{\max}/2^{1.5n}$. Then, inspired by the CENC construction of Iwata [FSE'06], we further extend this family to output $2n$ bits by efficiently combining outputs of 3 calls to a permutation on $\{0,1\}^n$. The extended construction has similar multi-user security of $\sqrt{u} q_{\max}/2^{1.5n}$.
The new single-user ($u=1$) bounds of $q/2^{1.5n}$ for both families should be contrasted with the previously best-known bounds of $q/2^n$, obtained by the comparable constructions of SXoP$[2,n]$ and CENC.
All of our bounds are proved by Fourier analysis, extending the provable security toolkit in this domain in multiple ways.
Strong keys for tensor isomorphism cryptography
Sampling a non-degenerate (that is, invertible) square matrix over a finite field is easy: draw a random square matrix and discard it if the determinant is zero. We address the problem in higher dimensions and sample non-degenerate boundary format tensors, which generalise square matrices. Testing degeneracy is conjectured to be hard in more than two dimensions, precluding the "draw a random tensor and discard if degenerate" recipe. The difficulty lies in computing hyperdeterminants, the higher-dimensional analogues of determinants. Instead, we start with a structured random non-degenerate tensor and scramble it by infusing more randomness while preserving non-degeneracy. We propose two kinds of scrambling. The first is multiplication in each dimension by random invertible matrices, which preserves dimension and format. Assuming pseudorandomness of this action, which also underlies tensor isomorphism based cryptography, our samples are computationally indistinguishable from uniform non-degenerate tensors. The second scrambling employs tensor convolution (which generalises multiplication by matrices) and can increase dimension. Inspired by the multiplicativity of hyperdeterminants, we devise a recursive sampler that uses tensor convolution to reduce the problem from arbitrary dimension to three dimensions.
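A minimal sketch of the first kind of scrambling, assuming a toy prime field and a placeholder starting tensor (the paper's structured non-degenerate starting tensor is not reproduced here); the point is the action by invertible matrices in each dimension, which preserves non-degeneracy.

```python
import numpy as np

p = 101                                  # toy prime field modulus (assumption)
rng = np.random.default_rng(0)

def invertible_mod_p(m):
    """Exact invertibility test over GF(p) via Gaussian elimination."""
    m = [[int(v) % p for v in row] for row in m]
    n = len(m)
    for col in range(n):
        piv = next((r for r in range(col, n) if m[r][col]), None)
        if piv is None:
            return False
        m[col], m[piv] = m[piv], m[col]
        inv = pow(m[col][col], -1, p)
        for r in range(col + 1, n):
            f = m[r][col] * inv % p
            m[r] = [(v - f * w) % p for v, w in zip(m[r], m[col])]
    return True

def random_gl(n):
    # Rejection sampling: a random matrix over GF(p) is invertible with
    # probability about 1 - 1/p, so few draws are needed.
    while True:
        m = rng.integers(0, p, size=(n, n))
        if invertible_mod_p(m):
            return m

# Scramble a tensor of boundary format (2k+1) x (k+1) x (k+1), here k = 2,
# by the GL action in each dimension. The random starting tensor below is a
# stand-in for the paper's structured non-degenerate construction.
d1, d2, d3 = 5, 3, 3
T = rng.integers(0, p, size=(d1, d2, d3))
A, B, C = random_gl(d1), random_gl(d2), random_gl(d3)
T_scrambled = np.einsum('ia,jb,kc,abc->ijk', A, B, C, T) % p
```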
Our sampling is a candidate solution for drawing public keys in tensor isomorphism based cryptography, since non-degenerate tensors elude recent weak-key attacks targeting public-key tensors that either contain geometric structures such as "triangles" or are deficient in tensor rank. To accommodate our sampling, tensor isomorphism based schemes need to be instantiated in boundary formats such as $(2k+1) \times (k+1) \times (k+1)$, away from the more familiar $k \times k \times k$ cubic formats. Our sampling (along with the recent tensor trapdoor one-way functions) makes an enticing case for transitioning tensor isomorphism cryptography to boundary format tensors, which are the true analogues of square matrices.
Post Quantum Cryptography (PQC) Signatures Without Trapdoors
Some of our current public-key methods rely on a trapdoor to implement digital signatures. This includes RSA, which uses Fermat's little theorem to support the creation and verification of a digital signature. The problem with a trapdoor is that the underlying trapdoor method could, in the end, be discovered. With the rise of PQC (Post Quantum Cryptography), we will see a range of methods that do not use trapdoors and that provide stronger proofs of security. In this case, we use hash-based signatures (as used in SPHINCS+) and Fiat-Shamir signatures built from Zero Knowledge Proofs (as used in Dilithium).
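As a minimal example of this trapdoor-free hash-based family, here is a Lamport one-time signature in Python. SPHINCS+ builds far more machinery (Merkle trees, few-time signatures) on the same idea; note that a Lamport key pair must sign only a single message.

```python
import hashlib, secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of random preimages; the public key is their hashes.
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[H(s) for s in pair] for pair in sk]
    return sk, pk

def sign(sk, msg: bytes):
    digest = H(msg)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    # Reveal one preimage per message-digest bit. No trapdoor anywhere:
    # security rests only on the one-wayness of the hash function.
    return [sk[i][b] for i, b in enumerate(bits)]

def verify(pk, msg: bytes, sig):
    digest = H(msg)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits))

sk, pk = keygen()
sig = sign(sk, b"hello")
assert verify(pk, b"hello", sig)
assert not verify(pk, b"tampered", sig)
```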
Myco: Unlocking Polylogarithmic Accesses in Metadata-Private Messaging
As billions of people rely on end-to-end encrypted messaging, the exposure of metadata, such as communication timing and participant relationships, continues to deanonymize users. Asynchronous metadata-hiding solutions with strong cryptographic guarantees have historically been bottlenecked by quadratic $O(N^2)$ server computation in the number of users $N$ due to reliance on private information retrieval (PIR). We present Myco, a metadata-private messaging system that preserves strong cryptographic guarantees while achieving $O(N \log^2 N)$ efficiency. To achieve this, we depart from PIR and instead introduce an oblivious data structure through which senders and receivers privately communicate. To unlink reads and writes, we instantiate Myco in an asymmetric two-server distributed-trust model where clients write messages to one server tasked with obliviously transmitting these messages to another server, from which clients read. Myco achieves throughput improvements of up to 302x over multi-server and 2,219x over single-server state-of-the-art systems based on PIR.
The Role of Quantum Computing in Enhancing Encryption Security: A Review
This paper examines how quantum computing can enhance encryption systems. It studies the relationship between cryptography and quantum physics. The paper considers the historical progression of encryption techniques, paying attention to the changing nature of security challenges. Moreover, it analyzes the basic principles of quantum computing, describing its theoretical concepts and its potential performance advantages over classical systems. It also offers an in-depth analysis of Grover's Algorithm, showing its unique searching capability and its implications for encryption schemes. The paper further reviews the limitations of Grover's Algorithm that could make it vulnerable to attacks, and how to make it safer. In addition, the methods by which quantum computing can create strong encryption are briefly outlined. Overall, in the quest for secure communication systems in the quantum era, the paper looks ahead to the paradigm shift promised by the emergence of Quantum-Powered Encryption Systems. It answers key questions about this subject through both qualitative and quantitative analysis. The provided report supplements the existing body of knowledge about the relationship between quantum computers and encryption systems and sets the foundation for better and more secure encryption for the digital world.
Breaking ECDSA with Two Affinely Related Nonces
The security of the Elliptic Curve Digital Signature Algorithm (ECDSA) depends on the uniqueness and secrecy of the nonce used in each signature. While it is well understood that reusing a nonce $k$ across two distinct messages leaks the private key, we show that even when two distinct nonces $k_1$ and $k_2$ are used, a known affine relationship of the form $k_2 = a \cdot k_1 + b$ also allows recovery of the private key. Our method requires only two signatures (even over the same message) and relies purely on algebra, with no need for lattice reduction or brute-force search, provided the relationship (i.e., the coefficients $a$ and $b$) is known. To our knowledge, this is the first closed-form derivation of an ECDSA private key from only two signatures over the same message under a known affine relationship between nonces.
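The recovery algebra is simple enough to demonstrate end-to-end. The sketch below works purely modulo the group order: it simulates two signatures with affinely related nonces and recovers the key via the closed form $d = (s_1 h_2 - a s_2 h_1 - b s_1 s_2) \cdot (a s_2 r_1 - s_1 r_2)^{-1} \bmod q$. The $r_i$ are drawn at random as stand-ins for the x-coordinates of $k_i G$, since only the modular algebra matters here.

```python
import secrets

# Toy demonstration over the group order q of secp256k1. In real ECDSA, r_i is
# the x-coordinate of k_i*G; random stand-ins suffice for the algebra.
q = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

d  = 1 + secrets.randbelow(q - 1)       # private key
k1 = 1 + secrets.randbelow(q - 1)       # first nonce
a, b = 3, 42                            # known affine relation k2 = a*k1 + b
k2 = (a * k1 + b) % q

h1, h2 = secrets.randbelow(q), secrets.randbelow(q)   # message hashes
r1, r2 = secrets.randbelow(q), secrets.randbelow(q)   # stand-ins for x(k_i*G)

# Standard ECDSA signing equations: s_i = k_i^{-1} (h_i + r_i * d) mod q
s1 = pow(k1, -1, q) * (h1 + r1 * d) % q
s2 = pow(k2, -1, q) * (h2 + r2 * d) % q

# Eliminating k1 from the two signing equations yields the closed form:
num = (s1 * h2 - a * s2 * h1 - b * s1 * s2) % q
den = (a * s2 * r1 - s1 * r2) % q
recovered = num * pow(den, -1, q) % q
assert recovered == d
```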
Breaking verifiability and vote privacy in CHVote
CHVote is one of the two main electronic voting systems developed in the context of political elections in Switzerland, where regulation requires a specific setting and specific trust assumptions. We show that CHVote in fact fails to achieve vote secrecy and individual verifiability (here, recorded-as-intended) as soon as one of the online components is dishonest, contradicting the security claims of CHVote. In total, we found 9 attacks or variants against CHVote, two of them based on a bug in the reference implementation. We confirmed our findings through a proof-of-concept implementation of our attacks.
Grafting: Decoupled Scale Factors and Modulus in RNS-CKKS
The CKKS Fully Homomorphic Encryption (FHE) scheme enables approximate arithmetic on encrypted complex numbers to a desired precision. Most implementations use RNS with carefully chosen parameters to balance precision, efficiency, and security. However, a key limitation in RNS-CKKS is the rigid coupling between the scale factor, which determines numerical precision, and the modulus, which ensures security. Since these parameters serve distinct roles (one governing arithmetic correctness, the other defining cryptographic structure), this dependency imposes design constraints, such as a lack of suitable NTT primes and limited precision flexibility, ultimately leading to inefficiencies.
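To see the coupling concretely: a CKKS multiplication squares the scale, and rescaling divides both the ciphertext modulus and the scale by a dropped RNS prime $q_i$, so keeping the scale near $\Delta$ forces every $q_i$ to sit near $\Delta$. A toy numeric sketch (the prime value is a hypothetical stand-in, not checked for primality or NTT-friendliness):

```python
delta = 2**40                      # target scale factor
q_i = 2**40 + 9                    # stand-in for an NTT prime near delta
                                   # (hypothetical value, primality not checked)

scale = delta
scale = scale * delta              # multiplication: scale becomes delta^2
scale = scale / q_i                # rescale: modulus and scale both drop by q_i

# The ratio stays ~1.0 only because q_i was chosen near delta; any mismatch
# accumulates as precision drift -- the rigid coupling Grafting removes.
print(scale / delta)
```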
We propose Grafting, a novel approach to decouple scale factors from the modulus by introducing (universal) sprouts, reusable modulus factors that optimize word-sized packing while allowing flexible rescaling. With the universal property, sprouts allow rescaling by arbitrary bit-lengths and key-switching at any modulus bit-length without requiring additional key-switching keys. Decoupling the scale factor from the modulus in Grafting yields significant efficiency gains: (1) Optimized RNS packing by decomposing the modulus into machine word-sized components, accelerating computations and reducing the ciphertext and encryption/evaluation key sizes; and (2) A freely adjustable scale factor independent of the modulus, unifying the ring structure across applications and reducing modulus consumption through adaptive scalings.
Our experiments demonstrate that Grafting improves performance across standard SHE/FHE parameter sets for ring dimensions $2^{14}$-$2^{16}$ by up to $1.83\times$ and $2.01\times$ for key-switchings and multiplications, respectively, and up to $1.92\times$ for bootstrapping. Grafting also reduces public key and ciphertext sizes by up to $62\%$ without compression, maintaining the same number of public keys as before. As an application, we showcase the CKKS gate bootstrapping for bits (Bae et al.; Eurocrypt'24), achieving a $1.89\times$ speed-up due to the reduced number of RNS factors. Finally, we revisit the homomorphic comparison (Cheon et al.; Asiacrypt'20), evaluating it with carefully chosen scale factors for each iteration and reporting up to $204$ bits less modulus consumption (a $27\%$ reduction) in the standard parameter set, without precision loss.
Reducing Honest Re-Encryption Attack to Chosen Ciphertext Attack
Proxy re-encryption (PRE) schemes allow a delegator to designate a proxy to re-encrypt its ciphertext into a ciphertext that the delegatee can decrypt, while the proxy learns nothing useful from the transformation. This transfer of decryption power suits applications such as encrypted email forwarding, key escrow, and publish/subscribe systems.
The security notions for PRE are inherited from standard public key encryption (PKE), i.e., indistinguishability under chosen-plaintext attacks (CPA) and security under chosen-ciphertext attacks (CCA). A recently popular notion, indistinguishability under honest re-encryption attacks (HRA), was proposed by Cohen in 2019; it indicates that CPA security is insufficient for PRE, because some CPA-secure PRE schemes leak the secret key of the delegator. Many post-quantum secure PRE schemes have recently been designed under the HRA security model.
However, HRA security differs from traditional CCA security, and there is no known reduction between them; existing results suggest the two are incompatible. This paper aims to bridge these two security notions via reductions. In addition, we find that many existing HRA-secure schemes are vulnerable to collusion. We provide a generic transformation from a CPA-secure PRE to a collusion-resistant and CPA-secure PRE; this transformation also applies to HRA-secure PREs.
Priv-PFL: A Privacy-Preserving and Efficient Personalized Federated Learning Approach
Federated Learning (FL) allows clients to engage in learning without revealing their raw data. However, traditional FL focuses on developing a single global model for all clients, limiting their ability to have personalized models tailored to their specific needs. Personalized FL (PFL) enables clients to obtain customized models, either with or without a central party. Current PFL research includes mechanisms to detect poisoning attacks, in which a few malicious nodes try to manipulate training convergence by submitting misleading data. However, these detection approaches often overlook privacy concerns, as they require clients to share their models with all other clients.
This paper extends BALANCE, a personalized poisoning detection mechanism based on client models and their expectations. Our method enhances both security and privacy by ensuring clients are not required to share their model data with other clients. By leveraging server-assisted PFL and Fully Homomorphic Encryption (FHE), we enable a central party to identify unpoisoned clients from the perspective of individual clients and train personalized models securely. Additionally, we introduce an efficient personalized client selection algorithm that prevents redundant checks and ensures the inheritance of unpoisoned clients.
Two Party Secret Shared Joins
We present concrete techniques for adapting the protocols of Mohassel et al. (CCS 2020) and Badrinarayanan et al. (CCS 2022) for computing SQL-like query operations on secret-shared database tables to the two-party setting. The aforementioned protocols are presented in a generic setting with access to certain idealized functionalities, e.g., secret-shared permutations. However, they are only instantiated in the honest-majority three-party setting, other settings having been considered too inefficient. We show that this is no longer the case. In particular, the recent work of Peceny et al. (eprint 2024) gives a concretely efficient two-party permutation protocol. Additionally, we give a new and highly efficient protocol for evaluating the strong PRF recently proposed by Alamati et al. (Crypto 2024). Building on these advancements, along with a variety of protocol improvements and significant cryptographic engineering, our open-source implementation demonstrates concretely efficient two-party SQL-like querying on secret-shared data.
We focus on the two-party setting with secret-shared input and output tables. The first protocol $\Pi_\textsf{Join-OO}$ is designed for the setting where the join keys are unique, similar to Private Set Intersection (PSI) except that the inputs and output are secret shared. This protocol runs in a constant number of rounds with $O(n)$ running time. The second protocol $\Pi_\textsf{Join-OM}$ allows one of the tables to contain repeated join keys. Our instantiation achieves $O(n\log n)$ running time and $O(\log n)$ rounds of interaction.
Hermes: Efficient and Secure Multi-Writer Encrypted Database
Searchable encryption (SE) enables privacy-preserving keyword search on encrypted data. Public-key SE (PKSE) supports multi-user searches but suffers from high search latency due to expensive public-key operations. Symmetric SE (SSE) offers sublinear search but is mainly limited to single-user settings. Recently, hybrid SE (HSE) has combined SSE and PKSE to achieve the best of both worlds, including multi-writer encrypted search functionalities, forward privacy, and sublinear search with respect to database size. Despite its advantages, HSE inherits critical security limitations, such as susceptibility to dictionary attacks, and still incurs significant overhead for search access control verification, requiring costly public-key operations (i.e., pairings) across all authorized keywords. Additionally, its search access control component must be rebuilt periodically for forward privacy, imposing substantial writer overhead.
In this paper, we propose Hermes, a new HSE scheme that addresses the aforementioned security issues in prior HSE designs while maintaining minimal search complexity and user efficiency at the same time. Hermes enables multi-writer encrypted search functionalities and offers forward privacy along with resilience to dictionary attacks. To achieve this, we develop a new identity-based encryption scheme with hidden identity and key-aggregate properties, which could be of independent interest. We also design novel partitioning and epoch encoding techniques in Hermes to minimize search complexity and offer low user overhead in maintaining forward privacy. We conducted intensive experiments to assess and compare the performance of Hermes and its counterpart on commodity hardware. Experimental results showed that Hermes performs search one to two orders of magnitude faster than the state-of-the-art HSE while offering stronger security guarantees to prevent dictionary and injection attacks.
Optimizing Final Exponentiation for Pairing-Friendly Elliptic Curves with Odd Embedding Degrees Divisible by 3
In pairing-based cryptography, the final exponentiation with a large fixed exponent is crucial for ensuring unique outputs in both Tate and optimal ate pairings. While significant strides have been made in optimizing elliptic curves with even embedding degrees, progress remains limited for curves with odd embedding degrees, especially those divisible by $3$. This paper introduces novel techniques to optimize the computation of the final exponentiation for the optimal ate pairing on such curves. The first technique leverages the structure of certain existing seeds to enable the use of cyclotomic cubing and extends this concept to generate new seeds with similar characteristics.
The second technique focuses on producing new sparse ternary representation seeds to utilize cyclotomic cubing as a replacement for squaring. These approaches result in performance improvements of up to $19.3\%$ in the computation of the final exponentiation for the optimal ate pairing on $BLS15$ and $BLS27$ curves.
Fherret: Proof of FHE Correct-and-Honest Evaluation with Circuit Privacy from MPCitH
The major Fully Homomorphic Encryption (FHE) schemes guarantee the privacy of the encrypted message only in the honest-but-curious setting, when the server follows the protocol without deviating. However, various attacks in the literature show that an actively malicious server can recover sensitive information by executing incorrect functions, tampering with ciphertexts, or observing the client’s reaction during decryption.
Existing integrity solutions for FHE schemes either fail to guarantee circuit privacy, exposing the server's computations to the client, or introduce significant computational overhead on the prover by requiring proofs of FHE operations on ciphertexts.
In this work, we present Fherret, a novel scheme leveraging the MPC-in-the-Head (MPCitH) paradigm to provide a proof of correct-and-honest homomorphic evaluation while preserving circuit privacy. This proof guarantees that the client can safely decrypt the ciphertext obtained from the server without being susceptible to reaction-based attacks, such as verification and decryption oracle attacks. Additionally, this proof guarantees that the server’s evaluation maintains correctness, thereby protecting the client from $\mathsf{IND}\text{-}\mathsf{CPA}^{\mathsf{D}}$-style attacks.
Our solution achieves a prover overhead of $4\lambda$ homomorphic evaluations of random functions from the function space $\mathcal{F}$, while retaining a competitive verifier overhead of $2 \lambda$ homomorphic evaluations and a communication size proportional to $\sqrt{2\lambda}$ times the size of a function from $\mathcal{F}$.
Furthermore, Fherret is inherently parallelizable, achieving a parallel computation overhead similar to a homomorphic evaluation of a random function from $\mathcal{F}$ for both the prover and the verifier.
Projective Space Stern Decoding and Application to SDitH
We show that standard decoding algorithms for generic linear codes over a finite field can be sped up by a factor that is essentially the size of the finite field, by reducing decoding to a low-weight codeword problem and working in the relevant projective space. We apply this technique to SDitH and demonstrate that the parameters of the original submission fail to meet the security requirements set by NIST. However, the updated version, released after the discovery of this attack, is no longer vulnerable to it.
Threshold (Fully) Homomorphic Encryption
This document is a preliminary version of what is intended to be submitted to NIST by Zama as part of their threshold call. The document also serves as partial documentation of the protocols used in the Zama MPC system for threshold TFHE.
However, note that the Zama software includes many optimizations built on top of the simple specifications given here. In particular the TFHE parameters given here are larger than those used by the Zama software. This is because the Zama TFHE library contains optimizations which are beyond the scope of this document. Thus the parameters given in this document are compatible with the description of TFHE given here, and take no account of the extra optimizations in the Zama software.
Also note that we describe more protocols than those provided in the Zama software. In particular, this document describes threshold implementations of BGV and BFV, as well as MPC-in-the-Head based proofs of correct encryption.
We present mechanisms to perform robust threshold key generation and decryption for Fully Homomorphic Encryption schemes such as BGV, BFV and TFHE, in the case of a super honest majority (t < n/3 or t < n/4), in the presence of malicious adversaries.
The main mechanism for threshold decryption follows the noise flooding principle, which we argue is sufficient for BGV and BFV. For TFHE a more subtle technique is needed to apply noise flooding, since TFHE parameters are small. To deal with all three FHE schemes, and to obtain a unified framework, we are led to consider secret sharing over Galois Rings and not just finite fields.
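A toy sketch of noise flooding for threshold decryption of an additively shared LWE secret (illustrative parameters only; real schemes choose dimensions, sharing, and flooding bounds very differently, and the robust protocols here use sharing over Galois Rings rather than plain additive sharing):

```python
import secrets

Q = 2**62            # ciphertext modulus (toy value)
DELTA = 2**50        # plaintext scaling factor (toy value)
N_LWE = 16           # toy LWE dimension
PARTIES = 3
FLOOD = 2**40        # flooding bound: much larger than e, much smaller than DELTA

def cmod(x):         # centered representative in (-Q/2, Q/2]
    x %= Q
    return x - Q if x > Q // 2 else x

# Encrypt a 2-bit message m under a binary secret: b = <a, s> + e + DELTA*m
s = [secrets.randbelow(2) for _ in range(N_LWE)]
m = 3
a = [secrets.randbelow(Q) for _ in range(N_LWE)]
e = secrets.randbelow(7) - 3
b = (sum(ai * si for ai, si in zip(a, s)) + e + DELTA * m) % Q

# Additively share s among the parties
shares = [[secrets.randbelow(Q) for _ in range(N_LWE)] for _ in range(PARTIES - 1)]
shares.append([(si - sum(col)) % Q for si, col in zip(s, zip(*shares))])

# Each party releases <a, s_i> plus a large flooding term that statistically
# drowns the ciphertext noise e, so partial decryptions leak nothing about s
partial = [(sum(ai * ti for ai, ti in zip(a, sh)) +
            secrets.randbelow(2 * FLOOD) - FLOOD) % Q for sh in shares]

noisy = cmod(b - sum(partial))            # = DELTA*m + e - (sum of flood terms)
assert (noisy + DELTA // 2) // DELTA % 4 == m
```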
We consider two sets of threshold profiles, depending on whether binomial(n,t) is big or small. In the small case we obtain for all schemes an asynchronous protocol for robust threshold decryption, and we obtain a robust synchronous protocol for threshold key generation; both with t < n/3. For the large case we only support TFHE, and our protocols require an “offline phase” which requires synchronous networks and can “only” tolerate t < n/4.
The threshold key generation operation, and the above-mentioned offline phase, require access to a generic offline MPC functionality over arbitrary Galois Rings. This functionality is fully specified here. Finally, we present Zero-Knowledge proof techniques for proving the valid encryption of an FHE ciphertext. These proofs are important in a number of application contexts.
Mind the Grammar: Side-Channel Analysis driven by Grammatical Evolution
Deep learning-based side-channel analysis is an extremely powerful option for profiling side-channel attacks. However, to perform well, one needs to select the neural network model and training time hyperparameters carefully. While many works investigated these aspects, random search could still be considered the current state-of-the-art. Unfortunately, random search has drawbacks, since the chances of finding a good architecture significantly drop when considering more complex targets.
In this paper, we propose a novel neural architecture search approach for SCA based on grammatical evolution - SCAGE. We define a custom SCA grammar that allows us to find well-performing and potentially unconventional architectures. We conduct experiments on four datasets, considering both synchronized and desynchronized versions, as well as using feature intervals or raw traces. Our results show SCAGE to perform extremely well in all settings, outperforming random search and related works in most of the considered scenarios.
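To make the grammatical-evolution idea concrete, the sketch below maps an integer genome to an architecture string by letting successive codons pick productions of a grammar. The miniature grammar here is hypothetical, not the paper's SCA grammar.

```python
import random

# A toy SCA-flavored grammar (hypothetical; SCAGE defines its own grammar).
GRAMMAR = {
    "<net>":   [["<block>", "<net>"], ["<block>"]],
    "<block>": [["Conv(filters=<k>, kernel=<k>)"], ["Dense(units=<k>)"],
                ["AvgPool(size=<k>)"]],
    "<k>":     [["4"], ["8"], ["16"], ["32"]],
}

def derive(symbol, genome, pos=0, depth=0, max_depth=8):
    """Genotype-to-phenotype mapping of grammatical evolution: each codon in
    the integer genome selects one production for the current non-terminal."""
    if symbol not in GRAMMAR:
        return symbol, pos
    options = GRAMMAR[symbol]
    if depth >= max_depth:
        options = [options[-1]]          # force termination at the depth limit
    choice = options[genome[pos % len(genome)] % len(options)]
    pos += 1
    out = []
    for sym in choice:
        # expand any embedded non-terminals such as <k> inside a token
        while "<" in sym:
            start, end = sym.index("<"), sym.index(">") + 1
            expansion, pos = derive(sym[start:end], genome, pos, depth + 1, max_depth)
            sym = sym[:start] + expansion + sym[end:]
        out.append(sym)
    return " -> ".join(out), pos

genome = [random.randrange(256) for _ in range(32)]
architecture, _ = derive("<net>", genome)
print(architecture)   # e.g. "Conv(filters=8, kernel=16) -> Dense(units=32)"
```

In the full pipeline, the printed architecture string would be compiled into a trainable model and the genome evolved against profiling accuracy; this sketch shows only the grammar-driven mapping.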
SoK: PQC PAKEs - Cryptographic Primitives, Design and Security
Password Authenticated Key Exchange (PAKE) establishes secure communication channels using relatively short, often human memorable, passwords for authentication. The currently standardized PAKEs however rely on classical asymmetric (public key) cryptography. Thus, these classical PAKEs may become insecure, should the expected quantum threat become a reality. Despite the growing interest in realizing quantum-safe PAKEs, they did not receive much attention from the ongoing Post-Quantum Cryptography (PQC) integration efforts. Thus, there is a significant gap in awareness compared to PQC primitives subject to the official governmental and institutional standardization processes. In this work, we provide a comprehensive overview of the existing PQC PAKEs focusing on their design rationales, authentication methods and asymmetric key agreement primitives. Further, we classify PQC PAKEs w.r.t. their properties and security assurances. Finally, we address PAKE designs that are still unexplored in the PQC realm and discuss the possibility of their adaptation. Thus, we offer a detailed reference for future work on PQC PAKEs.
A Multi-Differential Approach to Enhance Related-Key Neural Distinguishers
At CRYPTO 2019, Gohr pioneered the integration of differential cryptanalysis with neural networks, demonstrating significant advantages over traditional distinguishers. Subsequently, at Inscrypt 2020, Su et al. proposed the concept of constructing polyhedral differential neural distinguishers by leveraging multiple effective input differences. More recently, at FSE 2024, Bellini et al. introduced a general-purpose tool for automating the training of single-key differential neural distinguishers for various block ciphers. Inspired by this body of work, we aim to extend automated search techniques to related-key differential neural distinguishers, enabling the discovery of effective input differences and key differences for such distinguishers. To achieve this, we employ a genetic optimization algorithm to identify effective differential combinations. To validate the efficacy of our method, we apply it to the Simeck and Simon cipher families, successfully identifying effective differential combinations for the three variants of Simeck and ten variants of Simon. Furthermore, inspired by the concept of polyhedral neural distinguishers, we adopt a novel data format that leverages multiple distinct input differences and key differences to construct positive and negative samples, providing the neural network with a richer set of features. Our approach not only identifies high-quality distinguishers for previously unexplored cipher variants but also achieves higher accuracy for related-key differential neural distinguishers compared to the state of the art.
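A minimal sketch of such a genetic search over (input difference, key difference) pairs; the fitness function is a cheap deterministic placeholder where the real pipeline would train a neural distinguisher on each candidate pair and return its accuracy.

```python
import random

WORD_BITS = 32          # e.g. one word of a toy Simon-like setting (assumption)

def fitness(in_diff, key_diff):
    # Placeholder objective. In the paper's pipeline this would train a
    # neural distinguisher on the candidate pair and return its accuracy;
    # here a crude low-Hamming-weight heuristic keeps the sketch runnable.
    return -bin(in_diff ^ key_diff).count("1")

def mutate(x):
    return x ^ (1 << random.randrange(WORD_BITS))   # flip one difference bit

def crossover(x, y):
    mask = random.getrandbits(WORD_BITS)
    return (x & mask) | (y & ~mask & (2**WORD_BITS - 1))

pop = [(random.getrandbits(WORD_BITS), random.getrandbits(WORD_BITS))
       for _ in range(20)]
for _ in range(50):
    pop.sort(key=lambda p: fitness(*p), reverse=True)
    elite = pop[:10]                                # truncation selection
    children = []
    for _ in range(10):
        (i1, k1), (i2, k2) = random.sample(elite, 2)
        children.append((mutate(crossover(i1, i2)), mutate(crossover(k1, k2))))
    pop = elite + children

pop.sort(key=lambda p: fitness(*p), reverse=True)
print("best differential combination:", [hex(v) for v in pop[0]])
```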
Faster amortized bootstrapping using the incomplete NTT for free
Amortized bootstrapping techniques have been proposed for FHEW/TFHE to efficiently refresh multiple ciphertexts simultaneously within a polynomial modulus. Although recent proposals have very efficient asymptotic complexity, reducing the amortized cost essentially to $\tilde{O}(1)$ FHE multiplications, the practicality of such algorithms still suffers from substantial overhead and high decryption failure rates (DFR). In this study, we improve upon one of the state-of-the-art amortized bootstrapping algorithms (Guimarães et al., ASIACRYPT 2023) for FHEW/TFHE-like schemes by introducing an alternative algorithmic strategy. Specifically, we combine Guimarães et al.'s strategy based on a two-part NTT with an incomplete Number Theoretic Transform (NTT) algorithm. As a result, we demonstrate a 2.12$\times$ speedup compared to the algorithm of Guimarães et al. and a $1.12\times$ improvement over the state-of-the-art (sequential) TFHE-rs while achieving a DFR close to $2^{-32}$ for 7-bit messages, although the DFR is higher for 8-bit messages. We also explore trade-offs between execution time and DFR, identifying parameter sets that improve the execution time of Guimarães et al.'s method by $1.41\times$ while simultaneously scaling the DFR down by a factor of $2^{22}$ for 8-bit messages.
PAKE Combiners and Efficient Post-Quantum Instantiations
Much work has been done recently on developing password-authenticated key exchange (PAKE) mechanisms with post-quantum security. However, modern guidance recommends the use of hybrid schemes, i.e., schemes that rely on the combined hardness of a post-quantum assumption, e.g., Learning with Errors (LWE), and a more traditional assumption, e.g., decisional Diffie-Hellman. To date, there is no known hybrid PAKE construction, let alone a general method for achieving one.
In this paper, we present two efficient PAKE combiners: algorithms that take two PAKEs satisfying mild assumptions and output a third PAKE with the combined security properties. We prove these combiners secure in the Universal Composability (UC) model. Our sequential combiner, instantiated with efficient existing PAKEs such as CPace (built on Diffie-Hellman-type assumptions) and CAKE (built on lattice assumptions), yields the first known hybrid PAKE.
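The combiner constructions themselves are in the paper body; as rough intuition only (not the paper's algorithm), the hybrid goal is a session key that remains secure if either underlying PAKE is broken, e.g., by binding both PAKEs' session keys and the joint transcript together with an HKDF-style key derivation:

```python
import hashlib, hmac

def combine_keys(k_classical: bytes, k_pq: bytes, transcript: bytes) -> bytes:
    """Bind both PAKE session keys (and the joint transcript) into one key.
    Hypothetical illustration of the hybrid goal, not the paper's combiner."""
    ikm = k_classical + k_pq
    prk = hmac.new(b"pake-combiner-v1", ikm, hashlib.sha256).digest()     # HKDF-extract
    return hmac.new(prk, transcript + b"\x01", hashlib.sha256).digest()  # HKDF-expand

# If either input key is indistinguishable from random, so is the output.
session_key = combine_keys(b"\x11" * 32, b"\x22" * 32, b"client||server||msgs")
```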
Mario: Multi-round Multiple-Aggregator Secure Aggregation with Robustness against Malicious Actors
Federated Learning (FL) enables multiple clients to collaboratively train a machine learning model while keeping their data private, eliminating the need for data sharing. Two common approaches to secure aggregation (SA) in FL are the single-aggregator and multiple-aggregator models. This work focuses on improving the multiple-aggregator model.
Existing multiple-aggregator protocols such as Prio (NSDI 2017), Prio+ (SCN 2022), and Elsa (S&P 2023) either offer robustness only in the presence of semi-honest servers, or provide security without robustness and are limited to two aggregators. We introduce Mario, the first multiple-aggregator secure aggregation protocol that is both secure and robust in a malicious setting. Like the prior work Prio and Prio+, Mario provides secure aggregation in a setup of $n$ servers and $m$ clients. Unlike previous work, Mario removes the assumption of semi-honest servers and provides a complete protocol with robustness under malicious clients and malicious servers. Our implementation shows that Mario is $3.40\times$ and $283.4\times$ faster than Elsa and Prio+, respectively.
VRaaS: Verifiable Randomness as a Service on Blockchains
Web3 applications such as on-chain games, NFT minting, and leader elections necessitate access to unbiased, unpredictable, and publicly verifiable randomness. Despite its broad use cases and huge demand, there is a notable absence of comprehensive treatments of on-chain verifiable randomness services. To bridge this gap, we offer an extensive formal analysis of on-chain verifiable randomness services.
We present the first formalization of on-chain verifiable randomness in the blockchain setting by introducing the notion of Verifiable Randomness as a Service (VRaaS). We formally define VRaaS using an ideal functionality $\mathcal{F}_{\text{VRaaS}}$ in the Universal Composability model. Our definition not only captures the core features of randomness services, such as unbiasability, unpredictability, and public verifiability, but also accounts for many other crucial nuances pertaining to the different entities involved, such as smart contracts.
Within our framework, we study a generic design of a Verifiable Random Function (VRF)-based randomness service, in which the randomness requester provides an input on which the VRF is evaluated to produce the randomness. We show that it satisfies our formal VRaaS definition. Furthermore, we show that the generic protocol captures many real-world randomness services such as Chainlink VRF and Supra dVRF.
Moreover, we investigate the minimality of the framework. Towards that, we first show that the two transactions built into our framework are necessary for any randomness service to support the essential qualities. We also discover practical vulnerabilities in other designs, such as the Algorand beacon, Pyth VRF, and Band VRF, captured within our framework.
Efficient Foreign-Field Arithmetic in PLONK
PLONK is a prominent universal and updatable zk-SNARK for general circuit satisfiability, which allows a prover to produce a short certificate of the validity of a certain statement/computation. Its expressive model of computation and its highly efficient verifier complexity make PLONK a powerful tool for a wide range of blockchain applications.
Supporting standard cryptographic primitives (such as ECDSA over SECP256k1) or advanced recursive predicates (e.g., incrementally verifiable computation) on a SNARK presents a significant challenge. It requires so-called foreign-field arithmetic (enforcing constraints over algebraic fields that differ from the SNARK's native field), which was previously believed to incur an overhead of two to three orders of magnitude.
We build on the techniques by Lubarov and Baylina and observe that, by considering tight bounds on their encoding of foreign-field multiplication, the number of PLONK constraints can be significantly reduced. We show that these techniques also extend to elliptic curve emulation, with an overhead of just one order of magnitude (with respect to its native counterpart). We validate soundness and completeness of our main results in EasyCrypt. Finally, we implement an open-source library with support for foreign-field arithmetic. Our experimental results showcase the generality of our techniques and confirm their suitability for real-world applications.
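The core trick behind such encodings, sketched here with plain integers rather than PLONK constraints and with assumed toy moduli: the prover supplies a quotient $q$ with $x \cdot y = q \cdot p_f + r$ over the integers, and the circuit checks that identity modulo the native prime and modulo a power of two whose product exceeds any possible value of $x \cdot y$.

```python
import secrets

p_foreign = 2**127 - 1                  # toy emulated field (a Mersenne prime)
p_native  = 2**64 - 2**32 + 1           # native SNARK field (Goldilocks, for illustration)
T = 2**192                              # auxiliary binary modulus

x = secrets.randbelow(p_foreign)
y = secrets.randbelow(p_foreign)
r = x * y % p_foreign
q = x * y // p_foreign                  # prover-supplied quotient witness

# A circuit cannot check x*y == q*p_foreign + r over the integers directly.
# Instead it checks the identity modulo p_native and modulo T; since
# p_native * T > p_foreign**2 >= x*y, the two residue checks (plus range
# checks on x, y, q, r) imply the integer identity by CRT.
assert p_native * T > p_foreign**2
assert (x * y) % p_native == (q * p_foreign + r) % p_native
assert (x * y) % T == (q * p_foreign + r) % T
# Real encodings additionally split x, y, q, r into limbs so every
# intermediate fits the native field; tightening those limb bounds is where
# the constraint savings come from.
```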
A Formal Security Analysis of Hyperledger AnonCreds
In an anonymous credential system, users collect credentials from issuers and can use their credentials to generate privacy-preserving identity proofs that can be shown to third-party verifiers. Since the introduction of anonymous credentials by Chaum in 1985, there have been promising advances with respect to system design, security analysis, and real-world implementations of anonymous credential systems.
In this paper, we examine Hyperledger AnonCreds, an anonymous credential system that was introduced in 2017 and is currently undergoing specification. Despite being implemented in deployment-ready identity system platforms, there is no formal security analysis of the Hyperledger AnonCreds protocol. We rectify this, presenting syntax and a security model for, and a first security analysis of, the Hyperledger AnonCreds protocol. In particular, we demonstrate that Hyperledger AnonCreds is correct, and satisfies notions of unforgeability and anonymity. We conclude with a discussion on the implications of our findings, highlighting the importance of rigorous specification efforts to support security evaluation of real-world cryptographic protocols.
BitVM: Quasi-Turing Complete Computation on Bitcoin
A long-standing question in the blockchain community is which classes of computation are efficiently expressible in cryptocurrencies with limited scripting languages, such as Bitcoin Script. Such languages expose a reduced trusted computing base, thereby being less prone to hacks and vulnerabilities, but have long been believed to support only limited classes of payments.
In this work, we confute this long-standing belief by showing for the first time that arbitrary computations can be encoded in today's Bitcoin Script without introducing any language modification or additional security assumptions, such as trusted hardware, trusted parties, or committees with an honest majority. We present BitVM, a two-party protocol that realizes a generic virtual machine by combining cryptographic primitives and economic incentives. We conduct a formal analysis of BitVM, characterizing its functionality, system assumptions, and security properties. We further demonstrate the practicality of our approach by implementing a prototype and performing an experimental evaluation: in the optimistic case (i.e., when parties agree), our protocol requires just three on-chain transactions, whereas in the pessimistic case, the number of transactions grows logarithmically with the size of the virtual machine. We exemplify the deployment potential of BitVM by building a Bitcoin-sidechain bridge application. This work not only solves a long-standing theoretical problem, but it also promises a strong practical impact, enabling the development of complex applications in Bitcoin.