Papers updated in last 7 days (70 results)
Leaky McEliece: Secret Key Recovery From Highly Erroneous Side-Channel Information
The McEliece cryptosystem is a strong contender among post-quantum schemes, including for key encapsulation to protect the confidentiality of key exchanges in network protocols.
A McEliece secret key is a structured parity check matrix that is transformed via Gaussian elimination into an unstructured public key. We show that this transformation is highly critical with respect to side-channel leakage. We assume leakage of the elementary row operations during Gaussian elimination, motivated by McEliece implementations in the cryptographic libraries Classic McEliece and Botan.
We propose a novel decoding algorithm to reconstruct a secret key from its public key with information from a Gaussian transformation leak. Even if the obtained side-channel leakage is extremely noisy, i.e., each bit is flipped with probability as high as τ ≈ 0.4, we succeed in recovering the secret key in a matter of minutes for all proposed (Classic) McEliece instantiations. Remarkably, for high-security McEliece parameters, our attack is more powerful in the sense that it can tolerate even larger τ.
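For intuition on why decoding can survive such noise (a standard information-theoretic observation, not the paper's decoder): leakage in which each bit is flipped with probability τ is a binary symmetric channel, whose capacity $1 - H_2(\tau)$ remains positive for every $\tau < 1/2$, so a long enough trace still carries recoverable information.

```python
import math

def bsc_capacity(tau: float) -> float:
    """Capacity, in bits per leaked bit, of a binary symmetric channel
    with crossover probability tau: C = 1 - H2(tau)."""
    h2 = -tau * math.log2(tau) - (1 - tau) * math.log2(1 - tau)
    return 1.0 - h2

# Each bit flipped with probability 0.4 still carries ~0.029 bits of
# information, so a long leakage trace retains redundancy to decode from.
print(f"C(0.40) = {bsc_capacity(0.40):.4f}")  # ~0.0290
print(f"C(0.45) = {bsc_capacity(0.45):.4f}")  # ~0.0072
```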
We demonstrate our attack on the constant-time reference implementation of Classic McEliece in a single-trace setting, using an STM32L592 ARM processor.
Our result stresses the necessity of properly protecting highly structured code-based schemes such as McEliece against side-channel leakage.
Cryptanalysis of a nonlinear filter-based stream cipher
It is shown that the stream cipher proposed by Carlet and Sarkar in ePrint report 2025/160 is insecure. More precisely, one bit of the key can be deduced from a few keystream bytes. This property extends to an efficient key-recovery attack. For example, for the proposal with 80 bit keys, a few kilobytes of keystream material are sufficient to recover half of the key.
Bounded Collusion-Resistant Registered Functional Encryption for Circuits
As an emerging primitive, Registered Functional Encryption (RFE) eliminates the key-escrow issue that threatens numerous works for functional encryption, by replacing the trusted authority with a transparent key curator and allowing each user to sample their decryption keys locally. In this work, we present a new black-box approach to construct RFE for all polynomial-sized circuits. It considers adaptive simulation-based security in the bounded collusion model (Gorbunov et al., CRYPTO'12), where security is ensured only if there are no more than $Q \geq 1$ corrupted users, and $Q$ is fixed at the setup phase. Unlike earlier works, we do not employ impractical Indistinguishability Obfuscation (iO). Moreover, our approach can be extended to support unbounded users, which was previously known only from iO.
Technically, our general compiler exploits garbled circuits and a novel variant of slotted Registered Broadcast Encryption (RBE), namely global slotted RBE. This primitive is similar to slotted RBE, but requires optimally compact public parameters and ciphertexts, so as to satisfy the efficiency requirements of the resulting RFE. We then present two concrete global slotted RBE schemes, from pairings and lattices, respectively. With the proposed compiler, we hence obtain two bounded collusion-resistant RFE schemes. The first scheme relies on the k-Lin assumption, while the second one supports unbounded users under the LWE and evasive LWE assumptions.
Optimizing Key Recovery in Impossible Cryptanalysis and Its Automated Tool
Impossible differential (ID) cryptanalysis and impossible boomerang (IB) cryptanalysis are two methods of impossible cryptanalysis against block ciphers. Since the seminal work of Boura et al. in 2014, there have been no substantial advancements in the key recovery process for impossible cryptanalysis, particularly for the IB attack. In this paper, we propose a generic key recovery framework for impossible cryptanalysis that supports arbitrary key-guessing strategies, enabling optimal key recovery attacks. Within the framework, we provide a formal analysis of probabilistic extensions in impossible cryptanalysis for the first time. Besides, for the construction of IB distinguishers, we propose a new method for finding contradictions in multiple rounds.
By incorporating these techniques, we propose a Mixed-Integer Linear Programming (MILP)-based tool for finding full ID and IB attacks. To demonstrate the power of our methods, we apply it to several block ciphers, including SKINNY, SKINNYee, Midori, and Deoxys-BC. Our approach yields a series of optimal results in impossible cryptanalysis, achieving significant improvements in time and memory complexities. Notably, our IB attack on SKINNYee is the first 30-round attack.
New Quantum Cryptanalysis of Binary Elliptic Curves (Extended Version)
This paper improves upon the quantum circuits required for Shor's attack on binary elliptic curves. We present two types of quantum point addition, taking both qubit count and circuit depth into consideration.
In summary, we propose an in-place point addition that improves upon the work of Banegas et al. from CHES'21, reducing the qubit count–depth product by more than $73\%$–$81\%$, depending on the variant. Furthermore, we develop an out-of-place point addition by using additional qubits. This method achieves the lowest circuit depth and offers an improvement of over $92\%$ in the qubit count–depth product (for a single step).
To the best of our knowledge, our work improves upon all previous works (including the CHES'21 paper by Banegas et al., the IEEE Access'22 paper by Putranto et al., and the CT-RSA'23 paper by Taguchi and Takayasu) in terms of circuit depth and qubit count–depth product.
Equipped with the implementations, we discuss the post-quantum security of the binary elliptic curve cryptography. Under the MAXDEPTH metric (proposed by the US government's NIST), the quantum circuit with the highest depth in our work is $2^{24}$, which is significantly lower than the MAXDEPTH limit of $2^{40}$. For the gate count – full depth product, a metric for estimating quantum attack cost (proposed by NIST), the highest complexity in our work is $2^{60}$ for the curve having degree 571 (which is comparable to AES-256 in terms of classical security), considerably below the post-quantum security level 1 threshold (of the order of $2^{156}$).
Preservation of Speculative Constant-Time by Compilation
Compilers often weaken or even discard software-based countermeasures commonly used to protect programs against side-channel attacks; worse, they may also introduce vulnerabilities that attackers can exploit. The solution to this problem is to develop compilers that preserve such countermeasures. Prior work establishes that (a mildly modified version of) the CompCert and Jasmin formally verified compilers preserve constant-time, an information flow policy that ensures that programs are protected against timing side-channel attacks. However, nothing is known about preservation of speculative constant-time, a strengthening of the constant-time policy that ensures that programs are protected against Spectre-v1 attacks. We first show that preservation of speculative constant-time fails in practice by providing examples of secure programs whose compilation is not speculative constant-time under GCC (at optimization levels -O0 and -O1) and Jasmin. Then, we define a proof-of-concept compiler that distills some of the critical passes of the Jasmin compiler and use the Coq proof assistant to prove that it preserves speculative constant-time. Finally, we patch the Jasmin speculative constant-time type checker and demonstrate that all cryptographic implementations written in Jasmin can be fixed with minimal impact.
Protecting cryptographic code against Spectre-RSB
It is fundamental that executing cryptographic software must not leak secrets through side-channels. For software-visible side-channels, it was long believed that "constant-time" programming would be sufficient as a systematic countermeasure. However, this belief was shattered in 2018 by attacks exploiting speculative execution, the so-called Spectre attacks.
Recent work shows that language support suffices to protect cryptographic code with minimal overhead against one class of such attacks, Spectre v1, but leaves an open question of whether this result can be extended to also cover other classes of Spectre attacks.
In this paper, we answer this question in the affirmative: We design, validate, implement, and verify an approach to protect cryptographic implementations against all known classes of Spectre attacks—the main challenge in this endeavor is attacks exploiting the return stack buffer, which are known as Spectre-RSB. Our approach combines a new value-dependent information-flow type system that enforces speculative constant-time in an idealized model of transient execution and a compiler transformation that realizes this idealized model on the generated low-level code. Using the Coq proof assistant, we prove that the type system is sound with respect to the idealized semantics and that the compiler transformation preserves speculative constant-time.
We implement our approach in the Jasmin framework for high-assurance cryptography and demonstrate that the overhead incurred by full Spectre protections is below 2% for most cryptographic primitives and reaches only about 5–7% for the more complex post-quantum key-encapsulation mechanism Kyber.
The supersingular endomorphism ring problem given one endomorphism
Given a supersingular elliptic curve $E$ and a non-scalar endomorphism $\alpha$ of $E$, we prove that the endomorphism ring of $E$ can be computed in classical time about $\text{disc}(\mathbb{Z}[\alpha])^{1/4}$, and in quantum subexponential time, assuming the generalised Riemann hypothesis. Previous results either had higher complexities, or relied on heuristic assumptions.
Along the way, we prove that the Primitivisation problem can be solved in polynomial time (a problem previously believed to be hard), and we prove that the action of smooth ideals on oriented elliptic curves can be computed in polynomial time (previous results of this form required the ideal to be powersmooth, i.e., not divisible by any large prime power).
Following the attacks on SIDH, isogenies in high dimension are a central ingredient of our results.
OBLIVIATOR: Oblivious Parallel Joins and other Operators in Shared Memory Environments
We introduce oblivious parallel operators designed for both non-foreign key and foreign key equi-joins. Obliviousness ensures nothing is revealed about the data besides input/output sizes, even against a strong adversary that can observe memory access patterns.
Our solution achieves this by combining trusted hardware with efficient oblivious primitives for compaction and sorting, and two oblivious algorithms: (i) an oblivious aggregation tree, which can be described as a variation of the parallel prefix sum, customized for trusted hardware, and (ii) a novel algorithm for obliviously expanding the elements of a relation.
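As background for (i), the classic Blelloch exclusive scan below shows the plain parallel-prefix-sum pattern that the aggregation tree varies; this generic sketch is ours for intuition only and has none of the obliviousness or trusted-hardware customization of the actual operator.

```python
def exclusive_scan(a):
    """Textbook Blelloch exclusive prefix sum (sequential simulation).
    The up-sweep builds partial sums up a binary tree; the down-sweep
    pushes them back down.  Each tree level can run in parallel."""
    n = len(a)              # assumed to be a power of two
    t = list(a)
    # up-sweep (reduce phase)
    d = 1
    while d < n:
        for i in range(2 * d - 1, n, 2 * d):
            t[i] += t[i - d]
        d *= 2
    # down-sweep (distribute phase)
    t[n - 1] = 0
    d = n // 2
    while d >= 1:
        for i in range(2 * d - 1, n, 2 * d):
            t[i - d], t[i] = t[i], t[i] + t[i - d]
        d //= 2
    return t  # exclusive prefix sums of a

print(exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))  # [0, 3, 4, 11, 11, 15, 16, 22]
```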
In the sequential setting, our oblivious join runs $4.6\times$–$5.14\times$ faster than the prior state-of-the-art solution (Krastnikov et al., VLDB 2020) on data sets of size $n=2^{24}$. In the parallel setting, our algorithm achieves a speedup of up to roughly $16\times$ over the sequential version when running with 32 threads (up to $80\times$ compared to the sequential algorithm of Krastnikov et al.).
Finally, our oblivious operators can be used independently to support other oblivious relational database queries, such as oblivious selection and oblivious group-by.
Perfect Somewhat Homomorphic Encryption and 2-Party Computation
Two-party computation has been an active area of research since Yao's breakthrough results on garbled circuits. We present secret-key additive somewhat homomorphic schemes in which the client has perfect privacy (the server can be computationally unbounded). Our basic scheme is additively homomorphic; we extend it to somewhat homomorphic by supporting multiplication. The server handles circuit multiplication gates by returning the multiplicands to the client, which updates the decryption key so that the original ciphertext vector includes the encrypted multiplication gate outputs. We give a 2-party protocol that also incorporates server inputs, in which the client has perfect privacy. Server privacy holds against a computationally bounded adversary, since it depends on the hardness of the subset sum problem. Scaling the 2PC protocol via separate encryption parameters for smaller subcircuits allows the ciphertext size to remain constant as circuit size grows.
Endomorphisms for Faster Cryptography on Elliptic Curves of Moderate CM Discriminants, II
The present article is a natural extension of the previous one about the GLV method of accelerating a (multi-)scalar multiplication on elliptic curves of moderate CM discriminants $D < 0$. In comparison with the first article, much greater magnitudes of $D$ (in absolute value) are achieved, although the base finite fields of the curves have to be pretty large. This becomes feasible by resorting to quite powerful algorithmic tools developed primarily in the context of lattice-based and isogeny-based cryptography. Curiously, pre-quantum cryptography here borrows research outcomes that were originally obtained in the search for quantum-resistant solutions, or for attacks on them.
For instance, a $2$-cycle of pairing-friendly MNT curves (with $-D \approx 100{,}000{,}000$, i.e., $\log_2(-D) \approx 26.5$) is relevant for the result of the current article. That $2$-cycle was generated by Guillevic to provide $\approx 128$ bits of security, hence it was close to application in real-world zk-SNARKs. Another more performant MNT $2$-cycle (with a slightly smaller security level, but much larger $D$) was actually deployed in the protocol Coda (now Mina) until zero-knowledge proof systems on significantly faster pairing-free (or half-pairing) $2$-cycles were invented. It is also shown in the present work that more lollipop curves, recently proposed by Costello and Korpal to replace MNT ones, are now covered by the GLV technique.
Finding a polytope: A practical fault attack against Dilithium
In Dilithium, the rejection sampling step is crucial for the proof of security and correctness of the scheme. However, to our knowledge, there is no attack in the literature that takes advantage of an attacker knowing rejected signatures. The aim of this paper is to create a practical black-box attack against Dilithium with a weakened rejection sampling. We succeed in showing that an adversary with enough rejected signatures can recover Dilithium's secret key in less than half an hour on a desktop computer. There is one possible application for this result: by physically preventing one of the rejection sampling tests from happening, we obtain two fault attacks against Dilithium.
Basic Lattice Cryptography: The concepts behind Kyber (ML-KEM) and Dilithium (ML-DSA)
This tutorial focuses on describing the fundamental mathematical concepts and design decisions used in the two ``main'' lattice schemes standardized by NIST and included in the CNSA 2.0 algorithmic suite. They are the KEM / encryption scheme CRYSTALS-Kyber (ML-KEM) and the signature scheme CRYSTALS-Dilithium (ML-DSA). In addition, we will also give the main ideas behind other lattice-based KEMs like Frodo and NTRU.
Post-Quantum Security for the Extended Access Control Protocol
The Extended Access Control (EAC) protocol for authenticated key agreement is mainly used to secure connections between machine-readable travel documents (MRTDs) and inspection terminals, but it can also be adopted as a universal solution for attribute-based access control with smart cards. The security of EAC is currently based on the Diffie-Hellman problem, which may not be hard when considering quantum computers.
In this work we present PQ-EAC, a quantum-resistant version of the EAC protocol. We show how to achieve post-quantum confidentiality and authentication without sacrificing real-world usability on smart cards. To ease adoption, we present two main versions of PQ-EAC: One that uses signatures for authentication and one where authentication is facilitated using long-term KEM keys. Both versions can be adapted to achieve forward secrecy and to reduce round complexity. To ensure backwards-compatibility, PQ-EAC can be implemented using only Application Protocol Data Units (APDUs) specified for EAC in standard BSI TR-03110. Only the protocol messages needed to achieve forward secrecy require an additional APDU not specified in TR-03110. We prove security of all versions in the real-or-random model of Bellare and Rogaway.
To show real-world practicality of PQ-EAC we have implemented a version using signatures on an ARM SC300 security controller, which is typically deployed in MRTDs. We also implemented PQ-EAC on a VISOCORE terminal for border control. We then conducted several experiments to evaluate the performance of PQ-EAC executed between chip and terminal under various real-world conditions. Our results strongly suggest that PQ-EAC is efficient enough for use in border control.
AUCIL: An Inclusion List Design for Rational Parties
The decentralized nature of blockchains is touted to provide censorship resistance. However, in reality, the ability of proposers to completely control the contents of a block makes censorship relatively fragile. To combat this, a notion of inclusion lists has been proposed in the blockchain community. This paper presents the first formal study of inclusion lists. Our inclusion list design leverages multiple proposers to propose transactions and improve censorship resistance. The design has two key components. The first component is a utility-maximizing input list creation mechanism that allows rational proposers to achieve a correlated equilibrium while prioritizing high-value transactions. The second component, AUCIL (auction-based inclusion list), is a mechanism for aggregating the input lists from the proposers to output an inclusion list.
On the Average Random Probing Model
We exhibit a gap between the average random probing model, as defined by Dziembowski et al. at Eurocrypt 2015, and the same model, as defined in the recent paper of Brian et al. at Eurocrypt 2024. Whereas any noisy leakage can be tightly reduced to the former one, we show in this paper that it cannot be tightly reduced to the latter one, unless extra assumptions are made, e.g., that the noisy leakage is deterministic. As a consequence, the reduction from noisy leakages to random probings — without field size loss — remains unproven.
A Round-Optimal Near-Linear Third-Party Private Set Intersection Protocol
Third-party private set intersection (PSI) enables two parties, each holding a private set, to compute their intersection and reveal the result only to an inputless third party. In this paper, we present an efficient round-optimal third-party PSI protocol. Our work is motivated by real-world applications such as contact tracing, where speed is essential while concurrently preserving privacy. Our construction requires only $2$ communication rounds and attains a near-linear computational complexity of $O(n^{1+\varepsilon})$ for large dataset size $n$, where $\varepsilon>0$ is any fixed constant. Our improvements stem from algorithmic changes and the incorporation of new techniques, along with precise parameter selections, to achieve a tight asymptotic bound. Furthermore, we also present a third-party PSI cardinality protocol, which has not been explored in prior third-party PSI work. In the third-party PSI cardinality setting, only the third party obtains the size of the intersection and nothing else. Our construction achieving the cardinality functionality attains quasilinear computational complexity for the third party.
The Nonlinear Filter Model of Stream Cipher Redivivus
The nonlinear filter model is an old and well-understood approach to the design of secure stream ciphers. Extensive research over several decades has shown how to attack stream ciphers based on this model and has identified the security properties required of the Boolean function used as the filtering function to resist such attacks. This led to the problem of constructing Boolean functions which provide adequate security and at the same time are efficient to implement. Unfortunately, over the last two decades no good solutions to this problem appeared in the literature. The lack of good solutions has effectively led to the nonlinear filter model becoming more or less obsolete. This is a big loss to the cryptographic design toolkit, since the great advantages of the nonlinear filter model are its simplicity, well-understood security, and the potential to provide low-cost solutions for hardware-oriented stream ciphers. In this paper we construct balanced functions on an odd number $n\geq 5$ of variables with the following provable properties: linear bias equal to $2^{-\lfloor n/2\rfloor -1}$, algebraic degree equal to $2^{\lfloor \log_2\lfloor n/2\rfloor \rfloor}$, algebraic immunity at least $\lceil (n-1)/4\rceil$, and fast algebraic immunity at least $1+\lceil (n-1)/4\rceil$; moreover, they can be implemented using $O(n)$ NAND gates. The functions are obtained from a simple modification of the well-known class of Maiorana-McFarland bent functions. By appropriately choosing $n$ and the length $L$ of the linear feedback shift register, we show that it is possible to obtain examples of stream ciphers which are $\kappa$-bit secure against known types of attacks for various values of $\kappa$. We provide concrete proposals for $\kappa=80,128,160,192,224$ and $256$. For the $80$-bit, $128$-bit, and the $256$-bit security levels, the circuits for the corresponding stream ciphers require about 1743.5, 2771.5, and 5607.5 NAND gates respectively. For the $80$-bit and the $128$-bit security levels, the gate count estimates compare quite well to the famous ciphers Trivium and Grain-128a respectively, while for the $256$-bit security level, we do not know of any other stream cipher design which has such a low gate count.
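The stated parameter profile follows directly from the closed-form expressions above; as a quick sanity check, the following sketch tabulates them for a few illustrative odd values of $n$ (chosen for illustration, not necessarily the paper's concrete proposals):

```python
import math

def filter_function_profile(n: int):
    """Security parameters stated in the abstract for a balanced
    filtering function on an odd number n >= 5 of variables."""
    assert n >= 5 and n % 2 == 1
    return {
        "linear_bias": 2.0 ** (-(n // 2) - 1),                      # 2^(-floor(n/2)-1)
        "algebraic_degree": 2 ** math.floor(math.log2(n // 2)),     # 2^floor(log2 floor(n/2))
        "algebraic_immunity_lower_bound": math.ceil((n - 1) / 4),   # ceil((n-1)/4)
        "fast_algebraic_immunity_lower_bound": 1 + math.ceil((n - 1) / 4),
    }

for n in (5, 9, 15, 21):
    print(n, filter_function_profile(n))
```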
Beware of KECCAK: A Practical Fault Injection Attack Scheme Apply to All Phases of ML-KEM and ML-DSA
ML-KEM and ML-DSA are NIST-standardized lattice-based post-quantum cryptographic algorithms. In both algorithms, KECCAK is the designated hash algorithm and extendable-output function. It is extensively used for deriving sensitive information, making it a valuable target for attackers. In the field of fault injection attacks, few works targeted KECCAK, and they have not fully explored its potential as a general component. Consequently, many attacks remain undiscovered. This article systematically analyzes methods for recovering keys, forging signatures, and bypassing verification by utilizing (partially) recovered KECCAK outputs, presenting six attacks against ML-KEM and five attacks against ML-DSA, significantly expanding the capabilities and applicability of attacks through faulting KECCAK. These attacks cover the key generation, encapsulation, decapsulation, signing, and verification phases, making our scheme the first to apply to all phases of ML-KEM and ML-DSA. To support these upper-layer attacks, we propose various customized fault attacks on KECCAK, which manipulate the control flow through loop-abort faults to recover the (partial) output. The proposed attacks are validated on the C implementations of the PQClean library’s ML-KEM and ML-DSA running on embedded devices. Experiments show that loop-abort faults can be induced using electromagnetic fault injection on ARM Cortex-M microprocessors from multiple series, achieving a success rate of 89.5%. Furthermore, once the fault injection is successful, the proposed attacks can succeed with a probability of 100%.
Practical Electromagnetic Fault Injection on Intel Neural Compute Stick 2
Machine learning (ML) has been widely deployed in various applications, with many applications being in critical infrastructures. One recent paradigm is edge ML, an implementation of ML on embedded devices for Internet-of-Things (IoT) applications. In this work, we have conducted a practical experiment on Intel Neural Compute Stick (NCS) 2, an edge ML device, with regard to fault injection (FI) attacks. More precisely, we have employed electromagnetic fault injection (EMFI) on NCS 2 to evaluate the practicality of the attack on a real target device. We have investigated multiple fault parameters with a low-cost pulse generator, aiming to achieve misclassification at the output of the inference. Our experimental results demonstrated the possibility of achieving practical and repeatable misclassifications.
The Supersingular Isogeny Path and Endomorphism Ring Problems: Unconditional Reductions
In this paper we study several computational problems related to current post-quantum cryptosystems based on isogenies between supersingular elliptic curves. In particular we prove that the supersingular isogeny path and endomorphism ring problems are unconditionally equivalent under polynomial-time reductions. We show that access to a factoring oracle is sufficient to solve the Quaternion path problem of KLPT and prove that these problems are equivalent, whereas previous results either relied on heuristics or assumed the generalised Riemann Hypothesis (GRH). Consequently, given Shor's quantum algorithm for factorisation, our results yield unconditional quantum polynomial-time reductions between the isogeny path and EndRing problems. Recently, these reductions have become foundational for the security of isogeny-based cryptography.
Adaptive Distributional Security: A Framework for Input-Adaptive Cryptography
It is often desirable to break cryptographic primitives into two components: an input-independent offline component, and a cheap online component used when inputs arrive. Security of such online/offline primitives must be proved in the input-adaptive setting: the adversary chooses its input adaptively, based on what it sees in the offline-phase. Proving security in the input-adaptive setting can be difficult, particularly when one wishes to achieve simulation security and avoid idealized objects like a random oracle (RO).
This work proposes a framework for reasoning about input-adaptive primitives: adaptive distributional security (ADS). Roughly, an ADS primitive provides security when it is used with inputs drawn from one of two distributions that are themselves hard to distinguish. ADS is useful as a framework for the following reasons:
- An ADS definition can often circumvent impossibility results imposed on the corresponding simulation-based definition. This allows us to decrease the online-cost of primitives, albeit by using a weaker notion of security.
- With care, one can typically upgrade an ADS-secure object into a simulation-secure object (by increasing cost in the online-phase).
- ADS is robust, in the sense that (1) it enables a form of composition and (2) interesting ADS primitives are highly interconnected in terms of which objects imply which other objects.
- Many useful ADS-secure objects are plausibly secure from straightforward symmetric-key cryptography.
We start by defining the notion of an ADS encryption (ADE) scheme.
A notion of input-adaptive encryption can be easily achieved from RO, and the ADE definition can be understood as capturing the concrete property provided by RO that is sufficient to achieve input-adaptivity. From there, we use ADE to achieve ADS variants of garbled circuits and oblivious transfer, to achieve simulation-secure garbled circuits, oblivious transfer, and two-party computation, and prove interconnectedness of these primitives. In sum, this results in a family of objects with extremely cheap online-cost.
Computing Quaternion Embeddings and Endomorphism rings of Supersingular Oriented Elliptic curves
In this paper, we investigate several computational problems motivated by post-quantum cryptosystems based on isogenies and ideal class group actions on oriented elliptic curves. Our main technical contribution is an efficient algorithm for embedding the ring of integers of an imaginary quadratic field \( K \) into some maximal order of the quaternion algebra \( B_{p,\infty} \) ramified at a prime \( p \) and infinity. Assuming the Generalized Riemann Hypothesis (GRH), our algorithm runs in probabilistic polynomial time, improving upon previous results that relied on heuristics or required the factorization of \( \textnormal{disc}(K) \). Notably, this algorithm may be of independent interest.
Our approach enhances the work of Love and Boneh on computing isogenies between \( M \)-small elliptic curves by eliminating heuristics and improving computational efficiency. Furthermore, given a quadratic order \( \mathfrak{O} \) in \( K \), we show that our algorithm reduces the computational endomorphism ring problem of \( \mathfrak{O} \)-oriented elliptic curves to the Vectorization problem in probabilistic polynomial time, assuming the conductor of \( \mathfrak{O} \) can be efficiently factorized. Previously, the best known result required the full factorization of \( \textnormal{disc}(\mathfrak{O}) \), which may be exponentially large.
Additionally, when the conductor of \( \mathfrak{O} \) can be efficiently factorized, we establish a polynomial-time equivalence between the Quaternion Order Embedding Problem, which asks to embed a quadratic order \( \mathfrak{O} \) into a maximal order in \( B_{p,\infty} \), and computing horizontal isogenies between \( \mathfrak{O} \)-oriented elliptic curves. Leveraging this reduction, we propose a rigorous algorithm, under GRH, that solves the quaternion order embedding problem in time \( \tilde{O}(|\mathrm{disc}(\mathfrak{O})|^{1/2}) \), improving upon previous methods that required higher asymptotic time and relied on several heuristics.
Efficient Permutation Correlations and Batched Random Access for Two-Party Computation
In this work we formalize the notion of a two-party permutation correlation $(A, B), (C, \pi)$ s.t. $\pi(A)=B+C$ for a random permutation $\pi$ of $n$ elements and vectors $A,B,C\in \mathbb{F}^n$. This correlation can be viewed as an abstraction and generalization of the Chase et al. (Asiacrypt 2020) share translation protocol. We give a systematization of knowledge for how such a permutation correlation can be derandomized to allow the parties to perform a wide range of oblivious permutations of secret-shared data. This systematization immediately enables the translation of various popular honest-majority protocols to be efficiently instantiated in the two-party setting, e.g. collaborative filtering, sorting, database joins, graph algorithms, and many more.
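To make the derandomization concrete, the following minimal sketch (our own Beaver-style illustration, with a trusted dealer standing in for the paper's generation protocols) shows how the correlation lets the parties obliviously apply P2's secret permutation $\pi$ to an additively shared vector using a single masked message:

```python
import random

p = 2**61 - 1  # toy prime field F_p

def sample_permutation_correlation(n):
    """Dealer-style sampling of (A, B), (C, pi) with pi(A) = B + C,
    where pi(A)[i] = A[pi[i]].  The paper generates this correlation
    interactively (e.g. from MPC-friendly PRFs or PCGs) instead."""
    A = [random.randrange(p) for _ in range(n)]
    B = [random.randrange(p) for _ in range(n)]
    pi = list(range(n))
    random.shuffle(pi)
    C = [(A[pi[i]] - B[i]) % p for i in range(n)]
    return (A, B), (C, pi)

def oblivious_permute(x1, x2, corr):
    """P1 holds share x1 and (A, B); P2 holds share x2, C, and the
    secret permutation pi.  Output: additive shares of pi(x1 + x2)."""
    (A, B), (C, pi) = corr
    # P1 -> P2: x1 masked by the uniform A, so delta leaks nothing about x1
    delta = [(x1[i] - A[i]) % p for i in range(len(x1))]
    # P2 permutes the masked input plus its own share, then unmasks with C:
    # y2[i] = x[pi[i]] - A[pi[i]] + C[i] = x[pi[i]] - B[i]
    y2 = [(delta[pi[i]] + x2[pi[i]] + C[i]) % p for i in range(len(x2))]
    return B, y2  # P1's output share is simply B

# sanity check: the output shares reconstruct to the permuted input
n = 8
x = [random.randrange(p) for _ in range(n)]
x1 = [random.randrange(p) for _ in range(n)]
x2 = [(x[i] - x1[i]) % p for i in range(n)]
corr = sample_permutation_correlation(n)
y1, y2 = oblivious_permute(x1, x2, corr)
pi = corr[1][1]
assert [(y1[i] + y2[i]) % p for i in range(n)] == [x[pi[i]] for i in range(n)]
```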
We give two novel protocols for efficiently generating a random permutation correlation. The first uses MPC-friendly PRFs to generate a correlation of $n$ elements, each of size $\ell=\log|\mathbb{F}|$ bits, with $O(n\ell)$ bit-OTs, time, communication, and only three rounds. Similar asymptotics previously required relatively expensive public-key cryptography, e.g. Paillier or LWE. Our protocol implementation for $n=2^{20},\ell=128$ requires just 7 seconds and $\sim2\ell n$ bits of communication, a respective $40\times$ and $1.1\times$ improvement over the LWE solution of Juvekar et al. (CCS 2018). The second protocol is based on pseudo-random correlation generators and achieves an overhead that is sublinear in the string length $\ell$, i.e. the communication and number of OTs is $O(n\log \ell)$. The overhead of the latter protocol has larger hidden constants, and therefore it is more efficient only when long strings are permuted, e.g. in graph algorithms.
Finally, we present a suite of highly efficient protocols based on permutations for performing various batched random access operations. These include the ability to extract a hidden subset of a secret-shared list. More generally, we give ORAM-like protocols for obliviously reading and writing from a list in a batched manner. We argue that this suite of batched random access protocols should be a first class primitive in the MPC practitioner's toolbox.
Binary Codes for Error Detection and Correction in a Computationally Bounded World
We study error detection and correction in a computationally bounded world, where errors are introduced by an arbitrary $\textit{polynomial-time}$ adversarial channel. Our focus is on $\textit{seeded}$ codes, where the encoding and decoding procedures can share a public random seed, but are otherwise deterministic. We can ask for either $\textit{selective}$ or $\textit{adaptive}$ security, depending on whether the adversary can choose the message being encoded before or after seeing the seed. For large alphabets, a recent construction achieves essentially optimal rate versus error tolerance trade-offs under minimal assumptions, surpassing information-theoretic limits. However, for the binary alphabet, the only prior improvement over information theoretic codes relies on non-standard assumptions justified via the random oracle model. We show the following:
$\textbf{Selective Security under LWE:}$ Under the learning with errors (LWE) assumption, we construct selectively secure codes over the binary alphabet. For error detection, our codes achieve essentially optimal rate $R \approx 1$ and relative error tolerance $\rho \approx \frac{1}{2}$. For error correction, they can uniquely correct $\rho < 1/4$ relative errors with a rate $R$ that essentially matches that of the best list-decodable codes with error tolerance $\rho$. Both cases provide significant improvements over information-theoretic counterparts. The construction relies on a novel form of 2-input correlation intractable hash functions that we construct from LWE.
$\textbf{Adaptive Security via Crypto Dark Matter:}$ Assuming the exponential security of a natural collision-resistant hash function candidate based on the ``crypto dark matter'' approach of mixing linear functions over different moduli, we construct adaptively secure codes over the binary alphabet, for both error detection and correction. They achieve essentially the same trade-offs between error tolerance $\rho$ and rate $R$ as above, with the caveat that for error-correction they only do so for sufficiently small values of $\rho$.
Experimentally studying path-finding problem between conjugates in supersingular isogeny graphs: Optimizing primes and powers to speed-up cycle finding
We study the problem of finding a path between conjugate supersingular elliptic curves over $\mathbb{F}_{p^2}$ for a prime $p$, which is important for cycle finding in supersingular isogeny graphs. We observe that for any given $p$, there is some $l$ corresponding to $p$ which accelerates the process of conjugate path-finding. Also, time-wise, the most efficient way of surveying the graph is to look at $i(=3)$ steps at once. We outline methods by which the next vertex of a pseudo-random walk should be chosen to reach the conjugate vertex faster. We experimentally investigate the paths between Frobenius conjugates for wide ranges of small primes $l$. We introduce sets to experimentally learn about the structure of the isogeny graphs when short cycles are present.
Polynomial Inversion Algorithms in Constant Time for Post-Quantum Cryptography
The computation of the inverse of a polynomial over a quotient ring or a finite field plays a very important role during the key generation of post-quantum cryptosystems like NTRU, BIKE, and LEDACrypt. It is therefore important that there exists an efficient algorithm that runs in constant time, to prevent timing side-channel attacks. In this article, we study constant-time algorithms based on both Fermat's Little Theorem (FLT) and the Extended $GCD$ Algorithm, and provide a detailed performance comparison. We conclude that the constant-time Extended $GCD$-based Bernstein-Yang algorithm performs 1.76x-3.76x better on \texttt{x86} platforms than FLT-based methods. Although we report numbers from a software implementation, we additionally provide a short glimpse of some recent results when these two algorithms are implemented on various hardware platforms. Finally, we also explore other exponentiation algorithms that work similarly to the Itoh-Tsujii inversion method. These algorithms perform fewer polynomial multiplications and perform 1.56x-1.96x better on \texttt{x86} platforms than the Itoh-Tsujii inversion method.
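To illustrate the FLT approach (a toy prime-field sketch, not the paper's polynomial-ring implementations): Fermat's Little Theorem gives $a^{-1} = a^{p-2}$ in GF($p$), and since the exponent $p-2$ is public, a fixed square-and-multiply ladder executes the same operation sequence for every secret input, which is the constant-time property at stake.

```python
def flt_inverse(a: int, p: int) -> int:
    """Inversion in GF(p) via Fermat's Little Theorem: a^(p-2) mod p.
    The ladder performs the same squarings/multiplications for every a,
    branching only on the bits of the public exponent e = p - 2; a real
    implementation must also avoid secret-dependent memory accesses.
    For NTRU/BIKE/LEDAcrypt the same idea applies over quotient rings."""
    e = p - 2
    result, base = 1, a % p
    for bit in range(e.bit_length()):
        if (e >> bit) & 1:          # exponent bit is public, not secret
            result = (result * base) % p
        base = (base * base) % p
    return result

p = 2**13 - 1  # 8191, a Mersenne prime (toy parameter)
a = 1234
assert (a * flt_inverse(a, p)) % p == 1
```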
BulletCT: Towards More Scalable Ring Confidential Transactions With Transparent Setup
RingCT signatures are essential components of Ring Confidential Transaction (RingCT) schemes on blockchain platforms, enabling anonymous transaction spending and significantly impacting the scalability of these schemes. This paper makes two primary contributions:
We provide the first thorough analysis of a recently developed Any-out-of-N proof in the discrete logarithm (DLOG) setting and the associated RingCT scheme, introduced by ZGSX23 (S&P '23). The proof conceals the number of secrets to offer greater anonymity than K-out-of-N proofs and uses an efficient "K-Weight" technique for its construction. However, we identify for the first time several limitations of using Any-out-of-N proofs, such as increased transaction sizes, heightened cryptographic complexities and potential security risks. These limitations prevent them from effectively mitigating the longstanding scalability bottleneck.
We then continue to explore the potential of using K-out-of-N proofs to enhance scalability of RingCT schemes. Our primary innovation is a new DLOG-based RingCT signature that integrates a refined "K-Weight"-based K-out-of-N proof and an entirely new tag proof. The latter is the first to efficiently enable the linkability of RingCT signatures derived from the former, effectively resisting double-spending attacks.
Finally, we identify and patch a linkability flaw in ZGSX23's signature. We benchmark our scheme against this patched one to show that our scheme achieves a boost in scalability, marking a promising step forward.
Twist and Shout: Faster memory checking arguments via one-hot addressing and increments
A memory checking argument enables a prover to prove to a verifier that it is correctly processing reads and writes to memory. Such arguments are used widely in modern SNARKs, especially in zkVMs, where the prover proves the correct execution of a CPU, including the correctness of memory operations.
We describe a new approach for memory checking, which we call the method of one-hot addressing and increments. We instantiate this method via two different families of protocols, called Twist and Shout. Twist supports read/write memories, while Shout targets read-only memories (also known as lookup arguments). Both Shout and Twist have logarithmic verifier costs. Unlike prior works, these protocols do not invoke "grand product" or "grand sum" arguments.
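The idea of one-hot addressing and increments can be made concrete with a toy, fully materialized illustration (ours, not the protocols' actual representation; for large memories the protocols avoid ever materializing these vectors explicitly): a read at address $a$ is the inner product of the memory with the unit vector $e_a$, and a write is an additive increment along the same vector.

```python
def one_hot(addr: int, size: int):
    """Unit vector e_addr: 1 in position addr, 0 elsewhere."""
    return [1 if i == addr else 0 for i in range(size)]

def read(memory, addr):
    # a read is the inner product <e_addr, memory>
    e = one_hot(addr, len(memory))
    return sum(e_i * m_i for e_i, m_i in zip(e, memory))

def write(memory, addr, new_value):
    # a write is an additive increment (new - old) along e_addr:
    # the prover only accounts for how each memory cell *changes*
    inc = new_value - read(memory, addr)
    e = one_hot(addr, len(memory))
    return [m_i + inc * e_i for m_i, e_i in zip(memory, e)]

mem = [5, 7, 0, 2]
assert read(mem, 1) == 7
mem = write(mem, 1, 9)
assert mem == [5, 9, 0, 2]
```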
Twist and Shout significantly improve the prover costs of prior works across the full range of realistic memory sizes, from tiny memories (e.g., 32 registers as in RISC-V), to memories that are so large they cannot be explicitly materialized (e.g., structured lookup tables of size $2^{64}$ or larger, which arise in Lasso and the Jolt zkVM). Detailed cost analysis shows that Twist and Shout are well over 10x cheaper for the prover than state-of-the-art memory-checking procedures configured to have logarithmic proof length.
Prior memory-checking procedures can also be configured to have larger proofs. Even then, we estimate that Twist and Shout are at least 2--4x faster for the prover in key applications.
Finally, using Shout, we provide two fast-prover SNARKs for non-uniform constraint systems, both of which achieve minimal commitment costs (the prover commits only to the witness): (1) SpeedySpartan applies to Plonkish constraints, substantially improving the previous state-of-the-art protocol, BabySpartan; and (2) Spartan++ applies to CCS (a generalization of Plonkish and R1CS), improving prover times over the previous state-of-the-art protocol, Spartan, by 6x.
Asymptotic improvements to provable algorithms for the code equivalence problem
We present several new provable algorithms for two variants of the code equivalence problem on linear error-correcting codes, the Linear Code Equivalence Problem (LCE) and the Permutation Code Equivalence Problem (PCE). Specifically, for arbitrary codes of block length $n$ and dimension $k$ over any finite field $\mathbb{F}_q$, we show:
1) A deterministic algorithm running in $2^{n + o(n+q)}$ time for LCE.
2) A randomized algorithm running in $2^{n/2 + o(n+q)}$ time for LCE and PCE.
3) A quantum algorithm running in $2^{n/3 + o(n+q)}$ time for LCE and PCE.
The first algorithm complements the deterministic roughly $2^n$-time algorithm of Babai (SODA 2011) for PCE.
The latter two algorithms improve on recent work of Nowakowski (PQCrypto 2025), which gave algorithms with similar running times, but only for code equivalence on \emph{random} codes and only over fields of order $q \geq 7$.
Communication-Optimal Convex Agreement
Byzantine Agreement (BA) allows a set of $n$ parties to agree on a value even when up to $t$ of the parties involved are corrupted.
While previous works have shown that, for $\ell$-bit inputs, BA can be achieved with the optimal communication complexity $O(\ell n)$ for sufficiently large $\ell$, BA only ensures that honest parties agree on a meaningful output when they hold the same input, rendering the primitive inadequate for many real-world applications.
This gave rise to the notion of Convex Agreement (CA), introduced by Vaidya and Garg [PODC'13], which requires the honest parties' outputs to be in the convex hull of the honest inputs. Unfortunately, all existing CA protocols incur a communication complexity of at least $O(\ell n^2)$.
In this work, we introduce the first CA protocol with the optimal communication of $O(\ell n)$ bits for inputs in $\mathbb{Z}$ of size $\ell = \Omega(\kappa \cdot n \log^2 n)$, where $\kappa$ is the security parameter.
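For intuition, over $\mathbb{Z}$ the convex hull condition amounts to a simple interval constraint: every honest output must lie between the minimum and maximum honest inputs. The toy checker below (our illustration of the validity property only, separate from the protocol itself) makes this precise.

```python
def convex_valid(honest_inputs, honest_outputs):
    """Convex Agreement validity over Z: each honest output must lie in
    the convex hull [min, max] of the honest inputs.  (CA additionally
    requires agreement: all honest outputs must be equal.)"""
    lo, hi = min(honest_inputs), max(honest_inputs)
    return all(lo <= y <= hi for y in honest_outputs)

assert convex_valid([3, 10, 6], [6, 6, 6])          # inside the hull: valid
assert not convex_valid([3, 10, 6], [12, 12, 12])   # outside the hull: invalid
```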
Relativized Succinct Arguments in the ROM Do Not Exist
A relativized succinct argument in the random oracle model (ROM) is a succinct argument in the ROM that can prove/verify the correctness of computations that involve queries to the random oracle. We prove that relativized succinct arguments in the ROM do not exist. The impossibility holds even if the succinct argument is interactive, and even if soundness is computational (rather than statistical).
This impossibility puts on a formal footing the commonly-held belief that succinct arguments require non-relativizing techniques. Moreover, our results stand in sharp contrast with other oracle models, for which a recent line of work has constructed relativized succinct non-interactive arguments (SNARGs). Indeed, relativized SNARGs are a powerful primitive that, e.g., can be used to obtain constructions of IVC (incrementally-verifiable computation) and PCD (proof-carrying data) based on falsifiable cryptographic assumptions. Our results rule out this approach for IVC and PCD in the ROM.
AutoDiVer: Automatically Verifying Differential Characteristics and Learning Key Conditions
Differential cryptanalysis is one of the main methods of cryptanalysis and has been applied to a wide range of ciphers. While it is very successful, it also relies on certain assumptions that do not necessarily hold in practice. One of these is the hypothesis of stochastic equivalence, which states that the probability of a differential characteristic behaves similarly for all keys. Several works have demonstrated examples where this hypothesis is violated, impacting the attack complexity and sometimes even invalidating the investigated prior attacks. Nevertheless, the hypothesis is still typically taken for granted. In this work, we propose AutoDiVer, an automatic tool for thoroughly verifying differential characteristics. First, the tool supports calculating the expected probability of differential characteristics while considering the key schedule of the cipher. Second, the tool supports estimating the size of the space of keys for which the characteristic permits valid pairs, and deducing conditions for these keys. AutoDiVer implements a custom SAT modeling approach and takes advantage of a combination of features of advanced SAT solvers, including approximate model counting and clause learning. To show applicability to many different kinds of block ciphers like strongly aligned, weakly aligned, and ARX ciphers, we apply AutoDiVer to GIFT, PRESENT, RECTANGLE, SKINNY, WARP, SPECK, and SPEEDY.
SPADE: Digging into Selective and PArtial DEcryption using Functional Encryption
Functional Encryption (FE) is a cryptographic technique established to guarantee data privacy while allowing the retrieval of specific results from the data.
While traditional decryption methods rely on a secret key disclosing all the data, FE introduces a more subtle approach. The key generation algorithm generates function-specific decryption keys that can be adaptively provided based on policies. Such adaptive access control is a valuable feature for privacy-preserving techniques. Generic schemes have been designed to run basic functions, such as linear regression. However, they often provide a narrow set of outputs, resulting in a lack of thorough analysis. The bottom line is that despite significant research, FE still requires appropriate constructions to unleash its full potential in securely analyzing data and providing more insights. In this article, we introduce SPADE, a novel FE scheme that features multiple users and offers fine-grained access control through partial decryption of the ciphertexts. Unlike existing FE schemes, our construction also supports qualitative data, such as genomics, expanding the applications of privacy-preserving analysis to enable a comprehensive study of the data. SPADE is a significant advancement that balances privacy and data analysis with clear implications in healthcare and finance.
To verify its applicability, we conducted extensive experiments on datasets used in sleep medicine (hypnogram data) and DNA analysis (genomic records).
How to Compress Garbled Circuit Input Labels, Efficiently
Garbled circuits are a foundational primitive in both the theory and practice of cryptography. Given $(\hat{C}, K[x])$, where $\hat{C}$ is the garbling of a circuit $C$ and $K[x] = \{K[i, x_i]\}$ are the input labels for an input $x$, anyone can recover $C(x)$, but nothing else about the input $x$. Most research efforts focus on minimizing the size of the garbled circuit $\hat{C}$. In contrast, the work by Applebaum, Ishai, Kushilevitz, and Waters (CRYPTO'13) initiated the study of minimizing the cost of transferring the input labels $K[x]$. With the improvements of a follow-up work by Applebaum et al. (STOC'23), the state-of-the-art techniques allow compressing the input labels to the optimal rate of $1 + o(1)$. That is, each input label can be transferred by essentially sending 1 bit. However, existing solutions are computationally expensive, requiring large numbers of public-key operations (such as RSA exponentiations).
In this work, we present an efficient input label compression technique based on Ring-LWE. We achieve the same optimal rate of $1 + o(1)$, by making use of additional communication in an offline stage (before the input $x$ becomes known), a paradigm that has already been explored in prior works. A novel feature of the offline communication in our scheme is that the information sent is either reusable or compressible using a random oracle, leading to small amortized offline cost $o(|x|)$. We further demonstrate concrete efficiency through an implementation whose online latency outperforms the naive baseline (which sends all of $K[x]$ in the online phase) in a realistic network with a bandwidth of up to 45 Mbps. This break-even point could be pushed even further by leveraging the large potential for parallelization of computation.
Finally, we apply our techniques to construct maliciously-secure two-party computation protocols with succinct online communication: the online phase starts once the circuit $C$ becomes known, and requires exchanging only $poly(\lambda)$ bits (independent of $|C|$). After the inputs $x_A$, $x_B$ arrive, an additional $|x_A| + |x_B| + poly(\lambda)$ bits need to be sent.
PEReDi: Privacy-Enhanced, Regulated and Distributed Central Bank Digital Currencies
Central Bank Digital Currencies (CBDCs) aspire to offer a digital replacement for physical cash and, as such, must address two fundamental yet conflicting requirements. On the one hand, they should be private to prevent the emergence of a financial “panopticon.” On the other hand, they must be regulation friendly, facilitating threshold-limiting, tracing, and counterparty auditing functionalities necessary for compliance with regulations such as Know Your Customer (KYC), Anti-Money Laundering (AML), and Combating the Financing of Terrorism (CFT), as well as financial stability considerations.
In this work, we propose PEReDi, a new asynchronous model for CBDCs and present an efficient construction that, for the first time, simultaneously addresses these challenges in full. Moreover, recognizing the necessity of avoiding a single point of failure, our construction is distributed to ensure that all its properties remain intact even when a bounded number of entities are corrupted by an adversary. Achieving all the above properties efficiently is technically involved; among others, our construction employs suitable cryptographic tools to thwart man-in-the-middle attacks, introduces a novel traceability mechanism with significant performance gains over previously known techniques, and, perhaps surprisingly, shows how to obviate Byzantine agreement or broadcast from the optimistic execution path of a payment, something that results in an essentially optimal communication pattern and minimal communication overhead. We demonstrate the efficiency of our payment system by presenting detailed computational and communication cost analyses.
Beyond “simple” payments, we also discuss how our scheme can support one-off large transfers while complying with Know Your Transaction (KYT) disclosure requirements. Our CBDC concept is expressed and realized within the Universal Composition (UC) framework, providing a modular and secure way for integration into a broader financial ecosystem.
Snake-eye Resistant PKE from LWE for Oblivious Message Retrieval and Robust Encryption
Oblivious message retrieval (OMR) allows resource-limited recipients to outsource the message retrieval process without revealing which messages are pertinent to which recipient. Its realizations in recent works leave an open problem: can an OMR scheme be both practical and provably secure against spamming attacks from malicious senders (i.e., DoS-resistant) under standard assumptions?
In this paper, we first prove that a prior construction $\mathsf{OMRp2}$ is DoS-resistant under a standard LWE assumption, resolving an open conjecture of prior works. Then, we present $\mathsf{DoS\text{-}PerfOMR}$: a provably DoS-resistant OMR construction that is 12x faster than $\mathsf{OMRp2}$, and (almost) matches the performance of the state-of-the-art OMR scheme that is $\textit{not}$ DoS-resistant (proven by the attacks we show).
To achieve this, we analyze the $\textit{snake-eye resistance}$ property for general PKE schemes (i.e., it is hard to encrypt an identical message under two keys). We construct a new lattice-based PKE scheme: $\mathsf{LWEmongrass}$, that is provably snake-eye resistant and has better efficiency than the PVW scheme underlying $\mathsf{OMRp2}$. We also show that the natural candidates (e.g., RingLWE PKE) are not snake-eye resistant.
Furthermore, we show that a snake-eye resistant PKE scheme implies a robust PKE scheme, thus introducing the first robust lattice-based PKE scheme without relying on the KEM-DEM paradigm and its inherent inefficiencies.
Of independent interest, we introduce two variants of LWE with side information, as components towards proving the properties of $\mathsf{LWEmongrass}$, and reduce standard LWE to them for the parameters of interest.
NodeChain: Cheap Data Integrity Without Consensus
Blockchains enable decentralised applications that withstand Byzantine failures and do not need a central authority. Unfortunately, their massive replication requirements preclude their use on constrained devices.
We propose a novel blockchain-based data structure which forgoes replication without affecting the append-only nature of blockchains, making it suitable for maintaining data integrity over networks of storage-constrained devices. Our solution does not provide consensus, which is not required by our motivating application, namely securely storing sensor data of containers in cargo ships.
We elucidate the practical promise of our technique by following a multi-faceted approach: we (i) formally prove the security of our protocol in the Universal Composition (UC) setting, (ii) provide a small-scale proof-of-concept implementation, (iii) present a performance simulation for large-scale deployments, which showcases a reduction in storage of more than $1000\times$ compared to traditional blockchains, and (iv) present a resilience simulation that predicts the practical effects of network jamming attacks.
Worst-Case Lattice Sampler with Truncated Gadgets and Applications
Gadget-based samplers have proven to be a key component of several cryptographic primitives, in particular in the area of privacy-preserving mechanisms. Most constructions today follow the approach introduced by Micciancio and Peikert (MP), yielding preimages whose dimension grows linearly with that of the gadget. To improve performance, some papers have proposed to truncate the gadget, but at the cost of an important feature of the MP sampler, namely the ability to invert arbitrary syndromes. Technically speaking, they replace the worst-case MP sampler by an average-case sampler that can only be used in specific contexts. Far from being a mere theoretical restriction, this prevents the main applications of gadget-based samplers from using truncated variants and thus from benefiting from the associated performance gains.
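For background on the truncation trade-off (our illustrative sketch, not this paper's sampler): the MP gadget is $g = (1, b, \dots, b^{k-1})$, and deterministic inversion of a syndrome is just base-$b$ digit decomposition. Dropping the low-order gadget entries means low-order digits can no longer be represented exactly, which is the loss of worst-case ("invert arbitrary syndromes") inversion described above. Real samplers output randomized, Gaussian-distributed preimages rather than plain digits.

```python
def gadget_decompose(u: int, base: int, k: int):
    """Digit decomposition against the gadget g = (1, b, ..., b^{k-1}):
    returns x with <x, g> = u and small entries (0 <= x_i < base)."""
    digits = []
    for _ in range(k):
        digits.append(u % base)
        u //= base
    return digits

b, k = 4, 8
u = 46283
x = gadget_decompose(u, b, k)
# full gadget: every syndrome u < b^k is hit exactly
assert sum(d * b**i for i, d in enumerate(x)) == u
# truncated gadget (b^t, ..., b^{k-1}): only multiples of b^t are exact
```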
In this paper, we solve this problem by describing a worst-case sampler that still works with truncated gadgets. Its main strength is that it retains the main characteristics of the MP sampler while providing flexibility in the choice of the truncation parameter. As a consequence, it can be used as a plug-in replacement for all applications relying on the MP sampler so far, leading to performance improvements up to 30% as illustrated by several examples in this paper. Our sampler is supported by a thorough security analysis that addresses the hurdles met by previous works and its practicality is demonstrated by a concrete implementation.
Improved Lattice Blind Signatures from Recycled Entropy
Blind signatures represent a class of cryptographic primitives enabling privacy-preserving authentication with several applications such as e-cash or e-voting. It is still a very active area of research, in particular in the post-quantum setting, where the history of blind signatures has been hectic. Although this started to shift very recently with the introduction of a few lattice-based constructions, all of the latter give up an important characteristic of blind signatures (size, efficiency, or security under well-known assumptions) to achieve the others. In this paper, we propose another design which revisits the link between the two main procedures of blind signatures, namely issuance and showing, demonstrating that we can significantly alleviate the latter by adapting the former. Concretely, we show that we can harmlessly inject excess randomness in the issuance phase, and then recycle the entropy surplus during showing to decrease the complexity of the zero-knowledge proof which constitutes the main component of the signature. This leads to a blind signature scheme with small sizes, low complexity, and that still relies on well-known lattice assumptions.
A light white-box masking scheme using Dummy Shuffled Secure Multiplication
In white-box cryptography, early encoding-based countermeasures have been broken by the DCA attack, leading to the utilization of masking schemes against a surge of automated attacks. The recent filtering attack from CHES 2024 broke the last viable masking scheme from CHES 2021 resisting both computational and algebraic attacks, raising the need for new countermeasures.
In this work, we perform the first formal study of combinations of existing countermeasures and demonstrate that applying Dummy Shuffling (EUROCRYPT 2021) and then ISW masking (CRYPTO 2003) to a circuit provides algebraic, correlation, and filtering security: necessary conditions to withstand state-of-the-art automated attacks. We also show that applying these two countermeasures in the opposite order leads to a Higher-Order Filtering attack, highlighting the importance of the order in which combined countermeasures are applied.
We also propose a new masking scheme called S5, the Semi-Shuffled Secret Sharing Scheme, which merges Dummy Shuffling and ISW into a single countermeasure more efficiently than a direct composition.
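For reference, the ISW multiplication gadget at the core of such masking schemes computes a sharing of $a \cdot b$ from XOR-sharings of $a$ and $b$ using pairwise fresh randomness. Below is a minimal functional sketch over GF(2); it shows plain ISW only, not the paper's S5 construction or the dummy-shuffling wrapper.

```python
import secrets

def isw_mult(a_sh, b_sh):
    """ISW (CRYPTO 2003) secure multiplication over GF(2).

    Given d+1 XOR-shares of bits a and b, returns d+1 shares whose XOR
    equals a & b; fresh pairwise randomness masks the cross terms.
    (Shown functionally: side-channel-hardened code must also fix the
    order in which the XORs are evaluated.)
    """
    n = len(a_sh)
    c = [a_sh[i] & b_sh[i] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = secrets.randbits(1)
            c[i] ^= r
            c[j] ^= r ^ (a_sh[i] & b_sh[j]) ^ (a_sh[j] & b_sh[i])
    return c

def share(x, n=3):
    """Split bit x into n XOR-shares."""
    sh = [secrets.randbits(1) for _ in range(n - 1)]
    last = x
    for s in sh:
        last ^= s
    return sh + [last]

x_bit, y_bit = 1, 1
c_sh = isw_mult(share(x_bit), share(y_bit))
assert c_sh[0] ^ c_sh[1] ^ c_sh[2] == (x_bit & y_bit)
```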
Deny Whatever You Want: Dual-Deniable Public-Key Encryption
We introduce an enhanced requirement of deniable public-key encryption that we call dual-deniability. It asks that a sender who is coerced should be able to produce fake randomness that can explain the target ciphertext as the encryption of any alternative message under any valid key they desire to deny. Compared with the original notion of deniability (Canetti et al. in CRYPTO ’97, hereafter named message-deniability), this notion additionally shields the anonymity of the receiver against coercion attacks.
We first give a formal definition of dual-deniability, along with its weak-mode variant. For conceptual understanding, we then show that dual-deniability implies semantic security and anonymity against CPA, is separated from full robustness, and even contradicts key-less or mixed robustness, while being (constructively) implied by key-deniability and full robustness under a minor assumption for bit encryption. As for the availability of dual-deniability, our main scheme is a generic approach from ciphertext-simulatable PKE, where we devise a subtle multi-encryption schema to hide the true message within random masking ciphertexts in the plan-ahead setting. Further, we leverage the weak model to present a more efficient scheme with negligible detection probability and constant ciphertext size. Besides, we revisit the notable scheme of Sahai and Waters (STOC ’14) and show that it is inherently dual-deniable. Finally, we extend the Boneh-Katz transform to capture CCA security, deriving dual-deniable and CCA-secure PKE from any selectively dual-deniable IBE in the multi-TA setting. Overall, our work establishes the feasibility of anonymous messaging against coercion attacks.
A Critical Analysis of Deployed Use Cases for Quantum Key Distribution and Comparison with Post-Quantum Cryptography
Quantum Key Distribution (QKD) is currently being discussed as a technology to safeguard communication in a future where quantum computers compromise traditional public-key cryptosystems. In this paper, we conduct a comprehensive security evaluation of QKD-based solutions, focusing on real-world use cases sourced from academic literature and industry reports. We analyze these use cases, assess their security and identify the possible advantages of deploying QKD-based solutions. We further compare QKD-based solutions with Post-Quantum Cryptography (PQC), the alternative approach to achieving security when quantum computers compromise traditional public-key cryptosystems, evaluating their respective suitability for each scenario. Based on this comparative analysis, we critically discuss and comment on which use cases QKD is suited for, considering factors such as implementation complexity, scalability, and long-term security. Our findings contribute to a better understanding of the role QKD could play in future cryptographic infrastructures and offer guidance to decision-makers considering the deployment of QKD.
Improved NTT and CRT-based RNR Blinding for Side-Channel and Fault Resistant Kyber
In this paper, we build upon the blinding methods introduced in recent years to enhance the protection of lattice-based cryptographic schemes against side-channel and fault injection attacks. Specifically, we propose a cost-efficient blinded Number Theoretic Transform (NTT) that impedes the convergence of Soft Analytical Side-Channel Attacks (SASCA), even with limited randomness sampling. Additionally, we extend the blinding mechanism based on the Chinese Remainder Theorem (CRT) and Redundant Number Representation (RNR) introduced by Heinz and Pöppelmann by reducing the randomness sampling overhead and accelerating the verification phase.
These two blinding mechanisms are nicely compatible with each other and, when combined, provide enhanced resistance against side-channel attacks, both classical and soft analytical, as well as against fault injection attacks, while maintaining high performance and low overhead. This makes the approach well-suited for practical applications, particularly in resource-constrained IoT environments.
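To make the RNR idea concrete: a coefficient $x \bmod q$ is stored as a randomized representative $x + j\cdot q$ and all arithmetic is carried out modulo $q\cdot R$ for a redundancy factor $R$, so intermediate values vary from run to run while the result modulo $q$ is unchanged. The sketch below illustrates only this basic principle with hypothetical toy parameters; it is neither the scheme of Heinz and Pöppelmann nor the improved variant of this paper.

```python
import secrets

Q = 3329                 # Kyber's modulus
R = 2**16                # redundancy factor (hypothetical toy choice)
QR = Q * R

def blind(x):
    """Randomized redundant representative of x mod Q."""
    return (x + secrets.randbelow(R) * Q) % QR

def unblind(x_red):
    return x_red % Q

# Arithmetic on redundant representatives stays correct modulo Q:
x, y = 1234, 2718
s = (blind(x) + blind(y)) % QR
t = (blind(x) * blind(y)) % QR     # (x + jQ)(y + kQ) == xy (mod Q)
assert unblind(s) == (x + y) % Q
assert unblind(t) == (x * y) % Q
```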
On the Atomicity and Efficiency of Blockchain Payment Channels
Payment channels have emerged as a promising solution to address the performance limitations of cryptocurrency payments, enabling efficient off-chain transactions while maintaining security guarantees. However, existing payment channel protocols, including the widely deployed Lightning Network and the state-of-the-art Sleepy Channels, suffer from a fundamental vulnerability: non-atomic state transitions create race conditions that can lead to unexpected financial losses. We first formalize current protocols into a common paradigm and prove that this vulnerability is fundamental: any protocol following this paradigm cannot guarantee balance security due to the inherent race conditions in its design. To address this limitation, we propose a novel atomic paradigm for payment channels that ensures atomic state transitions, effectively eliminating race conditions while maintaining all desired functionalities. Based on this paradigm, we introduce Ultraviolet, a secure and efficient payment channel protocol that achieves both atomicity and high performance while avoiding unimplemented Bitcoin features. Ultraviolet halves the number of required messages per transaction compared to existing solutions while maintaining comparable throughput. We formally prove the security of Ultraviolet under the universal composability framework and demonstrate its practical efficiency through extensive evaluations across multiple regions. This yields a 37% and 52% reduction in latency compared to the Lightning Network and Sleepy Channels, respectively. Regarding throughput, Ultraviolet achieves performance comparable to the Lightning Network and delivers 2× the TPS of Sleepy Channels.
PIE: $p$-adic Encoding for High-Precision Arithmetic in Homomorphic Encryption
A large part of current research in homomorphic encryption (HE) aims at making HE practical for real-world applications. In any practical HE deployment, an important issue is converting the application data type to a data type suitable for the HE scheme.
The main purpose of this work is to investigate an efficient HE-compatible encoding method that is generic and can be easily adapted to HE schemes over integers or polynomials.
$p$-adic number theory provides a way to transform rationals to integers, which makes it a natural candidate for encoding rationals. Although one may use naive number-theoretic techniques to perform rational-to-integer transformations without reference to $p$-adic numbers, we contend that the theory of $p$-adic numbers is the proper lens to view such transformations.
In this work we identify mathematical techniques (supported by $p$-adic number theory) as appropriate tools to construct a generic rational encoder which is compatible with HE. Based on these techniques, we propose a new encoding scheme, PIE, which can be easily combined with both AGCD-based and RLWE-based HE to perform high-precision arithmetic. After presenting an abstract version of PIE, we show how it can be attached to two well-known HE schemes: the AGCD-based IDGHV scheme and the RLWE-based (modified) Fan-Vercauteren scheme. We also discuss the advantages of our encoding scheme in comparison with previous works.
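As a concrete illustration of the rational-to-integer transform that $p$-adic theory underpins: a rational $a/b$ with $\gcd(b,p)=1$ maps to the integer $a \cdot b^{-1} \bmod p^r$ (a Hensel code) and can be recovered by rational reconstruction whenever $|a|, |b| \le \sqrt{p^r/2}$. The sketch below shows this classical encoding only; PIE's actual construction and its integration with HE involve considerably more machinery.

```python
from math import isqrt

def encode(a, b, p, r):
    """Hensel-code-style encoding of the rational a/b as an integer mod p^r."""
    m = p ** r
    return (a * pow(b, -1, m)) % m

def decode(c, p, r):
    """Rational reconstruction: find a/b == c (mod p^r) with small |a|, |b|."""
    m = p ** r
    bound = isqrt(m // 2)
    r0, r1, s0, s1 = m, c, 0, 1
    while r1 > bound:                 # truncated extended Euclid
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        s0, s1 = s1, s0 - q * s1
    return (r1, s1) if s1 > 0 else (-r1, -s1)   # (numerator, denominator)

p, r = 5, 10                          # modulus p^r = 9765625
c = encode(-7, 12, p, r)
assert decode(c, p, r) == (-7, 12)
# The encoding is homomorphic: integer ops mod p^r track rational ops.
assert decode((encode(1, 3, p, r) + encode(1, 6, p, r)) % p**r, p, r) == (1, 2)
```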
Cairo – a Turing-complete STARK-friendly CPU architecture
Proof systems allow one party to prove to another party that a certain statement is true. Most existing practical proof systems require that the statement be represented in terms of polynomial equations over a finite field. This makes the process of representing a statement that one wishes to prove or verify rather complicated, as it requires a new set of equations for each statement. Various approaches to deal with this problem have been proposed.
We present Cairo, a practically-efficient Turing-complete STARK-friendly CPU architecture. We describe a single set of polynomial equations for the statement that the execution of a program on this architecture is valid. Given a statement one wishes to prove, Cairo allows writing a program that describes that statement, instead of writing a set of polynomial equations.
Higher-Order Deterministic Masking with Application to Ascon
Side-channel attacks (SCAs) pose a significant threat to the implementations of lightweight ciphers, particularly in resource-constrained environments where masking, the primary countermeasure, is constrained by tight resource limitations. This makes it crucial to reduce the resource and randomness requirements of masking schemes. In this work, we investigate an approach to minimize the randomness complexity of masking algorithms. Specifically, we explore the theoretical foundations of deterministic higher-order masking, which relies solely on offline randomness present in the initial input shares and eliminates the need for online (fresh) randomness during internal computations.
We demonstrate the feasibility of deterministic masking for ciphers such as Ascon, showing that their diffusion layer can act as a refresh subcircuit. This ensures that, up to a threshold number, probes placed in different rounds remain independent. Based on this observation, we propose composition theorems for deterministic masking schemes. On the practical side, we extend the proof of first- and second-order probing security for Ascon’s protected permutation from a single round to an arbitrary number of rounds.
Collision Attacks on Galois/Counter Mode (GCM)
Advanced Encryption Standard in Galois/Counter Mode (AES-GCM) is the most widely used Authenticated Encryption with Associated Data (AEAD) algorithm in the world. In this paper, we analyze the use of GCM with all the Initialization Vector (IV) constructions and lengths approved by NIST SP 800-38D when encrypting multiple plaintexts with the same key. We derive attack complexities in both ciphertext-only and known-plaintext models, with or without nonce hiding, for collision attacks compromising integrity and confidentiality. To facilitate the analysis of GCM with random IVs, we derive a new, simplified equation for near birthday collisions. Our analysis shows that GCM with random IVs provides less than 128 bits of security. When random 96-bit IVs are used, as recommended by NIST, the security drops to less than 97 bits. Therefore, we strongly recommend that NIST forbid the use of GCM with 96-bit random nonces.
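For intuition, the standard birthday approximation (not the paper's refined near-collision equation) already conveys the scale of the problem: among $n$ uniformly random 96-bit IVs used under one key, the probability that some IV repeats is
$$\Pr[\text{IV collision}] \approx 1 - e^{-n(n-1)/2^{97}} \approx \frac{n^2}{2^{97}},$$
so a repeat becomes likely around $n \approx 2^{48.5}$ encryptions, and an attacker content with success probability $2^{-31}$ needs only about $2^{33}$ encryptions. Because an IV collision in GCM reveals the XOR of two keystreams and exposes the GHASH authentication key, both confidentiality and integrity are compromised, which is what drives the quoted security levels below 128 and 97 bits.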
Improved Differential and Linear Cryptanalysis on Round-Reduced SIMON
SIMON is a lightweight block cipher proposed by the National Security Agency.
According to previous cryptanalytic results on SIMON, differential and linear cryptanalysis are the two most effective attacks on it.
Usually, there are many trails sharing the same input and output differences (resp. masks).
These trails comprise the differential (resp. linear hull) and can be used together when mounting attacks.
In ASIACRYPT 2021, Leurent et al. proposed a matrix-based method for SIMON-like ciphers, where only trails whose active bits stay in a $w$-bit window are considered.
The static window in each round is chosen to be the $w$ least significant bits.
They applied this efficient framework to SIMON and SIMECK and obtained many differentials and linear hulls better than those previously known. For SIMON, they also found that there seems to be some potential for improvement, which should be further investigated.
In this paper, we dynamically choose the window for each round to achieve better distinguishers. Benefiting from these dynamic windows, we obtain stronger differentials and linear hulls than previously proposed for almost all versions of SIMON (the round function driving these active-bit patterns is sketched below).
Finally, we provide the best differential/linear attacks on SIMON48, SIMON64, and SIMON96 in terms of round number, complexity, or success rate.
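For readers tracking the window discussion, the sketch below recalls the SIMON round function from the NSA specification: the AND of two rotations XORed with a third rotation is what spreads active bits across positions, which is why confining trails to a $w$-bit window is effective. This is a plain reference implementation of one Feistel round, not the attack tooling of the paper.

```python
def rotl(x, s, w):
    """Rotate a w-bit word left by s positions."""
    mask = (1 << w) - 1
    return ((x << s) | (x >> (w - s))) & mask

def simon_round(left, right, k, w):
    """One SIMON Feistel round on w-bit halves with round key k:
    (x, y) -> (y ^ f(x) ^ k, x) with f(x) = (x<<<1 & x<<<8) ^ x<<<2."""
    f = (rotl(left, 1, w) & rotl(left, 8, w)) ^ rotl(left, 2, w)
    return right ^ f ^ k, left

def simon_round_inv(left, right, k, w):
    """Inverse of simon_round."""
    f = (rotl(right, 1, w) & rotl(right, 8, w)) ^ rotl(right, 2, w)
    return right, left ^ f ^ k

w, k = 24, 0x5A5A5A                      # SIMON48 operates on 24-bit halves
state = (0x123456, 0xABCDEF)
assert simon_round_inv(*simon_round(*state, k, w), k, w) == state
```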
Discrete Gaussians Modulo Sub-Lattices: New Leftover Hash Lemmas for Discrete Gaussians
The Leftover Hash Lemma (LHL) is a powerful tool for extracting randomness from an entropic distribution, with numerous applications in cryptography. LHLs for discrete Gaussians have been explored in both integer settings by Gentry et al. (GPV, STOC'08) and algebraic ring settings by Lyubashevsky et al. (LPR, Eurocrypt'13). However, the existing LHLs for discrete Gaussians have two main limitations: they require the Gaussian parameter to be larger than certain smoothing parameters, and they cannot handle cases where fixed and arbitrary information is leaked.
In this work, we present new LHLs for discrete Gaussians in both integer and ring settings. Our results show that the Gaussian parameter can be improved by a factor of $\omega(\sqrt{\log\lambda})$ and $O(\sqrt{N})$ compared to the regularity lemmas of GPV and LPR, respectively, under similar parameter choices such as the dimension and ring. Furthermore, our new LHLs can be applied to leaked discrete Gaussians, and the result can be used to establish the asymptotic hardness of the extended MLWE assumptions, addressing an open question in recent works by Lyubashevsky et al. (LNP, Crypto'22). Our central techniques involve new fine-grained analyses of the min-entropy in discrete Gaussians modulo sublattices and should be of independent interest.
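For orientation, the baseline these results improve upon is the classic regularity (LHL-type) statement of GPV, which in one common form reads as follows (a paraphrase of the standard formulation; exact conditions vary slightly across sources):
$$A \leftarrow \mathbb{Z}_q^{n \times m} \text{ uniform},\quad m \ge 2n\log q,\quad e \sim D_{\mathbb{Z}^m,s},\quad s \ge \omega(\sqrt{\log m}) \;\Longrightarrow\; (A,\, Ae \bmod q) \approx_s (A,\, u),\quad u \leftarrow \mathbb{Z}_q^n.$$
The new LHLs remove the $\omega(\sqrt{\log \lambda})$-type slack from the required Gaussian parameter and, unlike the statement above, continue to hold when fixed, arbitrary information about $e$ is leaked.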
On the Power of Sumcheck in Secure Multiparty Computation
Lund et al. (JACM 1992) invented the powerful Sumcheck protocol, which has been used extensively in complexity theory and for designing concretely efficient (zero-knowledge) arguments. In this work, we systematically study Sumcheck in the context of secure multi-party computation (MPC). Our main result is a new generic framework for lifting semi-honest MPC protocols to malicious security, with a {\em constant} multiplicative overhead in {\em both} computation and communication, and even only additive logarithmic communication in the best case. In general, our approach applies to any semi-honest linear secret-sharing based MPC protocol secure up to additive attacks, where secret-sharings can be added with an authentication property. At a high level, our approach has a highly distributive flavor: the parties jointly emulate a Sumcheck prover that proves the correctness of the semi-honest MPC evaluations in zero-knowledge, while also jointly emulating a Sumcheck verifier that checks the proof. Along the way, we provide a new angle of view on the {\em fully linear interactive oracle proof} (FLIOP) systems proposed by Boneh et al. (CRYPTO 2019): distributed Sumcheck for proving a batch of multiplications is essentially an optimized concrete instantiation of the FLIOP-based approach.
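To ground the discussion, here is a toy, non-distributed Sumcheck for a multilinear polynomial given by its evaluations on the Boolean cube: the prover's round messages are degree-1 polynomials, and the verifier checks round consistency and finishes with a single evaluation of the multilinear extension. The paper's contribution distributes both roles across MPC parties; this sketch shows only the underlying protocol.

```python
import random

P = 2**31 - 1  # toy prime field

def mle_eval(table, point):
    """Evaluate the multilinear extension of `table` (evaluations on {0,1}^m)."""
    for r in point:
        half = len(table) // 2
        table = [((1 - r) * table[i] + r * table[half + i]) % P for i in range(half)]
    return table[0]

def sumcheck(table):
    """Both roles of sumcheck played locally; returns True iff the verifier accepts."""
    m = len(table).bit_length() - 1
    claim = sum(table) % P                     # claimed sum over {0,1}^m
    cur, point = table[:], []
    for _ in range(m):
        half = len(cur) // 2
        g0, g1 = sum(cur[:half]) % P, sum(cur[half:]) % P   # g_i(0), g_i(1)
        if (g0 + g1) % P != claim:             # verifier's round consistency check
            return False
        r = random.randrange(P)                # verifier's challenge
        claim = (g0 + r * (g1 - g0)) % P       # new claim: g_i(r)
        cur = [((1 - r) * cur[i] + r * cur[half + i]) % P for i in range(half)]
        point.append(r)
    return claim == mle_eval(table, point)     # final "oracle" query

table = [random.randrange(P) for _ in range(8)]  # f on {0,1}^3
assert sumcheck(table)
```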
As concrete applications of our techniques, we first consider semi-honest MPC protocols based on Shamir secret-sharing with an honest majority. For $M$ parties and circuit size $N$, achieving malicious security with our approach only introduces an additional $10MN+O(M\log{N})$ total computation, communication of reconstructions of $4\log{N}+6$ secret-shared values, $O(\log{N})$ rounds, and $O(\log{N})$ correlated randomness. This shows that malicious security with abort in honest-majority MPC comes {\em free} in terms of both computation and communication.
We then consider dishonest majority MPC, where the best known semi-honest protocol achieves $2N$ online communication per party and sublinear preprocessing by using programmable pseudorandom correlation generators (PCGs). We realize malicious MPC with $4N+O(\log{N})$ online communication as well as sublinear preprocessing, matching the optimal $2\times$ communication overhead in Hazay et al. (JOC 2024). Our protocol is essentially obtained by using Sumcheck techniques to check authenticated multiplication triple relations, which requires only $N+1$ {\em standard Beaver triples} and $O(\log{N})$ random authenticated shares for $N$ semi-honestly generated authenticated triples. Compared to the FLIOP-based checking mechanism (Boyle et al. CRYPTO 2022) that requires $O(\sqrt{N})$ communication and $O(N^{1.5})$ computation, we do not introduce any cryptographic assumption beyond PCGs, and have $O(N)$ computation.
Partial-guess, Pre-sieve, Greedy-search - New Unified Key Recovery Framework of Impossible Boomerang Attacks: Full-round Attack on ARADI
The impossible boomerang attack (IBA) is a very powerful attack, and existing results show that it is more effective than the impossible differential attack in the related-key scenario. However, several limitations persist in the current key recovery process: the division of pre-guess keys is rather coarse; the details of S-boxes are ignored in the differential propagation; and the complexity estimation and the determination of the key-guessing order are relatively rough and primitive. These obstacles prevent the broader application of impossible boomerang attacks. In this paper, we propose a series of improvements that overcome these limitations: a flexible partial pre-guess key technique based on directed graphs, which enables selective determination of the guessed keys required to generate partial pairs; a pre-sieving technique, which enables the early elimination of impossible quartets using the cipher details; and a greedy key-guessing strategy, which enables an efficient search for the key-guessing order and a precise complexity evaluation. Moreover, we integrate these techniques into a unified key recovery framework for IBAs. Additionally, we apply it to attack ARADI, a low-latency block cipher proposed by the NSA in 2024 for the purpose of memory encryption. Consequently, we achieve the first full-round attack on ARADI, with a data complexity of $2^{130}$, a time complexity of $2^{254.81}$, and a memory complexity of $2^{252.14}$. None of the previous IBA key recovery methods can attain such an outcome, which demonstrates the power of our new techniques and framework.
A Note on P $\neq$ NP
The question of whether the complexity class P equals NP is a major unsolved problem in theoretical computer science. The key to proving that P $\neq$ NP is to show that there is no efficient (polynomial-time) algorithm for some language in NP, which is almost impossible; that is, outright proving P $\neq$ NP is almost impossible. In all attempts to prove P $\neq$ NP, all we can do is try to provide the best clues or evidence for P $\neq$ NP.
In this paper, we introduce a new language, the Add/XNOR problem, which has the simplest structure and perfect randomness, obtained by extending the subset sum problem. We conjecture that square-root complexity is required to solve the Add/XNOR problem, which is by far the strongest evidence for P $\neq$ NP. That is, problems that are verifiable in polynomial time are not necessarily solvable in polynomial time.
Furthermore, by giving up commutativity and associativity, we design a magma equipped with a permutation and successfully achieve Conjecture 2. Based on this conjecture, we obtain the Add/XOR/XNOR problem and one-way functions that are believed to require exhaustive search to solve or invert.
Blue fish, red fish, live fish, dead fish
We show that the DAG-based consensus protocol Tusk [DKSS22] does not achieve liveness, at least under certain reasonable assumptions on the implementation that are consistent with its specification. In addition, we give a simple 2-round variation of Tusk with lower latency and strong liveness properties, but with suboptimal resilience. We also show that another 2-round protocol, GradedDAG [DZX+24], which has optimal resilience, also has liveness problems analogous to Tusk.
Sing a song of Simplex
We flesh out some details of the recently proposed Simplex atomic broadcast protocol, and modify it so that leaders disperse blocks in a more communication-efficient fashion. The resulting protocol, called DispersedSimplex, maintains the simplicity and excellent -- indeed, optimal -- latency characteristics of the original Simplex protocol. We also present several variations, including a variant that supports "stable leaders", variants that incorporate very recently developed data dissemination techniques that allow us to disperse blocks even more efficiently, and variants that are "signature free". We also suggest a number of practical optimizations and provide concrete performance estimates that take into account not just network latency but also network bandwidth limitations and computational costs. Based on these estimates, we argue that despite its simplicity, DispersedSimplex should, in principle, perform in practice as well as or better than any other state-of-the-art atomic broadcast protocol, at least in terms of common-case throughput and latency.
HyperLoop: Rationally secure efficient cross-chain bridge
Cross-chain bridges, realizing the transfer of information and assets between blockchains, form the core of blockchain interoperability solutions. Most existing bridge networks are modeled in an honest-malicious setting, where the bridge nodes are either honest or malicious. Rationality allows the nodes to deviate from the protocol arbitrarily for an economic incentive. In this work, we present HyperLoop, an efficient cross-chain multi-signature bridge and prove that it is safe and live game-theoretically, under the more realistic rational-malicious model.
As rational bridge nodes are allowed to deviate from the protocol and even collude, a monitoring mechanism is needed, which we realize by introducing whistle-blower nodes. These whistle-blowers constantly check the operations of the bridge and raise complaints to a complaint resolution network in case of discrepancies. To enforce punishments, nodes must stake an amount before participating as bridge nodes. Consequently, a cap on the volume of funds transferred over the bridge is established. We describe a sliding window mechanism and establish the relation between the stake and the sliding window limit that is necessary for the safety of the bridge.
Our design yields a bridge that is economical and efficient in both computation and communication. We implement and deploy our bridge prototype, bridging the Ethereum and Polygon chains over testnets. For a 19-node bridge network, each bridge node takes an average of only 3 msec to detect and sign a source chain request, showing the high efficiency and low latency of the bridge.
No Fish Is Too Big for Flash Boys! Frontrunning on DAG-based Blockchains
Frontrunning is rampant in blockchain ecosystems, yielding attackers profits that have already soared into the millions. Most existing frontrunning attacks focus on manipulating transaction order (namely, prioritizing the attacker's transactions before the victim's) $\textit{within}$ a block. However, for the emerging directed acyclic graph (DAG)-based blockchains, these intra-block frontrunning attacks may not fully reveal the frontrunning vulnerabilities, as these chains introduce block ordering rules to order transactions belonging to distinct blocks.
This work performs the first in-depth analysis of frontrunning attacks toward DAG-based blockchains. We observe that the current block ordering rule is vulnerable to a novel $\textit{inter-block}$ frontrunning attack, which enables the attacker to prioritize ordering its transactions before the victim transactions across blocks. We introduce three attacking strategies: $\textit{(i)}$ Fissure attack, where attackers render the victim transactions ordered later by disconnecting the victim's blocks. $\textit{(ii)}$ Speculative attack, where attackers speculatively construct order-priority blocks. $\textit{(iii)}$ Sluggish attack, where attackers deliberately create low-round blocks assigned a higher ordering priority by the block ordering rule.
We implement our attacks on two open-source DAG-based blockchains, Bullshark and Tusk. We extensively evaluate our attacks in geo-distributed AWS and local environments by running up to $n=100$ nodes. Our experiments show remarkable attack effectiveness. For instance, with the speculative attack, the attackers can achieve a $92.86\%$ attack success rate (ASR) on Bullshark and an $86.27\%$ ASR on Tusk. Using the fissure attack, the attackers can achieve a $94.81\%$ ASR on Bullshark and an $87.31\%$ ASR on Tusk.
We also discuss potential countermeasures for the proposed attack, such as ordering blocks randomly and reordering transactions globally based on transaction fees. However, we find that they either compromise the performance of the system or make the protocol more vulnerable to frontrunning using the existing frontrunning strategies.
SoK: On the Physical Security of UOV-based Signature Schemes
Multivariate cryptography currently centres mostly around UOV-based signature schemes: all multivariate round-2 candidates in NIST’s selection process for additional digital signatures are either UOV itself or close variations of it, namely MAYO, QR-UOV, SNOVA, and UOV. Schemes that were a focus of the multivariate research community but have since been broken, like Rainbow and LUOV, are also based on UOV. Both UOV and the schemes based on it have been frequently analyzed regarding their physical security in the course of the NIST process. However, a comprehensive analysis of the physical security of UOV-based signature schemes is missing.
In this work, we bridge this gap and create a comprehensive overview of physical attacks on UOV and its variants from the second round of NIST’s selection process for additional post-quantum signature schemes, which has just started. First, we collect all existing side-channel and fault attacks on UOV-based schemes and transfer them to the current UOV specification. Since UOV has been subject to significant changes over the past few years, e.g., adaptations to the expanded secret key, some attacks need to be reassessed. Next, we introduce new physical attacks in order to obtain an overview that is as complete as possible. We then show how all these attacks would translate to MAYO, QR-UOV, and SNOVA. To improve the resistance of UOV-based signature schemes towards physical attacks, we discuss and introduce dedicated countermeasures. As a related result, we observe that certain implementation decisions, like key compression techniques and randomization choices, also have a large impact on physical security, in particular on the effectiveness of the considered fault attacks. Finally, we provide implementations of UOV and MAYO for the ARM Cortex-M4 architecture that feature first-order masking and protection against selected fault attacks. We benchmark the resulting overhead on a NUCLEO-L4R5ZI board and validate our approach by performing a TVLA on original and protected subroutines, yielding significantly smaller t-values for the latter.
Updatable Public-Key Encryption, Revisited
We revisit Updatable Public-Key Encryption (UPKE), which was introduced as a practical mechanism for building forward-secure cryptographic protocols. We begin by observing that all UPKE notions to date are neither syntactically flexible nor secure enough for the most important multi-party protocols motivating UPKE. We provide an intuitive taxonomy of UPKE properties -- some partially or completely overlooked in the past -- along with an overview of known (explicit and implicit) UPKE constructions. We then introduce a formal UPKE definition capturing all intuitive properties needed for multi-party protocols.
Next, we provide a practical pairing-based construction for which we provide concrete security bounds under a standard assumption in the random oracle and the algebraic group model. The efficiency profile of the scheme compares very favorably with existing UPKE constructions (despite the added flexibility and stronger security). For example, when used to improve the forward security of the Messaging Layer Security protocol [RFC9420], our new UPKE construction requires $\approx 1\%$ of the bandwidth of the next-most efficient UPKE construction satisfying the strongest UPKE notion previously considered.
VITARIT: Paying for Threshold Services on Bitcoin and Friends
Blockchain service offerings have seen a rapid rise in recent times. Many of these services realize a decentralized architecture with a threshold adversary to avoid a single point of failure and to mitigate key escrow issues. While payments to such services are straightforward in systems supporting smart contracts, achieving fairness poses challenges in systems like Bitcoin, which adhere to the UTXO model with limited scripting capabilities. This is especially challenging without smart contracts, as we wish to pay only the required threshold of $t+1$ out of the $n$ servers offering the service together, without any server claiming the payment twice.
In this paper, we introduce VITARIT, a novel payment solution tailored for threshold cryptographic services in UTXO systems like Bitcoin. Our approach guarantees robust provable security while facilitating practical deployment. We focus on the $t$-out-of-$n$ distributed threshold verifiable random function (VRF) service with certain properties, such as threshold BLS signatures, a recently highlighted area of interest. Our protocol enables clients to request VRF values from the threshold service, triggering payments to up to $t+1$ servers of the distributed threshold VRF.
Our efficient design relies on simple transactions using signature verification scripts, making it immediately applicable in Bitcoin-like systems. We also introduce new tools and techniques at both the cryptographic and transaction layers, including a novel signature-VRF exchange protocol for standard constructions, which may be of independent interest. Additionally, our transaction flow design prevents malicious servers from claiming payments twice, which has broader implications for decentralized payment systems. Our prototype implementation shows that in the two-party interaction, the client takes 126.4 msec and the server takes 204 msec, demonstrating the practicality and deployability of the system.
Matching radar signals and fingerprints with MPC
Vessels can be recognised by their navigation radar due to the characteristics of the emitted radar signal. This is particularly useful if one wants to build situational awareness without revealing one's own presence. Most countries maintain databases of radar fingerprints but will not readily share these due to national security regulations. Sharing of such information will generally require some form of information exchange agreement.
However, all parties in a coalition benefit from correct identification. We use secure multiparty computation to match a radar signal measurement against secret databases and output plausible matches with their likelihoods. We also provide a demonstrator using MP-SPDZ.
SoK: Understanding zk-SNARKs: The Gap Between Research and Practice
Zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs) are a powerful tool for proving computation correctness, attracting significant interest from researchers, developers, and users. However, the complexity of zk-SNARKs has created gaps between these groups, hindering progress. Researchers focus on constructing efficient proving systems with stronger security and new properties, while developers and users prioritize toolchains, usability, and compatibility.
In this work, we provide a comprehensive study of zk-SNARKs, from theory to practice, pinpointing gaps and limitations. We first present a master recipe that unifies the main steps in converting a program into a zk-SNARK. We then classify existing zk-SNARKs according to their key techniques. Our classification addresses the main differences in practically valuable properties between existing zk-SNARK schemes. We survey over 40 zk-SNARKs published since 2013 and provide a reference table listing their categories and properties. Following the steps of the master recipe, we then survey 11 popular general-purpose libraries. We elaborate on these libraries' usability, compatibility, efficiency, and limitations. Since installing and executing these zk-SNARK systems is challenging, we also provide a complete virtual environment in which to run the compiler for each of them. We identify that the proving system is the primary focus in academic cryptography, whereas the constraint system presents a bottleneck in industry. To bridge this gap, we offer recommendations and advocate for the open-source community to enhance documentation, standardization, and compatibility.
VRaaS: Verifiable Randomness as a Service on Blockchains
Web3 applications such as on-chain games, NFT minting, and leader elections necessitate access to unbiased, unpredictable, and publicly verifiable randomness. Despite the broad use cases and huge demand, there is a notable absence of comprehensive treatments of on-chain verifiable randomness services. To bridge this gap, we offer an extensive formal analysis of on-chain verifiable randomness services.
We present the $first$ formalization of on-chain verifiable randomness in the blockchain setting by introducing the notion of Verifiable Randomness as a Service (VRaaS). We formally define VRaaS using an ideal functionality $\mathcal{F}_{\text{VRaaS}}$ in the Universal Composability model. Our definition not only captures the core features of randomness services, such as unbiasability, unpredictability, and public verifiability, but also accounts for many other crucial nuances pertaining to different entities involved, such as smart contracts.
Within our framework, we study a generic design of a Verifiable Random Function (VRF)-based randomness service, where the randomness requester provides an input on which the randomness is evaluated as the VRF output. We show that it satisfies our formal VRaaS definition. Furthermore, we show that the generic protocol captures many real-world randomness services like Chainlink VRF and Supra dVRF.
Moreover, we investigate the minimality of the framework. Towards that, we first show that the two transactions built into our framework are actually $necessary$ for any randomness service to support the essential qualities. We also discover $practical$ $vulnerabilities$ in other designs, such as the Algorand beacon, Pyth VRF, and Band VRF, captured within our framework.
Secure Showing of Partial Attributes
Anonymous Attribute-Based Credentials (ABCs) allow users to prove possession of attributes while adhering to various authentication policies and without revealing unnecessary information. Single-use ABCs are particularly appealing for their lightweight nature and practical efficiency. These credentials are typically built using blind signatures, with Anonymous Credentials Light (ACL) being one of the most prominent schemes in the literature. However, the security properties of single-use ABCs, especially their secure showing property, have not been fully explored, and prior definitions and corresponding security proofs fail to effectively address scenarios involving partial attribute disclosure. In this work, we propose a stronger secure showing definition that ensures robust security even under selective attribute revelation. Our definition extends the winning condition of the existing secure showing experiment by adding various constraints on the subsets of opened attributes. We show how to represent this winning condition as a matching problem in a suitable bipartite graph, thus allowing it to be verified efficiently. We then prove that ACL satisfies our strong secure showing notion without any modification. Finally, we define double-spending prevention for single-use ABCs and show how ACL satisfies the definition.
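Since the strengthened winning condition reduces to bipartite matching, it can be checked in polynomial time with any standard augmenting-path algorithm. The sketch below is generic Kuhn-style matching; how credentials and opened-attribute subsets form the two vertex sets follows the paper's reduction and is not reproduced here.

```python
def max_bipartite_matching(adj, n_right):
    """Kuhn's augmenting-path algorithm.
    adj[u] lists the right-vertices compatible with left-vertex u."""
    match_right = [-1] * n_right          # right vertex -> matched left vertex

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))

# Toy instance: 3 left vertices, 3 right vertices.
adj = [[0, 1], [0], [1, 2]]
assert max_bipartite_matching(adj, 3) == 3   # a perfect matching exists
```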
A New World in the Depths of Microcrypt: Separating OWSGs and Quantum Money from QEFID
While in classical cryptography, one-way functions (OWFs) are widely regarded as the “minimal assumption,” the situation in quantum cryptography is less clear. Recent works have put forward two concurrent candidates for the minimal assumption in quantum cryptography: One-way state generators (OWSGs), postulating the existence of a hard search problem with an efficient verification algorithm, and EFI pairs, postulating the existence of a hard distinguishing problem. Two recent papers [Khurana and Tomer STOC’24; Batra and Jain FOCS’24] showed that OWSGs imply EFI pairs, but the reverse direction remained open. In this work, we give strong evidence that the opposite direction does not hold: We show that there is a quantum unitary oracle relative to which EFI pairs exist, but OWSGs do not. In fact, we show a slightly stronger statement that holds also for EFI pairs that output classical bits (QEFID).
As a consequence, we separate, via our oracle, QEFID and one-way puzzles from OWSGs and several other Microcrypt primitives, including efficiently verifiable one-way puzzles and unclonable state generators. In particular, this solves a problem left open in [Chung, Goldin, and Gray Crypto’24].
Using similar techniques, we also establish a fully black-box separation (which is slightly weaker than an oracle separation) between private-key quantum money schemes and QEFID pairs.
One conceptual implication of our work is that the existence of an efficient verification algorithm may lead to qualitatively stronger primitives in quantum cryptography.
A Modular Approach to Unclonable Cryptography
We explore a new pathway to designing unclonable cryptographic primitives. We propose a new notion called unclonable puncturable obfuscation (UPO) and study its implications for unclonable cryptography. Using UPO, we present modular (and in some cases, arguably, simple) constructions of many primitives in unclonable cryptography, including public-key quantum money, quantum copy-protection for many classes of functionalities, unclonable encryption, and single-decryptor encryption.
Notably, we obtain the following new results assuming the existence of UPO:
- We show that any cryptographic functionality can be copy-protected as long as this functionality satisfies a notion of security, which we term as puncturable security. Prior feasibility results focused on copy-protecting specific cryptographic functionalities.
- We show that copy-protection exists for any class of evasive functions as long as the associated distribution satisfies a preimage-sampleability condition. Prior works demonstrated copy-protection for point functions, which follows as a special case of our result.
- We show that unclonable encryption exists in the plain model. Prior works demonstrated feasibility results in the quantum random oracle model.
We put forward a candidate construction of UPO and prove two notions of security, each based on the existence of (post-quantum) sub-exponentially secure indistinguishability obfuscation and one-way functions, the quantum hardness of learning with errors, and a new conjecture called the simultaneous inner product conjecture.
Efficient Error Detection Methods for the Number Theoretic Transforms in Lattice-Based Algorithms
The Number Theoretic Transform (NTT) is a crucial component in many post-quantum cryptographic (PQC) algorithms, enabling efficient polynomial multiplication. However, the reliability of NTT computations is an important concern, especially for safety-critical applications. This work presents novel techniques to improve the fault tolerance of NTTs used in prominent PQC schemes such as Kyber, Dilithium, and Falcon. The work first establishes a theoretical framework for error detection in NTTs, exploiting the inherent algebraic properties of these transforms. It derives necessary and sufficient conditions for constructing error-detecting vectors that can identify single faults without the need for costly recomputation. For the Dilithium scheme, the work further advances the state-of-the-art by developing the first algorithm capable of detecting up to two maliciously placed faults. The proposed error detection methods are shown to reduce the number of required multiplications by half, leading to significant improvements in computational efficiency compared to existing single error-detecting algorithms. Concrete implementations for Kyber, Dilithium, and Falcon demonstrate the practicality and effectiveness of the error-detection scheme.
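To see why linearity enables cheap checks, consider a generic Freivalds-style verification in the spirit of error-detecting vectors (not the paper's exact construction): since the NTT is a linear map $M$, one can precompute $u^T = r^T M$ for a fixed vector $r$ and verify $\langle r, \mathrm{NTT}(x)\rangle = \langle u, x\rangle \pmod q$, catching any single fault at a position where $r$ is nonzero for the cost of two inner products rather than a recomputation.

```python
# Freivalds-style check for a (naive, matrix-form) NTT over Z_17 with n = 8.
# Assumed toy parameters: q = 17, omega = 2 is a primitive 8th root of unity mod 17.
q, n, omega = 17, 8, 2
M = [[pow(omega, i * j, q) for j in range(n)] for i in range(n)]  # NTT matrix

def ntt(x):
    return [sum(M[i][j] * x[j] for j in range(n)) % q for i in range(n)]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b)) % q

r = [3, 1, 4, 1, 5, 9, 2, 6]                                 # fixed check vector
u = [dot(r, [M[i][j] for i in range(n)]) for j in range(n)]  # u^T = r^T M

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = ntt(x)
assert dot(r, y) == dot(u, x)          # fault-free computation: check passes

y[3] = (y[3] + 1) % q                  # inject a single fault
assert dot(r, y) != dot(u, x)          # detected, since r[3] is nonzero mod q
```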
TallyGuard: Privacy Preserving Tallied-as-cast Guarantee
This paper presents a novel approach to verifiable vote tallying using additive homomorphism, which can be appended to existing voting systems without modifying the underlying infrastructure. Existing End-to-End Verifiable (E2E-V) systems like Belenios and ElectionGuard rely on distributed trust models or are vulnerable to decryption compromises, making them less suitable for general elections. Our approach introduces a tamper-evident commitment to votes through cryptographic hashes derived from homomorphic encryption schemes such as Paillier. The proposed system provides tallied-as-cast verifiability without revealing individual votes, thereby preventing coercion. The system also allows public verification of results. We also show that the system can be transitioned to quantum-secure encryption, such as Regev's scheme, to future-proof it. We discuss how to deploy this system in a real-world scenario, including for general political elections, analyzing the security implications and reporting on the limitations of the system. We believe the proposed system offers a practical solution to the problem of verifiable vote tallying in general elections.
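To illustrate the additive homomorphism such a tally relies on, here is a textbook Paillier sketch with toy parameters (illustration only; the deployed system layers commitments and public verification on top):

```python
import math, secrets

# Textbook Paillier with tiny primes (never use such sizes in practice).
p, q = 1009, 1013
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1                                    # standard generator choice

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def enc(m):
    r = secrets.randbelow(n - 1) + 1         # random r in [1, n-1]
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the underlying votes.
votes = [1, 0, 1, 1, 0]
tally_ct = 1
for v in votes:
    tally_ct = (tally_ct * enc(v)) % n2
assert dec(tally_ct) == sum(votes)
```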
Efficient Pseudorandom Correlation Generators for Any Finite Field
Correlated randomness lies at the core of efficient modern secure multi-party computation (MPC) protocols. The cost of generating the correlated randomness required for the MPC online phase often constitutes a bottleneck in the overall protocol.
A recent paradigm of {\em pseudorandom correlation generators} (PCGs), initiated by Boyle et al. (CCS'18, Crypto'19), offers an appealing solution to this issue. In a nutshell, each party is given a short PCG seed that can be locally expanded into long correlated strings satisfying the target correlation. Among the various types of correlations, oblivious linear evaluation (OLE) is a fundamental and useful primitive for typical MPC protocols over arithmetic circuits.
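To fix notation for the target correlation: in an OLE over $\mathbb{F}_p$, one party holds $(a, z_0)$ and the other $(b, z_1)$ with $z_0 + z_1 = a \cdot b$. The snippet below simply samples the correlation from a trusted dealer; the entire point of a PCG is to replace this dealer with short, locally expandable seeds.

```python
import secrets

p = 2**61 - 1  # toy prime field

def deal_ole():
    """Trusted-dealer OLE over F_p: z0 + z1 == a * b (mod p)."""
    a, b = secrets.randbelow(p), secrets.randbelow(p)
    z0 = secrets.randbelow(p)
    z1 = (a * b - z0) % p
    return (a, z0), (b, z1)   # party 0's view, party 1's view

(a, z0), (b, z1) = deal_ole()
assert (z0 + z1) % p == (a * b) % p
```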
Towards efficiently generating a large amount of OLE correlations, with applications to MPC protocols, we establish the following results:
(i) We propose a novel {\em programmable} PCG construction for OLE over any field $\mathbb{F}_p$. For $kN$ OLE correlations, we require $O(k\log{N})$ communication and $O(k^2N\log{N})$ computation, where $k$ is an arbitrary integer $\geq 2$. Previous works either have quadratic computation (Boyle et al. Crypto'19), or can only support fields of size larger than $2$ (Bombar et al. Crypto'23).
(ii) We extend the above OLE construction to provide various types of correlations for any finite field. One fascinating application is an efficient PCG for two-party {\em authenticated Boolean multiplication triples}. For $kN$ authenticated triples, we offer PCGs with seed size of $O(k^2\log{N})$ bits. To the best of our knowledge, such correlations had not previously been realized with sublinear communication and quasi-linear computation.
(iii) In addition, the \emph{programmability} admits efficient PCGs for multi-party Boolean triples, and thus the first efficient MPC protocol for Boolean circuits with {\em silent} preprocessing. In particular, we show that $kN$ $m$-party Boolean multiplication triples can be generated with $O(m^2k\log{N})$ bits of communication, while the state-of-the-art FOLEAGE (Asiacrypt'24) requires a broadcast channel and takes $mkN+O(m^2\log{kN})$ bits of communication.
(iv) Finally, we present efficient PCGs for circuit-dependent preprocessing, matrix multiplication triples, and string OTs, etc. Compared to previous works, each is an improvement in its own right.