Papers updated in last 31 days (290 results)

Last updated:  2025-02-11
Leaky McEliece: Secret Key Recovery From Highly Erroneous Side-Channel Information
Marcus Brinkmann, Chitchanok Chuengsatiansup, Alexander May, Julian Nowakowski, and Yuval Yarom
The McEliece cryptosystem is a strong contender for post-quantum schemes, including key encapsulation for confidentiality of key exchanges in network protocols. A McEliece secret key is a structured parity check matrix that is transformed via Gaussian elimination into an unstructured public key. We show that this transformation is highly critical with respect to side-channel leakage. We assume leakage of the elementary row operations during Gaussian elimination, motivated by McEliece implementations in the cryptographic libraries Classic McEliece and Botan. We propose a novel decoding algorithm to reconstruct a secret key from its public key with information from a Gaussian transformation leak. Even if the obtained side-channel leakage is extremely noisy, i.e., each bit is flipped with probability as high as τ ≈ 0.4, we succeed in recovering the secret key in a matter of minutes for all proposed (Classic) McEliece instantiations. Remarkably, for high-security McEliece parameters, our attack is more powerful in the sense that it can tolerate even larger τ. We demonstrate our attack on the constant-time reference implementation of Classic McEliece in a single-trace setting, using an STM32L592 ARM processor. Our result stresses the necessity of properly protecting highly structured code-based schemes such as McEliece against side-channel leakage.
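To give a feel for how erroneous a leak with each bit flipped at τ ≈ 0.4 is, the following minimal Python sketch simulates the noise model only; it is our own illustration, and the paper's decoding algorithm that recovers the key from such leakage is not reproduced here.

```python
import random

TAU = 0.4  # bit-flip probability quoted in the abstract

def noisy_leak(bits, tau=TAU):
    """Model of the erroneous side channel: each leaked bit of the
    Gaussian-elimination row operations is flipped independently
    with probability tau."""
    return [b ^ (random.random() < tau) for b in bits]

secret = [random.getrandbits(1) for _ in range(10_000)]
leaked = noisy_leak(secret)
agree = sum(s == l for s, l in zip(secret, leaked)) / len(secret)
print(f"fraction of correctly leaked bits: {agree:.3f} (expected {1 - TAU})")
```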
Last updated:  2025-02-11
Cryptanalysis of a nonlinear filter-based stream cipher
Tim Beyne and Michiel Verbauwhede
It is shown that the stream cipher proposed by Carlet and Sarkar in ePrint report 2025/160 is insecure. More precisely, one bit of the key can be deduced from a few keystream bytes. This property extends to an efficient key-recovery attack. For example, for the proposal with 80-bit keys, a few kilobytes of keystream material are sufficient to recover half of the key.
Last updated:  2025-02-11
Bounded Collusion-Resistant Registered Functional Encryption for Circuits
Yijian Zhang, Jie Chen, Debiao He, and Yuqing Zhang
As an emerging primitive, Registered Functional Encryption (RFE) eliminates the key-escrow issue that threatens numerous functional encryption schemes, by replacing the trusted authority with a transparent key curator and allowing each user to sample their decryption keys locally. In this work, we present a new black-box approach to construct RFE for all polynomial-sized circuits. It achieves adaptive simulation-based security in the bounded collusion model (Gorbunov et al. - CRYPTO'12), where security is ensured only if there are no more than $Q \geq 1$ corrupted users, with $Q$ fixed at the setup phase. Unlike earlier works, we do not employ impractical Indistinguishability Obfuscation (iO); moreover, our approach can be extended to support unbounded users, which was previously only known from iO. Technically, our general compiler exploits garbled circuits and a novel variant of slotted Registered Broadcast Encryption (RBE), namely global slotted RBE. This primitive is similar to slotted RBE, but requires optimally compact public parameters and ciphertexts, so as to satisfy the efficiency requirement of the resulting RFE. We then present two concrete global slotted RBE schemes, from pairings and lattices, respectively. With the proposed compiler, we hence obtain two bounded collusion-resistant RFE schemes. The first scheme relies on the $k$-Lin assumption, while the second supports unbounded users under the LWE and evasive LWE assumptions.
Last updated:  2025-02-11
Optimizing Key Recovery in Impossible Cryptanalysis and Its Automated Tool
Jianing Zhang and Haoyang Wang
Impossible differential (ID) cryptanalysis and impossible boomerang (IB) cryptanalysis are two methods of impossible cryptanalysis against block ciphers. Since the seminal work introduced by Boura et al. in 2014, there have been no substantial advancements in the key recovery process for impossible cryptanalysis, particularly for the IB attack. In this paper, we propose a generic key recovery framework for impossible cryptanalysis that supports arbitrary key-guessing strategies, enabling optimal key recovery attacks. Within the framework, we provide a formal analysis of probabilistic extensions in impossible cryptanalysis for the first time. Besides, for the construction of IB distinguishers, we propose a new method for finding contradictions in multiple rounds. By incorporating these techniques, we propose a Mixed-Integer Linear Programming (MILP)-based tool for finding full ID and IB attacks. To demonstrate the power of our methods, we apply the tool to several block ciphers, including SKINNY, SKINNYee, Midori, and Deoxys-BC. Our approach yields a series of optimal results in impossible cryptanalysis, achieving significant improvements in time and memory complexities. Notably, our IB attack on SKINNYee is the first 30-round attack.
Last updated:  2025-02-10
New Quantum Cryptanalysis of Binary Elliptic Curves (Extended Version)
Kyungbae Jang, Vikas Srivastava, Anubhab Baksi, Santanu Sarkar, and Hwajeong Seo
This paper improves upon the quantum circuits required for Shor's attack on binary elliptic curves. We present two types of quantum point addition, taking both qubit count and circuit depth into consideration. In summary, we propose an in-place point addition that improves upon the work of Banegas et al. from CHES'21, reducing the qubit-count–depth product by $73\%$–$81\%$ depending on the variant. Furthermore, we develop an out-of-place point addition by using additional qubits. This method achieves the lowest circuit depth and offers an improvement of over $92\%$ in the qubit-count–depth product (for a single step). To the best of our knowledge, our work improves upon all previous works (including the CHES'21 paper by Banegas et al., the IEEE Access'22 paper by Putranto et al., and the CT-RSA'23 paper by Taguchi and Takayasu) in terms of circuit depth and qubit-count–depth product. Equipped with the implementations, we discuss the post-quantum security of binary elliptic curve cryptography. Under the MAXDEPTH metric (proposed by the US government's NIST), the quantum circuit with the highest depth in our work is $2^{24}$, which is significantly lower than the MAXDEPTH limit of $2^{40}$. For the gate-count–full-depth product, a metric for estimating quantum attack cost (proposed by NIST), the highest complexity in our work is $2^{60}$ for the curve of degree 571 (comparable to AES-256 in terms of classical security), considerably below the post-quantum security level 1 threshold (of the order of $2^{156}$).
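As a quick sanity check of the security margins quoted above, the arithmetic can be spelled out directly; the exponents are those stated in the abstract, while the comparison itself is our own illustration.

```python
# log2 values quoted in the abstract, compared against NIST's bounds
depth_ours = 24   # deepest quantum circuit in this work: 2^24
MAXDEPTH = 40     # NIST's MAXDEPTH limit: 2^40
cost_ours = 60    # gate count x full depth, degree-571 curve: 2^60
level1 = 156      # approximate security level 1 threshold: 2^156

print(f"depth headroom below MAXDEPTH: 2^{MAXDEPTH - depth_ours}")     # 2^16
print(f"attack cost below level 1 threshold: 2^{level1 - cost_ours}")  # 2^96
```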
Last updated:  2025-02-10
Preservation of Speculative Constant-Time by Compilation
Santiago Arranz Olmos, Gilles Barthe, Lionel Blatter, Benjamin Grégoire, and Vincent Laporte
Compilers often weaken or even discard software-based countermeasures commonly used to protect programs against side-channel attacks; worse, they may also introduce vulnerabilities that attackers can exploit. The solution to this problem is to develop compilers that preserve such countermeasures. Prior work establishes that (a mildly modified version of) the CompCert and Jasmin formally verified compilers preserve constant-time, an information flow policy that ensures that programs are protected against timing side-channel attacks. However, nothing is known about preservation of speculative constant-time, a strengthening of the constant-time policy that ensures that programs are protected against Spectre-v1 attacks. We first show that preservation of speculative constant-time fails in practice, by providing examples of secure programs whose compilations with GCC (at -O0 and -O1) and with Jasmin are not speculative constant-time. Then, we define a proof-of-concept compiler that distills some of the critical passes of the Jasmin compiler, and use the Coq proof assistant to prove that it preserves speculative constant-time. Finally, we patch the Jasmin speculative constant-time type checker and demonstrate that all cryptographic implementations written in Jasmin can be fixed with minimal impact.
Last updated:  2025-02-10
Protecting cryptographic code against Spectre-RSB
Santiago Arranz Olmos, Gilles Barthe, Chitchanok Chuengsatiansup, Benjamin Grégoire, Vincent Laporte, Tiago Oliveira, Peter Schwabe, Yuval Yarom, and Zhiyuan Zhang
It is fundamental that executing cryptographic software must not leak secrets through side channels. For software-visible side channels, it was long believed that "constant-time" programming would be sufficient as a systematic countermeasure. However, this belief was shattered in 2018 by attacks exploiting speculative execution, the so-called Spectre attacks. Recent work shows that language support suffices to protect cryptographic code with minimal overhead against one class of such attacks, Spectre v1, but leaves open the question of whether this result can be extended to also cover other classes of Spectre attacks. In this paper, we answer this question in the affirmative: we design, validate, implement, and verify an approach to protect cryptographic implementations against all known classes of Spectre attacks. The main challenge in this endeavor is attacks exploiting the return stack buffer, known as Spectre-RSB. Our approach combines a new value-dependent information-flow type system that enforces speculative constant-time in an idealized model of transient execution, and a compiler transformation that realizes this idealized model on the generated low-level code. Using the Coq proof assistant, we prove that the type system is sound with respect to the idealized semantics and that the compiler transformation preserves speculative constant-time. We implement our approach in the Jasmin framework for high-assurance cryptography and demonstrate that the overhead incurred by full Spectre protections is below 2% for most cryptographic primitives and reaches only about 5–7% for the more complex post-quantum key-encapsulation mechanism Kyber.
Last updated:  2025-02-10
The supersingular endomorphism ring problem given one endomorphism
Arthur Herlédan Le Merdy and Benjamin Wesolowski
Given a supersingular elliptic curve $E$ and a non-scalar endomorphism $\alpha$ of $E$, we prove that the endomorphism ring of $E$ can be computed in classical time about $\text{disc}(\mathbb{Z}[\alpha])^{1/4}$, and in quantum subexponential time, assuming the generalised Riemann hypothesis. Previous results either had higher complexities, or relied on heuristic assumptions. Along the way, we prove that the Primitivisation problem can be solved in polynomial time (a problem previously believed to be hard), and we prove that the action of smooth ideals on oriented elliptic curves can be computed in polynomial time (previous results of this form required the ideal to be powersmooth, i.e., not divisible by any large prime power). Following the attacks on SIDH, isogenies in high dimension are a central ingredient of our results.
Last updated:  2025-02-10
OBLIVIATOR: Oblivious Parallel Joins and other Operators in Shared Memory Environments
Apostolos Mavrogiannakis, Xian Wang, Ioannis Demertzis, Dimitrios Papadopoulos, and Minos Garofalakis
We introduce oblivious parallel operators designed for both non-foreign key and foreign key equi-joins. Obliviousness ensures nothing is revealed about the data besides input/output sizes, even against a strong adversary that can observe memory access patterns. Our solution achieves this by combining trusted hardware with efficient oblivious primitives for compaction and sorting, and two oblivious algorithms: (i) an oblivious aggregation tree, which can be described as a variation of the parallel prefix sum, customized for trusted hardware, and (ii) a novel algorithm for obliviously expanding the elements of a relation. In the sequential setting, our oblivious join performs $4.6\times$–$5.14\times$ faster than the prior state-of-the-art solution (Krastnikov et al., VLDB 2020) on data sets of size $n=2^{24}$. In the parallel setting, our algorithm achieves a speedup of up to roughly $16\times$ over the sequential version, when running with 32 threads (becoming up to $80\times$ compared to the sequential algorithm of Krastnikov et al.). Finally, our oblivious operators can be used independently to support other oblivious relational database queries, such as oblivious selection and oblivious group-by.
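The abstract describes the oblivious aggregation tree as a variation of the parallel prefix sum. For readers unfamiliar with that primitive, here is a plain, non-oblivious Blelloch scan in Python, simulated sequentially; the paper's enclave-customized oblivious version is naturally more involved.

```python
def blelloch_scan(xs):
    """Work-efficient parallel prefix sum (Blelloch scan), simulated
    sequentially. Each while-iteration corresponds to one parallel
    round over a binary tree laid out in the array."""
    n = len(xs)
    assert n and n & (n - 1) == 0, "power-of-two length, for simplicity"
    a = list(xs)
    d = 1
    while d < n:                       # up-sweep (reduce) phase
        for i in range(2 * d - 1, n, 2 * d):
            a[i] += a[i - d]
        d *= 2
    a[n - 1] = 0
    d = n // 2
    while d >= 1:                      # down-sweep phase
        for i in range(2 * d - 1, n, 2 * d):
            a[i - d], a[i] = a[i], a[i] + a[i - d]
        d //= 2
    return a                           # exclusive prefix sums of xs

assert blelloch_scan([1, 2, 3, 4]) == [0, 1, 3, 6]
```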
Last updated:  2025-02-10
Perfect Somewhat Homomorphic Encryption and 2-Party Computation
Jonathan Trostle
Two-party computation has been an active area of research since Yao's breakthrough results on garbled circuits. We present secret-key additive somewhat homomorphic schemes where the client has perfect privacy (the server can be computationally unbounded). Our basic scheme is additively somewhat homomorphic, and we extend it to be somewhat homomorphic by supporting multiplication. The server handles circuit multiplication gates by returning the multiplicands to the client, which updates the decryption key so that the original ciphertext vector includes the encrypted multiplication-gate outputs. We give a 2-party protocol that also incorporates server inputs, where the client has perfect privacy. Server privacy holds against a computationally bounded adversary, since it depends on the hardness of the subset sum problem. Scaling the 2PC protocol via separate encryption parameters for smaller subcircuits allows the ciphertext size to remain constant as circuit size grows.
Last updated:  2025-02-10
Endomorphisms for Faster Cryptography on Elliptic Curves of Moderate CM Discriminants, II
Dimitri Koshelev and Antonio Sanso
The present article is a natural extension of the previous one about the GLV method of accelerating a (multi-)scalar multiplication on elliptic curves of moderate CM discriminants $D < 0$. In comparison with the first article, much greater magnitudes of $D$ (in absolute value) are achieved, although the base finite fields of the curves have to be quite large. This becomes feasible by resorting to quite powerful algorithmic tools developed primarily in the context of lattice-based and isogeny-based cryptography. Curiously, pre-quantum cryptography borrows research outcomes obtained when seeking, conversely, quantum-resistant solutions or attacks on them. For instance, a $2$-cycle of pairing-friendly MNT curves (with $-D \approx 100{,}000{,}000$, i.e., $\log_2(-D) \approx 26.5$) is relevant for the result of the current article. This $2$-cycle was generated at one time by Guillevic to provide $\approx 128$ security bits, hence it was close to application in real-world zk-SNARKs. Another more performant MNT $2$-cycle (with a slightly lower security level, but a much larger $D$) was actually employed in the protocol Coda (now Mina) until zero-knowledge proof systems on significantly faster pairing-free (or half-pairing) $2$-cycles were invented. We also show in the present work that more lollipop curves, recently proposed by Costello and Korpal to replace MNT ones, are now covered by the GLV technique.
Last updated:  2025-02-10
Finding a polytope: A practical fault attack against Dilithium
Paco Azevedo-Oliveira, Andersson Calle Viera, Benoît Cogliati, and Louis Goubin
In Dilithium, the rejection sampling step is crucial for the proof of security and correctness of the scheme. However, to our knowledge, no attack in the literature takes advantage of the attacker knowing rejected signatures. The aim of this paper is to create a practical black-box attack against Dilithium with weakened rejection sampling. We show that an adversary with enough rejected signatures can recover Dilithium's secret key in less than half an hour on a desktop computer. This result has a direct application: by physically preventing one of the rejection sampling tests from happening, we obtain two fault attacks against Dilithium.
Last updated:  2025-02-10
Basic Lattice Cryptography: The concepts behind Kyber (ML-KEM) and Dilithium (ML-DSA)
Vadim Lyubashevsky
This tutorial focuses on describing the fundamental mathematical concepts and design decisions used in the two "main" lattice schemes standardized by NIST and included in the CNSA 2.0 algorithmic suite: the KEM/encryption scheme CRYSTALS-Kyber (ML-KEM) and the signature scheme CRYSTALS-Dilithium (ML-DSA). In addition, we also give the main ideas behind other lattice-based KEMs like Frodo and NTRU.
Last updated:  2025-02-10
Post-Quantum Security for the Extended Access Control Protocol
Marc Fischlin, Jonas von der Heyden, Marian Margraf, Frank Morgner, Andreas Wallner, and Holger Bock
The Extended Access Control (EAC) protocol for authenticated key agreement is mainly used to secure connections between machine-readable travel documents (MRTDs) and inspection terminals, but it can also be adopted as a universal solution for attribute-based access control with smart cards. The security of EAC is currently based on the Diffie-Hellman problem, which may not be hard when considering quantum computers. In this work we present PQ-EAC, a quantum-resistant version of the EAC protocol. We show how to achieve post-quantum confidentiality and authentication without sacrificing real-world usability on smart cards. To ease adoption, we present two main versions of PQ-EAC: One that uses signatures for authentication and one where authentication is facilitated using long-term KEM keys. Both versions can be adapted to achieve forward secrecy and to reduce round complexity. To ensure backwards-compatibility, PQ-EAC can be implemented using only Application Protocol Data Units (APDUs) specified for EAC in standard BSI TR-03110. Only the protocol messages needed to achieve forward secrecy require an additional APDU not specified in TR-03110. We prove security of all versions in the real-or-random model of Bellare and Rogaway. To show real-world practicality of PQ-EAC we have implemented a version using signatures on an ARM SC300 security controller, which is typically deployed in MRTDs. We also implemented PQ-EAC on a VISOCORE terminal for border control. We then conducted several experiments to evaluate the performance of PQ-EAC executed between chip and terminal under various real-world conditions. Our results strongly suggest that PQ-EAC is efficient enough for use in border control.
Last updated:  2025-02-10
AUCIL: An Inclusion List Design for Rational Parties
Sarisht Wadhwa, Julian Ma, Thomas Thiery, Barnabe Monnot, Luca Zanolini, Fan Zhang, and Kartik Nayak
The decentralized nature of blockchains is touted to provide censorship resistance. However, in reality, the ability of proposers to completely control the contents of a block makes censorship relatively fragile. To combat this, a notion of inclusion lists has been proposed in the blockchain community. This paper presents the first formal study of inclusion lists. Our inclusion list design leverages multiple proposers to propose transactions and improve censorship resistance. The design has two key components. The first component is a utility-maximizing input list creation mechanism that allows rational proposers to achieve a correlated equilibrium while prioritizing high-value transactions. The second component, AUCIL (auction-based inclusion list), is a mechanism for aggregating the input lists from the proposers to output an inclusion list.
Last updated:  2025-02-10
On the Average Random Probing Model
Julien Béguinot and Loïc Masure
We exhibit a gap between the average random probing model, as defined by Dziembowski et al. at Eurocrypt 2015, and the same model, as defined in the recent paper of Brian et al. at Eurocrypt 2024. Whereas any noisy leakage can be tightly reduced to the former one, we show in this paper that it cannot be tightly reduced to the latter one, unless requiring extra assumptions, e.g., if the noisy leakage is deterministic. As a consequence, the reduction from noisy leakages to random probings — without field size loss — remains unproven.
Last updated:  2025-02-10
A Round-Optimal Near-Linear Third-Party Private Set Intersection Protocol
Foo Yee Yeo and Jason H. M. Ying
Third-party private set intersection (PSI) enables two parties, each holding a private set, to compute their intersection and reveal the result only to an inputless third party. In this paper, we present an efficient round-optimal third-party PSI protocol. Our work is motivated by real-world applications such as contact tracing, where speed is essential while concurrently preserving privacy. Our construction requires only $2$ communication rounds and attains a near-linear computational complexity of $O(n^{1+\varepsilon})$ for large dataset size $n$, where $\varepsilon>0$ is any fixed constant. Our improvements stem from algorithmic changes and the incorporation of new techniques, along with precise parameter selections, to achieve a tight asymptotic bound. Furthermore, we also present a third-party PSI cardinality protocol, a functionality that has not been explored in prior third-party PSI work. In the third-party PSI cardinality setting, only the third party obtains the size of the intersection and nothing else. Our construction achieving the cardinality functionality attains quasilinear computational complexity for the third party.
Last updated:  2025-02-10
The Nonlinear Filter Model of Stream Cipher Redivivus
Claude Carlet and Palash Sarkar
The nonlinear filter model is an old and well understood approach to the design of secure stream ciphers. Extensive research over several decades has shown how to attack stream ciphers based on this model and has identified the security properties required of the Boolean function used as the filtering function to resist such attacks. This led to the problem of constructing Boolean functions which provide adequate security and at the same time are efficient to implement. Unfortunately, over the last two decades no good solutions to this problem appeared in the literature. The lack of good solutions has effectively led to the nonlinear filter model becoming more or less obsolete. This is a big loss to the cryptographic design toolkit, since the great advantages of the nonlinear filter model are its simplicity, well-understood security and the potential to provide low-cost solutions for hardware-oriented stream ciphers. In this paper we construct balanced functions on an odd number $n\geq 5$ of variables with the following provable properties: linear bias equal to $2^{-\lfloor n/2\rfloor -1}$, algebraic degree equal to $2^{\lfloor \log_2\lfloor n/2\rfloor \rfloor}$, algebraic immunity at least $\lceil (n-1)/4\rceil$, and fast algebraic immunity at least $1+\lceil (n-1)/4\rceil$; moreover, the functions can be implemented using $O(n)$ NAND gates. The functions are obtained from a simple modification of the well-known class of Maiorana-McFarland bent functions. By appropriately choosing $n$ and the length $L$ of the linear feedback shift register, we show that it is possible to obtain examples of stream ciphers which are $\kappa$-bit secure against known types of attacks for various values of $\kappa$. We provide concrete proposals for $\kappa=80,128,160,192,224$ and $256$. For the $80$-bit, $128$-bit, and the $256$-bit security levels, the circuits for the corresponding stream ciphers require about 1743.5, 2771.5, and 5607.5 NAND gates respectively. For the $80$-bit and the $128$-bit security levels, the gate count estimates compare quite well to the famous ciphers Trivium and Grain-128a respectively, while for the $256$-bit security level, we do not know of any other stream cipher design which has such a low gate count.
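Since all of the stated properties are explicit functions of $n$, they are easy to tabulate. Below is a minimal Python sketch evaluating the abstract's formulas; the sample values of $n$ are our own choice, not the paper's concrete proposals.

```python
import math

def filter_properties(n: int) -> dict:
    """Evaluate the provable properties stated above for odd n >= 5."""
    assert n >= 5 and n % 2 == 1
    half = n // 2  # floor(n/2)
    return {
        "linear_bias": 2.0 ** -(half + 1),              # 2^{-floor(n/2)-1}
        "algebraic_degree": 2 ** int(math.log2(half)),  # 2^{floor(log2(floor(n/2)))}
        "algebraic_immunity_at_least": math.ceil((n - 1) / 4),
        "fast_algebraic_immunity_at_least": 1 + math.ceil((n - 1) / 4),
    }

for n in (5, 15, 31):  # illustrative odd values of n
    print(n, filter_properties(n))
```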
Last updated:  2025-02-10
Beware of KECCAK: A Practical Fault Injection Attack Scheme Apply to All Phases of ML-KEM and ML-DSA
Yuxuan Wang, Jintong Yu, Shipei Qu, Xiaolin Zhang, Xiaowei Li, Chi Zhang, and Dawu Gu
ML-KEM and ML-DSA are NIST-standardized lattice-based post-quantum cryptographic algorithms. In both algorithms, KECCAK is the designated hash algorithm and extendable-output function. It is extensively used for deriving sensitive information, making it a valuable target for attackers. In the field of fault injection attacks, few works targeted KECCAK, and they have not fully explored its potential as a general component. Consequently, many attacks remain undiscovered. This article systematically analyzes methods for recovering keys, forging signatures, and bypassing verification by utilizing (partially) recovered KECCAK outputs, presenting six attacks against ML-KEM and five attacks against ML-DSA, significantly expanding the capabilities and applicability of attacks through faulting KECCAK. These attacks cover the key generation, encapsulation, decapsulation, signing, and verification phases, making our scheme the first to apply to all phases of ML-KEM and ML-DSA. To support these upper-layer attacks, we propose various customized fault attacks on KECCAK, which manipulate the control flow through loop-abort faults to recover the (partial) output. The proposed attacks are validated on the C implementations of the PQClean library’s ML-KEM and ML-DSA running on embedded devices. Experiments show that loop-abort faults can be induced using electromagnetic fault injection on ARM Cortex-M microprocessors from multiple series, achieving a success rate of 89.5%. Furthermore, once the fault injection is successful, the proposed attacks can succeed with a probability of 100%.
Last updated:  2025-02-10
Practical Electromagnetic Fault Injection on Intel Neural Compute Stick 2
Shivam Bhasin, Dirmanto Jap, Marina Krček, Stjepan Picek, and Prasanna Ravi
Machine learning (ML) has been widely deployed in various applications, with many applications being in critical infrastructures. One recent paradigm is edge ML, an implementation of ML on embedded devices for Internet-of-Things (IoT) applications. In this work, we have conducted a practical experiment on Intel Neural Compute Stick (NCS) 2, an edge ML device, with regard to fault injection (FI) attacks. More precisely, we have employed electromagnetic fault injection (EMFI) on NCS 2 to evaluate the practicality of the attack on a real target device. We have investigated multiple fault parameters with a low-cost pulse generator, aiming to achieve misclassification at the output of the inference. Our experimental results demonstrated the possibility of achieving practical and repeatable misclassifications.
Last updated:  2025-02-10
The Supersingular Isogeny Path and Endomorphism Ring Problems: Unconditional Reductions
Maher Mamah
In this paper we study several computational problems related to current post-quantum cryptosystems based on isogenies between supersingular elliptic curves. In particular, we prove that the supersingular isogeny path and endomorphism ring problems are unconditionally equivalent under polynomial-time reductions. We show that access to a factoring oracle is sufficient to solve the Quaternion path problem of KLPT, and prove that these problems are equivalent, whereas previous results either assumed heuristics or the generalised Riemann Hypothesis (GRH). Consequently, given Shor's quantum algorithm for factorisation, our results yield unconditional quantum polynomial-time reductions between the isogeny path and EndRing problems. Recently, these reductions have become foundational for the security of isogeny-based cryptography.
Last updated:  2025-02-10
Adaptive Distributional Security: A Framework for Input-Adaptive Cryptography
Cruz Barnum and David Heath
It is often desirable to break cryptographic primitives into two components: an input-independent offline component, and a cheap online component used when inputs arrive. Security of such online/offline primitives must be proved in the input-adaptive setting: the adversary chooses its input adaptively, based on what it sees in the offline-phase. Proving security in the input-adaptive setting can be difficult, particularly when one wishes to achieve simulation security and avoid idealized objects like a random oracle (RO). This work proposes a framework for reasoning about input-adaptive primitives: adaptive distributional security (ADS). Roughly, an ADS primitive provides security when it is used with inputs drawn from one of two distributions that are themselves hard to distinguish. ADS is useful as a framework for the following reasons: - An ADS definition can often circumvent impossibility results imposed on the corresponding simulation-based definition. This allows us to decrease the online-cost of primitives, albeit by using a weaker notion of security. - With care, one can typically upgrade an ADS-secure object into a simulation-secure object (by increasing cost in the online-phase). - ADS is robust, in the sense that (1) it enables a form of composition and (2) interesting ADS primitives are highly interconnected in terms of which objects imply which other objects. - Many useful ADS-secure objects are plausibly secure from straightforward symmetric-key cryptography. We start by defining the notion of an ADS encryption (ADE) scheme. A notion of input-adaptive encryption can be easily achieved from RO, and the ADE definition can be understood as capturing the concrete property provided by RO that is sufficient to achieve input-adaptivity. From there, we use ADE to achieve ADS variants of garbled circuits and oblivious transfer, to achieve simulation-secure garbled circuits, oblivious transfer, and two-party computation, and prove interconnectedness of these primitives. In sum, this results in a family of objects with extremely cheap online-cost.
Last updated:  2025-02-10
Computing Quaternion Embeddings and Endomorphism rings of Supersingular Oriented Elliptic curves
Maher Mamah
In this paper, we investigate several computational problems motivated by post-quantum cryptosystems based on isogenies and ideal class group actions on oriented elliptic curves. Our main technical contribution is an efficient algorithm for embedding the ring of integers of an imaginary quadratic field \( K \) into some maximal order of the quaternion algebra \( B_{p,\infty} \) ramified at a prime \( p \) and infinity. Assuming the Generalized Riemann Hypothesis (GRH), our algorithm runs in probabilistic polynomial time, improving upon previous results that relied on heuristics or required the factorization of \( \textnormal{disc}(K) \). Notably, this algorithm may be of independent interest. Our approach enhances the work of Love and Boneh on computing isogenies between \( M \)-small elliptic curves by eliminating heuristics and improving computational efficiency. Furthermore, given a quadratic order \( \mathfrak{O} \) in \( K \), we show that our algorithm reduces the computational endomorphism ring problem of \( \mathfrak{O} \)-oriented elliptic curves to the Vectorization problem in probabilistic polynomial time, assuming the conductor of \( \mathfrak{O} \) can be efficiently factorized. Previously, the best known result required the full factorization of \( \textnormal{disc}(\mathfrak{O}) \), which may be exponentially large. Additionally, when the conductor of \( \mathfrak{O} \) can be efficiently factorized, we establish a polynomial-time equivalence between the Quaternion Order Embedding Problem, which asks to embed a quadratic order \( \mathfrak{O} \) into a maximal order in \( B_{p,\infty} \), and computing horizontal isogenies between \( \mathfrak{O} \)-oriented elliptic curves. Leveraging this reduction, we propose a rigorous algorithm, under GRH, that solves the quaternion order embedding problem in time \( \tilde{O}(|\mathrm{disc}(\mathfrak{O})|^{1/2}) \), improving upon previous methods that required higher asymptotic time and relied on several heuristics.
Last updated:  2025-02-10
Efficient Permutation Correlations and Batched Random Access for Two-Party Computation
Stanislav Peceny, Srinivasan Raghuraman, Peter Rindal, and Harshal Shah
In this work we formalize the notion of a two-party permutation correlation $(A, B), (C, \pi)$ s.t. $\pi(A)=B+C$ for a random permutation $\pi$ of $n$ elements and vectors $A,B,C\in \mathbb{F}^n$. This correlation can be viewed as an abstraction and generalization of the Chase et al. (Asiacrypt 2020) share translation protocol. We give a systematization of knowledge for how such a permutation correlation can be derandomized to allow the parties to perform a wide range of oblivious permutations of secret-shared data. This systematization immediately enables the translation of various popular honest-majority protocols to be efficiently instantiated in the two-party setting, e.g. collaborative filtering, sorting, database joins, graph algorithms, and many more. We give two novel protocols for efficiently generating a random permutation correlation. The first uses MPC-friendly PRFs to generate a correlation of $n$ elements, each of size $\ell=\log|\mathbb{F}|$ bits, with $O(n\ell)$ bit-OTs, time, communication, and only three rounds. Similar asymptotics previously required relatively expensive public-key cryptography, e.g. Paillier or LWE. Our protocol implementation for $n=2^{20},\ell=128$ requires just 7 seconds and $\sim 2\ell n$ bits of communication, a respective $40\times$ and $1.1\times$ improvement on the LWE solution of Juvekar et al. (CCS 2018). The second protocol is based on pseudo-random correlation generators and achieves an overhead that is sublinear in the string length $\ell$, i.e. the communication and number of OTs is $O(n\log \ell)$. The latter protocol's overhead has larger hidden constants, so it is more efficient only when long strings are permuted, e.g. in graph algorithms. Finally, we present a suite of highly efficient protocols based on permutations for performing various batched random access operations. These include the ability to extract a hidden subset of a secret-shared list. More generally, we give ORAM-like protocols for obliviously reading and writing from a list in a batched manner. We argue that this suite of batched random access protocols should be a first class primitive in the MPC practitioner's toolbox.
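Concretely, the correlation can be sampled in the clear as follows; this is a toy Python illustration of the object itself, not of the MPC protocols that generate it securely. The prime, the size, and the convention for applying $\pi$ to a vector are our own choices.

```python
import random

p = 2**31 - 1   # toy prime field; the paper works over a general field F
n = 8           # toy size; the paper benchmarks n = 2**20, l = 128

A = [random.randrange(p) for _ in range(n)]
B = [random.randrange(p) for _ in range(n)]
pi = list(range(n))
random.shuffle(pi)  # convention here: (pi(A))[i] = A[pi[i]]

# Choose C so that pi(A) = B + C holds componentwise over F_p.
C = [(A[pi[i]] - B[i]) % p for i in range(n)]

# One party holds (A, B); the other holds (C, pi).
assert all(A[pi[i]] == (B[i] + C[i]) % p for i in range(n))
```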
Last updated:  2025-02-09
Binary Codes for Error Detection and Correction in a Computationally Bounded World
Jad Silbak and Daniel Wichs
We study error detection and correction in a computationally bounded world, where errors are introduced by an arbitrary $\textit{polynomial-time}$ adversarial channel. Our focus is on $\textit{seeded}$ codes, where the encoding and decoding procedures can share a public random seed, but are otherwise deterministic. We can ask for either $\textit{selective}$ or $\textit{adaptive}$ security, depending on whether the adversary can choose the message being encoded before or after seeing the seed. For large alphabets, a recent construction achieves essentially optimal rate versus error tolerance trade-offs under minimal assumptions, surpassing information-theoretic limits. However, for the binary alphabet, the only prior improvement over information theoretic codes relies on non-standard assumptions justified via the random oracle model. We show the following: $\textbf{Selective Security under LWE:}$ Under the learning with errors (LWE) assumption, we construct selectively secure codes over the binary alphabet. For error detection, our codes achieve essentially optimal rate $R \approx 1$ and relative error tolerance $\rho \approx \frac{1}{2}$. For error correction, they can uniquely correct $\rho < 1/4$ relative errors with a rate $R$ that essentially matches that of the best list-decodable codes with error tolerance $\rho$. Both cases provide significant improvements over information-theoretic counterparts. The construction relies on a novel form of 2-input correlation intractable hash functions that we construct from LWE. $\textbf{Adaptive Security via Crypto Dark Matter:}$ Assuming the exponential security of a natural collision-resistant hash function candidate based on the ``crypto dark matter'' approach of mixing linear functions over different moduli, we construct adaptively secure codes over the binary alphabet, for both error detection and correction. They achieve essentially the same trade-offs between error tolerance $\rho$ and rate $R$ as above, with the caveat that for error-correction they only do so for sufficiently small values of $\rho$.
Last updated:  2025-02-09
Experimentally studying path-finding problem between conjugates in supersingular isogeny graphs: Optimizing primes and powers to speed-up cycle finding
Madhurima Mukhopadhyay
We study the problem of finding a path between conjugate supersingular elliptic curves over $\mathbb{F}_{p^2}$ for a prime $p$, which is important for cycle finding in supersingular isogeny graphs. We observe that for any given $p$, there is some $l$ corresponding to $p$ that accelerates conjugate path-finding. Moreover, in terms of running time, the most efficient way to survey the graph is to look at $i(=3)$ steps at once. We outline methods for choosing the next vertex of a pseudo-random walk so as to reach the conjugate vertex faster. We experimentally investigate the paths between Frobenius conjugates for wide ranges of small primes $l$. Finally, we introduce sets to experimentally learn about the structure of isogeny graphs when short cycles are present.
Last updated:  2025-02-09
Polynomial Inversion Algorithms in Constant Time for Post-Quantum Cryptography
Abhraneel Dutta, Emrah Karagoz, Edoardo Persichetti, and Pakize Sanal
The computation of the inverse of a polynomial over a quotient ring or a finite field plays a very important role during the key generation of post-quantum cryptosystems like NTRU, BIKE, and LEDACrypt. It is therefore important that there exists an efficient algorithm capable of running in constant time, to prevent timing side-channel attacks. In this article, we study constant-time algorithms based on both Fermat's Little Theorem and the Extended GCD Algorithm, and provide a detailed comparison in terms of performance. According to our results, the constant-time Extended GCD-based Bernstein-Yang algorithm performs 1.76x-3.76x better on \texttt{x86} platforms than FLT-based methods. Although we report numbers from a software implementation, we additionally provide a short glimpse of some recent results when these two algorithms are implemented on various hardware platforms. Finally, we also explore other exponentiation algorithms that work similarly to the Itoh-Tsuji inversion method. These algorithms perform fewer polynomial multiplications and perform 1.56x-1.96x better on \texttt{x86} platforms than the Itoh-Tsuji inversion method.
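For intuition about the FLT-based approach compared above, here is a minimal Python sketch of inversion by exponentiation in a prime field. It is a toy stand-in for the polynomial/quotient-ring setting of NTRU, BIKE, and LEDACrypt, and Python's built-in pow makes no constant-time guarantees; the prime and operand are illustrative.

```python
def flt_inverse(a: int, p: int) -> int:
    """Invert a in GF(p) via Fermat's Little Theorem: a^(p-2) = a^(-1) mod p.
    The exponent p-2 is fixed and data-independent, which is what makes
    FLT-style inversion amenable to constant-time implementation."""
    if a % p == 0:
        raise ZeroDivisionError("0 is not invertible")
    return pow(a, p - 2, p)

p = 12289  # toy prime, for illustration only
a = 4321
assert (a * flt_inverse(a, p)) % p == 1
```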
Last updated:  2025-02-08
BulletCT: Towards More Scalable Ring Confidential Transactions With Transparent Setup
Nan Wang, Qianhui Wang, Dongxi Liu, Muhammed F. Esgin, and Alsharif Abuadbba
RingCT signatures are essential components of Ring Confidential Transaction (RingCT) schemes on blockchain platforms, enabling anonymous transaction spending and significantly impacting the scalability of these schemes. This paper makes two primary contributions. First, we provide the first thorough analysis of a recently developed Any-out-of-N proof in the discrete logarithm (DLOG) setting and the associated RingCT scheme, introduced by ZGSX23 (S&P '23). The proof conceals the number of secrets to offer greater anonymity than K-out-of-N proofs, and uses an efficient "K-Weight" technique for its construction. However, we identify for the first time several limitations of using Any-out-of-N proofs, such as increased transaction sizes, heightened cryptographic complexity and potential security risks. These limitations prevent them from effectively mitigating the longstanding scalability bottleneck. Second, we explore the potential of using K-out-of-N proofs to enhance the scalability of RingCT schemes. Our primary innovation is a new DLOG-based RingCT signature that integrates a refined "K-Weight"-based K-out-of-N proof and an entirely new tag proof. The latter is the first to efficiently enable the linkability of RingCT signatures derived from the former, effectively resisting double-spending attacks. Finally, we identify and patch a linkability flaw in ZGSX23's signature. We benchmark our scheme against this patched one to show that our scheme achieves a boost in scalability, marking a promising step forward.
Last updated:  2025-02-08
Twist and Shout: Faster memory checking arguments via one-hot addressing and increments
Srinath Setty and Justin Thaler
A memory checking argument enables a prover to prove to a verifier that it is correctly processing reads and writes to memory. They are used widely in modern SNARKs, especially in zkVMs, where the prover proves the correct execution of a CPU including the correctness of memory operations. We describe a new approach for memory checking, which we call the method of one-hot addressing and increments. We instantiate this method via two different families of protocols, called Twist and Shout. Twist supports read/write memories, while Shout targets read-only memories (also known as lookup arguments). Both Shout and Twist have logarithmic verifier costs. Unlike prior works, these protocols do not invoke "grand product" or "grand sum" arguments. Twist and Shout significantly improve the prover costs of prior works across the full range of realistic memory sizes, from tiny memories (e.g., 32 registers as in RISC-V), to memories that are so large they cannot be explicitly materialized (e.g., structured lookup tables of size $2^{64}$ or larger, which arise in Lasso and the Jolt zkVM). Detailed cost analysis shows that Twist and Shout are well over 10x cheaper for the prover than state-of-the-art memory-checking procedures configured to have logarithmic proof length. Prior memory-checking procedures can also be configured to have larger proofs. Even then, we estimate that Twist and Shout are at least 2–4x faster for the prover in key applications. Finally, using Shout, we provide two fast-prover SNARKs for non-uniform constraint systems, both of which achieve minimal commitment costs (the prover commits only to the witness): (1) SpeedySpartan applies to Plonkish constraints, substantially improving the previous state-of-the-art protocol, BabySpartan; and (2) Spartan++ applies to CCS (a generalization of Plonkish and R1CS), improving prover times over the previous state-of-the-art protocol, Spartan, by 6x.
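As a small illustration of the terminology, one-hot addressing represents a memory address as an indicator vector, so a memory read becomes an inner product. The sketch below shows only this encoding; it is our own intuition-level illustration of the term, not the Twist or Shout protocols themselves.

```python
def one_hot(addr: int, size: int) -> list[int]:
    """One-hot encoding of an address: 1 at position addr, 0 elsewhere."""
    return [1 if i == addr else 0 for i in range(size)]

M = [7, 1, 4, 9]  # toy read-only memory
addr = 2
# Reading M[addr] expressed as the inner product <one_hot(addr), M>.
read = sum(e * m for e, m in zip(one_hot(addr, len(M)), M))
assert read == M[addr]
```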
Last updated:  2025-02-08
Asymptotic improvements to provable algorithms for the code equivalence problem
Huck Bennett, Drisana Bhatia, Jean-François Biasse, Medha Durisheti, Lucas LaBuff, Vincenzo Pallozzi Lavorante, and Philip Waitkevich
We present several new provable algorithms for two variants of the code equivalence problem on linear error-correcting codes, the Linear Code Equivalence Problem (LCE) and the Permutation Code Equivalence Problem (PCE). Specifically, for arbitrary codes of block length $n$ and dimension $k$ over any finite field $\mathbb{F}_q$, we show: 1) A deterministic algorithm running in $2^{n + o(n+q)}$ time for LCE. 2) A randomized algorithm running in $2^{n/2 + o(n+q)}$ time for LCE and PCE. 3) A quantum algorithm running in $2^{n/3 + o(n+q)}$ time for LCE and PCE. The first algorithm complements the deterministic roughly $2^n$-time algorithm of Babai (SODA 2011) for PCE. The second two algorithms improve on recent work of Nowakowski (PQCrypto 2025), which gave algorithms with similar running times, but only for code equivalence on \emph{random} codes and only over fields of order $q \geq 7$.
Last updated:  2025-02-08
Communication-Optimal Convex Agreement
Diana Ghinea, Chen-Da Liu-Zhang, and Roger Wattenhofer
Byzantine Agreement (BA) allows a set of $n$ parties to agree on a value even when up to $t$ of the parties involved are corrupted. While previous works have shown that, for $\ell$-bit inputs, BA can be achieved with the optimal communication complexity $O(\ell n)$ for sufficiently large $\ell$, BA only ensures that honest parties agree on a meaningful output when they hold the same input, rendering the primitive inadequate for many real-world applications. This gave rise to the notion of Convex Agreement (CA), introduced by Vaidya and Garg [PODC'13], which requires the honest parties' outputs to be in the convex hull of the honest inputs. Unfortunately, all existing CA protocols incur a communication complexity of at least $O(\ell n^2)$. In this work, we introduce the first CA protocol with the optimal communication of $O(\ell n)$ bits for inputs in $\mathbb{Z}$ of size $\ell = \Omega(\kappa \cdot n \log^2 n)$, where $\kappa$ is the security parameter.
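For inputs in $\mathbb{Z}$, the convex hull of the honest inputs is simply the interval between their minimum and maximum, so the CA output condition is easy to state in code. This is a trivial sketch of the validity predicate only, not of the protocol.

```python
def in_convex_hull(output: int, honest_inputs: list[int]) -> bool:
    """Convex Agreement validity over Z: every honest output must lie in
    the convex hull [min, max] of the honest parties' inputs."""
    return min(honest_inputs) <= output <= max(honest_inputs)

assert in_convex_hull(5, [3, 5, 9])        # 5 lies in [3, 9]
assert not in_convex_hull(12, [3, 5, 9])   # 12 falls outside [3, 9]
```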
Last updated:  2025-02-08
Relativized Succinct Arguments in the ROM Do Not Exist
Annalisa Barbara, Alessandro Chiesa, and Ziyi Guan
A relativized succinct argument in the random oracle model (ROM) is a succinct argument in the ROM that can prove/verify the correctness of computations that involve queries to the random oracle. We prove that relativized succinct arguments in the ROM do not exist. The impossibility holds even if the succinct argument is interactive, and even if soundness is computational (rather than statistical). This impossibility puts on a formal footing the commonly-held belief that succinct arguments require non-relativizing techniques. Moreover, our results stand in sharp contrast with other oracle models, for which a recent line of work has constructed relativized succinct non-interactive arguments (SNARGs). Indeed, relativized SNARGs are a powerful primitive that, e.g., can be used to obtain constructions of IVC (incrementally-verifiable computation) and PCD (proof-carrying data) based on falsifiable cryptographic assumptions. Our results rule out this approach for IVC and PCD in the ROM.
Last updated:  2025-02-08
AutoDiVer: Automatically Verifying Differential Characteristics and Learning Key Conditions
Marcel Nageler, Shibam Ghosh, Marlene Jüttler, and Maria Eichlseder
Differential cryptanalysis is one of the main methods of cryptanalysis and has been applied to a wide range of ciphers. While it is very successful, it also relies on certain assumptions that do not necessarily hold in practice. One of these is the hypothesis of stochastic equivalence, which states that the probability of a differential characteristic behaves similarly for all keys. Several works have demonstrated examples where this hypothesis is violated, impacting the attack complexity and sometimes even invalidating the investigated prior attacks. Nevertheless, the hypothesis is still typically taken for granted. In this work, we propose AutoDiVer, an automatic tool for thoroughly verifying differential characteristics. First, the tool supports calculating the expected probability of differential characteristics while considering the key schedule of the cipher. Second, the tool supports estimating the size of the space of keys for which the characteristic permits valid pairs, and deducing conditions for these keys. AutoDiVer implements a custom SAT modeling approach and takes advantage of a combination of features of advanced SAT solvers, including approximate model counting and clause learning. To show applicability to many different kinds of block ciphers like strongly aligned, weakly aligned, and ARX ciphers, we apply AutoDiVer to GIFT, PRESENT, RECTANGLE, SKINNY, WARP, SPECK, and SPEEDY.
Last updated:  2025-02-08
SPADE: Digging into Selective and PArtial DEcryption using Functional Encryption
Camille Nuoskala, Hossein Abdinasibfar, and Antonis Michalas
Functional Encryption (FE) is a cryptographic technique established to guarantee data privacy while allowing the retrieval of specific results from the data. While traditional decryption methods rely on a secret key disclosing all the data, FE introduces a more subtle approach. The key generation algorithm generates function-specific decryption keys that can be adaptively provided based on policies. Adaptive access control is a good feature for privacy-preserving techniques. Generic schemes have been designed to run basic functions, such as linear regression. However, they often provide a narrow set of outputs, resulting in a lack of thorough analysis. The bottom line is that despite significant research, FE still requires appropriate constructions to unleash its full potential in securely analyzing data and providing more insights. In this article, we introduce SPADE, a novel FE scheme that features multiple users and offers fine-grained access control through partial decryption of the ciphertexts. Unlike existing FE schemes, our construction also supports qualitative data, such as genomics, expanding the applications of privacy-preserving analysis to enable a comprehensive study of the data. SPADE is a significant advancement that balances privacy and data analysis with clear implications in healthcare and finance. To verify its applicability, we conducted extensive experiments on datasets used in sleep medicine (hypnogram data) and DNA analysis (genomic records).
Last updated:  2025-02-08
How to Compress Garbled Circuit Input Labels, Efficiently
Marian Dietz, Hanjun Li, and Huijia Lin
Garbled circuits are a foundational primitive in both theory and practice of cryptography. Given $(\hat{C}, K[x])$, where $\hat{C}$ is the garbling of a circuit $C$ and $K[x] = \{K[i, x_i]\}$ are the input labels for an input $x$, anyone can recover $C(x)$, but nothing else about the input $x$. Most research efforts focus on minimizing the size of the garbled circuit $\hat{C}$. In contrast, the work by Applebaum, Ishai, Kushilevitz, and Waters (CRYPTO' 13) initiated the study of minimizing the cost for transferring the input labels $K[x]$. Later improved in a follow-up by Applebaum et al. (STOC' 23), the state-of-the-art techniques allow compressing the input labels to the optimal rate of $1 + o(1)$. That is, each input label can be transferred by essentially sending 1 bit. However, existing solutions are computationally expensive, requiring large numbers of public-key operations (such as RSA exponentiation). In this work, we present an efficient input label compression technique based on Ring-LWE. We achieve the same optimal rate of $1 + o(1)$, by making use of additional communication in an offline stage (before the input $x$ becomes known), a paradigm that has already been explored in prior works. A novel feature of the offline communication in our scheme is that the information sent is either reusable or compressible using a random oracle, leading to small amortized offline cost $o(|x|)$. We further demonstrate concrete efficiency through an implementation whose online latency outperforms the naive baseline (which sends all of $K[x]$ in the online phase) in a realistic network with a bandwidth of up to 45Mbps. This break-even point could be pushed even further by leveraging the large potential for parallelization of computation. Finally, we apply our techniques to construct maliciously-secure two-party computation protocols with succinct online communication: the online phase starts once the circuit $C$ becomes known, and requires exchanging only $poly(\lambda)$ bits (independent of $|C|$). After inputs $x_A$, $x_B$ arrive, an additional $|x_A| + |x_B| + poly(\lambda)$ bits need to be sent.
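For reference, the naive baseline mentioned above, sending $K[i, x_i]$ for each input bit in the online phase, looks as follows in Python; the label length and randomness source are illustrative choices of ours.

```python
import os

LABEL_BYTES = 16  # illustrative 128-bit labels

def gen_labels(num_wires: int):
    """For each input wire i, sample a label pair (K[i,0], K[i,1])."""
    return [(os.urandom(LABEL_BYTES), os.urandom(LABEL_BYTES))
            for _ in range(num_wires)]

def select_labels(labels, x_bits):
    """Naive online transfer: send K[i, x_i] for every bit of x, i.e.
    |x| * 128 bits; the compression above cuts this to ~1 bit per label."""
    return [labels[i][b] for i, b in enumerate(x_bits)]

K = gen_labels(4)
K_x = select_labels(K, [1, 0, 1, 1])
assert len(K_x) == 4 and all(len(k) == LABEL_BYTES for k in K_x)
```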
Last updated:  2025-02-07
PEReDi: Privacy-Enhanced, Regulated and Distributed Central Bank Digital Currencies
Amirreza Sarencheh, Aggelos Kiayias, and Markulf Kohlweiss
Central Bank Digital Currencies (CBDCs) aspire to offer a digital replacement for physical cash and, as such, must address two fundamental yet conflicting requirements. On the one hand, they should be private to prevent the emergence of a financial “panopticon.” On the other hand, they must be regulation friendly, facilitating threshold-limiting, tracing, and counterparty auditing functionalities necessary for compliance with regulations such as Know Your Customer (KYC), Anti-Money Laundering (AML), and Combating the Financing of Terrorism (CFT), as well as financial stability considerations. In this work, we propose PEReDi, a new asynchronous model for CBDCs and present an efficient construction that, for the first time, simultaneously addresses these challenges in full. Moreover, recognizing the necessity of avoiding a single point of failure, our construction is distributed to ensure that all its properties remain intact even when a bounded number of entities are corrupted by an adversary. Achieving all the above properties efficiently is technically involved; among others, our construction employs suitable cryptographic tools to thwart man-in-the-middle attacks, introduces a novel traceability mechanism with significant performance gains over previously known techniques, and, perhaps surprisingly, shows how to obviate Byzantine agreement or broadcast from the optimistic execution path of a payment, something that results in an essentially optimal communication pattern and minimal communication overhead. We demonstrate the efficiency of our payment system by presenting detailed computational and communication cost analyses. Beyond “simple” payments, we also discuss how our scheme can support one-off large transfers while complying with Know Your Transaction (KYT) disclosure requirements. Our CBDC concept is expressed and realized within the Universal Composition (UC) framework, providing a modular and secure way for integration into a broader financial ecosystem.
Last updated:  2025-02-07
Snake-eye Resistant PKE from LWE for Oblivious Message Retrieval and Robust Encryption
Zeyu Liu, Katerina Sotiraki, Eran Tromer, and Yunhao Wang
Oblivious message retrieval (OMR) allows resource-limited recipients to outsource the message retrieval process without revealing which messages are pertinent to which recipient. Its realizations in recent works leave an open problem: can an OMR scheme be both practical and provably secure against spamming attacks from malicious senders (i.e., DoS-resistant) under standard assumptions? In this paper, we first prove that a prior construction $\mathsf{OMRp2}$ is DoS-resistant under a standard LWE assumption, resolving an open conjecture of prior works. Then, we present $\mathsf{DoS\text{-}PerfOMR}$: a provably DoS-resistant OMR construction that is 12x faster than $\mathsf{OMRp2}$, and (almost) matches the performance of the state-of-the-art OMR scheme that is $\textit{not}$ DoS-resistant (proven by the attacks we show). To achieve this, we analyze the $\textit{snake-eye resistance}$ property for general PKE schemes (i.e., it is hard to encrypt an identical message under two keys). We construct a new lattice-based PKE scheme: $\mathsf{LWEmongrass}$, that is provably snake-eye resistant and has better efficiency than the PVW scheme underlying $\mathsf{OMRp2}$. We also show that the natural candidates (e.g., RingLWE PKE) are not snake-eye resistant. Furthermore, we show that a snake-eye resistant PKE scheme implies a robust PKE scheme, thus introducing the first robust lattice-based PKE scheme without relying on the KEM-DEM paradigm and its inherent inefficiencies. Of independent interest, we introduce two variants of LWE with side information, as components towards proving the properties of $\mathsf{LWEmongrass}$, and reduce standard LWE to them for the parameters of interest.
Last updated:  2025-02-07
NodeChain: Cheap Data Integrity Without Consensus
Orfeas Stefanos Thyfronitis Litos, Zhaoxuan Wu, Alfredo Musumeci, Songyun Hu, James Helsby, Michael Breza, and William Knottenbelt
Blockchains enable decentralised applications that withstand Byzantine failures and do not need a central authority. Unfortunately, their massive replication requirements preclude their use on constrained devices. We propose a novel blockchain-based data structure which forgoes replication without affecting the append-only nature of blockchains, making it suitable for maintaining data integrity over networks of storage-constrained devices. Our solution does not provide consensus, which is not required by our motivating application, namely securely storing sensor data of containers in cargo ships. We elucidate the practical promise of our technique by following a multi-faceted approach: We (i) formally prove the security of our protocol in the Universal Composition (UC) setting, as well as (ii) provide a small-scale proof-of-concept implementation, (iii) a performance simulation for large-scale deployments which showcases a reduction in storage of more than $1000$x compared to traditional blockchains, and (iv) a resilience simulation that predicts the practical effects of network jamming attacks.
Last updated:  2025-02-07
Worst-Case Lattice Sampler with Truncated Gadgets and Applications
Corentin Jeudy and Olivier Sanders
Gadget-based samplers have proven to be a key component of several cryptographic primitives, in particular in the area of privacy-preserving mechanisms. Most constructions today follow the approach introduced by Micciancio and Peikert (MP) yielding preimages whose dimension grows linearly with that of the gadget. To improve performance, some papers have proposed to truncate the gadget, but at the cost of an important feature of the MP sampler, namely the ability to invert arbitrary syndromes. Technically speaking, they replace the worst-case MP sampler by an average-case sampler that can only be used in specific contexts. Far from being a mere theoretical restriction, it prevents the main applications of gadget-based samplers from using truncated variants and thus from benefiting from the associated performance gains. In this paper, we solve this problem by describing a worst-case sampler that still works with truncated gadgets. Its main strength is that it retains the main characteristics of the MP sampler while providing flexibility in the choice of the truncation parameter. As a consequence, it can be used as a plug-in replacement for all applications relying on the MP sampler so far, leading to performance improvements of up to 30% as illustrated by several examples in this paper. Our sampler is supported by a thorough security analysis that addresses the hurdles met by previous works and its practicality is demonstrated by a concrete implementation.
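For context, a minimal sketch (ours; illustrative parameters) of the base-$b$ gadget underlying the MP approach: any syndrome $u$ can be inverted by base-$b$ decomposition, which is exactly the worst-case capability that naive truncation sacrifices and that this paper's sampler restores. Real samplers output randomized Gaussian preimages; the deterministic digits below only illustrate worst-case invertibility.

```python
# Minimal sketch of the Micciancio-Peikert gadget (our illustration):
# g = (1, b, ..., b^(k-1)) lets one invert *any* syndrome u mod q by
# base-b decomposition; truncating the gadget (dropping low-order
# entries) loses this worst-case property.
b, q = 4, 3329
k = 6                        # smallest k with b**k >= q
g = [b**i for i in range(k)]

def g_inverse(u):
    """Digits x with <g, x> = u (mod q) and 0 <= x_i < b."""
    u %= q
    digits = []
    for _ in range(k):
        digits.append(u % b)
        u //= b
    return digits

u = 1234
x = g_inverse(u)
assert sum(gi * xi for gi, xi in zip(g, x)) % q == u
```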
Last updated:  2025-02-07
Improved Lattice Blind Signatures from Recycled Entropy
Corentin Jeudy and Olivier Sanders
Blind signatures represent a class of cryptographic primitives enabling privacy-preserving authentication with several applications such as e-cash or e-voting. It is still a very active area of research, in particular in the post-quantum setting where the history of blind signatures has been hectic. Although it started to shift very recently with the introduction of a few lattice-based constructions, all of the latter give up an important characteristic of blind signatures (size, efficiency, or security under well-known assumptions) to achieve the others. In this paper, we propose another design which revisits the link between the two main procedures of blind signatures, namely issuance and showing, demonstrating that we can significantly alleviate the second one by adapting the former. Concretely, we show that we can harmlessly inject excess randomness in the issuance phase, and then recycle the entropy surplus during showing to decrease the complexity of the zero-knowledge proof which constitutes the main component of the signature. This leads to a blind signature scheme with small sizes, low complexity, and that still relies on well-known lattice assumptions.
Last updated:  2025-02-07
A light white-box masking scheme using Dummy Shuffled Secure Multiplication
Alex Charlès and Aleksei Udovenko
In white-box cryptography, early encoding-based countermeasures have been broken by the DCA attack, leading to the utilization of masking schemes against a surge of automated attacks. The recent filtering attack from CHES 2024 broke the last viable masking scheme from CHES 2021 resisting both computational and algebraic attacks, raising the need for new countermeasures. In this work, we perform the first formal study of the combinations of existing countermeasures and demonstrate that applying Dummy Shuffling (EUROCRYPT 2021) and then ISW masking (CRYPTO 2003) to a circuit provides algebraic, correlation, and filtering security, which are necessary conditions to withstand state-of-the-art automated attacks. We also show that applying these two countermeasures in the opposite order leads to a Higher-Order Filtering attack, highlighting the importance of the order of application of the combined countermeasures. Finally, we propose a new masking scheme called S5, standing for the Semi-Shuffled Secret Sharing Scheme, which merges Dummy Shuffling and ISW in a single countermeasure more efficiently than a direct composition.
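For reference, a minimal sketch (ours) of the ISW multiplication gadget that the studied compositions build on; the white-box setting additionally wraps such gadgets in dummy shuffling, which is not shown here, and share recombination appears below only to check functional correctness.

```python
# Sketch of the ISW (CRYPTO 2003) masked AND gadget over GF(2),
# one of the two countermeasures whose composition is analyzed above.
import secrets

def share(x, d):
    """Split bit x into d XOR-shares."""
    s = [secrets.randbits(1) for _ in range(d - 1)]
    s.append(x ^ (sum(s) % 2))
    return s

def isw_and(a, b):
    """Masked AND of XOR-shared bits a, b; returns XOR-shares of a*b."""
    d = len(a)
    c = [a[i] & b[i] for i in range(d)]
    for i in range(d):
        for j in range(i + 1, d):
            r = secrets.randbits(1)
            c[i] ^= r
            c[j] ^= r ^ (a[i] & b[j]) ^ (a[j] & b[i])
    return c

d = 4
a, b = share(1, d), share(1, d)
c = isw_and(a, b)
assert sum(c) % 2 == 1    # shares recombine to 1 AND 1
```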
Last updated:  2025-02-07
Deny Whatever You Want: Dual-Deniable Public-Key Encryption
Zhiyuan An and Fangguo Zhang
We introduce an enhanced requirement of deniable public key encryption that we call dual-deniability. It asks that a sender who is coerced should be able to produce fake randomness, which can explain the target ciphertext as the encryption of any alternative message under any valid key she/he wishes to deny. Compared with the original notion of deniability (Canetti et al. in CRYPTO ’97, hereafter named message-deniability), this notion further shields the anonymity of the receiver against coercion attacks. We first give a formal definition of dual-deniability, along with its weak-mode variant. For conceptual understanding, we then show that dual-deniability implies semantic security and anonymity against CPA, is separated from full robustness, and even contradicts key-less or mixed robustness, while being (constructively) implied by key-deniability and full robustness under a minor assumption for bit encryption. As for the availability of dual-deniability, our main scheme is a generic approach from ciphertext-simulatable PKE, where we devise a subtle multi-encryption schema to hide the true message within random masking ciphertexts in the plan-ahead setting. Further, we leverage the weak model to present a more efficient scheme with negligible detection probability and constant ciphertext size. Besides, we revisit the notable scheme of Sahai and Waters (STOC ’14) and show it is inherently dual-deniable. Finally, we extend the Boneh-Katz transform to capture CCA security, deriving dual-deniable and CCA-secure PKE from any selectively dual-deniable IBE in the multi-TA setting. Overall, our work establishes the feasibility of anonymous messaging against coercion attacks.
Last updated:  2025-02-07
A Critical Analysis of Deployed Use Cases for Quantum Key Distribution and Comparison with Post-Quantum Cryptography
Nick Aquina, Bruno Cimoli, Soumya Das, Kathrin Hövelmanns, Fiona Johanna Weber, Chigo Okonkwo, Simon Rommel, Boris Škorić, Idelfonso Tafur Monroy, and Sebastian Verschoor
Quantum Key Distribution (QKD) is currently being discussed as a technology to safeguard communication in a future where quantum computers compromise traditional public-key cryptosystems. In this paper, we conduct a comprehensive security evaluation of QKD-based solutions, focusing on real-world use cases sourced from academic literature and industry reports. We analyze these use cases, assess their security and identify the possible advantages of deploying QKD-based solutions. We further compare QKD-based solutions with Post-Quantum Cryptography (PQC), the alternative approach to achieving security when quantum computers compromise traditional public-key cryptosystems, evaluating their respective suitability for each scenario. Based on this comparative analysis, we critically discuss and comment on which use cases QKD is suited for, considering factors such as implementation complexity, scalability, and long-term security. Our findings contribute to a better understanding of the role QKD could play in future cryptographic infrastructures and offer guidance to decision-makers considering the deployment of QKD.
Last updated:  2025-02-07
Improved NTT and CRT-based RNR Blinding for Side-Channel and Fault Resistant Kyber
Max Duparc and Mounir Taha
In this paper, we build upon the blinding methods introduced in recent years to enhance the protection of lattice-based cryptographic schemes against side-channel and fault injection attacks. Specifically, we propose a cost-efficient blinded Number Theoretic Transform (NTT) that impedes the convergence of Soft Analytical Side-Channel Attacks (SASCA), even with limited randomness sampling. Additionally, we extend the blinding mechanism based on the Chinese Remainder Theorem (CRT) and Redundant Number Representation (RNR) introduced by Heinz and Pöppelmann by reducing the randomness sampling overhead and accelerating the verification phase. These two blinding mechanisms are compatible with each other and, when combined, provide enhanced resistance against side-channel attacks, both classical and soft analytical, as well as fault injection attacks, while maintaining high performance and low overhead, making the approach well-suited for practical applications, particularly in resource-constrained IoT environments.
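A minimal sketch of the RNR idea in isolation (ours; illustrative parameters, not the paper's concrete scheme): a coefficient $x \bmod q$ travels as a randomized representative $x + rq$ modulo $Rq$ for a fresh random $r$, so intermediate values are randomized while every ring operation still reduces correctly modulo $q$ at the end.

```python
# Minimal RNR blinding sketch (illustrative parameters, not the
# paper's): a value x mod q is carried as x + r*q modulo R*q, which
# randomizes intermediates yet preserves all arithmetic mod q.
import secrets

q = 3329            # Kyber modulus
R = 2**16           # redundancy factor; all arithmetic is done mod R*q

def blind(x):
    return (x + secrets.randbelow(R) * q) % (R * q)

def unblind(xb):
    return xb % q

x, y = 1234, 2718
xb, yb = blind(x), blind(y)
s = (xb + yb) % (R * q)         # blinded addition
t = (xb * yb) % (R * q)         # blinded multiplication
assert unblind(s) == (x + y) % q
assert unblind(t) == (x * y) % q
```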
Last updated:  2025-02-07
On the Atomicity and Efficiency of Blockchain Payment Channels
Di Wu, Shoupeng Ren, Yuman Bai, Lipeng He, Jian Liu, Wu Wen, Kui Ren, and Chun Chen
Payment channels have emerged as a promising solution to address the performance limitations of cryptocurrency payments, enabling efficient off-chain transactions while maintaining security guarantees. However, existing payment channel protocols, including the widely-deployed Lightning Network and the state-of-the-art Sleepy Channels, suffer from a fundamental vulnerability: non-atomic state transitions create race conditions that can lead to unexpected financial losses. We first formalize current protocols into a common paradigm and prove that this vulnerability is fundamental: any protocol following this paradigm cannot guarantee balance security due to the inherent race conditions in their design. To address this limitation, we propose a novel atomic paradigm for payment channels that ensures atomic state transitions, effectively eliminating race conditions while maintaining all desired functionalities. Based on this paradigm, we introduce Ultraviolet, a secure and efficient payment channel protocol that achieves both atomicity and high performance, while avoiding the introduction of unimplemented Bitcoin features. Ultraviolet reduces the number of required messages per transaction by half compared to existing solutions, while maintaining comparable throughput. We formally prove the security of Ultraviolet under the universal composability framework and demonstrate its practical efficiency through extensive evaluations across multiple regions. This results in a 37% and 52% reduction in latency compared to the Lightning Network and Sleepy Channels, respectively. Regarding throughput, Ultraviolet achieves performance comparable to the Lightning Network and delivers 2× the TPS of Sleepy Channels.
Last updated:  2025-02-06
PIE: $p$-adic Encoding for High-Precision Arithmetic in Homomorphic Encryption
Luke Harmon, Gaetan Delavignette, Arnab Roy, and David Silva
A large part of current research in homomorphic encryption (HE) aims towards making HE practical for real-world applications. In any practical HE, an important issue is to convert the application data (type) to the data type suitable for the HE. The main purpose of this work is to investigate an efficient HE-compatible encoding method that is generic, and can be easily adapted to apply to the HE schemes over integers or polynomials. $p$-adic number theory provides a way to transform rationals to integers, which makes it a natural candidate for encoding rationals. Although one may use naive number-theoretic techniques to perform rational-to-integer transformations without reference to $p$-adic numbers, we contend that the theory of $p$-adic numbers is the proper lens to view such transformations. In this work we identify mathematical techniques (supported by $p$-adic number theory) as appropriate tools to construct a generic rational encoder which is compatible with HE. Based on these techniques, we propose a new encoding scheme PIE, that can be easily combined with both AGCD-based and RLWE-based HE to perform high precision arithmetic. After presenting an abstract version of PIE, we show how it can be attached to two well-known HE schemes: the AGCD-based IDGHV scheme and the RLWE-based (modified) Fan-Vercauteren scheme. We also discuss the advantages of our encoding scheme in comparison with previous works.
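The rational-to-integer transformation at the heart of such encoders can be illustrated with classical rational reconstruction (a sketch of ours following the textbook method, not the exact PIE construction): a fraction $a/b$ maps to $a \cdot b^{-1} \bmod m$, encodings add and multiply homomorphically, and the extended Euclidean algorithm recovers the fraction as long as numerator and denominator stay below $\sqrt{m/2}$.

```python
# Encode rationals as integers mod m and decode by rational
# reconstruction (our textbook-style sketch; m is a stand-in for a
# p^k-style plaintext modulus). Requires Python 3.8+ for pow(x, -1, m).
from math import isqrt

def encode(num, den, m):
    """Map the rational num/den (gcd(den, m) = 1) to an integer mod m."""
    return num * pow(den, -1, m) % m

def decode(c, m):
    """Recover num/den with |num|, den <= sqrt(m/2) via extended Euclid."""
    bound = isqrt(m // 2)
    r0, r1 = m, c % m
    t0, t1 = 0, 1
    while r1 > bound:
        qq = r0 // r1
        r0, r1 = r1, r0 - qq * r1
        t0, t1 = t1, t0 - qq * t1
    if t1 < 0:
        r1, t1 = -r1, -t1
    return r1, t1

m = 2**64 + 13
a = encode(3, 7, m)
b = encode(1, 2, m)
# Operations on encodings track the underlying rationals:
assert decode((a + b) % m, m) == (13, 14)    # 3/7 + 1/2 = 13/14
assert decode((a * b) % m, m) == (3, 14)     # 3/7 * 1/2 = 3/14
```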
Last updated:  2025-02-06
Cairo – a Turing-complete STARK-friendly CPU architecture
Lior Goldberg, Shahar Papini, and Michael Riabzev
Proof systems allow one party to prove to another party that a certain statement is true. Most existing practical proof systems require that the statement be represented in terms of polynomial equations over a finite field. This makes the process of representing a statement that one wishes to prove or verify rather complicated, as this process requires a new set of equations for each statement. Various approaches to deal with this problem have been proposed. We present Cairo, a practically-efficient Turing-complete STARK-friendly CPU architecture. We describe a single set of polynomial equations for the statement that the execution of a program on this architecture is valid. Given a statement one wishes to prove, Cairo allows writing a program that describes that statement, instead of writing a set of polynomial equations.
Last updated:  2025-02-06
Higher-Order Deterministic Masking with Application to Ascon
Vahid Jahandideh, Bart Mennink, and Lejla Batina
Side-channel attacks (SCAs) pose a significant threat to the implementations of lightweight ciphers, particularly in resource-constrained environments where masking—the primary countermeasure—is constrained by tight resource limitations. This makes it crucial to reduce the resource and randomness requirements of masking schemes. In this work, we investigate an approach to minimize the randomness complexity of masking algorithms. Specifically, we explore the theoretical foundations of deterministic higher-order masking, which relies solely on offline randomness present in the initial input shares and eliminates the need for online (fresh) randomness during internal computations. We demonstrate the feasibility of deterministic masking for ciphers such as Ascon, showing that their diffusion layer can act as a refresh subcircuit. This ensures that, up to a threshold number, probes placed in different rounds remain independent. Based on this observation, we propose composition theorems for deterministic masking schemes. On the practical side, we extend the proof of first- and second-order probing security for Ascon’s protected permutation from a single round to an arbitrary number of rounds.
Last updated:  2025-02-06
Collision Attacks on Galois/Counter Mode (GCM)
John Preuß Mattsson
Advanced Encryption Standard in Galois/Counter Mode (AES-GCM) is the most widely used Authenticated Encryption with Associated Data (AEAD) algorithm in the world. In this paper, we analyze the use of GCM with all the Initialization Vector (IV) constructions and lengths approved by NIST SP 800-38D when encrypting multiple plaintexts with the same key. We derive attack complexities in both ciphertext-only and known-plaintext models, with or without nonce hiding, for collision attacks compromising integrity and confidentiality. To facilitate the analysis of GCM with random IVs, we derive a new, simplified equation for near birthday collisions. Our analysis shows that GCM with random IVs provides less than 128 bits of security. When 96-bit IVs are used, as recommended by NIST, the security drops to less than 97 bits. Therefore, we strongly recommend that NIST forbid the use of GCM with 96-bit random nonces.
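The headline numbers follow from a standard birthday estimate: with $q$ random 96-bit IVs, a nonce collision, which is catastrophic for GCM, occurs with probability roughly $1 - e^{-q(q-1)/2^{97}}$. A quick sanity check of the order of magnitude (ours; the paper derives a more refined equation):

```python
# Back-of-the-envelope birthday estimate for random 96-bit GCM IVs
# (our sanity check; the paper's equation is sharper).
from math import exp

def iv_collision_prob(q, bits=96):
    """Pr[some nonce repeats among q random IVs] ~ 1 - exp(-q(q-1)/2^(bits+1))."""
    return 1 - exp(-q * (q - 1) / 2 ** (bits + 1))

print(iv_collision_prob(2**32))   # ~1.2e-10
print(iv_collision_prob(2**48))   # ~0.39: repeats become likely far below 2^96
```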
Last updated:  2025-02-06
Improved Differential and Linear Cryptanalysis on Round-Reduced SIMON
Chao Niu, Muzhou Li, Jifu Zhang, and Meiqin Wang
SIMON is a lightweight block cipher proposed by the National Security Agency. According to previous cryptanalytic results on SIMON, differential and linear cryptanalysis are the two most effective attacks on it. Usually, there are many trails sharing the same input and output differences (resp. masks). These trails comprise the differential (resp. linear hull) and can be used together when mounting attacks. At ASIACRYPT 2021, Leurent et al. proposed a matrix-based method on SIMON-like ciphers, where only trails whose active bits stay in a $w$-bit window are considered. The static window in each round is chosen to be the $w$ least significant bits. They applied this efficient framework to SIMON and SIMECK, and obtained many better differentials and linear hulls than before. For SIMON, they also found that there seems to be some potential for improvement, which should be further investigated. In this paper, we dynamically choose the window for each round to achieve better distinguishers. Benefiting from these dynamic windows, we can obtain stronger differentials and linear hulls than previously proposed for almost all versions of SIMON. Finally, we provide the best differential/linear attacks on SIMON48, SIMON64, and SIMON96 in terms of round number, complexity, or success rate.
Last updated:  2025-02-06
Discrete Gaussians Modulo Sub-Lattices: New Leftover Hash Lemmas for Discrete Gaussians
Haoxiang Jin, Feng-Hao Liu, Zhedong Wang, and Dawu Gu
The Leftover Hash Lemma (LHL) is a powerful tool for extracting randomness from an entropic distribution, with numerous applications in cryptography. LHLs for discrete Gaussians have been explored in both integer settings by Gentry et al. (GPV, STOC'08) and algebraic ring settings by Lyubashevsky et al. (LPR, Eurocrypt'13). However, the existing LHLs for discrete Gaussians have two main limitations: they require the Gaussian parameter to be larger than certain smoothing parameters, and they cannot handle cases where fixed and arbitrary information is leaked. In this work, we present new LHLs for discrete Gaussians in both integer and ring settings. Our results show that the Gaussian parameter can be improved by a factor of $\omega(\sqrt{\log\lambda})$ and $O(\sqrt{N})$ compared to the regularity lemmas of GPV and LPR, respectively, under similar parameter choices such as the dimension and ring. Furthermore, our new LHLs can be applied to leaked discrete Gaussians, and the result can be used to establish asymptotic hardness of the extended MLWE assumptions, addressing an open question in recent works by Lyubashevsky et al. (LNP, Crypto'22). Our central techniques involve new fine-grained analyses of the min-entropy in discrete Gaussians modulo sublattices and should be of interest.
Last updated:  2025-02-06
On the Power of Sumcheck in Secure Multiparty Computation
Zhe Li, Chaoping Xing, Yizhou Yao, and Chen Yuan
Lund et al. (JACM 1992) invented the powerful Sumcheck protocol that has been extensively used in complexity theory and also for designing concretely efficient (zero-knowledge) arguments. In this work, we systematically study Sumcheck in the context of secure multi-party computation (MPC). Our main result is a new generic framework for lifting semi-honest MPC protocols to malicious security, with a {\em constant} multiplicative overhead in {\em both} computation and communication, and even additional logarithmic communication in the best case. In general, our approach applies to any semi-honest linear secret-sharing based MPC secure up to additive attacks, where secret-sharings can be added with an authentication property. At a high level, our approach has a highly distributive flavor, where the parties jointly emulate a Sumcheck prover that proves the correctness of MPC semi-honest evaluations in zero-knowledge, while they also jointly emulate a Sumcheck verifier and verify the proof themselves. Along the way, we provide a new perspective on the {\em fully linear interactive oracle proof} (FLIOP) systems proposed by Boneh et al. (CRYPTO 2019). That is, distributed sumcheck for proving a batch of multiplications is essentially an optimized concrete instantiation of the FLIOP-based approach. As concrete applications of our techniques, we first consider semi-honest MPC protocols based on Shamir secret-sharing with an honest majority. For $M$ parties and circuit size $N$, achieving malicious security with our approach only introduces an additional $10MN+O(M\log{N})$ total computation, the communication of reconstructing $4\log{N}+6$ secret-shared values, $O(\log{N})$ rounds, and $O(\log{N})$ correlated randomness. This shows that malicious security with abort in honest majority MPC comes for {\em free} in terms of both computation and communication. We then consider dishonest majority MPC, where the best known semi-honest protocol achieves $2N$ online communication per party and sublinear preprocessing by using programmable pseudorandom correlation generators (PCGs). We realize malicious MPC with $4N+O(\log{N})$ online communication as well as sublinear preprocessing, matching the optimal $2\times$ communication overhead in Hazay et al. (JOC 2024). Our protocol is essentially obtained by using Sumcheck techniques to check authenticated multiplication triple relations, which requires only $N+1$ {\em standard Beaver triples} and $O(\log{N})$ random authenticated shares for $N$ semi-honestly generated authenticated triples. Compared to the FLIOP-based checking mechanism (Boyle et al. CRYPTO 2022) that requires $O(\sqrt{N})$ communication and $O(N^{1.5})$ computation, we do not introduce any cryptographic assumption beyond PCGs, and have $O(N)$ computation.
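As a refresher on the primitive being distributed, here is a self-contained toy rendering of the sumcheck protocol (ours; the example polynomial is illustrative and the honest prover below runs in exponential time for clarity, whereas real deployments use multilinear extensions and batching):

```python
# Toy sumcheck: prover convinces verifier that sum of f over {0,1}^n
# equals H, sending one low-degree univariate polynomial per round.
import random

P = 2**61 - 1   # Mersenne prime field, chosen only for the demo

def f(x):       # example multilinear polynomial in 3 variables
    x1, x2, x3 = x
    return (2 * x1 * x2 + 3 * x2 * x3 + x1 + 5) % P

n = 3
H = sum(f((a, b, c)) for a in (0, 1) for b in (0, 1) for c in (0, 1)) % P

def restricted_sum(prefix, X):
    """f summed over the unbound tail, with leading vars fixed to prefix + (X,)."""
    rest = n - len(prefix) - 1
    total = 0
    for bits in range(2 ** rest):
        tail = tuple((bits >> i) & 1 for i in range(rest))
        total += f(tuple(prefix) + (X,) + tail)
    return total % P

claim, prefix = H, []
for _ in range(n):
    g = [restricted_sum(prefix, t) for t in (0, 1, 2)]  # prover's message
    assert (g[0] + g[1]) % P == claim                   # verifier's check
    r = random.randrange(P)                             # random challenge
    # Evaluate the degree-<=2 polynomial at r via Lagrange interpolation
    # from its values at 0, 1, 2.
    claim = (g[0] * (r - 1) % P * (r - 2) % P * pow(2, -1, P)
             - g[1] * r % P * (r - 2) % P
             + g[2] * r % P * (r - 1) % P * pow(2, -1, P)) % P
    prefix.append(r)

assert claim == f(tuple(prefix)) % P                    # final oracle check
```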
Last updated:  2025-02-06
Partial-guess, Pre-sieve, Greedy-search - New Unified Key Recovery Framework of Impossible Boomerang Attacks: Full-round Attack on ARADI
Xichao Hu and Lin Jiao
The impossible boomerang attack (IBA) is a very powerful attack, and the existing results show that it is more effective than the impossible differential attack in the related-key scenario. However, several limitations persist in the current key recovery process: the division of pre-guess keys is rather coarse; the details of S-boxes are ignored in the differential propagation; the complexity estimation and the determination of the key guessing order are relatively rough and primitive. These are the obstacles that prevent the broader application of impossible boomerang attacks. In this paper, we propose a series of improvement measures and overcome these limitations: we propose the flexible partial pre-guess key technique based on directed graphs, which enables the selective determination of the guessing keys required to generate partial pairs; we propose the pre-sieving technique, which enables the early elimination of impossible quartets using the cipher details; and we propose a greedy key-guessing strategy, which enables an efficient search for the key guessing order and a precise complexity evaluation. Moreover, we integrate these techniques and propose a unified key recovery framework for IBAs. Additionally, we apply it to launch an attack on ARADI, a low-latency block cipher proposed by the NSA in 2024 for the purpose of memory encryption. Consequently, we achieve the first full-round attack on ARADI with a data complexity of $2^{130}$, a time complexity of $2^{254.81}$, and a memory complexity of $2^{252.14}$. In particular, none of the previous key recovery methods of IBAs are able to attain such an outcome, which demonstrates the power of our new techniques and framework.
Last updated:  2025-02-06
A Note on P $\neq$ NP
Ping Wang
The question of whether the complexity class P equals NP is a major unsolved problem in theoretical computer science. The key to proving that P $\neq$ NP is to show that there is no efficient (polynomial time) algorithm for a language in NP, which is almost impossible, i.e., proving that P $\neq$ NP is almost impossible. In all attempts to prove P $\neq$ NP, all we can do is try to provide the best clues or evidence for P $\neq$ NP. In this paper, we introduce a new language, the Add/XNOR problem, which has the simplest structure and perfect randomness, by extending the subset sum problem. We conjecture that the square-root complexity is required to solve the Add/XNOR problem, which is by far the strongest evidence for P $\neq$ NP. That is, problems that are verifiable in polynomial time are not necessarily solvable in polynomial time. Furthermore, by giving up commutative and associative properties, we design a magma equipped with a permutation and successfully achieve Conjecture 2. Based on this conjecture, we obtain the Add/XOR/XNOR problem and one-way functions that are believed to require exhaustive search to solve or invert.
Last updated:  2025-02-05
Blue fish, red fish, live fish, dead fish
Victor Shoup
We show that the DAG-based consensus protocol Tusk [DKSS22] does not achieve liveness, at least under certain reasonable assumptions on the implementation that are consistent with its specification. In addition, we give a simple 2-round variation of Tusk with lower latency and strong liveness properties, but with suboptimal resilience. We also show that another 2-round protocol, GradedDAG [DZX+24], which has optimal resilience, also has liveness problems analogous to Tusk.
Last updated:  2025-02-05
Sing a song of Simplex
Victor Shoup
We flesh out some details of the recently proposed Simplex atomic broadcast protocol, and modify it so that leaders disperse blocks in a more communication-efficient fashion. The resulting protocol, called DispersedSimplex, maintains the simplicity and excellent -- indeed, optimal -- latency characteristics of the original Simplex protocol. We also present several variations, including a variant that supports "stable leaders", variants that incorporate very recently developed data dissemination techniques that allow us to disperse blocks even more efficiently, and variants that are "signature free". We also suggest a number of practical optimizations and provide concrete performance estimates that take into account not just network latency but also network bandwidth limitations and computational costs. Based on these estimates, we argue that despite its simplicity, DispersedSimplex should, in principle, perform in practice as well as or better than any other state-of-the-art atomic broadcast protocol, at least in terms of common-case throughput and latency.
Last updated:  2025-02-05
HyperLoop: Rationally secure efficient cross-chain bridge
Aniket Kate, Easwar Vivek Mangipudi, Charan Nomula, Raghavendra Ramesh, Athina Terzoglou, and Joshua Tobkin
Cross-chain bridges, realizing the transfer of information and assets between blockchains, form the core of blockchain interoperability solutions. Most existing bridge networks are modeled in an honest-malicious setting, where the bridge nodes are either honest or malicious. Rationality allows the nodes to deviate from the protocol arbitrarily for an economic incentive. In this work, we present HyperLoop, an efficient cross-chain multi-signature bridge, and prove that it is safe and live game-theoretically, under the more realistic rational-malicious model. As rational bridge nodes are allowed to deviate from the protocol and even collude, a monitor mechanism is necessitated, which we realize by introducing whistle-blower nodes. These whistle-blowers constantly check the operations of the bridge and raise complaints to a complaint resolution network in case of discrepancies. To enforce punishments, it is necessary for the nodes to stake an amount before participating as bridge nodes. Consequently, a cap on the volume of funds transferred over the bridge is established. We describe a sliding window mechanism and establish a relation between the stake and the sliding window limit necessary for the safety of the bridge. Our design yields an economically, computationally, and communication-efficient bridge. We realize and deploy our bridge prototype bridging Ethereum and Polygon chains over testnets. For a 19-node bridge network, each bridge node takes an average of only 3 msec to detect and sign a source chain request, showing the high efficiency and low latency of the bridge.
Last updated:  2025-02-05
No Fish Is Too Big for Flash Boys! Frontrunning on DAG-based Blockchains
Jianting Zhang and Aniket Kate
Frontrunning is rampant in blockchain ecosystems, yielding attackers profits that have already soared into several million. Most existing frontrunning attacks focus on manipulating transaction order (namely, prioritizing attackers' transactions before victims' transactions) $\textit{within}$ a block. However, for the emerging directed acyclic graph (DAG)-based blockchains, these intra-block frontrunning attacks may not fully reveal the frontrunning vulnerabilities as they introduce block ordering rules to order transactions belonging to distinct blocks. This work performs the first in-depth analysis of frontrunning attacks toward DAG-based blockchains. We observe that the current block ordering rule is vulnerable to a novel $\textit{inter-block}$ frontrunning attack, which enables the attacker to prioritize ordering its transactions before the victim transactions across blocks. We introduce three attacking strategies: $\textit{(i)}$ Fissure attack, where attackers render the victim transactions ordered later by disconnecting the victim's blocks. $\textit{(ii)}$ Speculative attack, where attackers speculatively construct order-priority blocks. $\textit{(iii)}$ Sluggish attack, where attackers deliberately create low-round blocks assigned a higher ordering priority by the block ordering rule. We implement our attacks on two open-source DAG-based blockchains, Bullshark and Tusk. We extensively evaluate our attacks in geo-distributed AWS and local environments by running up to $n=100$ nodes. Our experiments show remarkable attack effectiveness. For instance, with the speculative attack, the attackers can achieve a $92.86\%$ attack success rate (ASR) on Bullshark and an $86.27\%$ ASR on Tusk. Using the fissure attack, the attackers can achieve a $94.81\%$ ASR on Bullshark and an $87.31\%$ ASR on Tusk. We also discuss potential countermeasures for the proposed attack, such as ordering blocks randomly and reordering transactions globally based on transaction fees. However, we find that they either compromise the performance of the system or make the protocol more vulnerable to frontrunning using the existing frontrunning strategies.
Last updated:  2025-02-05
SoK: On the Physical Security of UOV-based Signature Schemes
Thomas Aulbach, Fabio Campos, and Juliane Krämer
Multivariate cryptography currently centres mostly around UOV-based signature schemes: All multivariate round 2 candidates in the selection process for additional digital signatures by NIST are either UOV itself or close variations of it: MAYO, QR-UOV, SNOVA, and UOV. Schemes that were previously in the focus of the multivariate research community but have since been broken, such as Rainbow and LUOV, are also based on UOV. Both UOV and the schemes based on it have been frequently analyzed regarding their physical security in the course of the NIST process. However, a comprehensive analysis regarding the physical security of UOV-based signature schemes is missing. In this work, we want to bridge this gap and create a comprehensive overview of physical attacks on UOV and its variants from the second round of NIST’s selection process for additional post-quantum signature schemes, which has just started. First, we collect all existing side-channel and fault attacks on UOV-based schemes and transfer them to the current UOV specification. Since UOV was subject to significant changes over the past few years, e.g., adaptions to the expanded secret key, some attacks need to be reassessed. Next, we introduce new physical attacks in order to obtain an overview as complete as possible. We then show how all these attacks would translate to MAYO, QR-UOV, and SNOVA. To improve the resistance of UOV-based signature schemes towards physical attacks, we discuss and introduce dedicated countermeasures. As a related result, we observe that certain implementation decisions, like key compression techniques and randomization choices, also have a large impact on the physical security, in particular on the effectiveness of the considered fault attacks. Finally, we provide implementations of UOV and MAYO for the ARM Cortex-M4 architecture that feature first-order masking and protection against selected fault attacks. We benchmark the resulting overhead on a NUCLEO-L4R5ZI board and validate our approach by performing a TVLA on original and protected subroutines, yielding significantly smaller t-values for the latter.
Last updated:  2025-02-05
Updatable Public-Key Encryption, Revisited
Joël Alwen, Georg Fuchsbauer, and Marta Mularczyk
We revisit Updatable Public-Key Encryption (UPKE), which was introduced as a practical mechanism for building forward-secure cryptographic protocols. We begin by observing that all UPKE notions to date are neither syntactically flexible nor secure enough for the most important multi-party protocols motivating UPKE. We provide an intuitive taxonomy of UPKE properties -- some partially or completely overlooked in the past -- along with an overview of known (explicit and implicit) UPKE constructions. We then introduce a formal UPKE definition capturing all intuitive properties needed for multi-party protocols. Next, we provide a practical pairing-based construction for which we provide concrete security bounds under a standard assumption in the random oracle and the algebraic group model. The efficiency profile of the scheme compares very favorably with existing UPKE constructions (despite the added flexibility and stronger security). For example, when used to improve the forward security of the Messaging Layer Security protocol [RFC9420], our new UPKE construction requires $\approx 1\%$ of the bandwidth of the next-most efficient UPKE construction satisfying the strongest UPKE notion previously considered.
Last updated:  2025-02-05
VITARIT: Paying for Threshold Services on Bitcoin and Friends
Lucjan Hanzlik, Aniket Kate, Easwar Vivek Mangipudi, Pratyay Mukherjee, and Sri AravindaKrishnan Thyagarajan
Blockchain service offerings have seen a rapid rise in recent times. Many of these services realize a decentralized architecture with a threshold adversary to avoid a single point of failure and to mitigate key escrow issues. While payments to such services are straightforward in systems supporting smart contracts, achieving fairness poses challenges in systems like Bitcoin, adhering to the UTXO model with limited scripting capabilities. This is especially challenging without smart contracts, as we wish to pay only the required threshold of t + 1 out of the n servers offering the service together, without any server claiming the payment twice. In this paper, we introduce VITARIT, a novel payment solution tailored for threshold cryptographic services in UTXO systems like Bitcoin. Our approach guarantees robust provable security while facilitating practical deployment. We focus on the t-out-of-n distributed threshold verifiable random function (VRF) service with certain properties, such as threshold BLS signatures, a recently highlighted area of interest. Our protocol enables clients to request verifiable random function (VRF) values from the threshold service, triggering payments to up to t + 1 servers of the distributed threshold VRF. Our efficient design relies on simple transactions using signature verification scripts, making it immediately applicable in Bitcoin-like systems. We also introduce new tools and techniques at both the cryptographic and transaction layers, including a novel signature-VRF exchange protocol for standard constructions, which may be of independent interest. Additionally, our transaction flow design prevents malicious servers from claiming payments twice, offering broader implications for decentralized payment systems. Our prototype implementation shows that in the two-party interaction, the client takes 126.4 msec, and the server takes 204 msec, demonstrating the practicality and deployability of the system.
Last updated:  2025-02-05
Matching radar signals and fingerprints with MPC
Benjamin Hansen Mortensen, Mathias Karsrud Nordal, and Martin Strand
Vessels can be recognised by their navigation radar due to the characteristics of the emitted radar signal. This is particularly useful if one wants to build situational awareness without revealing one's own presence. Most countries maintain databases of radar fingerprints but will not readily share these due to national security regulations. Sharing of such information will generally require some form of information exchange agreement. However, all parties in a coalition benefit from correct identification. We use secure multiparty computation to match a radar signal measurement against secret databases and output plausible matches with their likelihoods. We also provide a demonstrator using MP-SPDZ.
Last updated:  2025-02-05
SoK: Understanding zk-SNARKs: The Gap Between Research and Practice
Junkai Liang, Daqi Hu, Pengfei Wu, Yunbo Yang, Qingni Shen, and Zhonghai Wu
Zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs) are a powerful tool for proving computation correctness, attracting significant interest from researchers, developers, and users. However, the complexity of zk-SNARKs has created gaps between these groups, hindering progress. Researchers focus on constructing efficient proving systems with stronger security and new properties, while developers and users prioritize toolchains, usability, and compatibility. In this work, we provide a comprehensive study of zk-SNARK, from theory to practice, pinpointing gaps and limitations. We first present a master recipe that unifies the main steps in converting a program into a zk-SNARK. We then classify existing zk-SNARKs according to their key techniques. Our classification addresses the main difference in practically valuable properties between existing zk-SNARK schemes. We survey over 40 zk-SNARKs since 2013 and provide a reference table listing their categories and properties. Following the steps in the master recipe, we then survey 11 popular general-purpose libraries. We elaborate on these libraries' usability, compatibility, efficiency and limitations. Since installing and executing these zk-SNARK systems is challenging, we also provide a completely virtual environment in which to run the compiler for each of them. We identify that the proving system is the primary focus in cryptography academia. In contrast, the constraint system presents a bottleneck in industry. To bridge this gap, we offer recommendations and advocate for the open-source community to enhance documentation, standardization and compatibility.
Last updated:  2025-02-05
VRaaS: Verifiable Randomness as a Service on Blockchains
Jacob Gorman, Lucjan Hanzlik, Aniket Kate, Easwar Vivek Mangipudi, Pratyay Mukherjee, Pratik Sarkar, and Sri AravindaKrishnan Thyagarajan
Web3 applications, such as on-chain games, NFT minting, and leader elections, necessitate access to unbiased, unpredictable, and publicly verifiable randomness. Despite its broad use cases and huge demand, there is a notable absence of comprehensive treatments of on-chain verifiable randomness services. To bridge this, we offer an extensive formal analysis of on-chain verifiable randomness services. We present the $\textit{first}$ formalization of on-chain verifiable randomness in the blockchain setting by introducing the notion of Verifiable Randomness as a Service (VRaaS). We formally define VRaaS using an ideal functionality $\mathcal{F}_{\text{VRaaS}}$ in the Universal Composability model. Our definition not only captures the core features of randomness services, such as unbiasability, unpredictability, and public verifiability, but also accounts for many other crucial nuances pertaining to different entities involved, such as smart contracts. Within our framework we study a generic design of Verifiable Random Function (VRF)-based randomness service - where the randomness requester provides an input on which the randomness is evaluated as VRF output. We show that it does satisfy our formal VRaaS definition. Furthermore, we show that the generic protocol captures many real-world randomness services like Chainlink VRF and Supra dVRF. Moreover, we investigate the minimalism of the framework. Towards that, first we show that the two transactions built into our framework are actually $\textit{necessary}$ for any randomness service to support the essential qualities. We also discover $\textit{practical vulnerabilities}$ in other designs such as Algorand beacon, Pyth VRF and Band VRF, captured within our framework.
Last updated:  2025-02-05
Secure Showing of Partial Attributes
Foteini Baldimtsi, Julia Kastner, Julian Loss, and Omar Renawi
Anonymous Attribute-Based Credentials (ABCs) allow users to prove possession of attributes while adhering to various authentication policies and without revealing unnecessary information. Single-use ABCs are particularly appealing for their lightweight nature and practical efficiency. These credentials are typically built using blind signatures, with Anonymous Credentials Light (ACL) being one of the most prominent schemes in the literature. However, the security properties of single-use ABCs, especially their secure showing property, have not been fully explored, and prior definitions and corresponding security proofs fail to address scenarios involving partial attribute disclosure effectively. In this work, we propose a stronger secure showing definition that ensures robust security even under selective attribute revelation. Our definition extends the winning condition of the existing secure showing experiment by adding various constraints on the subsets of opened attributes. We show how to represent this winning condition as a matching problem in a suitable bipartite graph, thus allowing for it to be verified efficiently. We then prove that ACL satisfies our strong secure showing notion without any modification. Finally, we define double-spending prevention for single-use ABCs, and show how ACL satisfies the definition.
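The verification step described above amounts to deciding whether a bipartite graph has a matching saturating one side, which Kuhn's augmenting-path algorithm decides in polynomial time. A generic checker (ours, purely to illustrate the graph-theoretic step; the variable names and the example instance are hypothetical):

```python
def has_perfect_matching(adj, n_left, n_right):
    """Kuhn's augmenting-path algorithm: True iff every left vertex
    (e.g., a shown credential) can be matched to a distinct right
    vertex (e.g., an issuance it could be attributed to)."""
    match_right = [-1] * n_right

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return all(try_augment(u, [False] * n_right) for u in range(n_left))

# Three showings, each compatible with a subset of only two issuances:
adj = {0: [0], 1: [0, 1], 2: [1]}
print(has_perfect_matching(adj, 3, 2))   # False: pigeonhole, a double-spend
```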
Last updated:  2025-02-05
A New World in the Depths of Microcrypt: Separating OWSGs and Quantum Money from QEFID
Amit Behera, Giulio Malavolta, Tomoyuki Morimae, Tamer Mour, and Takashi Yamakawa
While in classical cryptography, one-way functions (OWFs) are widely regarded as the “minimal assumption,” the situation in quantum cryptography is less clear. Recent works have put forward two concurrent candidates for the minimal assumption in quantum cryptography: One-way state generators (OWSGs), postulating the existence of a hard search problem with an efficient verification algorithm, and EFI pairs, postulating the existence of a hard distinguishing problem. Two recent papers [Khurana and Tomer STOC’24; Batra and Jain FOCS’24] showed that OWSGs imply EFI pairs, but the reverse direction remained open. In this work, we give strong evidence that the opposite direction does not hold: We show that there is a quantum unitary oracle relative to which EFI pairs exist, but OWSGs do not. In fact, we show a slightly stronger statement that holds also for EFI pairs that output classical bits (QEFID). As a consequence, we separate, via our oracle, QEFID, and one-way puzzles from OWSGs and several other Microcrypt primitives, including efficiently verifiable one-way puzzles and unclonable state generators. In particular, this solves a problem left open in [Chung, Goldin, and Gray Crypto’24]. Using similar techniques, we also establish a fully black-box separation (which is slightly weaker than an oracle separation) between private-key quantum money schemes and QEFID pairs. One conceptual implication of our work is that the existence of an efficient verification algorithm may lead to qualitatively stronger primitives in quantum cryptography.
Last updated:  2025-02-05
A Modular Approach to Unclonable Cryptography
Prabhanjan Ananth and Amit Behera
We explore a new pathway to designing unclonable cryptographic primitives. We propose a new notion called unclonable puncturable obfuscation (UPO) and study its implications for unclonable cryptography. Using UPO, we present modular (and in some cases, arguably, simple) constructions of many primitives in unclonable cryptography, including, public-key quantum money, quantum copy-protection for many classes of functionalities, unclonable encryption, and single-decryption encryption. Notably, we obtain the following new results assuming the existence of UPO: - We show that any cryptographic functionality can be copy-protected as long as this functionality satisfies a notion of security, which we term as puncturable security. Prior feasibility results focused on copy-protecting specific cryptographic functionalities. - We show that copy-protection exists for any class of evasive functions as long as the associated distribution satisfies a preimage-sampleability condition. Prior works demonstrated copy-protection for point functions, which follows as a special case of our result. - We show that unclonable encryption exists in the plain model. Prior works demonstrated feasibility results in the quantum random oracle model. We put forward a candidate construction of UPO and prove two notions of security, each based on the existence of (post-quantum) sub-exponentially secure indistinguishability obfuscation and one-way functions, the quantum hardness of learning with errors, and a new conjecture called simultaneous inner product conjecture.
Last updated:  2025-02-05
Efficient Error Detection Methods for the Number Theoretic Transforms in Lattice-Based Algorithms
Mohamed Abdelmonem, Lukas Holzbaur, Håvard Raddum, and Alexander Zeh
The Number Theoretic Transform (NTT) is a crucial component in many post-quantum cryptographic (PQC) algorithms, enabling efficient polynomial multiplication. However, the reliability of NTT computations is an important concern, especially for safety-critical applications. This work presents novel techniques to improve the fault tolerance of NTTs used in prominent PQC schemes such as Kyber, Dilithium, and Falcon. The work first establishes a theoretical framework for error detection in NTTs, exploiting the inherent algebraic properties of these transforms. It derives necessary and sufficient conditions for constructing error-detecting vectors that can identify single faults without the need for costly recomputation. For the Dilithium scheme, the work further advances the state-of-the-art by developing the first algorithm capable of detecting up to two maliciously placed faults. The proposed error detection methods are shown to reduce the number of required multiplications by half, leading to significant improvements in computational efficiency compared to existing single error-detecting algorithms. Concrete implementations for Kyber, Dilithium, and Falcon demonstrate the practicality and effectiveness of the error-detection scheme.
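To give the flavor of such algebraic checks, here is one simple invariant (our illustration, not the paper's exact error-detecting vectors): since $\sum_k \omega^{jk} \equiv 0 \pmod{q}$ for $j \neq 0$ when $\omega$ is a primitive $n$-th root of unity, every valid cyclic transform satisfies $\sum_k X_k \equiv n \cdot x_0 \pmod{q}$, so a single corrupted output is caught by one summation rather than a full recomputation.

```python
# A single cheap algebraic check on NTT outputs (our illustration of
# the flavor of invariant exploited above, not the paper's exact
# construction): sum_k X_k = n * x_0 (mod q) for a cyclic NTT.
q, n = 257, 16
w = 249                       # a primitive 16th root of unity mod 257

def ntt(x):
    return [sum(x[j] * pow(w, j * k, q) for j in range(n)) % q
            for k in range(n)]

x = list(range(1, n + 1))
X = ntt(x)
assert sum(X) % q == (n * x[0]) % q        # invariant holds

X[5] = (X[5] + 1) % q                      # inject a single fault...
assert sum(X) % q != (n * x[0]) % q        # ...and the check catches it
```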
Last updated:  2025-02-05
TallyGuard: Privacy Preserving Tallied-as-cast Guarantee
Athish Pranav Dharmalingam, Sai Venkata Krishnan, KC Sivaramakrishnan, and N.S. Narayanaswamy
This paper presents a novel approach to verifiable vote tallying using additive homomorphism, which can be appended to existing voting systems without modifying the underlying infrastructure. Existing End-to-End Verifiable (E2E-V) systems like Belenios and ElectionGuard rely on distributed trust models or are vulnerable to decryption compromises, making them less suitable for general elections. Our approach introduces a tamper-evident commitment to votes through cryptographic hashes derived from homomorphic encryption schemes such as Paillier. The proposed system provides tallied-as-cast verifiability without revealing individual votes, thereby preventing coercion. The system also provides the ability to perform public verification of results. We also show that this system can be transitioned to quantum-secure encryption, such as Regev's scheme, to future-proof the system. We discuss how to deploy this system in a real-world scenario, including for general political elections, analyzing the security implications and reporting on the limitations of this system. We believe that the proposed system offers a practical solution to the problem of verifiable vote tallying in general elections.
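A minimal sketch of the additive homomorphism behind such tallying (ours; textbook Paillier with toy-sized parameters far too small for real elections): ciphertexts multiply, plaintext votes add.

```python
# Textbook Paillier tally sketch (our illustration; toy Mersenne
# primes, whereas real deployments need n of 2048+ bits).
# Requires Python 3.9+ for math.lcm and pow(x, -1, m).
from math import gcd, lcm
import secrets

p, q2 = 2**31 - 1, 2**61 - 1
n = p * q2
n2 = n * n
lam = lcm(p - 1, q2 - 1)
g = n + 1                              # standard choice simplifying decryption

def enc(m):
    while True:
        r = secrets.randbelow(n)
        if r and gcd(r, n) == 1:
            break
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    u = pow(c, lam, n2)                # u = 1 + m*lam*n (mod n^2)
    return (u - 1) // n * pow(lam, -1, n) % n

votes = [1, 0, 1, 1, 0]                # 1 = yes, 0 = no
tally = 1
for v in votes:
    tally = tally * enc(v) % n2        # ciphertext product = plaintext sum
assert dec(tally) == sum(votes)
```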
Last updated:  2025-02-05
Efficient Pseudorandom Correlation Generators for Any Finite Field
Zhe Li, Chaoping Xing, Yizhou Yao, and Chen Yuan
Correlated randomness lies at the core of efficient modern secure multi-party computation (MPC) protocols. Costs of generating such correlated randomness required for the MPC online phase protocol often constitute a bottleneck in the overall protocol. A recent paradigm of {\em pseudorandom correlation generator} (PCG) initiated by Boyle et al. (CCS'18, Crypto'19) offers an appealing solution to this issue. In a nutshell, each party is given a short PCG seed, which can be locally expanded into long correlated strings satisfying the target correlation. Among various types of correlations, there is oblivious linear evaluation (OLE), a fundamental and useful primitive for typical MPC protocols on arithmetic circuits. Towards efficiently generating a large amount of OLEs, with applications to MPC protocols, we establish the following results: (i) We propose a novel {\em programmable} PCG construction for OLE over any field $\mathbb{F}_p$. For $kN$ OLE correlations, we require $O(k\log{N})$ communication and $O(k^2N\log{N})$ computation, where $k$ is an arbitrary integer $\geq 2$. Previous works either have quadratic computation (Boyle et al. Crypto'19), or can only support fields of size larger than $2$ (Bombar et al. Crypto'23). (ii) We extend the above OLE construction to provide various types of correlations for any finite field. One of the fascinating applications is an efficient PCG for two-party {\em authenticated Boolean multiplication triples}. For $kN$ authenticated triples, we offer PCGs with seed size of $O(k^2\log{N})$ bits. To the best of our knowledge, such a correlation had never before been realized with sublinear communication and quasi-linear computation. (iii) In addition, the \emph{programmability} admits efficient PCGs for multi-party Boolean triples, and thus the first efficient MPC protocol for Boolean circuits with {\em silent} preprocessing. In particular, we show $kN$ $m$-party Boolean multiplication triples can be generated in $O(m^2k\log{N})$-bit communication, while the state-of-the-art FOLEAGE (Asiacrypt'24) requires a broadcast channel and takes $mkN+O(m^2\log{kN})$ bits of communication. (iv) Finally, we present efficient PCGs for circuit-dependent preprocessing, matrix multiplication triples, and string OTs, etc., each of which has merits of its own compared to previous works.
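For concreteness, the OLE correlation itself is simple; what a PCG does is compress the trusted dealer below into short, locally expandable seeds. A dealer stub (ours, illustrative only):

```python
# Trusted-dealer OLE over a toy prime field (our illustration): P0
# gets (a, u), P1 gets (b, v), with a*b = u + v (mod p). A PCG
# replaces this dealer by short correlated seeds expanded locally.
import secrets

p = 2**31 - 1

def deal_ole():
    a, b = secrets.randbelow(p), secrets.randbelow(p)
    u = secrets.randbelow(p)          # P0's random output share
    v = (a * b - u) % p               # P1's share completes the relation
    return (a, u), (b, v)

(a, u), (b, v) = deal_ole()
assert a * b % p == (u + v) % p
```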
Last updated:  2025-02-04
Raccoon: A Masking-Friendly Signature Proven in the Probing Model
Rafaël del Pino, Shuichi Katsumata, Thomas Prest, and Mélissa Rossi
This paper presents Raccoon, a lattice-based signature scheme submitted to the NIST 2022 call for additional post-quantum signatures. Raccoon has the distinctive feature of always being masked. Concretely, all sensitive intermediate values are shared into $d$ parts. The main design rationale of Raccoon is to be easy to mask at high orders, and this dictated most of its design choices, such as the introduction of new algorithmic techniques for sampling small errors. As a result, Raccoon achieves a masking overhead $O(d \log d)$ that compares favorably with the overheads $O(d^2 \log q)$ observed when masking standard lattice signatures. In addition, we formally prove the security of Raccoon in the $t$-probing model: an attacker is able to probe $t \leq d-1$ shares during each execution of the main algorithms (key generation, signing, verification). While for most cryptographic schemes the black-box $t$-probing security can be studied in isolation, in Raccoon this analysis is performed jointly. To that end, a bridge must be made between the black-box game-based EUF-CMA proof and the usual simulation proofs of the ISW model (CRYPTO 2003). We formalize an end-to-end masking proof by deploying the probing EUF-CMA notion introduced by Barthe et al. (Eurocrypt 2018) and exhibiting the simulators of the non-interference properties (Barthe et al. CCS 2016). The proof is divided into three novel parts: a simulation proof in the ISW model that allows the dependency to be propagated to a restricted number of inputs and random coins; a game-based proof showing that the security of Raccoon with probes can be reduced to an instance of Raccoon with smaller parameters; and a parameter study to ensure that the smaller instance is secure, using a robust generalization of the Rényi divergence. While we apply our techniques to Raccoon, we expect that the algorithmic and proof techniques we introduce will be helpful for the design and analysis of future masking-friendly schemes.
Last updated:  2025-02-04
Revisiting Beimel-Weinreb Weighted Threshold Secret Sharing Schemes
Oriol Farràs and Miquel Guiot
A secret sharing scheme is a cryptographic primitive that allows a dealer to share a secret among a set of parties, so that only authorized subsets of them can recover it. The access structure of the scheme is the family of authorized subsets. In a weighted threshold secret sharing scheme, each party is assigned a weight according to its importance, and the authorized subsets are those in which the sum of their weights is at least the threshold value. For these access structures, the best general constructions were presented by Beimel and Weinreb [IPL 2006]: The scheme with perfect security has share size $n^{O(\log n)}$, while the scheme with computational security has share size polynomial in $n$. However, these constructions require the use of shallow monotone sorting networks, which limits their practical use. In this context, we revisit this work and provide variants of these constructions that are feasible in practice. This is done by considering alternative circuits and formulas for weighted threshold functions that do not require monotone sorting networks. We show that, under the assumption that subexponentially secure one-way functions exist, any weighted threshold access structure (WTAS) over $n$ parties and threshold $\sigma$ admits a computational secret sharing scheme where the size of the shares is $\mathrm{polylog}(n)$ and the size of the public information is $O(n^2\log n\log \sigma)$. Moreover, we show that any authorized subset only needs to download $O(n\log n\log \sigma)$ bits of public information to recover the secret.
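The naive baseline that such constructions improve upon is virtualization: run Shamir with threshold $\sigma$ and give party $i$ exactly $w_i$ evaluation points, so a coalition reconstructs iff its total weight reaches $\sigma$, but shares grow linearly with the weights rather than polylogarithmically. A sketch of that baseline (ours):

```python
# Naive weighted threshold sharing via Shamir virtualization (our
# illustration of the baseline, not the Beimel-Weinreb construction).
import secrets

P = 2**127 - 1   # prime field for Shamir

def share_weighted(secret, weights, sigma):
    """Degree-(sigma-1) Shamir; party i receives weights[i] points."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(sigma - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    shares, x = [], 1
    for w in weights:
        shares.append([(x + j, f(x + j)) for j in range(w)])
        x += w
    return shares

def reconstruct(points):
    """Lagrange interpolation at 0 from sigma distinct points."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

weights, sigma = [3, 1, 2], 4
shares = share_weighted(1234567, weights, sigma)
pooled = shares[0] + shares[2]          # parties 0 and 2: weight 3 + 2 >= 4
assert reconstruct(pooled[:sigma]) == 1234567
```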
Last updated:  2025-02-04
Wiretapping LLMs: Network Side-Channel Attacks on Interactive LLM Services
Mahdi Soleimani, Grace Jia, In Gim, Seung-seob Lee, and Anurag Khandelwal
Recent server-side optimizations like speculative decoding significantly enhance the interactivity and resource efficiency of Large Language Model (LLM) services. However, we show that these optimizations inadvertently introduce new side-channel vulnerabilities through network packet timing and size variations that tend to be input-dependent. Network adversaries can leverage these side channels to learn sensitive information contained in \emph{encrypted} user prompts to and responses from public LLM services. This paper formalizes the security implications using a novel indistinguishability framework and introduces a novel attack that establishes the insecurity of real-world LLM services with streaming APIs under our security framework. Our proposed attack effectively deconstructs encrypted network packet traces to reveal the sizes of underlying LLM-generated tokens and whether the tokens were generated with or without certain server-side optimizations. Our attack can accurately predict private attributes in real-world privacy-sensitive LLM applications in medicine and finance with $71$--$92\%$ accuracy on an open-source vLLM service and $50$--$90\%$ accuracy on the commercial ChatGPT service. Finally, we show that solutions that hide these side channels to different degrees expose a tradeoff between security and performance --- specifically, interactivity and network bandwidth overheads.
Last updated:  2025-02-04
A Fault Analysis on SNOVA
Gustavo Banegas and Ricardo Villanueva-Polanco
SNOVA, a post-quantum signature scheme with compact key sizes, is a second-round NIST candidate. This paper conducts a fault analysis of SNOVA, targeting permanent and transient faults during signature generation. We propose fault injection strategies that exploit SNOVA's structure, enabling key recovery with as few as $22$ to $68$ faulty signatures, depending on security levels. A novel fault-assisted reconciliation attack is introduced that effectively extracts the secret key space by solving a quadratic polynomial system. Simulations reveal that transient or permanent faults in signature generation can severely compromise security. We also suggest a lightweight countermeasure to mitigate fault attacks with minimal overhead. Our findings emphasize the need for fault-resistant mechanisms in post-quantum schemes like SNOVA.
Last updated:  2025-02-04
Shuffle Shamir Secret Shares Uniformly with Linear Online Communication
Jiacheng Gao, Yuan Zhang, and Sheng Zhong
In this paper, we revisit shuffle protocols for Shamir secret sharing. Upon examining previous works, we observe that existing constructions either produce a non-uniform shuffle or require large communication and round complexity, e.g., exponential in the number of parties. We propose two shuffle protocols, both of which shuffle uniformly within $O(\frac{k + l}{\log k}n^2m\log m)$ communication for shuffling rows of an $m\times l$ matrix shared among $n$ parties, where $k\leq m$ is a parameter balancing communication and computation. Our first construction is more concretely efficient, while our second construction requires only $O(nml)$ online communication within $O(n)$ rounds. In terms of overall communication and online communication, both shuffle protocols outperform current optimal shuffle protocols for Shamir secret sharing. At the core of our constructions is a novel permutation-sharing technique, which can be used to permute arbitrarily many vectors by computing matrix-vector products. Once shared, applying a permutation becomes much cheaper, which results in a faster online phase. Letting each party share one secret uniform permutation in the offline phase and applying them sequentially in the online phase, we obtain our first shuffle protocol. To further optimize online complexity and simplify the trade-off, we adopt the shuffle correlation proposed by Gao et al. and obtain the second shuffle protocol with $O(nml)$ online communication and $O(n^2ml)$ online computation. This brings an additional benefit that the online complexity is now independent of the offline complexity, which reduces parameter optimization to optimizing offline efficiency. Our constructions require only the most basic primitives of the Shamir secret sharing scheme, and work for an arbitrary field $\mathbb{F}$ of size larger than $n$. As shuffle is a frequent operation in algorithm design, we expect our protocols to accelerate many other primitives in the context of Shamir secret sharing MPC, such as sorting, oblivious data structures, etc.
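The permutation-sharing idea admits a small illustration: once a permutation is encoded as a (secret-shared) permutation matrix in the offline phase, permuting any vector online reduces to a matrix-vector product, a linear operation that Shamir shares support directly. A sketch in the clear, with toy sizes and field:

```python
# The algebra behind permutation sharing, shown in the clear for brevity:
# encoding a permutation as a 0/1 matrix M turns "apply the permutation" into
# the linear map v -> M v, so arbitrarily many vectors can be permuted by
# matrix-vector products once M has been (secret-)shared offline.
import random

P, m = 101, 5  # toy field and vector length

def random_permutation_matrix(m):
    perm = list(range(m))
    random.shuffle(perm)
    M = [[1 if perm[i] == j else 0 for j in range(m)] for i in range(m)]
    return M, perm

def matvec(M, v):
    return [sum(Mi[j] * v[j] for j in range(len(v))) % P for Mi in M]

M, perm = random_permutation_matrix(m)  # offline phase: share M
v = [10, 20, 30, 40, 50]                # online phase: any (shared) vector
assert matvec(M, v) == [v[perm[i]] for i in range(m)]
print(matvec(M, v))
```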
Last updated:  2025-02-04
Multi-Authority Functional Encryption with Bounded Collusions from Standard Assumptions
Rishab Goyal and Saikumar Yadugiri
Multi-Authority Functional Encryption ($\mathsf{MA}$-$\mathsf{FE}$) [Chase, TCC'07; Lewko-Waters, Eurocrypt'11; Brakerski et al., ITCS'17] is a popular generalization of functional encryption ($\mathsf{FE}$) with the central goal of decentralizing the trust assumption from a single central trusted key authority to a group of multiple, independent and non-interacting, key authorities. Over the last several decades, we have seen tremendous advances in new designs and constructions for $\mathsf{FE}$ supporting different function classes, from a variety of assumptions and with varying levels of security. Unfortunately, the same has not been replicated in the multi-authority setting. The current scope of $\mathsf{MA}$-$\mathsf{FE}$ designs is rather limited, with positive results only known for (all-or-nothing) attribute-based functionalities, or requiring the full power of general-purpose code obfuscation. This state-of-the-art in $\mathsf{MA}$-$\mathsf{FE}$ could be explained in part by the implication provided by Brakerski et al. (ITCS'17). It was shown that a general-purpose obfuscation scheme can be designed from any $\mathsf{MA}$-$\mathsf{FE}$ scheme for circuits, even if the $\mathsf{MA}$-$\mathsf{FE}$ scheme is secure only in a bounded-collusion model, where at most two keys per authority get corrupted. In this work, we revisit the problem of $\mathsf{MA}$-$\mathsf{FE}$, and show that the existing implication from $\mathsf{MA}$-$\mathsf{FE}$ to obfuscation is not tight. We provide new methods to design $\mathsf{MA}$-$\mathsf{FE}$ for circuits from simple and minimal cryptographic assumptions. Our main contributions are summarized below:
1. We design a $\mathsf{poly}(\lambda)$-authority $\mathsf{MA}$-$\mathsf{FE}$ for circuits in the bounded-collusion model. Under the existence of public-key encryption, we prove it to be statically simulation-secure. Further, if we assume sub-exponential security of public-key encryption, then we prove it to be adaptively simulation-secure in the Random Oracle Model.
2. We design an $O(1)$-authority $\mathsf{MA}$-$\mathsf{FE}$ for circuits in the bounded-collusion model. Under the existence of 2/3-party non-interactive key exchange, we prove it to be adaptively simulation-secure.
3. We provide a new generic bootstrapping compiler for $\mathsf{MA}$-$\mathsf{FE}$ for general circuits to design a simulation-secure $(n_1 + n_2)$-authority $\mathsf{MA}$-$\mathsf{FE}$ from any two $n_1$-authority and $n_2$-authority $\mathsf{MA}$-$\mathsf{FE}$.
Last updated:  2025-02-04
Bootstrapping (T)FHE Ciphertexts via Automorphisms: Closing the Gap Between Binary and Gaussian Keys
Olivier Bernard and Marc Joye
The GINX method in TFHE offers low-latency ciphertext bootstrapping with relatively small bootstrapping keys, but is limited to binary or ternary key distributions. In contrast, the AP method supports arbitrary key distributions, however at the cost of significantly larger bootstrapping keys. Building on AP, automorphism-based methods (LMK⁺, EUROCRYPT 2023) achieve smaller keys, though each automorphism application necessitates a key switch, introducing computational overhead and noise. This paper advances automorphism-based methods in two important ways. First, it proposes a novel traversal blind rotation algorithm that optimizes the number of key switches for a given key material. Second, it introduces an automorphism-parametrized external product that seamlessly applies an automorphism to one of the input ciphertexts. Together, these techniques substantially reduce the number of key switches, resulting in faster bootstrapping and improved noise control. As an independent contribution, this paper also introduces a comprehensive theoretical framework for analyzing the expected number of automorphism key switches, whose predictions perfectly align with the results of extensive numerical experiments, demonstrating its practical relevance. In a typical setting, by utilizing additional key material, the LLW⁺ approach (TCHES 2024) reduces key switches by 17% compared to LMK⁺. Our combined techniques achieve a 46% reduction using similar key material and can eliminate an arbitrarily large number (e.g., > 99%) of key switches with only a moderate (9x) increase in key material size.
Last updated:  2025-02-03
Learning from Functionality Outputs: Private Join and Compute in the Real World
Francesca Falzon and Tianxin Tang
Private Join and Compute (PJC) is a two-party protocol recently proposed by Google for various use cases, including ad conversion (Asiacrypt 2021); it generalizes their deployed private set intersection sum (PSI-SUM) protocol (EuroS&P 2020). PJC allows two parties, each holding a key-value database, to privately evaluate the inner product of the values whose keys lie in the intersection. While the functionality output is not typically considered in the security model of the MPC literature, it may pose real-world privacy risks, thus raising concerns about the potential deployment of protocols like PJC. In this work, we analyze the risks associated with the PJC functionality output. We consider an adversary that is a participating party of PJC and describe four practical attacks that break the other party's input privacy and recover both the membership of keys in the intersection and their associated values. Our attacks consider the privacy threats associated with deployment and highlight the need to include the functionality output as part of the MPC security model.
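To see why the functionality output alone can be damaging, consider the ideal PJC functionality next to a toy one-query membership attack, a simplified attack in this spirit rather than necessarily one of the paper's four: the adversary assigns value $2^i$ to its $i$-th candidate key, so the output's binary expansion reveals exactly which keys the honest party holds.

```python
# Ideal PJC functionality plus a toy one-query membership attack: assigning
# value 2^i to the i-th candidate key makes the inner-product output's binary
# expansion reveal exactly which candidate keys the honest party holds.
def pjc(db_a, db_b):
    # Both parties learn this inner product over the key intersection.
    return sum(db_a[k] * db_b[k] for k in db_a.keys() & db_b.keys())

honest = {"alice": 1, "bob": 1, "carol": 1}  # unit values: pure membership case
candidates = ["alice", "bob", "dave", "carol", "eve"]
adversary = {k: 1 << i for i, k in enumerate(candidates)}

out = pjc(adversary, honest)
print([k for i, k in enumerate(candidates) if (out >> i) & 1])
# ['alice', 'bob', 'carol'] : full membership recovery from one output
```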
Last updated:  2025-02-03
Plinko: Single-Server PIR with Efficient Updates via Invertible PRFs
Alexander Hoover, Sarvar Patel, Giuseppe Persiano, and Kevin Yeo
We study single-server private information retrieval (PIR) where a client wishes to privately retrieve the $x$-th entry from a database held by a server without revealing the index $x$. In our work, we focus on PIR with client pre-processing where the client may compute hints during an offline phase. The hints are then leveraged during queries to obtain sub-linear online time. We present Plinko, the first single-server PIR with client pre-processing that obtains optimal trade-offs between client storage and total (client and server) query time for all parameters. Our scheme uses $t = \tilde{O}(n/r)$ query time for any client storage size $r$. This matches known lower bounds of $r \cdot t = \Omega(n)$ up to logarithmic factors for all parameterizations, whereas prior works could only match the lower bound when $r = \tilde{O}(\sqrt{n})$. Moreover, Plinko is also the first updatable PIR scheme where an entry can be updated in worst-case $\tilde{O}(1)$ time. As our main technical tool, we define the notion of an invertible pseudorandom function (iPRF) that generalizes standard PRFs to be equipped with an efficient inversion algorithm. We present a construction of an iPRF from one-way functions where forward evaluation runs in $\tilde{O}(1)$ time and inversion runs in time linear in the inverse set (output) size. Furthermore, our iPRF construction is the first that remains efficient and secure for arbitrary domain and range sizes (including small domains and ranges). In the context of single-server PIR, we show that iPRFs may be used to construct the first hint set representation where finding a hint containing an entry $x$ may be done in $\tilde{O}(1)$ time.
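The iPRF notion can be illustrated by a toy construction (not the paper's one-way-function-based design): truncate a keyed permutation, and invert an output $y$ by enumerating the truncated-away completions and applying the permutation inverse, so inversion costs time linear in the preimage-set size.

```python
# Toy iPRF: F_k(x) = high bits of a keyed permutation PRP_k(x). Inverting y
# enumerates the low-bit completions and applies PRP^{-1}, so it runs in time
# linear in the preimage-set size. Illustrates the notion only; it is not the
# paper's construction.
import hashlib

N_BITS, R_BITS = 16, 6          # toy domain 2^16 and range 2^6
HALF, MASK = N_BITS // 2, (1 << (N_BITS // 2)) - 1

def rf(key, r, x):              # round function for a 4-round Feistel PRP
    h = hashlib.sha256(f"{key}|{r}|{x}".encode()).digest()
    return int.from_bytes(h[:4], "big") & MASK

def prp(key, x, inverse=False):
    L, R = x >> HALF, x & MASK
    if inverse:
        L, R = R, L
    for r in (reversed(range(4)) if inverse else range(4)):
        L, R = R, L ^ rf(key, r, R)
    if inverse:
        L, R = R, L
    return (L << HALF) | R

def f(key, x):                  # iPRF evaluation: truncate the PRP output
    return prp(key, x) >> (N_BITS - R_BITS)

def f_inverse(key, y):          # all x in the domain with f(key, x) == y
    lo = N_BITS - R_BITS
    return [prp(key, (y << lo) | s, inverse=True) for s in range(1 << lo)]

key = "demo-key"
preimages = f_inverse(key, f(key, 12345))
assert 12345 in preimages and len(preimages) == 2**10
print(len(preimages))           # each output has exactly 2^10 preimages here
```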
Last updated:  2025-02-03
Post-Quantum Online/Offline Signatures
Martin R. Albrecht, Nicolas Gama, James Howe, and Anand Kumar Narayanan
Post-quantum signatures have high costs compared to RSA and ECDSA, in particular for smart cards. A line of work originating from Even, Goldreich, and Micali (CRYPTO'89) aimed to reduce digital signature latency by splitting up signing into an online and offline phase. The online/offline paradigm combines an ordinary long-term signature scheme with a fast, generally one-time, signature scheme. We reconsider this paradigm in the context of lattice-based post-quantum signatures in the GPV framework, with an example instantiation based on Falcon.
Last updated:  2025-02-03
A Holistic Framework for Impossible Boomerang Attacks
Yincen Chen, Qinggan Fu, Ning Zhao, Jiahao Zhao, Ling Song, and Qianqian Yang
In 2011, Lu introduced the impossible boomerang attack at DCC. This powerful cryptanalysis technique combines the strengths of the impossible differential and boomerang attacks, thereby inheriting the advantages of both cryptographic techniques. In this paper, we propose a holistic framework comprising two generic and effective algorithms and a MILP-based model to search for the optimal impossible boomerang attack systematically. The first algorithm incorporates any key guessing strategy, while the second integrates the meet-in-the-middle (MITM) attack into the key recovery process. Our framework is highly flexible, accommodating any set of attack parameters and returning the optimal attack complexity. When applying our framework to Deoxys-BC-256, Deoxys-BC-384, Joltik-BC-128, Joltik-BC-192, and SKINNYe v2, we achieve several significant improvements. We achieve the first 11-round impossible boomerang attacks on Deoxys-BC-256 and Joltik-BC-128. For SKINNYe v2, we achieve the first 33-round impossible boomerang attack; integrating the MITM approach into the key recovery then significantly reduces the time complexity. Additionally, for the 14-round Deoxys-BC-384 and Joltik-BC-192, the time complexity of the impossible boomerang attack is reduced by factors exceeding $2^{27}$ and $2^{12}$, respectively.
Last updated:  2025-02-03
Mira: Efficient Folding for Pairing-based Arguments
Josh Beal and Ben Fisch
Pairing-based arguments offer remarkably small proofs and space-efficient provers, but aggregating such proofs remains costly. Groth16 SNARKs and KZG polynomial commitments are prominent examples of this class of arguments. These arguments are widely deployed in decentralized systems, with millions of proofs generated per day. Recent folding schemes have greatly reduced the cost of proving incremental computations, such as batch proof verification. However, existing constructions require encoding pairing operations in generic constraint systems, leading to high prover overhead. In this work, we introduce Mira, a folding scheme that directly supports pairing-based arguments. We construct this folding scheme by generalizing the framework in Protostar to support a broader class of special-sound protocols. We demonstrate the versatility and efficiency of this framework through two key applications: Groth16 proof aggregation and verifiable ML inference. Mira achieves 5.8x faster prover time and 9.7x lower memory usage than the state-of-the-art proof aggregation system while maintaining a constant-size proof. To improve the efficiency of verifiable ML inference, we provide a new lincheck protocol with a verifier degree that is independent of the matrix order. We show that Mira scales effectively to larger models, overcoming the memory bottlenecks of current schemes.
Last updated:  2025-02-02
Shadowfax: Combiners for Deniability
Phillip Gajland, Vincent Hwang, and Jonas Janneck
As cryptographic protocols transition to post-quantum security, most adopt hybrid solutions combining pre-quantum and post-quantum assumptions. However, this shift often introduces trade-offs in terms of efficiency, compactness, and in some cases, even security. One such example is deniability, which enables users, such as journalists or activists, to deny authorship of potentially incriminating messages. While deniability was once mainly of theoretical interest, protocols like X3DH, used in Signal and WhatsApp, provide it to billions of users. Recent work (Collins et al., PETS'25) has further bridged the gap between theory and real-world applicability. In the post-quantum setting, however, protocols like PQXDH, as well as others such as Apple’s iMessage with PQ3, do not support deniability. This work investigates how to preserve deniability in the post-quantum setting by leveraging unconditional (statistical) guarantees instead of computational assumptions - distinguishing deniability from confidentiality and authenticity. As a case study, we present a hybrid authenticated key encapsulation mechanism (AKEM) that provides statistical deniability, while maintaining authenticity and confidentiality through a combination of pre-quantum and post-quantum assumptions. To this end, we introduce two combiners at different levels of abstraction. First, at the highest level, we propose a black-box construction that combines two AKEMs, showing that deniability is preserved only when both constituent schemes are deniable. Second, we present Shadowfax, a non-black-box combiner that integrates a pre-quantum NIKE, a post-quantum KEM, and a post-quantum ring signature. We demonstrate that Shadowfax ensures deniability in both dishonest and honest receiver settings. When instantiated, we rely on statistical security for the former, and on a pre- or post-quantum assumption in the latter. Finally, we provide an optimised, yet portable, implementation of a specific instantiation of Shadowfax yielding ciphertexts of 1781 bytes and public keys of 1449 bytes. Our implementation achieves competitive performance: encapsulation takes 1.9 million cycles and decapsulation takes 0.8 million cycles on an Apple M1 Pro.
Last updated:  2025-02-02
Breaking the Blindfold: Deep Learning-based Blind Side-channel Analysis
Azade Rezaeezade, Trevor Yap, Dirmanto Jap, Shivam Bhasin, and Stjepan Picek
Physical side-channel analysis (SCA) operates on the foundational assumption of access to known plaintext or ciphertext. However, this assumption can be easily invalidated in various scenarios, ranging from common encryption modes like Cipher Block Chaining (CBC) to complex hardware implementations, where such data may be inaccessible. Blind SCA addresses this challenge by operating without the knowledge of plaintext or ciphertext. Unfortunately, such prior approaches have shown limited success in practical settings. In this paper, we introduce the Deep Learning-based Blind Side-channel Analysis (DL-BSCA) framework, which leverages deep neural networks to recover secret keys in blind SCA settings. In addition, we propose a novel labeling method, Multi-point Cluster-based (MC) labeling, accounting for dependencies between leakage variables by exploiting multiple sample points for each variable, improving the accuracy of trace labeling. We validate our approach across four datasets, including symmetric key algorithms (AES and Ascon) and a post-quantum cryptography algorithm, Kyber, with platforms ranging from high-leakage 8-bit AVR XMEGA to noisy 32-bit ARM STM32F4. Notably, previous methods failed to recover the key on the same datasets. Furthermore, we demonstrate the first successful blind SCA on a desynchronization countermeasure enabled by DL-BSCA and MC labeling. All experiments are validated with real-world SCA measurements, highlighting the practicality and effectiveness of our approach.
Last updated:  2025-02-01
Non Linearizable Entropic Operator
Daniel Nager
In [Pan21] a linearization attack is proposed in order to break the cryptosystem proposed in [Gli21]. We propose here a non-linearizable operator that disables this attack, as this operator does not give rise to a quasigroup and does not obey the Latin square property.
Last updated:  2025-02-01
A notion on S-boxes for a partial resistance to some integral attacks
Claude Carlet
In two recent papers, we introduced and studied the notion of $k$th-order sum-freedom of a vectorial function $F:\mathbb F_2^n\to \mathbb F_2^m$. This notion generalizes that of almost perfect nonlinearity (which corresponds to $k=2$) and has some relation with the resistance to integral attacks of those block ciphers using $F$ as a substitution box (S-box), by preventing the propagation of the division property of $k$-dimensional affine spaces. In the present paper, we show that this notion, which is rarely satisfied by vectorial functions, can be weakened while retaining the property that the S-boxes do not propagate the division property of $k$-dimensional affine spaces. This leads us to the property that we name $k$th-order $t$-degree-sum-freedom, whose strength decreases when $t$ increases, and which coincides with $k$th-order sum-freedom when $t=1$. The condition for $k$th-order $t$-degree-sum-freedom is that, for every $k$-dimensional affine space $A$, there exists a non-negative integer $j$ of 2-weight at most $t$ such that $\sum_{x\in A}(F(x))^j\neq 0$. We show, for a general $k$th-order $t$-degree-sum-free function $F$, that $t$ can always be taken smaller than or equal to $\min(k,m)$ under some reasonable condition on $F$, and that it is larger than or equal to $\frac k{\deg(F)}$, where $\deg(F)$ is the algebraic degree of $F$. We also show two other lower bounds: one, often tighter, by means of the algebraic degree of the compositional inverse of $F$ when $F$ is a permutation, and another (valid for every vectorial function) by means of the algebraic degree of the indicator of the graph of the function. We study examples for $k=2$ (the case in which $t=1$ corresponds to APNness) showing that finding $j$ of 2-weight 2 can be challenging, and we begin the study of power functions, for which we prove upper bounds. We study in particular the multiplicative inverse function (used as an S-box in the AES), for which we characterize the $k$th-order $t$-degree-sum-freedom by the coefficients of the subspace polynomials of $k$-dimensional vector subspaces (deducing the exact value of $t$ when $k$ divides $n$) and we extend to $k$th-order $t$-degree-sum-freedom the result that this function is $k$th-order sum-free if and only if it is $(n-k)$th-order sum-free.
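As a concrete check of the $k=2$, $t=1$ case, where the notion coincides with APNness, the following brute-force script verifies that the multiplicative inverse function over $\mathbb{F}_{2^5}$ sums to a nonzero value on every $2$-dimensional affine space; the field polynomial is one arbitrary irreducible choice, and $n=5$ is odd, for which the inverse function is known to be APN:

```python
# Brute-force check of 2nd-order sum-freedom (= APNness) of the inverse
# function F(x) = x^(2^n - 2) over GF(2^5): the sum of F over every
# 2-dimensional affine space {b, b+a1, b+a2, b+a1+a2} must be nonzero.
n = 5
POLY = 0b100101  # x^5 + x^2 + 1, irreducible over GF(2)

def gf_mul(x, y):
    r = 0
    while y:
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x >> n:
            x ^= POLY
    return r

def gf_pow(x, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, x)
        x = gf_mul(x, x)
        e >>= 1
    return r

F = [gf_pow(x, 2**n - 2) for x in range(2**n)]  # with F(0) = 0 by convention

ok = all(
    F[b] ^ F[b ^ a1] ^ F[b ^ a2] ^ F[b ^ a1 ^ a2]
    for a1 in range(1, 2**n)
    for a2 in range(a1 + 1, 2**n)  # distinct nonzero a1, a2 are independent
    for b in range(2**n)
)
print("2nd-order sum-free (APN):", ok)  # True, since n = 5 is odd
```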
Last updated:  2025-01-31
SHIFT SNARE: Uncovering Secret Keys in FALCON via Single-Trace Analysis
Jinyi Qiu and Aydin Aysu
This paper presents a novel single-trace side-channel attack on FALCON---a lattice-based post-quantum digital signature protocol recently approved for standardization by NIST. We target the discrete Gaussian sampling operation within the FALCON key generation scheme and use a single power measurement trace to succeed. Notably, negating the 'shift right 63-bit' operation (for 64-bit values) leaks critical information about the '-1' vs. '0' assignments to intermediate coefficients. These leaks enable full recovery of the generated secret keys. The proposed attack is implemented on an ARM Cortex-M4 microcontroller running both the reference and optimized software implementations from FALCON's NIST Round 3 package. Statistical analysis with 500k tests reveals a per-coefficient success rate of 99.9999999478% and a full key recovery success rate of 99.99994654% for FALCON-512. This work highlights the vulnerability of current software solutions to single-trace attacks and underscores the urgent need to develop single-trace-resilient software for embedded systems.
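The role of the shift operation can be seen in a short simulation: an arithmetic right shift by 63 of a signed 64-bit value yields the all-ones mask for negative inputs and zero otherwise, so coefficients assigned '-1' versus '0' produce intermediates whose Hamming weights differ by 64 bits, the kind of data-dependent gap a single power trace can expose. A sketch mimicking two's-complement arithmetic with Python integers:

```python
# Why 'shift right by 63' leaks: the arithmetic shift of a signed 64-bit value
# yields 0xFFFF_FFFF_FFFF_FFFF for -1 and 0x0 for 0, i.e. intermediates whose
# Hamming weights differ by 64 bits, a large data-dependent power difference.
def asr63(v64):
    # Arithmetic shift right by 63 of a 64-bit two's-complement value,
    # simulated with Python's unbounded integers.
    signed = v64 - (1 << 64) if v64 >= (1 << 63) else v64
    return (signed >> 63) & ((1 << 64) - 1)

minus_one = (1 << 64) - 1     # two's-complement encoding of -1
print(hex(asr63(minus_one)))  # 0xffffffffffffffff (Hamming weight 64)
print(hex(asr63(0)))          # 0x0                (Hamming weight 0)
```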
Last updated:  2025-01-31
A Comprehensive Formal Security Analysis of OPC UA
Vincent Diemunsch, Lucca Hirschi, and Steve Kremer
OPC UA is a standardized Industrial Control System (ICS) protocol, deployed in critical infrastructures, that aims to ensure security. The forthcoming version 1.05 includes major changes in the underlying cryptographic design, including a Diffie-Hellman-based key exchange, as opposed to the previous RSA-based version. Version 1.05 is supposed to offer stronger security, including Perfect Forward Secrecy (PFS). We perform a formal security analysis of the security protocols specified in OPC UA v1.05 and v1.04, for the RSA-based and the new DH-based mode, using the state-of-the-art symbolic protocol verifier ProVerif. Compared to previous studies, our model is much more comprehensive, including the new protocol version, the combination of the different sub-protocols for establishing secure channels, sessions and their management, and covering a large range of possible configurations. This results in one of the largest models ever studied in ProVerif, raising many challenges related to its verification, mainly due to the complexity of the state machine. We discuss how we mitigated this complexity to obtain meaningful analysis results. Our analysis uncovered several new vulnerabilities, which have been reported to and acknowledged by the OPC Foundation. We designed and proposed provably secure fixes, most of which are included in the upcoming version of the standard.
Last updated:  2025-01-31
Cycles and Cuts in Supersingular L-Isogeny Graphs
Sarah Arpin, Ross Bowden, James Clements, Wissam Ghantous, Jason T. LeGrow, and Krystal Maughan
Supersingular elliptic curve isogeny graphs underlie isogeny-based cryptography. For isogenies of a single prime degree $\ell$, their structure has been investigated graph-theoretically. We generalise the notion of $\ell$-isogeny graphs to $L$-isogeny graphs (studied in the prime field case by Delfs and Galbraith), where $L$ is a set of small primes dictating the allowed isogeny degrees in the graph. We analyse the graph-theoretic structure of $L$-isogeny graphs. Our approaches may be put into two categories: cycles and graph cuts. On the topic of cycles, we provide: a count for the number of non-backtracking cycles in the $L$-isogeny graph using traces of Brandt matrices; an efficiently computable estimate based on this approach; and a third ideal-theoretic count for a certain subclass of $L$-isogeny cycles. We provide code to compute each of these three counts. On the topic of graph cuts, we compare several algorithms to compute graph cuts which minimise a measure called the edge expansion, outlining a cryptographic motivation for doing so. Our results show that a greedy neighbour algorithm outperforms standard spectral algorithms for computing optimal graph cuts. We provide code and study explicit examples. Furthermore, we describe several directions of active and future research.
Last updated:  2025-01-31
KZH-Fold: Accountable Voting from Sublinear Accumulation
George Kadianakis, Arantxa Zapico, Hossein Hafezi, and Benedikt Bünz
Accumulation schemes are powerful primitives that enable distributed and incremental verifiable computation with less overhead than recursive SNARKs. However, existing schemes with constant-size accumulation verifiers suffer from linear-sized accumulators and deciders, leading to linear-sized proofs that are unsuitable in distributed settings. Motivated by the need for bandwidth-efficient accountable voting protocols, (I) we introduce KZH, a novel polynomial commitment scheme, and (II) KZH-fold, the first sublinear accumulation scheme, where the verifier performs only $3$ group scalar multiplications and the accumulator size and decider time are $O(n^{1/2})$. Our scheme generalizes to achieve accumulator and decider complexity of $k \cdot n^{1/k}$ with verifier complexity $k$. Using the BCLMS compiler, (III) we build an IVC/PCD scheme with sublinear proof and decider. (IV) Next, we propose a new approach to non-uniform IVC, where the cost of proving a step is proportional only to the size of the step instruction circuit, and unlike previous approaches, the witness size is not linear in the number of instructions. (V) Leveraging these advancements, we demonstrate the power of KZH-fold by implementing an accountable voting scheme using a novel signature aggregation protocol supporting millions of participants, significantly reducing communication overhead and verifier time compared to BLS-based aggregation. We implemented and benchmarked our protocols and KZH-fold achieves a 2000x reduction in communication and a 50x improvement in decider time over Nova when proving 2000 Poseidon hashes, at the cost of 3x the prover time.
Last updated:  2025-01-31
Error floor prediction with Markov models for QC-MDPC codes
Sarah Arpin, Jun Bo Lau, Ray Perlner, Angela Robinson, Jean-Pierre Tillich, and Valentin Vasseur
Quasi-cyclic moderate-density parity check (QC-MDPC) code-based encryption schemes under iterative decoders offer highly competitive performance in the quantum-resistant space of cryptography, but the decoding-failure rate (DFR) of these algorithms is not well understood. The DFR decreases extremely rapidly as the ratio of code length to error bits increases, then decreases much more slowly, in regimes known as the waterfall and the error floor, respectively. This work establishes three successively more detailed probabilistic models of the DFR for iterative decoders for QC-MDPC codes: the simplified model, the refined model for perfect keys, and the refined model for all keys. The models are built upon a Markov model introduced by Sendrier and Vasseur that closely predicts decoding behavior in the waterfall region but does not capture the error-floor behavior. The simplified model introduces a modification which captures the dominant contributor to error-floor behavior, namely convergence to near-codewords, introduced by Vasseur in his PhD thesis. The refined models give more accurate predictions taking into account certain structural features of specific keys. Our models are based on the step-by-step decoder, which is highly simplified and experimentally displays worse decoding performance than the parallel decoders used in practice. Despite the use of the simplified decoder, we obtain an accurate prediction of the DFR in the error floor and demonstrate that the error-floor behavior is dominated by convergence to a near-codeword during a failed decoding instance. Furthermore, we have run this model for a simplified version of the QC-MDPC code-based cryptosystem BIKE to better ascertain whether the DFR is low enough to achieve IND-CCA2 security. Our model for a modified version of BIKE 1 gives a DFR which is below $2^{-129.5}$, using a block length $r = 13477$ instead of the BIKE 1 parameter $r = 12323$.
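The general shape of such models can be sketched as an absorbing Markov chain: a scalar decoder state (here, the residual error weight) evolves until it hits a 'success' or 'failure' absorbing state, and the DFR is the absorption probability in 'failure'. The transition probabilities below are invented; the paper's models calibrate them from the decoder and the key structure:

```python
# Generic absorbing-Markov-chain DFR estimate: track the residual error weight
# with absorbing states 'success' (weight 0) and 'failure' (weight cap), and
# compute the failure absorption probability via the fundamental matrix.
# All probabilities are hypothetical placeholders.
import numpy as np

W = 12                     # toy cap on the error weight
p_down, p_up = 0.70, 0.05  # invented per-step transition probabilities
T = np.zeros((W + 1, W + 1))
T[0, 0] = T[W, W] = 1.0    # absorbing: success at 0, failure at the cap
for w in range(1, W):
    T[w, w - 1] = p_down
    T[w, w + 1] = p_up
    T[w, w] = 1.0 - p_down - p_up

Q = T[1:W, 1:W]            # transitions among transient states
R = T[1:W, [0, W]]         # transient -> [success, failure]
N = np.linalg.inv(np.eye(W - 1) - Q)  # fundamental matrix (I - Q)^{-1}
absorb = N @ R             # row w-1 gives [P(success), P(failure)] from weight w
print(f"DFR starting from weight 6: {absorb[5, 1]:.2e}")
```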
Last updated:  2025-01-31
On Maximum Size Simultaneous Linear Approximations in Ascon and Keccak and Related Translation and Differential Properties
Nicolas T. Courtois, Frédéric Amiel, and Alexandre Bonnard de Fonvillars
In this paper we study the S-box known as Chi or $\chi$, initially proposed by Daemen in 1995 and very widely used ever since in Keccak, Ascon, and many others. This type of cipher is typically analyzed in recent research in terms of subspace trail attacks [TeDi19] and vector space invariants. An interesting question is then when different spaces are mapped to each other by translation by a constant. In this paper we relax this fundamental question and consider arbitrary sets of points and their translations. We generalize previous S-box partial linearization results on Keccak and Ascon from Eurocrypt 2017. We basically introduce new ways to linearize the Ascon S-box to the maximum possible extent. On this basis we show further remarkable properties and some surprising connections between simultaneous linear and prominent differential properties. We exhibit a new family of maximum-size and optimal approximation properties with 11 points, beyond the maximum size of any set in the DDT table. We prove a theorem which guarantees that this type of property is stable under arbitrary input-side translations, a result which holds for all quadratic permutations. All this is placed in the context of previous work on the classification of 5-bit quadratic permutations.
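For reference, the map in question is, on $5$ bits, $y_i = x_i \oplus (\overline{x_{i+1}} \wedge x_{i+2})$ with indices taken mod $5$. The short script below, using one common bit-ordering convention, evaluates it and confirms that it is a permutation, as $\chi$ is for any odd register length:

```python
# The 5-bit chi map: y_i = x_i XOR (NOT x_{i+1} AND x_{i+2}), indices mod 5,
# the nonlinear layer used by Keccak and Ascon. Bit ordering is one common
# convention; chi is a permutation exactly because the length 5 is odd.
def chi(x, n=5):
    bits = [(x >> i) & 1 for i in range(n)]
    out = [bits[i] ^ ((bits[(i + 1) % n] ^ 1) & bits[(i + 2) % n])
           for i in range(n)]
    return sum(b << i for i, b in enumerate(out))

table = [chi(x) for x in range(32)]
assert sorted(table) == list(range(32))  # bijective on 5-bit inputs
print(table[:8])
```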
Last updated:  2025-01-31
Efficient Quantum-safe Distributed PRF and Applications: Playing DiSE in a Quantum World
Sayani Sinha, Sikhar Patranabis, and Debdeep Mukhopadhyay
We propose the first $\textit{distributed}$ version of a simple, efficient, and provably quantum-safe pseudorandom function (PRF). The distributed PRF (DPRF) supports arbitrary threshold access structures based on the hardness of the well-studied Learning with Rounding (LWR) problem. Our construction (abbreviated as $\mathsf{PQDPRF}$) practically outperforms not only existing constructions of DPRFs based on lattice assumptions, but also (in terms of evaluation time) existing constructions of: (i) classically secure DPRFs based on discrete-log-hard groups, and (ii) quantum-safe DPRFs based on any generic quantum-safe PRF (e.g. AES). The efficiency of $\mathsf{PQDPRF}$ stems from the extreme simplicity of its construction, consisting of a simple inner product computation over $\mathbb{Z}_q$, followed by a rounding to a smaller modulus $p < q$. The key technical novelty of our proposal lies in our proof technique, where we prove the correctness and post-quantum security of $\mathsf{PQDPRF}$ (against semi-honest corruptions of any less than threshold number of parties) for a polynomial $q/p$ (equivalently, "modulus-to-modulus") ratio. Our proposed DPRF construction immediately enables efficient yet quantum-safe instantiations of several practical applications, including key distribution centers, distributed coin tossing, long-term encryption of information, etc. We showcase a particular application of $\mathsf{PQDPRF}$ in realizing an efficient yet quantum-safe version of distributed symmetric-key encryption ($\mathsf{DiSE}$ -- originally proposed by Agrawal et al. in CCS 2018), which we call $\mathsf{PQ-DiSE}$. For semi-honest adversarial corruptions across a wide variety of corruption thresholds, $\mathsf{PQ-DiSE}$ substantially outperforms existing instantiations of $\mathsf{DiSE}$ based on discrete-log-hard groups and generic PRFs (e.g. AES). We illustrate the practical efficiency of our $\mathsf{PQDPRF}$ via a prototype implementation of $\mathsf{PQ-DiSE}$.
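The shape of the construction, schematically and with toy parameters: the PRF is a rounded inner product over $\mathbb{Z}_q$ on a hash-derived public vector $a_x$, and a threshold evaluation Lagrange-combines partial inner products over Shamir shares of the key before the final rounding. The sketch only illustrates the syntax; the actual scheme and its security proof for a polynomial $q/p$ ratio are considerably more subtle:

```python
# Schematic LWR-style PRF with a threshold evaluation: Shamir-share the key k,
# have each party return <share, a_x> mod q on a public hash-derived vector,
# then Lagrange-interpolate at 0 and round from Z_q down to Z_p. Toy syntax
# only; parameters are arbitrary and q is chosen prime so inverses exist.
import hashlib, random

q, p, dim, t = 65537, 256, 8, 2  # toy parameters with a polynomial q/p ratio

def hash_to_vec(x):
    h = hashlib.sha256(x.encode()).digest()
    return [int.from_bytes(h[2 * i:2 * i + 2], "big") % q for i in range(dim)]

def round_p(v):                  # rounding Z_q -> Z_p
    return v * p // q

def prf(key, x):                 # centralized evaluation
    return round_p(sum(k * a for k, a in zip(key, hash_to_vec(x))) % q)

key = [random.randrange(q) for _ in range(dim)]
polys = [[k] + [random.randrange(q) for _ in range(t)] for k in key]
shares = {j: [sum(c * pow(j, e, q) for e, c in enumerate(pl)) % q for pl in polys]
          for j in (1, 2, 3, 4)}

def partial_eval(j, x):          # party j's contribution on its key share
    return sum(s * a for s, a in zip(shares[j], hash_to_vec(x))) % q

def combine(parts):              # Lagrange at 0 over any t+1 parties, then round
    acc = 0
    for j, yj in parts.items():
        lam = 1
        for i in parts:
            if i != j:
                lam = lam * i % q * pow(i - j, -1, q) % q
        acc = (acc + lam * yj) % q
    return round_p(acc)

x = "example-input"
assert combine({j: partial_eval(j, x) for j in (1, 2, 3)}) == prf(key, x)
```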
Last updated:  2025-01-31
Towards Verifiable FHE in Practice: Proving Correct Execution of TFHE's Bootstrapping using plonky2
Louis Tremblay Thibault and Michael Walter
In this work we demonstrate for the first time that a full FHE bootstrapping operation can be proven using a SNARK in practice. We do so by designing an arithmetic circuit for the bootstrapping operation and proving it using plonky2. We are able to prove the circuit on an AWS Hpc7a instance in under 20 minutes. Proof size is about 200kB and verification takes less than 10ms. As the basis of our bootstrapping operation we use TFHE's programmable bootstrapping and modify it in a few places to more efficiently represent it as an arithmetic circuit (while maintaining full functionality and security). In order to achieve our results in a memory-efficient way, we take advantage of the structure of the computation and plonky2's ability to efficiently prove its own verification circuit to implement a recursion-based IVC scheme. Lastly, we present a security proof in the UC model that captures active attacks in real-world applications of verifiable FHE and augment our prototype to fit such applications.
Last updated:  2025-01-31
Quantum function secret sharing
Alex B. Grilo and Ramis Movassagh
We propose a quantum function secret sharing scheme in which the communication is exclusively classical. In this primitive, a classical dealer distributes a secret quantum circuit $C$ by providing shares to $p$ quantum parties. On an input state $\ket{\psi}$ and a projection $\Pi$, the parties compute values $y_i$ that they then classically communicate back to the dealer, who can then compute $\lVert\Pi C\ket{\psi}\rVert^2$ using only classical resources. Moreover, the shares do not leak much information about the secret circuit $C$. Our protocol for quantum secret sharing uses the Cayley path, a tool that has been extensively used to support quantum primacy claims. More concretely, the shares of $C$ correspond to randomized versions of $C$, which are delegated to the quantum parties, and the reconstruction can be done by extrapolation. Our scheme has two limitations, which we prove to be inherent to our techniques: First, our scheme is only secure against single adversaries, and we show that if two parties collude, then they can break its security. Second, the evaluation done by the parties requires time exponential in the number of gates.
Last updated:  2025-01-31
ProbeShooter: A New Practical Approach for Probe Aiming
Daehyeon Bae, Sujin Park, Minsig Choi, Young-Giu Jung, Changmin Jeong, Heeseok Kim, and Seokhie Hong
Electromagnetic side-channel analysis is a powerful method for monitoring processor activity and compromising cryptographic systems in air-gapped environments. As analytical methodologies and target devices evolve, the importance of leakage localization and probe aiming becomes increasingly apparent for capturing only the desired signals with a high signal-to-noise ratio. Despite its importance, there remains substantial reliance on unreliable heuristic approaches and inefficient exhaustive searches. Furthermore, related studies often fall short in terms of feasibility, practicality, and performance, and are limited to controlled DUTs and low-end MCUs. To address the limitations and inefficiencies of the previous approaches, we propose a novel methodology―${\rm P{\tiny ROBE}S{\tiny HOOTER}}$―for leakage localization and probe aiming. This approach leverages new insights into the spatial characteristics of amplitude modulation and intermodulation distortion in processors. As a result, ${\rm P{\tiny ROBE}S{\tiny HOOTER}}$ provides substantial improvements in various aspects: $\boldsymbol 1)$ it is applicable to not only simple MCUs but also complex SoCs, $\boldsymbol 2)$ it effectively handles multi-core systems and dynamic frequency scaling, $\boldsymbol 3)$ it can be applied to uncontrollable DUTs, making it viable for constrained real-world attacks, and $\boldsymbol 4)$ it performs significantly faster than previous methods. To demonstrate this, we experimentally evaluate ${\rm P{\tiny ROBE}S{\tiny HOOTER}}$ on a high-end MCU (the NXP i.MX RT1061 featuring a single ARM Cortex-M7 core) and a complex SoC (the Broadcom BCM2711 equipped with the Raspberry Pi 4 Model B, featuring four ARM Cortex-A72 cores).
Last updated:  2025-01-31
Distributional Private Information Retrieval
Ryan Lehmkuhl, Alexandra Henzinger, and Henry Corrigan-Gibbs
A private-information-retrieval (PIR) scheme lets a client fetch a record from a remote database without revealing which record it fetched. Classic PIR schemes treat all database records the same but, in practice, some database records are much more popular (i.e., commonly fetched) than others. We introduce distributional PIR, a new type of PIR that can run faster than classic PIR---both asymptotically and concretely---when the popularity distribution is skewed. Distributional PIR provides exactly the same cryptographic privacy as classic PIR. The speedup comes from a relaxed form of correctness: distributional PIR guarantees that in-distribution queries succeed with good probability, while out-of-distribution queries succeed with lower probability. Because of its relaxed correctness, distributional PIR is best suited for applications where "best-effort" retrieval is acceptable. Moreover, for security, a client's decision to query the server must be independent of whether its past queries were successful. We construct a distributional-PIR scheme that makes black-box use of classic PIR protocols, and prove a lower bound on the server runtime of a natural class of distributional-PIR schemes. On two real-world popularity distributions, our construction reduces compute costs by $5$--$77\times$ compared to existing techniques. Finally, we build CrowdSurf, an end-to-end system for privately fetching tweets, and show that distributional PIR reduces the end-to-end server cost by $8\times$.
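The correctness relaxation shows up already in a miniature construction: run a classic PIR scheme over only the $K$ most popular records, so in-distribution queries usually land in the hot set and succeed while others fail, and have the client always issue a query so its behavior is independent of success. The toy below uses hypothetical sizes and a Zipf-like popularity model, and accounts only for success probability and server work, not the cryptography:

```python
# Miniature version of the relaxed-correctness idea: any classic PIR run over
# only the K most popular records succeeds on in-distribution queries with the
# probability mass of the hot set, at a K/N fraction of the server work.
# Sizes and the Zipf-like popularity model are hypothetical.
N, K = 1_000_000, 10_000

weights = [1.0 / (i + 1) for i in range(N)]  # popularity of record i
hot_mass = sum(weights[:K]) / sum(weights)   # in-distribution success rate
print(f"success rate on in-distribution queries: {hot_mass:.1%}")
print(f"server records touched per query: {K} instead of {N}")
```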
Last updated:  2025-01-31
MicroSecAgg: Streamlined Single-Server Secure Aggregation
Yue Guo, Antigoni Polychroniadou, Elaine Shi, David Byrd, and Tucker Balch
This work introduces MicroSecAgg, a framework that addresses the intricacies of secure aggregation in the single-server landscape, specifically tailored to situations where distributed trust among multiple non-colluding servers presents challenges. Our protocols are purpose-built to handle situations featuring multiple successive aggregation phases among a dynamic pool of clients who can drop out during the aggregation. Our different protocols thrive in three distinct cases: firstly, secure aggregation within a small input domain; secondly, secure aggregation within a large input domain; and finally, facilitating federated learning for the cases where moderately sized models are considered. Compared to the prior works of Bonawitz et al. (CCS 2017), Bell et al. (CCS 2020), and the recent work of Ma et al. (S&P 2023), our approach significantly reduces the overheads. In particular, MicroSecAgg halves the round complexity to just 3 rounds, thereby offering substantial improvements in communication cost efficiency. Notably, it outperforms Ma et al. by a factor of $n$ on the user side, where $n$ represents the number of users. Furthermore, in MicroSecAgg the computation complexity of each aggregation per user exhibits a logarithmic growth with respect to $n$, contrasting with the linearithmic or quadratic growth observed in Ma et al. and Bonawitz et al., respectively. We also require linear (in $n$) computation work from the server as opposed to quadratic in Bonawitz et al., or linearithmic in Ma et al. and Bell et al. In the realm of federated learning, a delicate tradeoff comes into play: our protocols shine brighter as the number of participating parties increases, yet they exhibit diminishing computational efficiency as the sheer volume of weights/parameters increases significantly. We report an implementation of our system and compare the performance against prior works, demonstrating that MicroSecAgg significantly reduces the computational burden and the message size.
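For background, the pairwise-masking trick that this line of single-server aggregation work builds on: each pair of clients expands a shared seed into a mask that one adds and the other subtracts, so all masks cancel in the server's sum while individual inputs stay hidden. A toy sketch that ignores dropouts and repeated aggregation phases, which are the problems MicroSecAgg actually targets:

```python
# Pairwise additive masking: for each client pair, one adds and the other
# subtracts a mask derived from a shared seed, so masks cancel in the sum.
# Dropout handling and multi-phase reuse are deliberately omitted here.
import hashlib

M = 2**32  # aggregation modulus

def mask(seed):
    return int.from_bytes(hashlib.sha256(seed.encode()).digest()[:4], "big") % M

inputs = {1: 10, 2: 20, 3: 30}
seeds = {(i, j): f"seed-{i}-{j}" for i in inputs for j in inputs if i < j}

def masked_input(i):
    y = inputs[i]
    for (a, b), s in seeds.items():
        if i == a:
            y += mask(s)   # the lower-id client adds the pair mask
        elif i == b:
            y -= mask(s)   # the higher-id client subtracts it
    return y % M

print(sum(masked_input(i) for i in inputs) % M)  # 60: all masks cancel
```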
Last updated:  2025-01-31
On pairs of primes with small order reciprocity
Craig Costello and Gaurish Korpal
We give a sieving algorithm for finding pairs of primes with small multiplicative orders modulo each other. Finding such pairs is a necessary condition for obtaining constructions of $2$-cycles of pairing-friendly curves, which have found use in cryptographic applications. Our database of examples suggests that, with the exception of a well-known infinite family of such primes, instances become increasingly rare as the size of the primes increases. This leads to some interesting open questions for which we hope our database prompts further investigation.
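A brute-force, non-sieving version of the search is easy to express; the bounds below are arbitrary, and the multiplicative orders come from sympy's n_order:

```python
# Brute-force search (the paper uses a sieve) for prime pairs (p, q) whose
# multiplicative orders modulo each other are both small. Bounds are arbitrary.
from sympy import n_order, primerange

BOUND, MAX_ORD = 200, 6
primes = list(primerange(3, BOUND))

for i, p in enumerate(primes):
    for q in primes[i + 1:]:
        if n_order(p, q) <= MAX_ORD and n_order(q, p) <= MAX_ORD:
            print(f"p={p}, q={q}: ord_q(p)={n_order(p, q)}, ord_p(q)={n_order(q, p)}")
```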
Last updated:  2025-01-30
Practical Asynchronous Distributed Key Reconfiguration and Its Applications
Hanwen Feng, Yingzi Gao, Yuan Lu, Qiang Tang, and Jing Xu
In this paper, we study practical constructions of asynchronous distributed key reconfiguration ($\mathsf{ADKR}$), which enables an asynchronous fault-tolerant system with an existing threshold cryptosystem to efficiently generate a new threshold cryptosystem for a reconfigured set of participants. While existing asynchronous distributed threshold key generation ($\mathsf{ADKG}$) protocols theoretically solve $\mathsf{ADKR}$, they fail to deliver satisfactory scalability due to cubic communication overhead, even with simplifications to the reconfiguration setting. We introduce a more efficient \textit{share-dispersal-then-agree-and-recast} paradigm for constructing $\mathsf{ADKR}$ while preserving adaptive security. The method replaces expensive $O(n)$ asynchronous verifiable secret sharing protocols in classic $\mathsf{ADKG}$ with $O(n)$ cheaper dispersals of publicly-verifiable sharing transcripts; after consensus confirms a set of finished dispersals, it selects a small $\kappa$-subset of finished dispersals for verification, reducing the total overhead to $O(\kappa n^2)$ from $O(n^3)$, where $\kappa$ is a small constant (typically $\sim$30 or less). To further optimize concrete efficiency, we propose an interactive protocol with linear communication to generate publicly verifiable secret sharing (PVSS) transcripts, avoiding computationally expensive non-interactive PVSS. Additionally, we introduce a distributed PVSS verification mechanism, minimizing redundant computations across different parties and reducing the dominating PVSS verification cost by about one-third. Our design also enables diverse applications: (i) given a quadratic-communication asynchronous coin-flipping protocol, it implies the first quadratic-communication $\mathsf{ADKG}$; and (ii) it can be extended to realize the first quadratic-communication asynchronous dynamic proactive secret sharing (ADPSS) protocol with adaptive security. Experimental evaluations on a global network of 256 AWS servers show up to 40\% lower latency compared to state-of-the-art $\mathsf{ADKG}$ protocols (with simplifications to the reconfiguration setting), highlighting the practicality of our $\mathsf{ADKR}$ in large-scale asynchronous systems.