Papers updated in last 183 days (1833 results)
Server-Aided Anonymous Credentials
This paper formalizes the notion of server-aided anonymous credentials (SAACs), a new model for anonymous credentials (ACs) where, in the process of showing a credential, the holder is helped by additional auxiliary information generated in an earlier (anonymous) interaction with the issuer. This model enables lightweight instantiations of 'publicly verifiable and multi-use' ACs from pairing-free elliptic curves, which is important for compliance with existing national standards. A recent candidate for the EU Digital Identity Wallet, BBS#, roughly adheres to the SAAC model we have developed; however, it lacks formal security definitions and proofs.
In this paper, we provide rigorous definitions of security for SAACs, and show how to realize SAACs from the weaker notion of keyed-verification ACs (KVACs) and special types of oblivious issuance protocols for zero-knowledge proofs. We instantiate this paradigm to obtain two constructions: one achieves statistical anonymity with unforgeability under the Gap $q$-SDH assumption, and the other achieves computational anonymity and unforgeability under the DDH assumption.
On the Independence Heuristic in the Dual Attack
Post-quantum cryptography deals with the development and analysis of cryptographic schemes that are assumed to be secure even against attackers with access to powerful quantum computers. Among the main candidates for quantum-safe solutions are cryptographic schemes whose security is based on classic lattice problems such as the bounded-distance decoding (BDD) problem or the learning with errors (LWE) problem. In this work we contribute to the analysis of an attack category against these problems called the dual attack.
Our first contribution is to provide theoretical counterarguments against a so-called independence assumption, which was used in earlier works on this attack and which has been shown to contradict practical experiments. We then provide estimates on the success probability and the cost of the dual attack against the decisional version of the BDD problem. These estimates are derived both rigorously and heuristically. Finally, we provide experimental evidence that confirms these results.
Circular Insecure Encryption: from Long Cycles to Short Cycles
A length-$n$ encryption cycle consists of a sequence of $n$ keys, each encrypting the next, forming a cycle, and an encryption scheme is $n$-circular secure if a length-$n$ encryption cycle is computationally indistinguishable from encryptions of zeros. An interesting problem is whether CPA security implies circular security; this has been shown to be false. Using standard cryptographic assumptions and LWE, it was shown that within the class of CPA-secure encryption schemes, for any $n$, there exists an $n$-circular insecure encryption scheme. Furthermore, there exists a particular encryption scheme that is $n$-circular insecure for all $n$. Following these results, it is natural to ask whether circular insecurity of a particular length implies circular insecurity of different lengths, and of multiple lengths. We answer this question affirmatively in this paper. We constructively prove that a CPA-secure encryption scheme that is insecure in the presence of encryption cycles of length $n$ implies the existence of such a scheme for encryption cycles of any length $m < n$. The constructed $m$-circular insecure scheme may have the same message space as the $n$-circular insecure encryption scheme, and our results apply to both the public-key and symmetric-key settings.
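(Illustrative sketch, not from the paper: what a length-$n$ key cycle looks like, with a toy XOR-stream cipher standing in for an arbitrary CPA-secure scheme.)

    import hashlib, os

    def keystream(key: bytes, n: int) -> bytes:
        # toy PRG: expand key into n pseudorandom bytes (SHA-256 in counter mode)
        out, ctr = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
            ctr += 1
        return out[:n]

    def enc(key: bytes, msg: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))

    n = 4
    keys = [os.urandom(32) for _ in range(n)]
    # length-n encryption cycle: c_i = Enc(k_i, k_{(i+1) mod n})
    cycle = [enc(keys[i], keys[(i + 1) % n]) for i in range(n)]
    # n-circular security asks that `cycle` be indistinguishable from
    # encryptions of zeros: [enc(keys[i], bytes(32)) for i in range(n)]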
Tweakable Permutation-based Luby-Rackoff Constructions
Liskov, Rivest, and Wagner, in their seminal work, formulated tweakable blockciphers and proposed two blockcipher-based design paradigms, LRW1 and LRW2, where the basic design strategy is to xor the masked tweak into the input and output of a blockcipher. The 2-round cascaded LRW2 and 4-round cascaded LRW1 have been proven secure up to $2^{3n/4}$ queries, but $n$-bit optimal security remains elusive for these designs. In the same paper, Liskov et al. posed the open challenge of embedding the tweak directly in the blockcipher; to address this, Goldenberg et al. introduced the tweakable Luby-Rackoff (LR) constructions. They showed that if the internal primitives are random functions, then for multi-block tweaks the construction needs a number of rounds that grows with the tweak length to be optimally $n$-bit CPA secure, and more rounds still to be optimally $n$-bit CCA secure. Since blockciphers can be designed much more efficiently than pseudorandom functions, in many practical applications the internal primitives of LR ciphers are instantiated as blockciphers; this, however, leads to a birthday-bound factor, which is not ideal for, say, lightweight cryptography.
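(For reference, the LRW2 masking strategy in a few lines of Python; the ``blockcipher'' and the XOR-universal hash below are toy placeholders, illustrative only.)

    import hashlib

    BLOCK = 16

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def toy_blockcipher(key: bytes, block: bytes) -> bytes:
        # stand-in for a real blockcipher such as AES (not even a permutation)
        return hashlib.sha256(key + block).digest()[:BLOCK]

    def axu_hash(hkey: bytes, tweak: bytes) -> bytes:
        # stand-in for an (almost-)XOR-universal hash of the tweak
        return hashlib.sha256(hkey + tweak).digest()[:BLOCK]

    def lrw2(key: bytes, hkey: bytes, tweak: bytes, msg: bytes) -> bytes:
        mask = axu_hash(hkey, tweak)
        # LRW2: xor the masked tweak into the blockcipher's input and output
        return xor(toy_blockcipher(key, xor(msg, mask)), mask)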
This paper addresses the following two key questions affirmatively: (1) Can Goldenberg et al.'s results be extended to LR constructions with random permutations as internal primitives without compromising the optimal $n$-bit security? (2) Can the number of rounds required for handling long tweaks be reduced?
We formally define TLR-compatible functions for processing the tweak, which when composed with 4 rounds and 5 rounds of the LR construction with random permutations as internal primitives give us, respectively, $n$-bit CPA and CCA secure tweakable permutations. For the security analysis, we prove a general Mirror Theory result for three permutations. We also propose instantiating TLR-compatible functions with one-round LR, where a permutation (resp., two AXU hash functions) is used to mask single-block tweaks (resp., variable-length tweaks), thus obtaining $n$-bit CPA (resp., CCA) secure tweakable permutation candidates whose round counts are a significant reduction from the tweak-length-dependent results of Goldenberg et al.
We further extend the implications of our analysis of permutation-based LR as follows:
(1) We show $n$-bit CPA (resp., CCA) security of the 4-round (resp., 5-round) permutation-based LR construction, which is quite an improvement over the existing birthday-bound security proved by Guo et al.
(2) We propose a new design paradigm for $n$-bit secure MACs, which hashes the message, passes the resulting diblock tag through four rounds of permutation-based LR, and truncates the left block.
Merkle Mountain Ranges are Optimal: On Witness Update Frequency for Cryptographic Accumulators
We study append-only set commitments with efficient updates and inclusion proofs, also known as cryptographic accumulators. In particular, we examine how often the inclusion proofs (or witnesses) for individual items must change as new items are added to the accumulated set. Using a compression argument, we show unconditionally that to accumulate a set of $n$ items, any construction with a succinct commitment ($O(\mathrm{polylog}\,n)$ storage) must induce at least $\Omega(n \log n / \log\log n)$ total witness updates as items are sequentially added. In a certain regime, we strengthen this bound to $\Omega(n \log n)$ total witness updates. These lower bounds hold not just in the worst case, but with overwhelming probability over a random choice of the accumulated set. Our results show that a close variant of the Merkle Mountain Range, an elegant construction that has become popular in practice, is essentially optimal.
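(A back-of-the-envelope illustration of where the $n \log n$ comes from in a Merkle Mountain Range, not the paper's proof: each merge of two equal-size peaks lengthens the root path, and hence the witness, of every leaf beneath the new node.)

    import math

    def total_witness_updates(n: int) -> int:
        peaks = []     # sizes of perfect subtrees ("mountains"), left to right
        updates = 0
        for _ in range(n):
            size = 1   # a new leaf starts as its own peak
            while peaks and peaks[-1] == size:
                peaks.pop()       # merge two equal-size peaks under a new node;
                size *= 2         # every leaf beneath the new node gains one
                updates += size   # additional witness element
            peaks.append(size)
        return updates

    for n in (2 ** 10, 2 ** 16, 2 ** 20):
        # ratio approaches 1, i.e., about n*log2(n) total updates
        print(n, total_witness_updates(n) / (n * math.log2(n)))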
Assessing the Impact of a Variant of MATZOV's Dual Attack on Kyber
The dual attacks on the Learning With Errors (LWE) problem are currently a subject of controversy. In particular, the results of [Matzov, 2022], which claim to significantly lower the security level of CRYSTALS-Kyber, a lattice-based cryptosystem currently being standardized by NIST, are not widely accepted. The analysis behind their attack depends on a series of assumptions that, in certain scenarios, have been shown to contradict established theorems or well-tested heuristics [Ducas, Pulles, CRYPTO 2023].
In this paper, we introduce a new dual lattice attack on LWE, drawing from ideas in coding theory. Our approach revisits the dual attack proposed by [Matzov, 2022], replacing modulus switching with an efficient decoding algorithm. This decoding is achieved by generalizing polar codes over $\mathbb{Z}_q$, and we confirm their strong distortion properties through benchmarks. This modification enables a reduction from small-LWE to plain-LWE, with a notable decrease in the secret dimension. Additionally, we replace the enumeration step in the attack by assuming the secret is zero for the portion being enumerated, iterating this assumption over various choices for the enumeration part.
We analyze our attack without using the flawed independence assumptions of [Matzov, 2022], and we fully back up our analysis with experimental evidence.
Lastly, we assess the complexity of our attack on CRYSTALS-Kyber, showing that the security levels for Kyber-512/768/1024 are 3.5/11.9/12.3 bits below the NIST requirements (143/207/272 bits) in the same nearest-neighbor cost model as in the Kyber submission.
All in all, the cost of our attack matches, and in some cases even slightly beats, the complexities originally claimed by the attack of [Matzov, 2022].
The Large Block Cipher Family Vistrutah
Vistrutah is a large block cipher with block sizes of 256 and 512 bits. It iterates a step function that applies two AES rounds to each 128-bit block of the state, followed by a state-wide cell permutation. Like Simpira, Haraka, Pholkos, and ASURA, Vistrutah leverages AES instructions to achieve high performance.
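(Illustrative only: a Python sketch of the step-function shape described above; fake_aes_round and rotate_cells are hypothetical stand-ins, since Vistrutah's actual round keys and cell permutation are specified in the paper.)

    import hashlib

    def fake_aes_round(rk: bytes, block: bytes) -> bytes:
        # stand-in for one hardware AES round (AESENC); demo only
        return hashlib.sha256(rk + block).digest()[:16]

    def rotate_cells(state: bytes) -> bytes:
        # stand-in for the state-wide cell permutation (the real one differs)
        return state[1:] + state[:1]

    def vistrutah_step(state: bytes, rk1: bytes, rk2: bytes) -> bytes:
        # two AES rounds on each 128-bit block, then a state-wide permutation
        blocks = [state[i:i + 16] for i in range(0, len(state), 16)]
        blocks = [fake_aes_round(rk2, fake_aes_round(rk1, b)) for b in blocks]
        return rotate_cells(b"".join(blocks))

    state = bytes(32)                      # 256-bit variant; 512-bit uses 64 bytes
    state = vistrutah_step(state, b"k1" * 8, b"k2" * 8)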
For each component of Vistrutah, we conduct a systematic evaluation of functions that can be efficiently implemented on both Intel and Arm architectures. We therefore expect them to perform efficiently on any recent vector instruction set architecture (ISA) with AES support. Our evaluation methodology combines latency estimation on an abstracted vector ISA with security analysis. The goal is to maximize the ratio of "bits of security per unit of time", i.e., to achieve the highest security for a given performance target, or equivalently, the best performance for a given security level within this class of designs. Implementations confirm the accuracy of our latency model. Vistrutah even performs significantly better than Rijndael-256-256.
We support our security claims with a comprehensive ad-hoc cryptanalysis. An isomorphism between Vistrutah-512, the 512-bit wide variant, and the AES, allows us to also leverage the extensive cryptanalysis of AES and apply it to Vistrutah-512.
A core design principle is the use of an inline key schedule: all round keys are computed during each encryption or decryption operation without requiring memory storage. In fact, rekeying has no associated overheads. Key schedules like the AES’s must precompute and store round keys in memory for acceptable performance. However, in 2010 Kamal and Youssef showed this makes cold boot attacks more effective. Vistrutah’s approach minimizes leakage to at most one value during context switches. Furthermore, expensive key schedules reduce key agility, limiting the design of modes of operation.
Vistrutah is particularly well-suited for birthday-bound modes of operation, including Synthetic IV modes and Accordion modes for 256-bit block ciphers. It can serve as a building block for compression functions (such as Matyas-Meyer-Oseas) in wide Merkle-Damgård hash functions. Additionally, it can implement "ZIP" wide pseudo-random functions as recently proposed by Florez-Gutierrez et al. in 2024.
Finally, we present short, i.e., reduced-round, versions of Vistrutah which are analyzed taking into account the restrictions imposed on attackers by specific modes of operation. In particular, we model the use of the block ciphers in Hash-Encrypt-Hash (HEH) constructions such as HCTR2 as well as in ForkCiphers. These short versions of Vistrutah can be used to accelerate modes of operation without sacrificing security.
Another Look at the Quantum Security of the Vectorization Problem with Shifted Inputs
Cryptographic group actions provide a basis for simple post-quantum generalizations of many cryptographic protocols based on the discrete logarithm problem (DLP). However, many advanced group action-based protocols do not solely rely on the core group action problem (the so-called vectorization problem), but also on variants of this problem, to either improve efficiency or enable new functionalities. In particular, the security of the CSI-SharK threshold signature protocol relies on the hardness of the Vectorization Problem with Shifted Inputs, where (in DLP formalism) the adversary not only receives $g$ and $g^x$, but also $g^{c_i x}$ for multiple known values of $c_i$. A natural open question is whether the extra data provided to the adversary in this variant allows them to solve the underlying problem more efficiently.
In this paper, we revisit the concrete quantum security of this problem. We start from a quantum multiple hidden shift algorithm of Childs and van Dam, which to the best of our knowledge was never applied in cryptography before. We specify algorithms for its subroutines and we provide concrete complexity estimates for both these subroutines and the overall algorithm.
We apply our analysis to the CSI-SharK protocol. In prior analyses based on Kuperberg's algorithms, group action evaluations contributed a significant part of the overall T-gate cost. For CSI-SharK's suggested parameters, our new approach requires significantly fewer calls to the group action evaluation subroutine, leading to significant complexity improvements overall. We describe two instances of our approach: one lowers the T-gate complexity, and the other the QRAM requirements. We obtain significant speedups -- even in both metrics simultaneously -- and we quantify the degradation of the quantum security of the protocol as the number of curves in the public key increases.
More Efficient Isogeny Proofs of Knowledge via Canonical Modular Polynomials
Proving knowledge of a secret isogeny has recently been proposed as a means to generate supersingular elliptic curves of unknown endomorphism ring, but is equally important for cryptographic protocol design as well as for real-world deployments. Recently, Cong, Lai and Levin (ACNS'23) investigated the use of general-purpose (non-interactive) zero-knowledge proof systems for proving knowledge of an isogeny of degree $2^k$ between supersingular elliptic curves. In particular, their approach is to model this relation via a sequence of successive steps of a walk in the supersingular isogeny graph and to show that the respective $j$-invariants are roots of the second modular polynomial. They then arithmetize this relation and show that this approach, when compared to state-of-the-art tailor-made proofs of knowledge by Basso et al. (EUROCRYPT'23), gives a 3-10× improvement in proof and verification times, with comparable proof sizes.
In this paper we ask whether we can further improve the modular polynomial-based approach and generalize its application to isogenies of other prime degrees $\ell$, as used in some recent isogeny-based constructions. We answer both questions affirmatively, by designing efficient arithmetizations for each such $\ell$ that achieve an improvement over Cong, Lai and Levin of up to 48%.
Our main technical tool and source of efficiency gains is to switch from classical modular polynomials to canonical modular polynomials. Adapting the well-known results on the former to the latter polynomials, however, is not straightforward and requires some technical effort. We prove various interesting connections via novel use of resultant theory, and advance the understanding of canonical modular polynomials, which might be of independent interest.
On the Power of Polynomial Preprocessing: Proving Computations in Sublinear Time, and More
Cryptographic proof systems enable a verifier to be convinced of a computation's correctness without re-executing it; common efficiency requirements include both succinct proofs and fast verification. In this work we put forth the general study of cryptographic proof systems with \textit{sublinear} proving time (after preprocessing).
Prior work has achieved sublinear proving only for limited computational settings (e.g., vector commitments and lookup arguments), relying on specific assumptions or through non-black-box use of cryptographic primitives. In this work we lift many of these limitations through the systematic study of a specific object: polynomial commitments (PC) with sublinear proving time, a choice motivated by the crucial role that PC play in the design of efficient cryptographic schemes.
We show a simple construction of a PC with sublinear prover based on any vector commitment scheme (VC) and any preprocessing technique for fast polynomial evaluation. We prove that this PC satisfies \emph{evaluation binding}, which is the standard security notion for PC, and show how to expand our construction to achieve the stronger notion of knowledge soundness (extractability).
Next, we study applications of PCs with sublinear prover. The first is to construct \emph{index-efficient} SNARKs, i.e., SNARKs where the prover is sublinear, after preprocessing, in the size of the index (the NP-relation describing the proven statement). We propose a method to transform a class of standard Polynomial Interactive Oracle Proofs (PIOPs) into index-efficient PIOPs, obtaining two results that advance the state of the art: the first lookup argument for unstructured tables in which the prover is sublinear in the size of the table, while making only black-box use of a VC and thus allowing instantiations from generic assumptions such as collision-resistant hash functions, and the first transparent SNARK for circuits where the prover is sublinear in the number of addition gates.
Finally, our last application is a transformation that builds \emph{UC-secure} SNARKs from simulation-extractable ones, with an approximately linear overhead in proving time (as opposed to quadratic in prior work).
How to (not) combine Oblivious Pseudorandom Functions
An oblivious pseudorandom function (OPRF) is a cryptographic tool that enables fast and secure authentication and key derivation from passwords. In the past few years, the adoption of OPRFs has flourished and today they are at the core of the PIN-protected backup methods of WhatsApp and Signal, and of privacy-enhancing browser technologies. All vendors deploy the so-called 2Hash-Diffie-Hellman (2HashDH) OPRF, which relies on discrete-logarithm-type assumptions that are standard yet known to be prone to quantum attacks.
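(For reference, 2HashDH evaluates $F(k, x) = H_2(x, H_1(x)^k)$ via a blinded exchange; the sketch below uses a toy Schnorr group and a simplified hash-to-group, so it is illustrative only, not deployment code.)

    import hashlib, secrets

    p, q, g = 1019, 509, 4      # toy Schnorr group: p = 2q+1, g has order q

    def h1(x: bytes) -> int:
        # toy hash-to-group (real deployments use a proper hash-to-group map)
        e = int.from_bytes(hashlib.sha256(b"H1" + x).digest(), "big") % q
        return pow(g, e, p)

    def h2(x: bytes, u: int) -> bytes:
        return hashlib.sha256(b"H2" + x + u.to_bytes(2, "big")).digest()

    k = secrets.randbelow(q - 1) + 1          # server's OPRF key
    x = b"password"

    r = secrets.randbelow(q - 1) + 1          # client blinds its input
    blinded = pow(h1(x), r, p)
    evaluated = pow(blinded, k, p)            # server acts on the blinded element
    # client unblinds and hashes: F(k, x) = H2(x, H1(x)^k)
    y = h2(x, pow(evaluated, pow(r, -1, q), p))
    assert y == h2(x, pow(h1(x), k, p))       # server never saw x, client never saw k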
Recent advancements in cryptographic research (e.g., Dodgson et al., Eurocrypt 2025) have brought up post-quantum OPRFs that are fast enough to deploy them in the setting of, e.g., WhatsApp or Signal. Yet none of these constructions are based on standard assumptions.
In this work, we investigate combiners for OPRFs, namely a "best-of-both" combination of a classical and a post-quantum OPRF that is secure as long as one of them is. First, we give formal evidence that so-called black-box combiners do not exist, indicating that combining OPRFs is subtle and bears similarities with other powerful yet hard-to-combine cryptographic primitives like oblivious transfer (OT).
We then give a (non-black-box) combiner for OPRFs and show that it can be instantiated with 2HashDH and the currently most efficient post-quantum OPRFs based on Legendre symbols. In particular, the reliance on the less standard Legendre-based hardness assumption does not harm the security of 2HashDH. This gives vendors a viable path to lift the security of their OPRF deployments to a post-quantum level.
Cryptography meets worst-case complexity: Optimal security and more from iO and worst-case assumptions
We study several problems in the intersection of cryptography and complexity theory based on the following high-level thesis.
1) Obfuscation can serve as a general-purpose worst-case to average-case reduction, reducing the existence of various forms of cryptography to corresponding worst-case assumptions.
2) We can therefore hope to overcome barriers in cryptography and average-case complexity by (i) making worst-case hardness assumptions beyond standard ones (such as $\mathsf{P} \neq \mathsf{NP}$), and (ii) leveraging worst-case hardness reductions, either proved by traditional complexity-theoretic methods or facilitated further by cryptography.
Concretely, our results include:
- Optimal hardness. Assuming sub-exponential indistinguishability obfuscation, we give fine-grained worst-case to average-case reductions for circuit-SAT. In particular, if finding an $\mathsf{NP}$-witness requires nearly brute-force time in the worst case, then the same is true for some efficiently sampleable distribution. In fact, we show that under these assumptions, there exist families of one-way functions with optimal time-probability security tradeoffs.
Under an additional, stronger assumption -- the optimal non-deterministic hardness of refuting circuit-SAT -- we construct additional cryptographic primitives such as PRGs and public-key encryption that have such optimal time-advantage security tradeoffs.
- Direct Product Hardness. Again assuming $i\mathcal{O}$ and optimal non-deterministic hardness of SAT refutation, we show that the ``(search) $k$-fold SAT problem'' -- the computational task of finding satisfying assignments to $k$ circuit-SAT instances simultaneously -- has (optimal) hardness that decays exponentially in $k$ for bounded-time algorithms. In fact, we build ``optimally secure one-way product functions'' (Holmgren-Lombardi, FOCS '18), demonstrating that optimal direct product theorems hold for some choice of one-way function family.
- Single-Input Correlation Intractability. Assuming either of the assumptions above, we show a worst-case to average-case reduction for strong forms of single-input correlation intractability. That is, powerful forms of correlation-intractable hash functions exist provided that a collection of \emph{worst-case} ``correlation-finding'' problems are hard.
- Non-interactive Proof of Quantumness. Assuming sub-exponential $i\mathcal{O}$ and OWFs, we give a non-interactive proof of quantumness based on the worst-case hardness of the white-box Simon problem. In particular, this proof-of-quantumness result does not explicitly assume quantum advantage for an average-case task.
To help prove our first two results, we show along the way how to improve the Goldwasser-Sipser ``set lower bound'' protocol to have communication complexity quadratically smaller in the multiplicative approximation error $\epsilon$.
Novel approximations of elementary functions in zero-knowledge proofs
In this paper, we study the computation of complex mathematical functions in statements executed on top of zero-knowledge proofs (ZKP); these functions may include roots, exponentials and logarithms, trigonometry, etc. While existing approaches to these functions in privacy-preserving computations (and sometimes also in general-purpose processors) have relied on polynomial approximation, more powerful methods are available for ZKP. In this paper, we note that in ZKP, all algebraic functions are exactly computable. Recognizing that, we proceed to the approximation of transcendental functions with algebraic functions. We develop methods of approximation, instantiate them on a number of common transcendental functions, and benchmark their precision and efficiency in comparison with the best polynomial approximations.
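(A minimal illustration of the "algebraic functions are exactly computable" point, not code from the paper: the prover supplies the value as a witness, and the statement checks an algebraic constraint instead of evaluating the function in-circuit.)

    P = 2 ** 61 - 1   # toy prime field

    def check_sqrt_gadget(x: int, y: int) -> bool:
        # constraint form of y = sqrt(x): verify y*y == x over the field
        # (a real circuit adds a canonicity/range check to pin down which root)
        return (y * y - x) % P == 0

    x = 1234567 ** 2 % P
    assert check_sqrt_gadget(x, 1234567)
    # transcendental functions (exp, log, sin, ...) admit no such exact
    # constraint, hence the paper's algebraic approximations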
Shorter VOLE-in-the-Head-based Signatures from Vector Semi-Commitment
The VOLE-in-the-Head (VOLEitH) paradigm transforms VOLE-based zero-knowledge proofs into post-quantum signature schemes by allowing public verification. We introduce reduced VOLE-in-the-Head (rVOLEitH), which incorporates the Vector Semi-Commitment (VSC) technique. VSC, originally developed for MPC-in-the-Head (MPCitH) schemes, reduces commitment size while maintaining security by relaxing the binding property. We adapt the ideal cipher version of VSC (IC-VSC) into the VOLEitH framework, leading to a reduction in signature size. Our security analysis proves that rVOLEitH achieves existential unforgeability under chosen-message attacks (EUF-CMA) in the ideal cipher model. Compared to existing VOLEitH-based signatures, our approach reduces signature size by up to 6.0% while improving computational efficiency.
Furthermore, we analyze the impact of eliminating individual seed commitments and demonstrate a practical attack against a recently proposed VOLEitH variant that lacks such commitments. Our results establish rVOLEitH as an optimized and secure alternative for post-quantum signatures, improving both efficiency and security in the VOLEitH paradigm.
SCIF: Privacy-Preserving Statistics Collection with Input Validation and Full Security
Secure aggregation is the distributed task of securely computing a sum of values (or a vector of values) held by a set of parties, revealing only the output (i.e., the sum) of the computation. Existing protocols, such as Prio (NSDI '17), Prio+ (SCN '22), Elsa (S&P '23), and Whisper (S&P '24), support secure aggregation with input validation to ensure inputs belong to a specified domain. However, when malicious servers are present, these protocols primarily guarantee privacy but not input validity. Also, malicious server(s) can cause the protocol to abort. We introduce SCIF, a novel multi-server secure aggregation protocol with input validation that remains secure even in the presence of malicious actors, provided fewer than one-third of the servers are malicious. Our protocol overcomes previous limitations by providing two key properties: (1) guaranteed output delivery, ensuring malicious parties cannot prevent the protocol from completing, and (2) guaranteed input inclusion, ensuring no malicious party can prevent an honest party's input from being included in the computation. Together, these guarantees provide strong resilience against denial-of-service attacks. Moreover, SCIF offers these guarantees without increasing client costs over Prio and keeps server costs moderate. We present a robust end-to-end implementation of SCIF and demonstrate the ease with which it can be instrumented by integrating it into a simulated Tor network for privacy-preserving measurement.
Double Auction Meets Blockchain: Consensus from Scored Bid-Assignment
A double auction system, where buyers and sellers trade through bids, requires a transparent and immutable mechanism to record allocation results. This demand can be met with robust ledgers that ensure persistence and liveness, as exemplified by the Bitcoin blockchain (Eurocrypt '15). While existing blockchain-aided auction systems often rely on secure smart contracts or layer-2 techniques, this work proposes a more fundamental approach by constructing a provably secure blockchain protocol directly from the computation of bid allocations. The core component is an alternative proof-of-work (PoW) scheme based on a scored generalized multiple assignment problem (SGMAP), integrated into a tailored blockchain protocol. Unlike conventional PoW-based protocols, our leader selection is driven by block scores derived from the SGMAP scoring function, which is designed to be flexible enough to define the difficulty level and accommodate real-life requirements of the underlying double auction system. We prove persistence and a modified liveness property for our design, and present implementation results to validate its robustness and practicality.
FABLE: Batched Evaluation on Confidential Lookup Tables in 2PC
Secure two-party computation (2PC) is a cryptographic technique that enables two mutually distrusting parties to jointly evaluate a function over their private inputs. We consider a 2PC primitive called confidential lookup table (LUT) evaluation, which is useful in privacy-preserving ML inference and data analytics. In this setting, a server holds a confidential LUT and evaluates it over an input secret-shared between a client and the server, producing a secret-shared output. Existing approaches for 2PC LUT evaluation suffer from high asymptotic complexity and practical inefficiency, with some designs lacking confidentiality guarantees for the LUT. Recognizing that many applications involving confidential LUT evaluation require processing multiple inputs with the same LUT, we propose FABLE, a system designed to efficiently evaluate a LUT on a large batch of queries simultaneously. Compared to state-of-the-art confidential LUT evaluation methods, FABLE achieves a 28.46-101.47× speedup in LAN environments and a 50.10-392.93× speedup in WAN environments.
Multi-Key Fully Homomorphic Encryption: removing noise flooding in distributed decryption via the smudging lemma on discrete Gaussian distribution
Current Multi-key Fully Homomorphic Encryption (MKFHE) schemes need to add exponential noise in the distributed decryption phase to ensure the simulatability of partial decryptions. Such large noise causes the ciphertext modulus of the scheme to increase exponentially compared to single-key Fully Homomorphic Encryption (FHE), further reducing the efficiency of the scheme and making the lattice problem on which the scheme relies have a sub-exponential approximation factor (which means that the security of the scheme is reduced). To address this problem, this paper analyzes in detail the noise in the partial decryptions of LWE-based MKFHE. We point out that this noise is composed of the private key and the noise in the initial ciphertext. Therefore, as long as the encryption scheme is leakage-resilient and the noise in partial decryption is independent of the noise in the initial ciphertext, the semantic security of the ciphertext can be guaranteed. In order to make the noise in the initial ciphertext independent of the noise in the partial decryption, this paper proves a smudging lemma on discrete Gaussian distributions and achieves this goal by multiplying the initial ciphertext by a ``dummy'' ciphertext with a plaintext of 1. Based on the above method, this paper removes the exponential noise in the distributed decryption phase for the first time and reduces the ciphertext modulus of MKFHE to the same level as single-key FHE.
Fast, private and regulated payments in asynchronous networks
We propose a decentralized asset-transfer system that enjoys full privacy: no party can learn the details of a transaction, except for its issuer and its recipient. Furthermore, the recipient learns only the amount of the transaction. Our system does not rely on consensus or synchrony assumptions, and is therefore responsive, since it runs at actual network speed. Under the hood, every transaction creates a consumable coin equipped with a non-interactive zero-knowledge proof (NIZK) that confirms that the issuer has sufficient funds without revealing any information about her identity, the recipient's identity, or the payment amount. Moreover, we equip our system with a regulatory enforcement mechanism that can be used to regulate transfer limits or restrict specific addresses from sending or receiving funds, while preserving the system's privacy guarantees.
Finally, we report on Paxpay, our implementation of Fully Private Asset Transfer (FPAT) that uses the Gnark library for the NIZKs. In our benchmark, Paxpay exhibits better performance than earlier proposals that either ensure only partial privacy, require some kind of network synchrony or do not implement regulation features. Our system thus reconciles privacy, responsiveness, regulation enforcement and performance.
Hierarchical Identity-Based Matchmaking Encryption
Identity-based matchmaking encryption (IB-ME) is an advanced encryption scheme that allows both the sender and the receiver to specify their respective identities. We study the notion of \emph{hierarchical IB-ME (HIB-ME)}, which augments IB-ME with delegation capabilities.
Specifically, we first formalize HIB-ME and construct it from hierarchical identity-based encryption and hierarchical identity-based signatures. Moreover, as applications of HIB-ME, we show two chosen-ciphertext-secure (H)IB-ME constructions for different security levels.
ZHE: Efficient Zero-Knowledge Proofs for HE Evaluations
Homomorphic Encryption (HE) allows computations on encrypted data without decryption. It can be used where users' information is to be processed by an untrusted server, and has been a popular choice in privacy-preserving applications. However, in order to obtain meaningful results, we have to assume an honest-but-curious server, i.e., one that faithfully follows what it was asked to do. If the server is malicious, there is no guarantee that the computed result is correct. The notion of verifiable HE (vHE) was introduced to detect a malicious server's behavior, but current vHE schemes are either more than four orders of magnitude slower than the underlying HE operations (Atapoor et al., CIC 2024) or fast but incompatible with server-side private inputs (Chatel et al., CCS 2024).
In this work, we propose a vHE framework ZHE: efficient Zero-Knowledge Proofs (ZKPs) that prove the correct execution of HE evaluations while protecting the server's private inputs. More precisely, we first design two new highly efficient ZKPs for modulo operations and (Inverse) Number Theoretic Transforms (NTTs), two of the basic operations of HE evaluations. Then we build a customized ZKP for HE evaluations, which is scalable, enjoys a fast prover time and has a non-interactive online phase. Our ZKP is applicable to all Ring-LWE based HE schemes, such as BGV and CKKS. Finally, we implement our protocols for both BGV and CKKS and conduct extensive experiments on various HE workloads. Compared to the state-of-the-art works, both our prover time and verifier time are improved; notably, our prover cost is only roughly 27-36× more expensive than the underlying HE operations, which is two to three orders of magnitude cheaper than the state of the art.
XHMQV: Better Efficiency and Stronger Security for Signal’s Initial Handshake based on HMQV
The Signal protocol is the most widely deployed end-to-end-encrypted messaging protocol. Its initial handshake protocol X3DH allows parties to asynchronously derive a shared session key without the need to be online simultaneously, while providing implicit authentication, forward secrecy, and a form of offline deniability. The X3DH protocol has been extensively studied in the cryptographic literature and is acclaimed for its strong "maximum-exposure" security guarantees, hedging against compromises of users' long-term keys and medium-term keys but also the ephemeral randomness used in the handshake. This maximum-exposure security is achieved by deriving keys from the concatenation of 3–4 Diffie–Hellman (DH) secrets, each combining two long-term, medium-term, or ephemeral DH shares.
Remarkably, X3DH's approach of concatenating plain DH combinations is sub-optimal, both in terms of maximum-exposure security and performance. Indeed, Krawczyk's well-known HMQV protocol (Crypto '05) is a high-performance, DH-based key exchange that provides strong security against long-term and ephemeral key compromise. One might hence wonder: why not base Signal's initial handshake on HMQV?
In this work, we study this question and show that a carefully adapted variant of HMQV, which we call XHMQV, indeed enables stronger security and efficiency while matching the constraints of Signal's initial handshake. Most notably, HMQV does not work as a drop-in replacement for X3DH, as the latter's asynchronicity requires the protocol to handle cases where one party runs out of ephemeral keys (pre-uploaded to the Signal server). Our XHMQV design hence augments HMQV with medium-term keys analogous to those used in X3DH. We prove that XHMQV provides security in all 3–4 compromise scenarios where X3DH does and additionally in 1–2 further scenarios, strengthening the handshake's maximum-exposure guarantees while using more efficient group operations. We further confirm that our XHMQV design achieves deniability guarantees comparable to X3DH. Our security model is the first to capture Signal's long-term key reuse between DH key exchange and signatures, which may be of independent interest.
Verifiable Decapsulation: Recognizing Faulty Implementations of Post-Quantum KEMs
Cryptographic schemes often contain verification steps that are essential for security. Yet, faulty implementations missing these steps can easily go unnoticed, as the schemes might still function correctly. A prominent instance of such a verification step is the re-encryption check in the Fujisaki-Okamoto (FO) transform, which plays a central role in the post-quantum key encapsulation mechanisms (KEMs) considered in NIST's PQC standardization process. In KEMs built from FO, decapsulation performs a re-encryption check that is essential for security, but not for functionality. In other words, it will go unnoticed if this essential step is omitted or wrongly implemented, opening the door to key recovery attacks. Notably, such an implementation flaw was present in HQC's reference implementation and was only noticed after 19 months.
In this work, we develop a modified FO transform that binds re-encryption to functionality, ensuring that a faulty implementation which skips re-encryption will be exposed through basic correctness tests. We do so by adapting the "verifiable verification" methodology of Fischlin and Günther (CCS 2023) to the context of FO-based KEMs. More concretely, by exporting an unpredictable confirmation code from the public key encryption and embedding it into the key derivation function, we can confirm that (most of) the re-encryption step was indeed performed during decapsulation. We formalize this concept, establish modified FO transforms, and prove how unpredictable PKE confirmation codes turn into noticeable correctness errors for faulty implementations. We show how to apply this technique to ML-KEM and HQC, both with negligible overhead, by leveraging the entropy lost through ciphertext compression or truncation. We confirm that our approach works through mathematical proofs, as well as experimental data. Our experiments show that the implementation flaw in HQC's reference implementation indeed makes basic test cases fail when following our approach.
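(Schematic illustration of the decapsulation pattern in question, not ML-KEM/HQC code: the toy "PKE" below uses sk = pk and a hash-derived pad purely so the sketch runs end to end.)

    import hashlib, os

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def G(m: bytes) -> bytes:            # derive encryption coins from the message
        return hashlib.sha256(b"G" + m).digest()

    def H(m: bytes, c: bytes) -> bytes:  # derive the session key
        return hashlib.sha256(b"H" + m + c).digest()

    def enc(pk: bytes, m: bytes, coins: bytes) -> bytes:
        # toy deterministic "PKE": coin-derived pad, coins sent in the clear
        return coins + xor(m, hashlib.sha256(pk + coins).digest())

    def dec(sk: bytes, c: bytes) -> bytes:
        coins, body = c[:32], c[32:]
        return xor(body, hashlib.sha256(sk + coins).digest())

    def decaps(pk: bytes, sk: bytes, c: bytes) -> bytes:
        m = dec(sk, c)
        if enc(pk, m, G(m)) != c:        # the essential re-encryption check
            return H(b"implicit-rejection-seed", c)
        return H(m, c)

    pk = sk = os.urandom(32)
    m = os.urandom(32)
    c = enc(pk, m, G(m))
    assert decaps(pk, sk, c) == H(m, c)
    # a tampered ciphertext is caught by the check and implicitly rejected
    assert decaps(pk, sk, c[:-1] + bytes([c[-1] ^ 1])) != H(m, c)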
Nominal State-Separating Proofs
State-separating proofs are a powerful tool for structuring cryptographic arguments so that they are amenable to mechanization, as has been shown through implementations such as SSProve. However, the treatment of separation for heaps has never been satisfactorily addressed. In this work, we present the first comprehensive treatment of nominal state separation in state-separating proofs, using nominal sets. We provide a Rocq library, called Nominal-SSProve, that builds on nominal state separation, supporting mechanized proofs that appear more concise and arguably more elegant.
“Check-Before-you-Solve”: Verifiable Time-lock Puzzles
Time-lock puzzles are cryptographic primitives guaranteeing to the generator that the puzzle cannot be solved in fewer than $T$ sequential computation steps. They have recently found numerous applications, e.g., in fair contract signing and sealed-bid auctions. However, solvers have no a priori guarantee about the solution they will reveal, e.g., about its ``usefulness'' within a certain application scenario. In this work, we propose verifiable time-lock puzzles (VTLPs) that address this by having the generator publish a succinct proof that the solution satisfies certain properties (without revealing anything else about it). Hence solvers are now motivated to ``commit'' resources into solving the puzzle. We propose VTLPs that support proving arbitrary NP relations $R$ about the puzzle solution.
At a technical level, to overcome the performance hurdles of the ``naive'' approach of simply solving the puzzle within a SNARK that also checks $R$, our scheme combines the ``classic'' RSA time-lock puzzle of Rivest, Shamir, and Wagner with novel building blocks for ``offloading'' expensive modular group exponentiations and multiplications from the SNARK circuit. We then propose a second VTLP specifically for checking RSA-based signatures and verifiable random functions (VRFs). Our second scheme does not rely on a SNARK and has several applications, e.g., in the context of distributed randomness generation. Along the road, we propose new constant-size proofs for modular exponent relations over hidden-order groups that may be of independent interest. Finally, we experimentally evaluate the performance of our schemes and report our findings and comparisons with prior approaches.
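(For reference, the RSW puzzle at the core: computing $x^{2^T} \bmod N$ takes $T$ sequential squarings without the factorization, while the generator shortcuts through $\varphi(N)$. Toy parameters below, illustrative only.)

    p, q = 999983, 1000003        # toy primes; real puzzles use an RSA modulus
    N, phi = p * q, (p - 1) * (q - 1)
    T = 10_000                    # hardness parameter: T sequential squarings
    x = 5

    # generator's shortcut: reduce the exponent 2^T modulo phi(N)
    z_fast = pow(x, pow(2, T, phi), N)

    # solver's route: T sequential modular squarings, inherently serial
    z_slow = x
    for _ in range(T):
        z_slow = z_slow * z_slow % N

    assert z_fast == z_slow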
Fabric-X: Redesigning Hyperledger Fabric Architecture for High-throughput Regulated Asset Exchange Applications
The adoption of Distributed Ledger Technology (DLT) for critical financial infrastructures like Central Bank Digital Currencies (CBDCs) is hindered by a significant performance gap. Permissioned blockchains such as Hyperledger Fabric, while conceptually suitable, are limited by architectural bottlenecks in their monolithic peer design and consensus mechanisms, preventing them from achieving the required scale.
This paper presents a fundamental re-architecture of Hyperledger Fabric that addresses these challenges end-to-end. We decompose the monolithic peer into independently scalable microservices for endorsement, validation, and committing. To maximize parallelism, we introduce a transaction dependency graph that enables the safe, concurrent validation of transactions across multiple blocks. Complementing the peer redesign, we introduce Arma, a novel sharded Byzantine Fault Tolerant (BFT) ordering service that dramatically increases throughput by ordering compact transaction digests rather than full transaction payloads. We implemented and benchmarked this framework with a UTXO-based CBDC application. Our evaluation demonstrates a peak throughput exceeding 200,000 transactions per second (TPS) -- a two-orders-of-magnitude improvement over the standard implementation. This work proves that permissioned DLTs can be engineered for national-scale payment systems, providing a resilient and highly performant foundation for practical CBDC deployments and the integration of advanced, computationally intensive features.
Homomorphic Encryption for Large Integers from Nested Residue Number Systems
Existing fully homomorphic encryption (FHE) schemes primarily support a plaintext space defined over a relatively small prime. However, in some important applications of FHE one needs arithmetic over a large prescribed prime. In this paper we construct a new FHE system that is specifically designed for this purpose. Our system composes three layers of residue systems to enable much better performance than was previously possible. Our experiments show that for arithmetic modulo a 256-bit integer, when compared to the TFHE-rs implementation of 256-bit arithmetic, our new system achieves a factor of two thousand better multiplication throughput and a factor of twenty better latency. Moreover, for a 2048-bit prime modulus we achieve far better performance than was previously possible.
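(The basic residue-number-system mechanism that the construction layers three deep, sketched here for a single layer with illustrative moduli.)

    from math import prod

    primes = [2**13 - 1, 2**17 - 1, 2**19 - 1, 2**31 - 1]  # pairwise coprime
    M = prod(primes)

    def to_rns(x: int) -> list:
        return [x % m for m in primes]          # big integer -> small residues

    def mul_rns(a: list, b: list) -> list:
        # multiplication becomes independent small-word multiplications
        return [(x * y) % m for x, y, m in zip(a, b, primes)]

    def from_rns(res: list) -> int:
        # Chinese Remainder Theorem reconstruction
        x = 0
        for r, m in zip(res, primes):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)
        return x % M

    a, b = 123456789, 987654321
    assert from_rns(mul_rns(to_rns(a), to_rns(b))) == a * b % M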
Quantum-safe Signatureless DNSSEC
We present a backward-compatible protocol that leverages a quantum-safe KEM and a MAC to perform signature-less DNSSEC validations in a single UDP query/response exchange. Our experiments targeting NIST level I security for QTYPE A query resolution show that our protocol is practically equivalent to the presently deployed RSA-2048 in terms of bandwidth usage and resolution speeds. Compared to post-quantum signatures, it significantly reduces bandwidth consumption and resolution times. Moreover, with response sizes close to query sizes, it obviates the long-standing issues of IP fragmentation, TCP re-transmits and DDoS amplification attacks.
Fairness in the Wild: Secure Atomic Swap with External Incentives
Atomic swaps enable asset exchanges across blockchains without relying on trusted intermediaries, and are a key component of decentralized finance (DeFi) ecosystems. Recently, Chung, Masserova, Shi, and Thyagarajan introduced Rapidash (Financial Cryptography 2025), an atomic swap protocol that remains incentive compatible under user-miner collusion, by ensuring that the honest strategy forms a coalition-resistant Nash equilibrium. However, their model assumes a closed system where players act solely based on internal protocol incentives. In practice, participants may be influenced by external incentives such as off-chain rewards or adversarial bribes, which can undermine such equilibrium guarantees.
In this work, we introduce a new game-theoretic notion, bounded maximin fairness, which ensures that honest participants remain protected against rational adversaries with arbitrary but bounded external incentives. We construct an atomic swap protocol that satisfies this notion, while preserving the equilibrium properties of prior work in the absence of external influence.
As we show, our protocol is easy to implement and can be instantiated even in Bitcoin’s limited scripting language.
SmallWood: Hash-Based Polynomial Commitments and Zero-Knowledge Arguments for Relatively Small Instances
Zero-knowledge proofs (ZKPs) are a fundamental building block in cryptography, enabling powerful privacy-preserving and verifiable computations. In the post-quantum era, hash-based ZKPs have emerged as a promising direction due to their conjectured resistance to quantum attacks, along with their simplicity and efficiency.
In this work, we introduce SmallWood, a hash-based polynomial commitment scheme (PCS) and zero-knowledge argument system optimized for relatively small instances. Building on the recent degree-enforcing commitment scheme (DECS) from the Threshold-Computation-in-the-Head (TCitH) framework, we refine its formalization and combine it with techniques from Brakedown. This results in a new hash-based PCS that is particularly efficient for polynomials of relatively small degree, outperforming existing approaches in this range.
Leveraging this new PCS, we design a hash-based zero-knowledge argument system that outperforms the state of the art in terms of proof size for small-to-medium witness sizes. Additionally, we present exact zero-knowledge arguments for lattice-based problems using SmallWood, demonstrating highly competitive performance: our scheme yields proof sizes under 25 KB across a wide range of lattice parameters, including Kyber and Dilithium instances.
The complexity of the SupportMinors Modeling for the MinRank Problem
In this note, we provide proven estimates for the complexity of the SupportMinors Modeling, mostly confirming the heuristic complexity estimates contained in the original article.
Compass: Encrypted Semantic Search with High Accuracy
We present Compass, a semantic search system for encrypted data that achieves high accuracy, matching state-of-the-art plaintext search quality, while ensuring the privacy of data, queries, and results, even if the server is compromised. Compass contributes a novel way to traverse a state-of-the-art graph-based semantic search index and a white-box co-design with Oblivious RAM, a cryptographic primitive that hides access patterns, to enable efficient search over encrypted embeddings. With our techniques, Directional Neighbor Filtering, Speculative Neighbor Prefetch, and Graph-Traversal Tailored ORAM, Compass achieves user-perceived latencies within or around a second and is orders of magnitude faster than baselines under various network conditions.
Treebeard: A Scalable and Fault Tolerant ORAM Datastore
We present Treebeard, the first scalable and fault-tolerant Oblivious RAM (ORAM) based datastore designed to protect applications from access pattern attacks. Current ORAM systems face challenges in practical adoption due to their limited ability to handle concurrent workloads, scale effectively, and ensure fault tolerance. We address all three limitations in Treebeard by utilizing a multi-layer architecture that scales horizontally, handling thousands of requests in parallel, while replicating the data to prevent data loss upon failures. Experimental evaluation demonstrates Treebeard's ability to scale linearly, achieving a throughput of 160K ops/sec with 16 machines; this behavior is similar to the enclave-based state of the art, Snoopy. Being fault-tolerant, Treebeard recovers from failures with close to zero downtime and achieves 13.8× the throughput of QuORAM, the latest fault-tolerant ORAM system, even without scaling.
Leftover Hash Lemma(s) Over Cyclotomic Rings
In this work, we propose a novel systematic approach for obtaining leftover hash lemmas (LHLs) over cyclotomic rings. Such LHLs are a fundamental tool in lattice-based cryptography, both in theoretical reductions and in the design of cryptographic primitives. The scattered set of prior works makes it difficult to navigate the landscape and requires substantial effort to understand the mathematical constraints under which the LHL holds over cyclotomic rings. This is especially painful if one's given setting does not fit exactly into prior studies.
We argue that all prior approaches boil down to two different proof strategies, resulting in two main theorems. From there on, we are able to recover all previous flavours of seemingly independent LHLs as corollaries. Moreover, we showcase the power of our interpretation by providing new statements, covering mathematical settings not considered before. Our work further proves LHLs in the presence of leakage for both approaches and provides novel bounds for wide families of leakage functions.
Revisiting Discrete Logarithm Reductions
A reduction showing that the hardness of the discrete logarithm (DL) assumption implies the hardness of the computational Diffie-Hellman (CDH) assumption in groups of order $p$, where $p-1$ is smooth, was first presented by den Boer [Crypto '88]. We also consider groups of prime order $p$, where $p-1$ is somewhat smooth (say, every prime that divides $p-1$ is small). Several practically relevant groups satisfy this condition.
1. We present a concretely efficient version of the reduction for such groups.
In particular, among practically relevant groups, we obtain the most efficient and tightest reduction in the literature for BLS12-381, showing that the DL and CDH assumptions are equivalent there.
2. By generalizing the reduction, we show that in these groups the $q$-Power assumption implies the $q$-Diffie-Hellman Exponent ($q$-DHE) assumption, where $q$ is polynomial in the security parameter.
On the negative side, we show there is no generic reduction demonstrating that the $q$-Power assumption implies the $q$-Generalized Diffie-Hellman Exponent ($q$-GDHE) assumption.
This is in stark contrast with the algebraic group model, where this implication holds.
Enhancing E-Voting with Multiparty Class Group Encryption
CHide is one of the most prominent e-voting protocols; while combining security and efficiency, it suffers from very long encrypted credentials.
In this paper, starting from CHide, we propose a new protocol based on multiparty Class Group Encryption (CGE) instead of discrete logarithm cryptography over known-order groups. We achieve a reduced computational complexity in the number of votes and voters, while calling the MixNet algorithm only once. The homomorphic properties of CGE allow for credentials that are shorter by a factor of 20 while maintaining the same level of security, at the cost of a small slowdown in efficiency.
A Theoretical Perspective on the Formal Verification of IoT Protocols Using LTL and Rewriting Logic in Maude
As Internet of Things (IoT) systems become increasingly complex, verifying communication protocols is essential to guarantee their security and correctness. This paper provides a brief introduction to the theoretical concepts of using formal techniques, LTL and rewriting logic, within the Maude system to verify the security and proper behavior of such protocols. Rewriting logic describes how various events occur over time, while LTL specifies the properties to be verified. This theoretical perspective is intended to inform future applications and research.
Garbled Lookup Tables from Homomorphic Secret Sharing
Garbled Circuit (GC) is a fundamental tool in cryptography, especially in secure multiparty computation. Most garbling schemes follow a gate-by-gate paradigm. The communication cost is proportional to the circuit size times the security parameter $\lambda$.
Recently, Heath, Kolesnikov and Ng (Eurocrypt 2024) partially transcended the circuit-size barrier by considering large gates. To garble an arbitrary $n$-input $m$-output gate, their scheme requires communication that still contains a term exponential in $n$. The security relies on circular correlation robust hash functions (CCRH).
We further improve the communication cost, removing the exponential term. The computation cost is dominated by exponentiations. Our construction is built upon recent progress in DCR-based Homomorphic Secret Sharing (HSS), so it additionally relies on the decisional composite residuosity (DCR) assumption.
As an intermediate step, we construct programmable distributed point functions with decomposable keys, relying on the DCR assumption. Previously, such a primitive could only be constructed from multilinear maps or sub-exponential lattice assumptions.
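(For orientation, a point function and a deliberately naive additive sharing of it; the keys here are linear in the domain size, which is precisely what a DPF avoids with sublinear keys.)

    import secrets

    def point_function(alpha: int, beta: int, domain: int) -> list:
        # f_{alpha,beta}(x) = beta if x == alpha, else 0
        return [beta if x == alpha else 0 for x in range(domain)]

    def naive_dpf_gen(alpha: int, beta: int, domain: int, mod: int = 2**32):
        # naive additive sharing: key size is linear in the domain;
        # real DPFs compress the keys to sublinear size
        k0 = [secrets.randbelow(mod) for _ in range(domain)]
        f = point_function(alpha, beta, domain)
        k1 = [(fx - s) % mod for fx, s in zip(f, k0)]
        return k0, k1

    k0, k1 = naive_dpf_gen(alpha=5, beta=7, domain=16)
    # correctness contract: shares add up to the point function everywhere
    assert [(a + b) % 2**32 for a, b in zip(k0, k1)] == point_function(5, 7, 16)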
Advancements in Distributed RSA Key Generation: Enhanced Biprimality Tests
This work re-evaluates the soundness guarantees of the Boneh-Franklin biprimality test for Blum integers.
Under a mild condition on the candidate modulus, we show that the test accepts a non-RSA modulus with probability at most $1/4$. This is a refinement of the previously established bound of $1/2$ and holds for all cases except one specific instance. We further demonstrate that this bound is tight, thereby halving the number of test iterations required to achieve equivalent soundness. This directly reduces computational and communication overhead in distributed RSA generation protocols.
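(For reference, the test in question in single-party schematic form: pick random $g$ with Jacobi symbol $(g|N) = 1$ and accept if $g^{(N-p-q+1)/4} \equiv \pm 1 \pmod{N}$; the distributed protocol evaluates this from shares of $p+q$.)

    import secrets

    def jacobi(a: int, n: int) -> int:
        # Jacobi symbol (a|n) for odd positive n
        a %= n
        result = 1
        while a != 0:
            while a % 2 == 0:
                a //= 2
                if n % 8 in (3, 5):
                    result = -result
            a, n = n, a
            if a % 4 == 3 and n % 4 == 3:
                result = -result
            a %= n
        return result if n == 1 else 0

    def bf_biprimality(N: int, p_plus_q: int, rounds: int = 40) -> bool:
        e = (N - p_plus_q + 1) // 4       # equals phi(N)/4 when N = p*q
        for _ in range(rounds):
            g = secrets.randbelow(N - 2) + 2
            if jacobi(g, N) != 1:
                continue                  # only Jacobi-symbol-1 bases count
            if pow(g, e, N) not in (1, N - 1):
                return False              # N is certainly not an RSA modulus
        return True

    p, q = 11, 23                         # p, q = 3 (mod 4), so N is a Blum integer
    assert bf_biprimality(p * q, p + q)
    assert not bf_biprimality(11 * 35, 11 + 35)   # 35 = 3 (mod 4) but composite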
Additionally, we propose a generalized biprimality test based on the Lucas sequence. In the worst case, the acceptance probability of the proposed test for a non-RSA modulus is inversely proportional to the smallest prime factor of the associated order. To validate our approach, we implemented a variant of the Miller-Rabin test, the Boneh-Franklin test, and our proposed test, performing pairwise comparisons of their effectiveness. Both theoretical analysis and simulations indicate that the proposed test is generally more efficient than the Boneh-Franklin test in detecting cases where the candidate is not an RSA modulus. Furthermore, this test is applicable to generating RSA moduli from arbitrary odd primes.
A distributed RSA modulus verification protocol that incorporates our test is also introduced. The protocol is secure against semi-honest adversaries for general odd primes. For Blum integers, it also offers security against malicious adversaries. We analyze its efficiency and compatibility with existing distributed RSA protocols, including those of Boneh-Franklin and Burkhardt et al. (CCS 2023). Our protocol offers competitive performance while enhancing soundness and generality in cryptographic applications.
Weight reduction in distributed protocols: new algorithms and analysis
We study the problem of minimizing the total weight of (potentially many) participants of a distributed protocol, a necessary step when the original weights are large but the scheme to be deployed scales poorly with them. We assume that an $\alpha$ fraction of the original weight can be corrupted, and we must output new weights with adversarial fraction at most $\beta$, for $\beta > \alpha$. This problem can be viewed through the prism of electing a small committee that does the heavy work, a powerful tool for making distributed protocols scalable. We solve the variant that requires giving parties potentially multiple seats in the committee and counting each seat towards the cost of the solution. Moreover, we focus on the ``deterministic'' version of the problem where the computed committee must be secure for any subset of parties that can be corrupted by the adversary; such a committee can be smaller than a randomly sampled one in some cases and is useful when security against adaptive corruptions is desired but parties in the sub-protocol speak multiple times.
Presented are new algorithms for the problem as well as analysis of prior work. We give two variants of the algorithm Swiper (PODC 2024): one significantly improves the running time without sacrificing the quality of the output, and the other improves the output for a reasonable increase in the running time. We prove, however, that all known algorithms, including our two variants of Swiper, have super-constant worst-case approximation ratios. To counter that, we give the first polynomial-time algorithm with a constant approximation factor, and also the first sub-exponential-time exact algorithm, practical for some real-world inputs. Of theoretical interest is another polynomial-time algorithm that we present, based on linear programming, whose output is no worse than an optimal solution to the problem with slightly different parameters.
We implemented and tested previous and new algorithms, comparing them on the stake distributions of popular proof-of-stake blockchains, and found that our second variant of Swiper computes solutions extremely close to optimal, as confirmed by our exact algorithm.
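As a toy illustration of the problem setup (not any of the paper's algorithms), the sketch below rounds weights to seat counts and brute-force-checks, for a tiny example, the worst seat fraction obtainable by a coalition holding at most an alpha fraction of the weight; the scale parameter s and all names are hypothetical.

```python
# Illustrative baseline for weight reduction: round weight w_i to
# ceil(w_i / s) seats, then exhaustively check (feasible only for tiny
# examples) the largest seat fraction any coalition with at most an
# alpha fraction of the total weight can obtain.
from itertools import combinations
from math import ceil

def round_weights(weights, s):
    return [ceil(w / s) for w in weights]

def worst_adversarial_seat_fraction(weights, seats, alpha):
    W, T = sum(weights), sum(seats)
    worst = 0.0
    idx = range(len(weights))
    for r in range(len(weights) + 1):
        for coal in combinations(idx, r):
            if sum(weights[i] for i in coal) <= alpha * W:
                worst = max(worst, sum(seats[i] for i in coal) / T)
    return worst

weights = [120, 90, 75, 40, 22, 18, 9, 4]
seats = round_weights(weights, s=20)
print(seats, worst_adversarial_seat_fraction(weights, seats, alpha=1/3))
```

Rounding up favors small parties, so the adversarial seat fraction can exceed alpha; that slack is exactly the alpha-versus-beta tradeoff described above.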
Secure graph computation enables computing on graphs while hiding the graph topology as well as the associated node/edge data. This facilitates collaborative analysis among multiple data owners, who may only hold a private partial view of the global graph. Several works address this problem using the technique of secure multiparty computation (MPC) in the presence of 2 or 3 parties. However, when moving to the multiparty setting, as required for collaborative analysis among multiple data owners, these solutions are no longer scalable. This remains true with respect to the state-of-the-art framework of Koti et al. (CCS 2024) as well. Specifically, it incurs a round complexity linear in the number of parties or data owners, due to its reliance on a secure shuffle protocol, which constitutes a bottleneck in the multiparty setting. Additionally, it has a prohibitively expensive initialisation phase due to its reliance on secure sort, with a round complexity dependent on both the graph size and the number of parties.
We propose a generic framework for secure graph computation in the multiparty setting that eliminates the need for shuffle and instead relies on a weaker primitive. Further, it is designed to have a lightweight initialisation that eliminates the need for sorting, making its round complexity independent of the graph size and the number of parties. Unlike any of the prior works, it achieves a round complexity independent of the number of parties, which is what makes it scalable.
Finally, we implement and benchmark the performance of our framework for the application of PageRank computation and showcase its efficiency and scalability improvements over the prior state of the art. Concretely, we witness improvements of up to in runtime in comparison to the state-of-the-art framework. Further, we observe that it takes under a minute to perform 10 iterations of PageRank computation on a graph of size that is distributed among parties/data owners, making it highly practical for secure graph computation in the multiparty setting.
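For readers unfamiliar with the workload, here is the cleartext PageRank iteration such frameworks evaluate; in the actual protocols both the edge list and the scores are secret-shared and every step below becomes an MPC subprotocol. Illustrative sketch only.

```python
# Cleartext PageRank message-passing loop of the kind evaluated obliviously
# by secure graph computation frameworks.
def pagerank(n, edges, iters=10, d=0.85):
    scores = [1.0 / n] * n
    out_deg = [0] * n
    for u, _ in edges:
        out_deg[u] += 1
    for _ in range(iters):
        contrib = [0.0] * n
        for u, v in edges:                               # "scatter" along edges
            contrib[v] += scores[u] / out_deg[u]
        scores = [(1 - d) / n + d * c for c in contrib]  # "gather" + damping
    return scores

print(pagerank(4, [(0, 1), (1, 2), (2, 0), (3, 0), (1, 3)]))
```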
Secure and Practical Cold (and Hot) Staking
The stake delegation technique is what turns general Proof of Stake (PoS) into a practical protocol for a large number of participants, ensuring the security of the distributed system, in what is known as Delegated PoS (DPoS). Karakostas et al. (SCN ’20) formalized the delegation method, paving the way for a whole industry of stake pools, by proposing a formal definition of a wallet as a universally composable (UC) functionality and introducing a corresponding protocol. On the other hand, the widely used hot/cold wallet technique was formally studied by Das et al. (CCS ’19 and ’21) and by Groth and Shoup (Eurocrypt ’22) for different key derivation methods in the Proof of Work (PoW) setting, but not in PoS. Briefly, while hot wallets are exposed to the risks of the network, the cold wallet is kept offline and is thus more secure. However, this may impair some capabilities, given that the cold wallet is kept indefinitely offline. It is straightforward to observe that this “double wallet” design is not naturally portable to the setting where delegation is paramount, i.e., DPoS. This work identifies challenges for PoS hot/cold wallets and proposes a secure and practical protocol.
Multiparty Distributed Point Functions
We present the first construction of multiparty distributed point functions based on one-way functions, where the share sizes remain sublinear in the domain size and grow {\em only polynomially} with the number of parties. In contrast, existing multiparty distributed point function constructions in Minicrypt have share sizes that grow {\em exponentially} with the number of parties.
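To see what the sublinearity buys, the naive baseline below additively shares the entire truth table of a point function among m parties; it is correct but each share is linear in the domain size N, exactly the cost the construction above avoids. Names and parameters are illustrative.

```python
# Naive m-party "distributed point function" via additive sharing of the
# full truth table: evaluating at x and summing the shares reconstructs
# beta at x = alpha and 0 elsewhere, but shares have size linear in N.
import secrets

def share_point_function(N, alpha, beta, m, mod=2**32):
    shares = [[secrets.randbelow(mod) for _ in range(N)] for _ in range(m - 1)]
    last = []
    for x in range(N):
        target = beta if x == alpha else 0
        partial = sum(s[x] for s in shares) % mod
        last.append((target - partial) % mod)
    return shares + [last]

def eval_and_reconstruct(shares, x, mod=2**32):
    return sum(s[x] for s in shares) % mod

shares = share_point_function(N=8, alpha=5, beta=42, m=3)
print([eval_and_reconstruct(shares, x) for x in range(8)])  # 42 at x=5, else 0
```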
Cryptanalysis of Isogeny-Based Quantum Money with Rational Points
Quantum money is the cryptographic application of the quantum no-cloning theorem. It has recently been instantiated by Montgomery and Sharif (Asiacrypt '24) from class group actions on elliptic curves. In this work, we propose a concrete cryptanalysis by leveraging the efficiency of evaluating division polynomials with the coordinates of rational points, offering a speedup of compared to the brute-force attack. Since our attack still requires exponential time, it remains impractical to forge a quantum banknote. Interestingly, due to the inherent properties of quantum money, our attack method also results in a more efficient verification procedure. Our algorithm leverages the properties of quadratic twists to utilize rational points in verifying the cardinality of the superposition of elliptic curves. We expect this approach to contribute to future research on elliptic-curve-based quantum cryptography.
The Algebraic One-More MISIS Problem and Applications to Threshold Signatures
This paper introduces a new one-more computational problem for lattice-based cryptography, which we refer to as the Algebraic One-More MISIS problem, or AOM-MISIS for short. It is a modification of the AOM-MLWE problem recently introduced by Espitau et al. (CRYPTO '24) to prove security of new two-round threshold signatures.
Our first main result establishes that the hardness of AOM-MISIS is implied by the hardness of MSIS and MLWE (with suitable parameters), both of which are standard assumptions for efficient lattice-based cryptography. We prove this result via a new generalization of a technique by Tessaro and Zhu (EUROCRYPT '23) used to prove hardness of a one-more problem for linear hash functions assuming their collision resistance, for which no clear lattice analogue was known. Since the hardness of AOM-MISIS implies the hardness of AOM-MLWE, our result resolves the main open question from the work of Espitau et al., who only provided a similar result for AOM-MLWE restricted to selective adversaries, a class that does not cover the use in threshold signatures.
Furthermore, we show that our novel formulation of AOM-MISIS offers a better interface to develop tighter security bounds for state-of-the-art two-round threshold signatures. We exemplify this by providing new proofs of security, assuming the hardness of MLWE and MSIS, for two threshold signatures, the one proposed in the same work by Espitau et al., as well as a recent construction by Chairattana-Apirom et al. (ASIACRYPT 2024). For the former scheme, we also show that it satisfies the strongest security notion (TS-UF-4) in the security hierarchy of Bellare et al. (CRYPTO '22), as a result of independent interest.
Depending on DEEPAND: Cryptanalysis of NLFSR-based Lightweight Ciphers TinyJAMBU, KATAN and KTANTAN
Automated cryptanalysis has taken center stage in the arena of cryptanalysis since the pioneering work by Mouha et al., which showcased the power of Mixed Integer Linear Programming (MILP) in solving cryptanalysis problems that otherwise required significant effort. Since its inception, research in this area has moved in primarily two directions. One is to model more and more classical cryptanalysis tools as optimization problems to leverage the ease provided by state-of-the-art solvers. The other direction is to improve existing models to make them more efficient and/or accurate. The current work is an attempt to contribute to the latter. In this work, a general model referred to as DEEPAND has been devised to capture the correlation between AND gates in NLFSR-based lightweight block ciphers. DEEPAND builds upon and generalizes the idea of joint propagation of differences through AND gates captured using refined MILP modeling of TinyJAMBU by Saha et al. in FSE 2020. The proposed model has been applied to TinyJAMBU, KATAN, and KTANTAN and can detect correlations that were missed by earlier models. This leads to more accurate differential bounds for these ciphers.
In particular, a 384-round (full-round as per the earlier specification) Type-IV trail is found for TinyJAMBU with 14 active AND gates using the new model, while the refined model reported this figure to be 19. This also reaffirms the designers' decision to increase the number of rounds from 384 to 640. Moreover, the model succeeds in finding a full-round Type-IV trail of the TinyJAMBU keyed permutation P_1024 with probability 2^-105 (much greater than 2^-128). This reveals the non-random properties of P_1024, thereby showing it to be non-ideal. Hence, it cannot be expected to provide the same security levels as robust block ciphers. Further, the provable security of the TinyJAMBU AEAD scheme should be carefully revisited.
Similarly, for the variants of KATAN, several previously reported trails are improved upon by employing the DEEPAND model. Moreover, in the related-key setting, the DEEPAND model yields a better 140-round boomerang distinguisher (in both data and time complexity) than the previous boomerang attack by Isobe et al. (ACISP 2013). Furthermore, for enhanced applicability, we employ the DEEPAND model on another multiple-AND-based cipher, KTANTAN, in the related-key setting. Our analysis reveals practical differential distinguishers with low data and time complexities for all full-round KTANTAN variants. In summary, DEEPAND seems to capture the underlying correlation better when multiple AND gates are at play and can be adapted to other classes of ciphers as well.
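A minimal enumeration illustrates the phenomenon DEEPAND models (a toy of ours, not the paper's MILP encoding): when two AND gates share an input, the joint distribution of their output differences can deviate sharply from what gate-by-gate independence predicts.

```python
# Two AND gates sharing input x: z1 = x & y, z2 = x & w. With the input
# difference entering through y and w, both output differences equal the
# value of x, so they are perfectly correlated, while an independent model
# would charge each gate a separate probability factor.
from itertools import product

def joint_dist(dx, dy, dw):
    counts = {}
    for x, y, w in product((0, 1), repeat=3):
        dz1 = ((x ^ dx) & (y ^ dy)) ^ (x & y)
        dz2 = ((x ^ dx) & (w ^ dw)) ^ (x & w)
        counts[(dz1, dz2)] = counts.get((dz1, dz2), 0) + 1
    return {k: v / 8 for k, v in counts.items()}

dist = joint_dist(dx=0, dy=1, dw=1)
marg1 = sum(p for (a, _), p in dist.items() if a == 1)
marg2 = sum(p for (_, b), p in dist.items() if b == 1)
print(dist.get((1, 1), 0), "vs independent estimate", marg1 * marg2)  # 0.5 vs 0.25
```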
Computing Asymptotic Bounds for Small Roots in Coppersmith's Method via Sumset Theory
Coppersmith's method is a well-known and practical method for solving polynomial modular equations involved in some cryptosystems such as RSA. An important and tedious task in this method is computing the asymptotic bounds. In this work, we address the challenge of computing such asymptotic bounds by introducing sumset theory from additive combinatorics as a new analytical tool, which significantly streamlines manual calculations. More precisely, we develop the first provable algorithm for determining these asymptotic bounds, whereas the recent methods based on simple Lagrange interpolation are heuristic.
Moreover, experiments show that our method is much more efficient than the previous method in practice. We also employ our method to improve the cryptanalytic results for the Commutative Isogeny Hidden Number Problem. Our approach may deepen the understanding of Coppersmith's method and inspire further security analysis methodologies.
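For readers new to the tool, a sumset is A + B = {a + b : a in A, b in B}; the snippet below (a generic illustration, not the paper's algorithm) shows the eventual linear growth of iterated sumsets that such analyses exploit.

```python
# Sumset arithmetic from additive combinatorics: once the iterated sumset
# kA = A + A + ... + A saturates, its size grows linearly in k
# (Khovanskii's theorem), which is the kind of regularity that makes
# asymptotic bound computations tractable.
def sumset(A, B):
    return {a + b for a in A for b in B}

A = {0, 1, 3}
S = {0}
for k in range(1, 5):
    S = sumset(S, A)
    print(k, sorted(S), len(S))   # sizes 3, 6, 9, 12: linear growth
```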
ethSTARK Documentation
This document is intended to accompany the ethSTARK codebase, describing the computational integrity statement proved by that code and the specific STARK construction used to prove the statement.
LAPWN: A Lightweight User–Server Authentication Protocol for Wireless Networks
The Internet of Things (IoT) is composed of interconnected devices that exchange data over a network, enabling applications in healthcare, transportation, and smart environments. As IoT ecosystems expand, ensuring security and privacy remains a critical challenge. Many IoT devices rely on wireless networks for data transmission, making them vulnerable to eavesdropping, tracking, and tampering. This highlights the need for robust authentication mechanisms. To address these concerns, numerous authentication protocols have been proposed. However, many fail to ensure adequate security against both passive and active attacks. In this research, we introduce LAPWN, a lightweight protocol for user–server communication, specifically designed for constrained environments, ensuring a balance between security and efficiency. The proposed protocol is implemented as a fully functional Python application, demonstrating its practical usability and evaluating its efficiency in real-world scenarios. To validate its security, we perform both informal and formal analyses, utilizing Scyther, ProVerif, and the Real-or-Random (RoR) model. The results confirm that LAPWN provides a secure, lightweight, and efficient authentication solution with low computational and communication overhead. Furthermore, performance evaluations show that it surpasses existing authentication protocols, making it a highly effective solution for secure user–server interactions in constrained environments.
State Machine Replication Among Strangers, Fast and Self-Sufficient
A set of unacquainted parties, some of which may misbehave, communicate with each other over an unauthenticated and unreliable gossip network. They wish to jointly replicate a state machine so that each one of them has fair access to its operation. Specifically, assuming parties' computational power is measured as queries to an oracle machine , parties can issue symbols to the state machine in proportion to their queries to it at a given fixed rate. Moreover, if such access to the state machine is provided continuously in expected-constant-time installments, we qualify it as fast fairness.
A state machine replication (SMR) protocol in this permissionless setting is expected to offer consistency across parties and reliably process all symbols that honest parties wish to add to it in a timely manner despite continuously fluctuating participation and in the presence of an adversary who commands less than half of the total queries to per unit of time.
A number of protocols strive to offer the above guarantee together with fast settlement --- notably, the Bitcoin blockchain offers a protocol that settles against Byzantine adversaries in polylogarithmic rounds, while fairness only holds in a fail-stop adversarial model (due to the fact that Byzantine behavior can bias access to the state machine in the adversary's favor). In this work, we put forth the first Byzantine-resilient protocol solving SMR in this setting with both expected-constant-time settlement and fast fairness. Furthermore, our protocol is self-sufficient in the sense of performing its own timekeeping while tolerating an adaptively fluctuating set of parties.
How to Model Unitary Oracles
We make the case for modeling unitary oracles by allowing controlled access to the oracle and its conjugate transpose (inverse), but also to its conjugate and transpose. Controlled access and conjugate transposes are common, if not standard, but conjugates and transposes appear to be non-standard. In order to justify our modeling, we give several formal examples of what goes wrong or is missed when using a more restrictive modeling. We also argue that our model is the "right" level of granularity, and that other transformations likely do not correspond to efficient computation. We also discuss other modeling choices, such as ancillas and approximation error.
Through our exploration, we uncover interesting phenomena. Examples include an attack on the recent pseudorandom unitary construction of Ma and Huang (STOC'25) if used incorrectly as a publicly evaluatable unitary, and a quantum complexity-theoretic separation that follows from a purely classical separation.
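The access modes in question are easy to state concretely; the numpy sketch below (our illustration) forms the conjugate transpose, transpose, conjugate, and a controlled version of a small unitary U.

```python
# The four access modes for a concrete 1-qubit unitary U: its conjugate
# transpose (inverse), transpose, entrywise conjugate, and the controlled
# unitary |0><0| (x) I + |1><1| (x) U.
import numpy as np

theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]]) @ np.diag([1, 1j])

U_dag = U.conj().T           # inverse access
U_T = U.T                    # transpose access
U_conj = U.conj()            # conjugate access
ctrl_U = np.block([[np.eye(2), np.zeros((2, 2))],
                   [np.zeros((2, 2)), U]])   # controlled access

print(np.allclose(U @ U_dag, np.eye(2)))   # True: U is unitary
print(np.allclose(U_T, U_dag.conj()))      # transpose = conjugate of the dagger
```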
Blockchain Governance via Sharp Anonymous Multisignatures
Electronic voting has occupied a large part of the cryptographic protocols literature. The recent reality of blockchains---in particular, their need for online governance mechanisms---has brought new parameters and requirements to the problem. We identify the key requirements of a blockchain governance mechanism, namely correctness (including the elimination of double votes), voter anonymity, and traceability, and investigate mechanisms that can achieve them with minimal interaction and under assumptions that fit the blockchain setting.
First, we define a signature-like primitive, which we term \textit{sharp anonymous multisignatures} (in short, AMS) that tightly meets the needs of blockchain governance. In a nutshell, AMSs allow any set of parties to generate a signature, e.g., on a proposal to be voted upon, which, if posted on the blockchain, hides the identities of the signers/voters but reveals their number. This can be seen as a (strict) generalization of threshold ring signatures (TRS).
We next turn to constructing such AMSs and using them in various governance scenarios---e.g., single vote vs. multiple votes per voter. In this direction, although the definition of TRS does not imply AMS, one can compile some existing TRS constructions into AMS. This raises the question: What is the TRS structure that allows such a compilation?
To answer the above, we devise templates for TRSs. Our templates encapsulate and abstract the structure that allows for the above compilation---most of the TRS schemes that can be compiled into AMS are, in fact, instantiations of our template. This abstraction makes our template generic for instantiating TRSs and AMSs from different cryptographic assumptions (e.g., DDH, LWE, etc.). One of our templates is based on chameleon hashes, and we explore a framework of lossy chameleon hashes to understand their nature fully.
Finally, we turn to how AMS schemes can be used in our applications. We provide fast (in some cases non-interactive) AMS-based blockchain governance mechanisms for a wide spectrum of assumptions on the honesty (semi-honest vs malicious) and availability of voters and proposers.
PICS: Private Intersection over Committed (and reusable) Sets
Private Set Intersection (PSI) enables two parties to compute the intersection of their private sets without revealing any additional information. While maliciously secure PSI protocols prevent many attacks, adversaries can still exploit them by using inconsistent inputs across multiple sessions. This limitation stems from the definition of malicious security in secure multiparty computation, but is particularly problematic in PSI because: (1) real-world applications---such as Apple’s PSI protocol for CSAM detection and private contact discovery in messaging apps---often require multiple PSI executions over consistent inputs, and (2) the PSI functionality makes it relatively easy for adversaries to infer additional information.
We propose {\em Private Intersection over Committed Sets (PICS)}, a new framework that enforces input consistency across multiple sessions via committed sets. Building on the state-of-the-art maliciously secure PSI framework (i.e., VOLE-PSI [EUROCRYPT 2021]), we present an efficient instantiation of PICS in the random oracle model using lightweight cryptographic tools. We implement our protocol to demonstrate concrete efficiency. Compared to VOLE-PSI, for input sets of size , our communication overhead is as low as . Our end-to-end performance overhead is in the LAN setting and decreases to in the WAN setting with bandwidths ranging from to Mbps.
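The committed-set idea can be illustrated with a toy hash commitment (not the paper's VOLE-PSI-based instantiation): the client publishes one commitment, and any later session with a different set is detectable.

```python
# Toy illustration of input consistency via a committed set: the client
# publishes one blinded commitment and must use the same set in every PSI
# session; changing the set changes the commitment.
import hashlib, secrets

def commit_set(S, r):
    data = b"".join(x.to_bytes(8, "big") for x in sorted(S))
    return hashlib.sha256(r + data).hexdigest()

S = {3, 17, 42}
r = secrets.token_bytes(32)             # blinding randomness
c = commit_set(S, r)                    # published once
print(commit_set(S, r) == c)            # session 2 with the same set: True
print(commit_set(S | {99}, r) == c)     # inconsistent input: False
```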
Multi-Holder Anonymous Credentials from BBS Signatures
The eIDAS 2.0 regulation aims to develop interoperable digital identities for European citizens, and it has recently become law. One of its requirements is that credentials be unlinkable. Anonymous credentials (AC) allow holders to prove statements about their identity in a way that does not require revealing their identity and does not enable linking different usages of the same credential. As a result, they are likely to become the technology that provides digital identity for Europeans.
Any digital credential system, including anonymous credentials, needs to be secured against identity theft and fraud. In this work, we introduce the notion of a multi-holder anonymous credential scheme that allows issuing shares of credentials to different authentication factors (or "holders"). To present the credential, the user's authentication factors jointly run a threshold presentation protocol. Our definition of security requires that the scheme provide unforgeability: the adversary cannot succeed in presenting a credential with identity attributes that do not correspond to an identity for which the adversary controls at least shares; this is true even if the adversary can obtain credentials of its choice and cause concurrent executions of the presentation protocol. Further, our definition requires that the presentation protocol provide security with identifiable abort. Finally, presentations generated by all honest holders must be unlinkable and must not reveal the user's secret identity attributes even to an adversary that controls some of the user's authentication factors.
We design and prove the (concurrent) security of a multi-holder version of the BBS anonymous credential scheme. In our construction, each holder is issued a secret share of a BBS credential. Using these shares, the holders jointly compute a credential presentation that is identical to (and therefore compatible with) the traditional, single-holder variant (due to Tessaro and Zhu, Eurocrypt'23) of a BBS credential presentation.
HI-CKKS: Is High-Throughput Neglected? Reimagining CKKS Efficiency with Parallelism
The proliferation of data outsourcing and cloud services has heightened privacy vulnerabilities. CKKS, among the most prominent homomorphic encryption schemes, allows computations on encrypted data, serving as a critical privacy safeguard. However, performance remains a central bottleneck, hindering widespread adoption. Existing optimization efforts often prioritize latency reduction over throughput performance. This paper presents HI-CKKS, a throughput-oriented High-performance Implementation of CKKS homomorphic encryption, addressing these challenges. HI-CKKS introduces a batch-supporting asynchronous execution scheme, effectively mitigating frequent data interactions and high waiting delays between hosts and servers in service-oriented scenarios. We analyze the fundamental (I)NTT primitive, which is critical in CKKS, and develop a hierarchical, hybrid high-throughput implementation. This includes efficient arithmetic module instruction set implementations, unified kernel fusion, and hybrid memory optimization strategies that significantly improve memory access efficiency and the performance of (I)NTT operations. Additionally, we propose a multi-dimensional parallel homomorphic multiplication scheme aimed at maximizing throughput and enhancing the performance of (I)NTT and homomorphic multiplication. Finally, we deploy our implementation on an RTX 4090 and conduct a thorough throughput evaluation of HI-CKKS, pinpointing the most effective parallel parameter settings. Compared to the CPU implementation, our system achieves throughput increases of , , and for NTT, INTT, and HMult, respectively. Our throughput also demonstrates a significant improvement, ranging from to , compared to the latest GPU-based works.
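For reference, the (I)NTT primitive being optimized is, in its simplest textbook form, a DFT over a prime field; the quadratic-time sketch below (ours, with toy parameters p = 17, n = 4) is what batched, kernel-fused GPU kernels accelerate.

```python
# Textbook quadratic-time NTT over Z_p with an n-th root of unity omega,
# plus its inverse; production implementations use O(n log n) butterflies
# and batch many transforms per GPU kernel.
def ntt(a, omega, p):
    n = len(a)
    return [sum(a[j] * pow(omega, i * j, p) for j in range(n)) % p
            for i in range(n)]

def intt(A, omega, p):
    n = len(A)
    inv_n = pow(n, -1, p)
    a = ntt(A, pow(omega, -1, p), p)
    return [(x * inv_n) % p for x in a]

p, omega = 17, 4            # 4 has multiplicative order 4 mod 17, so n = 4
a = [1, 2, 3, 4]
A = ntt(a, omega, p)
print(A, intt(A, omega, p))  # round-trips back to [1, 2, 3, 4]
```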
Zeus: Defending against Fee Stealing and Griefing Attacks in Multi-Hop Payments
Payment Channel Networks (PCNs) are the most scalable and trust-minimized solution to Bitcoin's scalability challenges. Within PCNs, connected payer and payee can make arbitrary off-chain transactions through multi-hop payments (MHPs) over payment channel paths, while intermediate relays charge relay fees by providing liquidity.
However, current MHP protocols face critical security threats including fee-stealing attacks and griefing attacks. In this paper, we identify new fee-stealing attacks targeting most existing MHP protocols. Second, we prove that eliminating griefing attacks in current MHP protocols is impossible by reducing the problem to fair secret exchange. Finally, we introduce Zeus, the first Bitcoin-compatible MHP protocol that is secure against fee-stealing attacks and offers bounded griefing protection against -cost-sensitive adversaries—those who only launch griefing attacks when the expected damage exceeds a fraction of their own cost. These guarantees are established through rigorous proofs in the Global Universal Composability (GUC) framework. Our comprehensive evaluation demonstrates that Zeus reduces worst-case griefing damage to 28\% and 75\% compared to MHP schemes such as AMHL~(NDSS'19) and Blitz~(USENIX SEC'21), respectively. Our results further show that, even under the most adverse configurations within the Lightning Network, Zeus imposes costs on adversaries that are at least ten times greater than their potential damage.
PRESENT Full Round Emulation: Structural Flaws and Predictable Outputs
The Internet of Things (IoT) has become integral to modern life, enabling smart cities, healthcare, and industrial automation. However, the increasing connectivity of IoT devices exposes them to various cyber threats, necessitating robust encryption methods. The PRESENT cipher, a lightweight block cipher, is well-suited for resource-constrained IoT environments, offering strong security with minimal computational overhead. This paper explores the application of deep learning (DL) techniques in cryptanalysis, specifically using an Aggregated Perceptron Group (APG) Model, which employs a Multi-Layer Perceptron (MLP) to predict input-output relations for each round of the PRESENT cipher’s encryption, excluding the key. This approach focuses solely on emulating the cipher's Substitution Permutation Network (SPN), capturing its essential structure and revealing the structural flaws in the way data is transformed through rounds. The models are chained together to generate the final ciphertext for any 64-bit plaintext with high accuracy, reducing the probability from a random guess of . The results demonstrate the potential of DL models in cryptanalysis, providing insights into the security of lightweight ciphers in IoT communications and highlighting the practicality of deep learning for cryptographic analysis on standard computing systems.
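For context, the SPN structure being emulated is PRESENT's S-box layer followed by its 64-bit bit permutation; the sketch below reproduces one unkeyed round per the public specification (key addition omitted, matching the modeling above).

```python
# One unkeyed PRESENT round: the 4-bit S-box applied to each of the 16
# nibbles, then the bit permutation that sends bit i to 16*i mod 63
# (bit 63 is fixed). PRESENT has 31 such SPN rounds.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def present_round(state):
    out = 0
    for i in range(16):                       # sBoxLayer
        out |= SBOX[(state >> (4 * i)) & 0xF] << (4 * i)
    perm = 0
    for i in range(64):                       # pLayer
        j = 63 if i == 63 else (16 * i) % 63
        perm |= ((out >> i) & 1) << j
    return perm

s = 0x0123456789ABCDEF
for _ in range(31):
    s = present_round(s)
print(hex(s))
```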
Efficient Modular Multiplication Using Vector Instructions on Commodity Hardware
Modular arithmetic is the computational backbone of many cryptographic and scientific algorithms.
In particular, modular multiplication in a large prime field is computationally expensive and dictates the runtime of many algorithms. While it is relatively easy to utilize vectorization to accelerate batches of independent modular multiplications, our goal is to reduce the latency of a modular multiplication under a generic prime using vectorization, while maintaining constant-time execution.
We investigate the technique of Residue Number System (RNS) Montgomery modular multiplication. We first contribute a unified view of algorithmic optimizations in prior art. This view enables us to further reduce the number of elementwise multiplications in an algorithm with a simplified structure that we prove correct.
We explore AVX512 acceleration on CPUs, and show how to map our algorithm to vector instructions. We implement our algorithm in C++ and achieve speedup, which is nearly maximal, for -bit primes on a CPU with AVX512 over optimized library implementations of standard Montgomery modular multiplication algorithms. GPUs contain vector cores that each support tens of physical threads. We show how to intelligently map our algorithm to threads in a vector core, ``overparallelizing'' to minimize latency. We show substantial speedups over a commercial library implementation of standard modular multiplication algorithms across a wide range of prime sizes.
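For background, the scalar algorithm whose RNS variant is vectorized here is Montgomery multiplication; the toy sketch below shows the standard REDC step, which replaces division by n with cheap arithmetic modulo a power of two R. Parameters are illustrative.

```python
# Reference (non-vectorized) Montgomery multiplication: REDC computes
# a*b*R^{-1} mod n without dividing by n, using n' with n*n' ≡ -1 (mod R).
def montgomery_setup(n, R):
    assert R > n and n % 2 == 1
    return (-pow(n, -1, R)) % R

def mont_mul(a, b, n, R, n_prime):
    t = a * b
    m = (t * n_prime) % R         # cheap: R is a power of two
    u = (t + m * n) // R          # exact division by R
    return u - n if u >= n else u

n, R = 101, 1 << 7                # toy 7-bit prime field
n_prime = montgomery_setup(n, R)
to_mont = lambda x: (x * R) % n
from_mont = lambda x: mont_mul(x, 1, n, R, n_prime)
c = mont_mul(to_mont(42), to_mont(7), n, R, n_prime)
print(from_mont(c), (42 * 7) % n)  # both print 92
```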
Separating Pseudorandom Codes from Local Oracles
Pseudorandom codes (PRCs) are error-correcting codes with the distinguishing feature that their codewords are computationally indistinguishable from random strings. Introduced by Christ and Gunn (CRYPTO 2024), PRCs have found applications in areas such as AI watermarking, where both robustness and pseudorandomness are essential. All known constructions of PRCs rely on coding-theoretic hardness assumptions. In this work, we study how inherent the use of coding-theoretic hardness is in the construction of pseudorandom codes.
We show that there is no black-box construction, from a class of oracles we call local oracles, of PRCs with binary alphabets capable of decoding from a constant fraction of Bernoulli noise. The class of local oracles includes random oracles and trapdoor permutation oracles, and can be interpreted as a meaningful notion of oracles that are not resilient against noise. Our separation result is cast in the Impagliazzo-Rudich framework and crucially relies on the Bonami-Beckner hypercontractivity theorem on the Boolean hypercube.
As a complementary result, we show that PRCs with large alphabets that can tolerate high error rates can indeed be constructed in a black-box manner from one-way functions.
Full Anonymity in the Asynchronous Setting from Peony Onion Encryption
Onion routing is a popular practical approach to anonymous communication, and the subject of a growing body of foundational theoretical work aiming to design efficient schemes with provable anonymity, the strongest notion of which is full anonymity.
Unfortunately, all previous schemes that achieve full anonymity assume the synchronous communication setting, which is unrealistic as real networks may experience message loss and timing attacks that render such schemes insecure. Recently, Ando, Lysyanskaya, and Upfal (TCC '24) took a first step towards addressing the asynchronous setting by constructing an efficient onion routing protocol with the strictly weaker guarantee of differential privacy. Their scheme relies on a new primitive called bruisable onion encryption.
In this paper, we construct the first efficient fully anonymous onion routing protocol in the asynchronous setting. To do so, we overcome two main technical challenges: First, we develop the first bruisable onion construction that does not leak information about the onion's position on the routing path. Second, we design an onion routing protocol that uses such bruisable onion encryption to achieve full anonymity (rather than just differential privacy). Along the way, we develop a new fully anonymous onion routing protocol in the synchronous setting, which improves on the state of the art in terms of communication complexity and round complexity.
Both our protocols are secure against an active adversary corrupting a constant fraction of the nodes (up to <1 for the synchronous protocol, and <1/2 for the asynchronous protocol) and rely on standard cryptographic assumptions (CCA-secure public key encryption and collision-resistant hash functions).
A New PUF-Based Authenticated Key Establishment Protocol for V2G Networks
Vehicle-to-grid (V2G) refers to the bidirectional communication and energy flows between electric vehicles (EVs) and the power grid, which allow renewable energy sources to supply supplementary electrical services. Additionally, V2G lowers environmental pollution and mitigates energy issues while providing efficient charging services. A PUF-based, reliable, anonymous authentication and key establishment scheme for V2G networks was recently presented by Sungjin Yu et al. In this paper, we show that the Yu et al. protocol is vulnerable to tracking attacks and does not guarantee user anonymity. We also discovered that ephemeral secret leakage attacks can target their scheme. Additionally, we propose a new PUF-based authenticated key establishment scheme for V2G networks that is more efficient than the most recent relevant scheme and is resistant to all known attacks. We prove that the presented scheme is semantically secure, and we also simulate our protocol using the Scyther tool.
High-Order and Cortex-M4 First-Order Implementations of Masked FrodoKEM
The key encapsulation mechanism FrodoKEM is a post-quantum algorithm based on plain LWE. While it has not been selected by NIST for standardization, FrodoKEM shares many similarities with the lattice-based standard ML-KEM and offers strong security assurances by relying on the unstructured version of the LWE problem. This has led FrodoKEM to be recommended by the European agencies ANSSI and BSI as a possible choice to obtain post-quantum security. In this paper, we discuss the practical aspects of incorporating side-channel protections in FrodoKEM by describing a fully masked version of the scheme based on several previous works on LWE-based KEMs. Furthermore, we propose an arbitrary-order C implementation based on the reference code and a Cortex-M4 implementation with gadgets specialized at order 1 in low-level assembly code that incorporates bespoke modifications to thwart (micro-)architectural leakages. Finally, we validate our order-1 gadgets by performing TVLA on a ChipWhisperer.
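The flavor of first-order masking used in such implementations can be illustrated with a 2-share Boolean gadget (a generic ISW/Trichina-style AND, not code from the paper): XOR is share-wise, while AND consumes fresh randomness.

```python
# First-order Boolean masking: a secret bit x is split as x = x0 XOR x1 so
# neither share alone leaks x. XOR is computed share-wise; AND needs a
# fresh random bit r so that no intermediate value depends on both shares.
import secrets

def mask(x):
    x0 = secrets.randbits(1)
    return x0, x ^ x0

def masked_and(x0, x1, y0, y1):
    r = secrets.randbits(1)                  # fresh randomness per AND gate
    z0 = (x0 & y0) ^ r ^ (x0 & y1)
    z1 = (x1 & y1) ^ (x1 & y0) ^ r
    return z0, z1                            # z0 ^ z1 == (x0^x1) & (y0^y1)

for x in (0, 1):
    for y in (0, 1):
        x0, x1 = mask(x); y0, y1 = mask(y)
        z0, z1 = masked_and(x0, x1, y0, y1)
        assert z0 ^ z1 == x & y
print("2-share AND gadget is correct on all inputs")
```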
From Signature-Based Witness Encryption to RAM Obfuscation: Achieving Blockchain-Secured Cryptographic Primitives
Goyal and Goyal demonstrated that extractable witness encryption, when combined with smart-contract-equipped proof-of-stake blockchains, can yield powerful cryptographic primitives such as one-time programs and pay-to-use programs. However, no standard-model construction of extractable witness encryption is known, and instantiations from alternatives like indistinguishability obfuscation are highly inefficient.
This paper circumvents the need for extractable witness encryption by combining signature-based witness encryption (Döttling et al.) with witness encryption for KZG commitments (Fleischhacker et al.). Inspired by Goyal et al., we introduce -Extractable Witness Encryption for Blockchains ( -eWEB), a novel primitive that encrypts a secret, making its decryption contingent upon the subsequent block's state. Leveraging -eWEBs, we then build a conditional one-time memory, leading to a one-time program ( -OTP) also conditional on the next block state. Finally, using our -OTP, we develop a conditional RAM obfuscation scheme where program execution can be contingent on the blockchain state, thereby enabling applications like pay-to-use programs.
Despite its theoretical value, our construction is impractical due to a "bit-by-bit" signing requirement for the state root and an inefficient method for storing validator keys. We thus posit the construction of a practical -OTP as a significant open problem. This work provides the first theoretical pathway for building such primitives without extractable witness encryption, representing a novel step for blockchain-secured cryptography.
Key Exchange with Tight (Full) Forward Secrecy via Key Confirmation
Weak forward secrecy (wFS) of authenticated key exchange (AKE) protocols is a passive variant of (full) forward secrecy (FS). A natural mechanism to upgrade from wFS to FS is the use of key confirmation messages which compute a message authentication code (MAC) over the transcript. Unfortunately, Gellert, Gjøsteen, Jacobson and Jager (GGJJ, CRYPTO 2023) show that this mechanism inherently incurs a loss proportional to the number of users, leading to an overall non-tight reduction, even if wFS was established using a tight reduction.
Inspired by GGJJ, we propose a new notion, called one-way verifiable weak forward secrecy (OW-VwFS), and prove that OW-VwFS can be transformed tightly to FS using key confirmation in the random oracle model (ROM). To implement our generic transformation, we show that several tightly wFS AKE protocols additionally satisfy our OW-VwFS notion tightly. We highlight that using the recent lattice-based protocol from Pan, Wagner, and Zeng (CRYPTO 2023) can give us the first lattice-based tightly FS AKE via key confirmation in the classical random oracle model. Besides this, we also obtain a Decisional-Diffie-Hellman-based protocol that is considerably more efficient than the previous ones.
Finally, we lift our study of FS via key confirmation to the quantum random oracle model (QROM). While our security reduction is overall non-tight, it matches the best existing bound for wFS in the QROM (Pan, Wagner, and Zeng, ASIACRYPT 2023); namely, it is square-root- and session-tight. Our analysis is in the multi-challenge setting, which is more realistic than the single-challenge setting of Pan et al.
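In generic form, the key-confirmation mechanism discussed above derives a MAC key from the session secret and authenticates the transcript before either side accepts; the sketch below is a schematic illustration, not any of the paper's concrete protocols.

```python
# Generic key confirmation: both sides derive a MAC key from the session
# secret of the underlying (wFS-secure) key exchange and exchange MACs over
# the full transcript; a party accepts only if the received tag verifies.
import hmac, hashlib

def confirm_tags(session_secret: bytes, transcript: bytes):
    k_mac = hashlib.sha256(b"confirm" + session_secret).digest()
    tag_i = hmac.new(k_mac, b"initiator" + transcript, hashlib.sha256).digest()
    tag_r = hmac.new(k_mac, b"responder" + transcript, hashlib.sha256).digest()
    return tag_i, tag_r

secret = b"\x01" * 32                    # stand-in for the exchanged key
transcript = b"msg1 || msg2"             # all protocol messages so far
tag_i, tag_r = confirm_tags(secret, transcript)
# the responder accepts only if the initiator's tag verifies:
print(hmac.compare_digest(tag_i, confirm_tags(secret, transcript)[0]))
```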
Highway to Hull: An Algorithm for Solving the General Matrix Code Equivalence Problem
The matrix code equivalence problem consists, given two matrix spaces of dimension , in finding invertible matrices and such that . Recent signature schemes such as MEDS and ALTEQ relate their security to the hardness of this problem. Recent works by Narayanan, Qiao and Tang on the one hand and by Ran and Samardjiska on the other tackle this problem. The former is restricted to the "cubic" case and succeeds in operations. The latter is an algebraic attack on the general problem whose complexity is not fully understood and which succeeds only on instances. We present a novel algorithm which solves the problem in the general case. Our approach consists in reducing the problem to the matrix code conjugacy problem, i.e., the case . For the latter problem, similarly to the permutation code equivalence problem in the Hamming metric, a natural invariant based on the hull of the code can be used. Next, the equivalence of codes can be deduced using a standard list-collision argument. For , our algorithm achieves the same time complexity as Narayanan et al. but with a lower space complexity. Moreover, ours extends to a much broader range of parameters.
MIZAR: Boosting Secure Three-Party Deep Learning with Co-Designed Sign-Bit Extraction and GPU Acceleration
Three-party secret-sharing-based computation has emerged as a promising approach for secure deep learning, benefiting from its high throughput. However, it still faces persistent challenges in computing complex operations such as secure Sign-Bit Extraction, particularly in high-latency and low-bandwidth networks. A recent work, Aegis (Lu et al., Cryptology ePrint 2023), made significant strides by proposing a constant-round DGK-style Sign-Bit Extraction protocol with GPU acceleration on Piranha (Watson et al., USENIX Security 2022). However, Aegis exhibits two critical limitations: it (i) overlooks the use of bit-wise prefix-sum, and (ii) inherits non-optimized modular arithmetic over prime fields and excessive memory overhead from the underlying GPU-based MPC framework. This results in suboptimal performance in terms of communication, computation, and GPU memory usage.
Driven by the limitations of Aegis, we propose an optimized constant-round secure Sign-Bit Extraction protocol with communication and GPU-specific optimizations. Concretely, we construct a new masked randomized list by exploiting the upper bound of the bit-wise prefix-sum to reduce online communication by up to , and integrate fast modular-reduction and kernel-fusion techniques to enhance GPU utilization in MPC protocols. Besides, we propose specific optimizations for secure piecewise polynomial approximations and Maxpool computation in neural network evaluations. Finally, we instantiate these protocols as a framework, MIZAR, and report their improved performance over state-of-the-art GPU-based solutions: (i) for secure Sign-Bit Extraction, we achieve a speedup of -- and reduce communication by -- ; (ii) furthermore, we improve the performance of secure evaluation of nonlinear functions and neural networks by -- ; (iii) lastly, our framework achieves -- GPU memory savings.
TrafficProof: Privacy-Preserving Reliable Traffic Information Sharing in Social Internet of Vehicles
In the Social Internet of Vehicles (SIoV), effective data sharing is essential for applications including road safety, traffic management, and situational awareness. However, the decentralized and open nature of SIoV presents significant challenges in simultaneously ensuring data integrity, user privacy, and system accountability. This paper presents a protocol for secure and location-accurate traffic data sharing that fully preserves the anonymity and privacy of participating witnesses. The protocol leverages zero-knowledge proofs (ZKPs) to allow vehicles to broadcast redacted traffic information—such as images—tied to specific geographic locations, while withholding both the original content and the identity of the reporting vehicle. To ensure the authenticity of the redacted content and the legitimacy of the witness, an additional ZKP is used to privately validate both elements. Upon receiving a report, the verifying node checks the submitted proofs, aggregates validated inputs, and publishes the resulting metadata to both IPFS and a blockchain. This design ensures public verifiability, tamper resistance, and the reliability of the shared data, while maintaining strong privacy guarantees through cryptographic anonymity. To improve the efficiency of proof generation on resource-constrained devices, the protocol employs folding-based ZKP constructions. We conduct a formal security and soundness analysis of the protocol and implement a proof-of-concept, which is publicly available as open-source software. Experimental evaluations on commodity hardware demonstrate that the protocol is computationally efficient and introduces less than 1.5\% communication overhead relative to the size of the shared traffic data, indicating its suitability for real-world deployment.
Achieving "beyond CCA1" security for linearly homomorphic encryption, without SNARKs?
In the wake of Manulis and Nguyen's Eurocrypt'24 paper, new CCA security notions, vCCA and vCCAD, and associated construction blueprints have been proposed to leverage either CPA or CPAD secure FHE beyond the CCA1 security barrier. These two notions are the strongest CCA security notions so far achievable, respectively, by correct and approximate homomorphic schemes. However, the only known construction strategies intimately require advanced SNARK machinery, undermining their practicality. In this context, this paper is an attempt to achieve these advanced CCA security notions in the restricted case of linearly homomorphic encryption, without resorting to SNARKs. To do so, we investigate the relationship between the Linear-Only Homomorphism (LOH) assumption, an assumption that has been used for more than a decade at the core of several proof-of-knowledge constructions, and these two recent security notions (vCCA and vCCAD). On the bright side, when working under the correctness assumption, we establish that the LOH property is sufficient to achieve vCCA security in both the private- and public-key settings. In the public-key setting, we further show that a surprisingly simple and previously known Paillier-based construction also achieves this level of security, at only twice the cost of the baseline scheme. We then turn our attention to LWE-based schemes, for which the Pandora's box of decryption errors opens. In the private-key setting, we are able to achieve CPAD and vCCAD security, but only in a fairly restrictive non-adaptive setting, in which vCCAD collapses onto a weak relaxation of CCA1. Finally, we achieve adaptive vCCAD security provided that the number of ciphertexts given to the adversary is suitably restricted. While bridging the gap towards credible practicality requires further work, this is a first step towards obtaining linearly homomorphic schemes achieving these recent CCA security notions by means only of relatively lightweight machinery.
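The linear homomorphism underlying the Paillier-based construction is easy to demonstrate (toy parameters, our illustration; the construction above adds machinery on top of this baseline): ciphertexts multiply, plaintexts add.

```python
# Textbook Paillier with g = n + 1: Enc(m1) * Enc(m2) mod n^2 decrypts to
# m1 + m2. Toy primes for illustration only; real use needs large primes.
import math, secrets

p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1
L = lambda u: (u - 1) // n
mu = pow(L(pow(g, lam, n2)), -1, n)

def enc(m):
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

c = (enc(12345) * enc(54321)) % n2   # homomorphic addition of plaintexts
print(dec(c))                        # 66666
```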
Non-interactive Anonymous Tokens with Private Metadata Bit
Anonymous tokens with private metadata bit (ATPM) have received increased interest as a method for anonymous user authentication while also allowing the issuer to embed trust signals inside the token that are only readable by the authority who holds the secret key. A drawback of all existing ATPM constructions is that they require interaction between the client and the issuer during the issuance process. In this work, we build the first non-interactive anonymous tokens (NIAT) with private metadata bit, inspired by the recent work of Hanzlik (Eurocrypt '23) on non-interactive blind signatures. We discuss how the property of non-interactivity during issuance allows for more efficient protocols that avoid the need for online signing. We construct an efficient NIAT scheme based on Structure-preserving Signatures on Equivalence Classes (SPS-EQ) and experimentally evaluate its performance. We also present an extension to our NIAT construction that allows the identification of clients who attempt to double-spend a token (i.e., present the same token twice) and argue that non-interactive schemes are uniquely positioned to offer this essential feature.
On the Adaptive Security of FROST
FROST and its variants are state-of-the-art protocols for threshold Schnorr signatures that are used in real-world applications. While static security of these protocols has been shown by several works, the security of these protocols under adaptive corruptions—where an adversary can choose which parties to corrupt at any time based on information it learns during protocol executions—has remained a notorious open problem that has received renewed attention due to recent standardization efforts for threshold schemes.
We show adaptive security (without erasures) of FROST and several variants under different corruption thresholds and computational assumptions. Let n be the total number of parties, t+1 the signing threshold, and t_c an upper bound on the number of corrupted parties.
1. We prove adaptive security when t_c = t/2 in the random oracle model (ROM) based on the algebraic one-more discrete logarithm assumption (AOMDL)—the same conditions under which FROST is proven statically secure.
2. We introduce the low-dimensional vector representation (LDVR) problem, parameterized by t_c, t, and n, and prove adaptive security in the algebraic group model (AGM) and ROM based on the AOMDL assumption and the hardness of the LDVR problem for the corresponding parameters. In some regimes (including some t_c > t/2) we show the LDVR problem is unconditionally hard, while in other regimes (in particular, when t_c = t) we show that hardness of the LDVR problem is necessary for adaptive security to hold. In fact, we show that hardness of the LDVR problem is necessary for proving adaptive security of a broad class of threshold Schnorr signatures.
Uniform Black-Box Separations via Non-Malleable Extractors
We construct -non-malleable extractors---which allow an attacker to tamper with a source times---for high min-entropy sources samplable by poly-time hierarchy circuits and for tampering classes corresponding to poly-time hierarchy functions from derandomization-type assumptions.
We then show an application of this new object to ruling out constructions of succinct non-interactive arguments (SNARGs) secure against \emph{uniform} adversaries from \emph{uniform} falsifiable assumptions via a class of black-box reductions that has not been previously considered in the literature. This class of black-box reductions allows the reduction to arbitrarily set the \emph{coins}, as well as the input, of the uniform adversary it interacts with.
The class of reductions we consider is restricted in allowing only non-adaptive queries to the adversary.
Single-server Stateful PIR with Verifiability and Balanced Efficiency
Recent stateful private information retrieval (PIR) schemes have significantly improved amortized computation and amortized communication while aiming to keep client storage minimal. However, all the schemes in the literature still suffer from a poor tradeoff between client storage and computation.
We present BALANCED-PIR, a stateful PIR scheme that effectively balances computation and client storage. For a database of a million entries, each of 8 bytes, our scheme requires 0.2 MB of client storage, 0.2 ms of amortized computation, and 11.14 KB of amortized communication. Compared with the state-of-the-art scheme using a similar storage setting, our scheme is almost 9x better in amortized computation and 40x better in offline computation.
Verifiable private information retrieval has been gaining more attention recently. However, all existing schemes require linear amortized computation and huge client storage. We present Verifiable BALANCED-PIR, a verifiable stateful PIR scheme with sublinear amortized computation and small client storage. In fact, our Verifiable BALANCED-PIR adds modest computation, communication, and storage costs on top of BALANCED-PIR. Compared with the state-of-the-art verifiable scheme, the client storage of our scheme is 100x smaller, the amortized computation is 15x less, and the amortized communication is 2.5x better.
Post-Quantum Security of Keyed Sponge-Based Constructions through a Modular Approach
Sponge-based constructions have seen widespread adoption, as represented by the standardization of SHA-3 and Ascon by NIST. Yet, their provable security against quantum adversaries has not been investigated much. This paper studies the post-quantum security of some keyed sponge-based constructions in the quantum ideal permutation model, focusing on the Ascon AEAD mode and KMAC as concrete instances. For the Ascon AEAD mode, we prove post-quantum security in the single-user setting up to about queries, where is the capacity and is the key length. Unlike the recent work by Lang et al. (ePrint 2025/411), we do not need non-standard restrictions on nonce sets or the number of forgery attempts. In addition, our result guarantees even non-conventional security notions such as nonce-misuse-resilience confidentiality and authenticity under release of unverified plaintext. For KMAC, we show security up to about queries, where is the rate, ignoring some small factors. In fact, we prove the security not only of KMAC but also of general modes such as the inner-, outer-, and full-keyed sponge functions.
We take a modular proof approach, adapting ideas from several works in the classical ideal permutation model to the quantum setting: for the Ascon AEAD mode, we observe that it can be regarded as an iterative application of a Tweakable Even-Mansour (TEM) cipher with a single low-entropy key, and we give the security bound as the sum of the post-quantum TPRP advantage of TEM and the classical security advantage of Ascon when TEM is replaced with a secret random object. The proof for keyed sponges is obtained analogously by regarding them as built on an Even-Mansour (EM) cipher with a single low-entropy key.
The post-quantum security of (T)EM has been proven by Alagic et al. (Eurocrypt 2022 and Eurocrypt 2024). However, they show security only when the keys are uniformly random. In addition, the proof techniques, the so-called resampling lemmas, are inapplicable to our case with a low-entropy key. Thus, we introduce and prove a modified resampling lemma, thereby showing the security of (T)EM with a low-entropy key.
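Schematically, the keyed-sponge skeleton analyzed above absorbs message blocks into an r-bit rate while a c-bit capacity stays hidden, then squeezes output; in the toy sketch below a 128-bit ARX permutation stands in for Ascon's or Keccak's, and all parameters are illustrative.

```python
# Toy keyed sponge: the state is (rate word x, capacity word y); the key
# seeds the state, message blocks are XORed into the rate between
# applications of an invertible toy permutation, and tags are squeezed
# from the rate.
MASK = (1 << 64) - 1

def toy_perm(x, y):                       # invertible ARX mixing, 8 rounds
    for i in range(8):
        x = (x + y) & MASK
        y = ((y << 13 | y >> 51) & MASK) ^ x
        x = (x << 32 | x >> 32) & MASK
        x ^= i                            # round constant
    return x, y

def keyed_sponge_mac(key64, msg_blocks64, out_blocks=2):
    x, y = key64, 0                       # rate = x, capacity = y (kept hidden)
    for m in msg_blocks64:                # absorb
        x, y = toy_perm(x ^ m, y)
    out = []
    for _ in range(out_blocks):           # squeeze
        out.append(x)
        x, y = toy_perm(x, y)
    return out

print([hex(t) for t in keyed_sponge_mac(0xdeadbeef, [1, 2, 3])])
```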
Adaptive TDF from PKE with Randomness Recoverability and Pseudorandom Ciphertext Property
We present a generic construction of adaptive trapdoor function (TDF) from any public-key encryption (PKE) scheme that satisfies two properties: the randomness recoverability and the pseudorandom ciphertext property. As a direct corollary, we can obtain adaptive TDF from any trapdoor permutation (TDP) whose domain is both recognizable and sufficiently dense. In our construction, we can prove that the function's output is indistinguishable from uniform even when an adversary has access to the inversion oracle.
Efficient Mixed-Mode Oblivious RAMs
Oblivious RAMs (ORAMs) allow data outsourcing to servers so that the access pattern to the outsourced data is kept private. They are also a crucial building block for enabling private RAM access within secure multi-party computation (MPC). In recent years, schemes that match the ORAM lower bound have been proposed in both the outsourcing setting and the RAM-model MPC setting, seemingly writing an epilogue for the theory of ORAM. In this paper, we initiate a study of mixed-mode ORAMs, where accesses to the ORAM are a mix of both public and private accesses. Although existing ORAMs can support public accesses by treating them as private ones, achieving better efficiency is highly non-trivial.
- We present a mixed-mode ORAM algorithm, assuming the existence of private information retrieval (PIR). When the PIR scheme is communication-efficient, this ORAM achieves the best possible outcome: it has a bandwidth blowup of for private accesses and for public accesses. This construction can be easily extended for the MPC setting achieving circuit size for private accesses to -sized blocks and circuit size for public accesses to the same array.
- We instantiate the above protocol in the three-party computation (3PC) setting with more concrete optimizations, yielding a protocol that performs almost as efficiently as state-of-the-art RAM-3PC protocols for private accesses while being more efficient for public accesses in the LAN setting.
Private Signaling Secure Against Actively Corrupted Servers
Private signaling allows servers to identify a recipient's messages on a public bulletin board without knowing the recipient's metadata. It is a central tool for systems like privacy-preserving blockchains and anonymous messaging. However, aside from TEE-based designs, current constructions all assume that the servers are only passively corrupted, which significantly limits their practical relevance. In this work, we present a TEE-free simulation-secure private signaling protocol assuming two non-colluding servers, either of which can be actively corrupted.
Crucially, we convert signal retrieval into a problem similar to private set intersection and use custom-built zero-knowledge proofs to ensure consistency with the public bulletin board. As a result, our protocol achieves lower server-to-server communication overhead and a much smaller digest compared to the state-of-the-art semi-honest protocol. For example, for a board size of messages, the resulting digest size is only 33.57KB. Our protocol is also computationally efficient: retrieving private signals takes only about 2 minutes, using 16 threads and a LAN network.
Arc: Accumulation for Reed--Solomon Codes
Proof-Carrying Data (PCD) is a foundational tool for ensuring the correctness of incremental distributed computations that has found numerous applications in theory and practice. The state-of-the-art PCD constructions are obtained via accumulation or folding schemes. Unfortunately, almost all known constructions of accumulation schemes rely on homomorphic vector commitments (VCs), which results in relatively high computational costs and insecurity in the face of quantum adversaries. A recent work of Bünz, Mishra, Nguyen, and Wang removes the dependence on homomorphic VCs by relying only on the random oracle model, but introduces a bound on the number of consecutive accumulation steps, which in turn bounds the depth of the PCD computation graph and greatly affects prover and verifier efficiency.
In this work, we propose Arc, a novel hash-based accumulation scheme that overcomes this restriction and supports an unbounded number of accumulation steps. The core building block underlying Arc is a new accumulation scheme for claims about proximity of claimed codewords to the Reed--Solomon code. Our approach achieves near-optimal efficiency, requiring a small number of Merkle tree openings relative to the code rate, and avoids the efficiency loss associated with bounded accumulation depth. Unlike prior work, our scheme is also able to accumulate claims up to list-decoding radius, resulting in concrete efficiency improvements.
We use this accumulation scheme to construct two distinct accumulation schemes, again relying solely on random oracles. The first approach accumulates RS proximity claims and can be used as an almost-drop-in replacement in existing PCD deployments based on IOP-based SNARKs.
The second approach directly constructs an accumulation scheme for rank-1 constraint systems (and more generally polynomial constraint systems) that is simpler and more efficient than the former and prior approaches.
We introduce the notion of Interactive Oracle Reductions (IORs) to enable a modular and simple security analysis. These extend prior notions of Reductions of Knowledge to the setting of IOPs.
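The objects being accumulated above are Reed--Solomon codewords and claims that a received word is close to one; the toy sketch below (our illustration, over a tiny field) shows an RS encoding and a Hamming-distance proximity check.

```python
# Reed--Solomon codewords are evaluations of a low-degree polynomial over a
# finite field; a proximity claim asserts a word agrees with some codeword
# on all but a small fraction of positions.
P = 97                                    # toy prime field

def rs_encode(coeffs, n):
    return [sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in range(n)]

def hamming_dist(u, v):
    return sum(a != b for a, b in zip(u, v))

codeword = rs_encode([3, 1, 4], n=16)     # degree < 3, rate 3/16
noisy = list(codeword)
noisy[2] = (noisy[2] + 1) % P             # corrupt one position
noisy[9] = (noisy[9] + 5) % P             # ...and another
print(hamming_dist(noisy, codeword))      # 2: close to the code
```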
The Jacobi Factoring Circuit: Quantum Factoring with Near-Linear Gates and Sublinear Space and Depth
We present a compact quantum circuit for factoring a large class of integers, including some whose classical hardness is expected to be equivalent to RSA (but not including RSA integers themselves). Most notably, we factor -bit integers of the form with for in space and depth sublinear in n (specifically, ) using quantum gates; for these integers, no known classical algorithms exploit the relatively small size of to run asymptotically faster than general-purpose factoring algorithms. To our knowledge, this is the first polynomial-time circuit to achieve sublinear qubit count for a classically-hard factoring problem. We thus believe that factoring such numbers has potential to be the most concretely efficient classically-verifiable proof of quantumness currently known.
Our circuit builds on the quantum algorithm for squarefree decomposition discovered by Li, Peng, Du, and Suter (Nature Scientific Reports 2012), which relies on computing the Jacobi symbol in quantum superposition. The technical core of our contribution is a new space-efficient quantum algorithm to compute the Jacobi symbol of x mod N in the regime where N is classical and much larger than x. Our circuit for computing the Jacobi symbol generalizes to related problems, such as computing the greatest common divisor and modular inverses, and thus could be of independent interest.
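For reference, the classical Jacobi symbol algorithm that such circuits must realize reversibly fits in a few lines of Python (a standard textbook method, not the paper's space-efficient quantum circuit):

    def jacobi(a: int, n: int) -> int:
        """Jacobi symbol (a/n) for odd n > 0, via binary quadratic reciprocity."""
        assert n > 0 and n % 2 == 1
        a %= n
        t = 1
        while a:
            while a % 2 == 0:            # pull out factors of two
                a //= 2
                if n % 8 in (3, 5):      # (2/n) = -1 iff n = 3, 5 (mod 8)
                    t = -t
            a, n = n, a                  # quadratic reciprocity flip
            if a % 4 == 3 and n % 4 == 3:
                t = -t
            a %= n
        return t if n == 1 else 0

    # sanity check against Euler's criterion for a prime modulus
    q = 101
    for a in range(1, q):
        legendre = pow(a, (q - 1) // 2, q)
        assert jacobi(a, q) == (1 if legendre == 1 else -1)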
Rewardable Naysayer Proofs
Combining verifiable computation with optimistic approaches is a promising direction to scale blockchain applications.
The basic idea consists of saving computations by avoiding the verification of proofs unless there are complaints.
A key tool for designing systems in this direction was recently proposed by Seres, Glaeser and Bonneau [FC'24], who formalized the concept of a naysayer proof: an efficiently verifiable proof that disproves a more expensive-to-verify original proof.
In this work, we discuss the need to reward naysayer provers, the risks arising from front-running attacks, and the failure of generic approaches that try to defeat them.
Next, we introduce the concept of verifiable delayed naysayer proofs and show a construction leveraging proofs of sequential work, without relying on any additional infrastructure.
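A toy Python sketch of the sequential-delay ingredient, with iterated hashing standing in for a proof of sequential work (real PoSW schemes additionally allow fast verification, which a bare hash chain does not):

    import hashlib

    def delay(x: bytes, t: int) -> bytes:
        # t sequential hash applications; inherently hard to parallelize
        for _ in range(t):
            x = hashlib.sha256(x).digest()
        return x

    # Intuition only: a naysayer publishes its disproof together with
    # delay(statement, T), forcing T sequential steps before the reward can
    # be claimed and thereby blunting front-running by mempool observers.
    tag = delay(b"naysayer-claim", 100_000)
    print(tag.hex()[:16])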
Breaking the 1/λ-Rate Barrier for Arithmetic Garbling
Garbled circuits, introduced in the seminal work of Yao (FOCS, 1986), have received considerable attention in the Boolean setting due to their efficiency and application to round-efficient secure computation. In contrast, arithmetic garbling schemes have received much less scrutiny. The main efficiency measure of garbling schemes is their rate, defined as the bit size of each gate's output divided by the (amortized) size of the garbled gate. Despite recent progress, state-of-the-art garbling schemes for arithmetic circuits suffer from important limitations: all existing schemes are either restricted to B-bounded integer arithmetic circuits (a computational model where the arithmetic is performed over the integers and correctness is only guaranteed if no intermediate computation exceeds the bound B) and achieve constant rate only for very large bounds B, or otherwise have rate at most 1/λ, where λ denotes the security parameter. In this work, we improve this state of affairs in both settings.
- As our main contribution, we introduce the first arithmetic garbling scheme over modular rings that breaks the 1/λ-rate barrier for modular arithmetic garbling. Our construction relies on the power-DDH assumption.
- As a secondary contribution, we introduce a new arithmetic garbling scheme for B-bounded integer arithmetic that achieves constant rate for significantly smaller bounds B than previously possible. Our construction relies on a new, non-standard KDM-security assumption on Paillier encryption with small exponents.
How to Trace Viral Content in End-to-End Encrypted Messaging
We study the problem of combating *viral* misinformation campaigns in end-to-end encrypted (E2EE) messaging systems such as WhatsApp. We propose a new notion of Hop Tracking Signatures (HTS) that allows for tracing originators of messages that have been propagated on long forwarding paths (i.e., gone viral), while preserving anonymity of everyone else. We define security for HTS against malicious servers.
We present both negative and positive results for HTS: on the one hand, we show that HTS does not admit succinct constructions if the tracing and anonymity thresholds differ by exactly one "hop". On the other hand, by allowing a larger gap between the tracing and anonymity thresholds, we can build succinct HTS schemes whose signature size does not grow with the length of the forwarding path. Our positive result relies on streaming algorithms and strong cryptographic assumptions.
Prior works on tracing within E2EE messaging systems either do not achieve security against malicious servers or focus only on tracing originators of pre-defined banned content.
Foundations of Platform-Assisted Auctions
Today, many auctions are carried out with the help of intermediary platforms like Google and eBay. These platforms serve as a rendezvous point for buyers and sellers, and charge a fee for their service. We refer to such auctions as platform-assisted auctions. Traditionally, the auction theory literature has mainly focused on designing auctions that incentivize buyers to bid truthfully, assuming that the platform always faithfully implements the auction. In practice, however, platforms have been found to manipulate auctions to earn more profit, resulting in high-profile anti-trust lawsuits.
We propose a new model for studying platform-assisted auctions in the permissionless setting, where anyone can register and participate in the auction. We explore whether it is possible to design a dream auction in this new model, such that honest behavior is the utility-maximizing strategy for each individual buyer, the platform, the seller, as well as platform-seller or platform-buyer coalitions. Through a collection of feasibility and infeasibility results, we carefully characterize the mathematical landscape of platform-assisted auctions.
Interestingly, our work reveals exciting connections between cryptography and mechanism design. We show how cryptography can aid the design of an efficient platform-assisted auction with dream properties. Although a line of works has also used multi-party computation (MPC) or blockchains to remove the reliance on a trusted auctioneer, our work is distinct in several dimensions. First, we initiate a systematic exploration of the game-theoretic implications when the service providers (e.g., the nodes that implement the MPC or blockchain protocol) are strategic and can collude with sellers or buyers. Second, we observe that the full simulation paradigm used in the standard MPC literature is too stringent and leads to high asymptotic costs: because every player has a different private outcome in an auction protocol, to the best of our knowledge, running any generic MPC protocol among the players would incur at least quadratic total cost in the number of buyers n.

We propose a new notion of simulation called utility-dominated emulation that is sufficient for guaranteeing the game-theoretic properties needed in an auction. Under this new notion of simulation, we show how to design auction protocols with quasilinear efficiency, roughly an n-fold improvement over any generic approach.
Truncation Untangled: Scaling Fixed-Point Arithmetic for Privacy-Preserving Machine Learning to Large Models and Datasets
Fixed-point arithmetic (FPA) is widely used in Privacy-Preserving Machine Learning (PPML) to efficiently handle decimal values. However, repeated multiplications in FPA can lead to overflow, as the fractional part doubles in size with each multiplication. To address this, truncation is applied after multiplication to maintain precision. Various truncation schemes based on Secure Multiparty Computation (MPC) exist, but the trade-offs between accuracy and efficiency across PPML models and datasets remain underexplored. In this work, we analyze and consolidate the different truncation approaches from the MPC literature.
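To make the fraction-doubling issue concrete, here is a plaintext Python sketch of fixed-point multiplication and truncation (MPC truncation protocols perform the same correction on secret shares):

    F = 13                        # number of fractional bits
    def enc(x: float) -> int:     # fixed-point encode
        return round(x * 2**F)
    def dec(v: int) -> float:     # fixed-point decode
        return v / 2**F

    a, b = enc(1.5), enc(-2.25)
    prod = a * b                  # product now carries 2F fractional bits
    trunc = prod >> F             # truncate back to F fractional bits (2F -> F)
    # Python's >> is an arithmetic shift, so signs are preserved.
    print(dec(trunc))             # approximately -3.375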
We conduct the first large-scale systematic evaluation of PPML inference accuracy across truncation schemes, ring sizes, neural network architectures, and datasets. Our study provides clear guidelines for selecting the optimal truncation scheme and parameters for PPML inference. All evaluations are implemented in the open-source HPMPC MPC framework, facilitating future research and adoption.
Beyond our large-scale evaluation, we also present improved constructions for each truncation scheme, achieving up to a threefold reduction in communication and round complexity over existing schemes. Additionally, we introduce optimizations tailored to PPML, such as strategically fusing different neural network layers. This leads to a mixed-truncation scheme that balances truncation costs with accuracy, eliminating communication overhead in the online phase while matching the accuracy of plaintext floating-point PyTorch inference for VGG-16 on the ImageNet dataset.
Cryptographic Commitments on Anonymizable Data
Local Differential Privacy (LDP) mechanisms consist of (locally) adding controlled noise to data in order to protect the privacy of their owner. In this paper, we introduce a new cryptographic primitive called an LDP commitment. Usually, a commitment ensures that the committed value cannot be modified before it is revealed. In the case of an LDP commitment, however, the value is revealed after being perturbed by an LDP mechanism. Opening an LDP commitment therefore requires a proof that the mechanism has been correctly applied to the value, to ensure that the value is still usable for statistical purposes. We also present a security model for this primitive, in which we define the hiding and binding properties. Finally, we present a concrete scheme for an LDP staircase mechanism (generalizing the randomized response technique), based on classical cryptographic tools and standard assumptions. We provide an implementation in Rust that demonstrates its practical efficiency (the generation of a commitment requires just a few milliseconds).

On the application side, we show how our primitive can be used to simultaneously ensure privacy, usability and traceability of medical data used for statistical studies in an open science context. We consider a scenario where a hospital provides sensitive patient data, signed by doctors, to a research center after it has been anonymized, so that the research center can verify both the provenance of the data (i.e., verify the doctors' signatures even though the data has been noised) and that the data has been correctly anonymized (i.e., is still usable even though it has been anonymized).
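To recall the baseline that the staircase mechanism generalizes, here is a minimal Python sketch of binary randomized response and its unbiased estimator (toy parameters; the commitment and proof layers of the paper are not modeled):

    import math, random

    def randomized_response(bit: int, epsilon: float) -> int:
        # keep the true bit with probability e^eps / (e^eps + 1)
        p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1)
        return bit if random.random() < p_keep else 1 - bit

    def estimate(reports, epsilon):
        # invert the bias: E[report] = (2p - 1) * x + (1 - p)
        p = math.exp(epsilon) / (math.exp(epsilon) + 1)
        mean = sum(reports) / len(reports)
        return (mean + p - 1) / (2 * p - 1)

    data = [1] * 300 + [0] * 700
    noisy = [randomized_response(b, 1.0) for b in data]
    print(estimate(noisy, 1.0))   # close to the true mean 0.3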
Breaktooth: Breaking Security and Privacy in Bluetooth Power-Saving Mode
With the increasing demand for Bluetooth devices, many Bluetooth devices support a power-saving mode to reduce power consumption. One feature of the power-saving mode is that the Bluetooth sessions among devices are temporarily disconnected or close to being disconnected. Prior works have shown that the power-saving mode is vulnerable to denial-of-sleep (DoSL) attacks that interfere with a Bluetooth device's transition to the power-saving mode, thereby increasing its power consumption. However, to the best of our knowledge, no prior work has analyzed vulnerabilities or attacks on the state after transitioning to the power-saving mode.
To address this issue, we present an attack that abuses two novel vulnerabilities in sleep mode, which is one of the Bluetooth power-saving modes, to break Bluetooth sessions. We name the attack Breaktooth. The attack is the first to abuse the vulnerabilities as an entry point to hijack Bluetooth sessions between victims. The attack also allows overwriting the link key between the victims using the hijacked session, enabling arbitrary command injection on the victims. Furthermore, while many prior attacks assume that attackers can forcibly disconnect the Bluetooth session using methods such as jamming to launch their attacks, our attack does not require such assumptions, making it more realistic.
In this paper, we present the root causes of the Breaktooth attack and their impact. We also provide the technical details of how attackers can secretly detect the sleep mode of their victims. The attackers can easily recognize the state of the victim's Bluetooth session remotely using a standard Linux command. Additionally, we develop a low-cost toolkit to perform our attack and confirm the effectiveness of our attack. Then, we evaluate the attack on 17 types of commodity Bluetooth keyboards, mice and audio devices that support the sleep mode and show that the attack poses a serious threat to Bluetooth devices supporting the sleep mode. To prevent our attack, we present defenses and their proof-of-concept. We responsibly disclosed our findings to the Bluetooth SIG. We also released the toolkit as open-source.
Synergy: A Lightweight Block Cipher with Variable Bit Rotation Feistel Network
Synergy is a lightweight block cipher designed for resource-constrained environments such as IoT devices, embedded systems, and mobile applications. It is built around a 16-round Feistel network in which 8 independent pseudorandom number generators (PRNGs) generate per-block unique round keys, ensuring strong diffusion and confusion. With a 1024-bit key and a 64-bit block size, Synergy mitigates vulnerability to ML-based cryptanalysis by combining a large key size with key- and data-dependent bit rotations, which reduce statistical biases and increase unpredictability. By utilizing 32-bit arithmetic for efficient processing, Synergy achieves high throughput, low latency, and low power consumption, providing performance and security for applications where both are critical.
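As a rough illustration of the design ingredients named above (a Feistel network with key- and data-dependent rotations), here is a toy Python sketch; the round function, key schedule and constants are invented for illustration and are NOT the Synergy specification:

    # Toy 16-round Feistel with key- and data-dependent rotations.
    MASK32 = 0xFFFFFFFF

    def rotl32(x, r):
        r &= 31
        return ((x << r) | (x >> (32 - r))) & MASK32

    def round_fn(half, rk):
        r = (rk ^ half) & 31                 # rotation depends on key and data
        return (rotl32(half, r) + rk) & MASK32

    def feistel(block64, round_keys, decrypt=False):
        L, R = block64 >> 32, block64 & MASK32
        ks = reversed(round_keys) if decrypt else round_keys
        for rk in ks:
            L, R = R, L ^ round_fn(R, rk)
        return ((R << 32) | L) & ((1 << 64) - 1)   # output swap makes the
                                                   # same loop its own inverse
    keys = [(0x9E3779B9 * i) & MASK32 for i in range(1, 17)]   # toy schedule
    ct = feistel(0x0123456789ABCDEF, keys)
    assert feistel(ct, keys, decrypt=True) == 0x0123456789ABCDEF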
Improved Resultant Attack against Arithmetization-Oriented Primitives
In the last decade, the introduction of advanced cryptographic protocols operating on large finite fields has raised the need for efficient cryptographic primitives in this setting, commonly referred to as Arithmetization-Oriented (AO). The cryptanalysis of AO hash functions is essentially done through the study of the CICO problem on the underlying permutation. Two recent works at Crypto 2024 and Asiacrypt 2024 managed to solve the CICO problem much more efficiently than traditional Gröbner basis methods, using respectively advanced Gröbner basis techniques and resultants.
In this paper, we propose an attack framework based on resultants that applies to a wide range of AO permutations and improves significantly upon these two recent works. Our improvements mainly come from an efficient reduction procedure that we propose and rigorously analyze, taking advantage of fast multivariate multiplication. We present the most efficient attacks on Griffin, Arion, Anemoi, and Rescue, and show that most variants of Griffin, Arion, and Anemoi fail to reach their claimed security level. For the first time, we successfully break a parameter set of Rescue. The presented theory and complexity estimates are backed up by experimental attacks; notably, we practically find CICO solutions for more rounds of Griffin, Anemoi, and Rescue than the previous best practical attacks.
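To illustrate the basic elimination step that resultant-based attacks build on, here is a toy sympy sketch (the polynomials are arbitrary stand-ins; the paper's contribution is a much more efficient reduction for structured AO systems):

    from sympy import symbols, resultant, real_roots

    x, y = symbols("x y")
    f = x**3 + 2*x*y + y**2 - 1      # toy constraints standing in for a
    g = x*y + y**3 - 3               # CICO system on an AO permutation

    # Eliminate y: the resultant is a univariate polynomial in x that
    # vanishes exactly on the x-coordinates of common solutions.
    R = resultant(f, g, y)
    print(R.as_poly(x).degree())
    print([r.evalf(5) for r in real_roots(R, x)])  # back-substitute to get y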
Integral Resistance of Block Ciphers with Key Whitening by Modular Addition
Integral attacks exploit structural weaknesses in symmetric cryptographic primitives by analyzing how subsets of inputs propagate to produce outputs with specific algebraic properties. For (XOR) key-alternating block ciphers using (independent) round keys, Hebborn et al. (ASIACRYPT'21) established the first non-trivial lower bounds on the number of rounds required to ensure integral resistance in a quite general sense. For the case of adding keys by modular addition, no security arguments were known so far. Here, we present a unified framework for analyzing the integral resistance of primitives using (word-wise) modular addition for key whitening, allowing us not only to fill this gap in security arguments, but also to overcome the heavy computational cost inherent in the XOR-whitening case.
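For intuition, a minimal Python illustration of an integral (zero-sum) property surviving modular-addition key whitening; the S-box is PRESENT's, and the one-round setting is our own toy example, not the paper's framework:

    # 4-bit bijective S-box (PRESENT) applied after modular-addition whitening.
    S = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]

    for k in range(16):                   # any whitening key
        xor_sum = 0
        for x in range(16):               # saturated input set: all 16 values
            xor_sum ^= S[(x + k) % 16]    # modular addition, then S-box
        assert xor_sum == 0               # outputs remain balanced
    print("integral (zero-sum) property holds for every key")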
The Nonlinear Filter Model of Stream Cipher Redivivus
The nonlinear filter model is an old and well-understood approach to the design of secure stream ciphers. Extensive research over several decades has shown how to attack stream ciphers based on this model and has identified the security properties required of the Boolean function used as the filtering function to resist such attacks. This led to the problem of constructing Boolean functions which provide adequate security and are at the same time efficient to implement. Unfortunately, over the last two decades no good solutions to this problem have appeared in the literature. The lack of good solutions has effectively led to the nonlinear filter model becoming more or less obsolete. This is a big loss to the cryptographic design toolkit, since the great advantages of the nonlinear filter model are its simplicity, well-understood security, and the potential to provide low-cost solutions for hardware-oriented stream ciphers. In this paper we construct balanced functions on an odd number n of variables with provable bounds on the linear bias, algebraic degree, algebraic immunity, and fast algebraic immunity, which can be implemented using a small number of NAND gates. The functions are obtained from a simple modification of the well-known class of Maiorana-McFarland bent functions. By appropriately choosing n and the length of the linear feedback shift register, we show that it is possible to obtain examples of stream ciphers which are secure against known types of attacks at various security levels, and we provide concrete proposals for several parameter choices. For the 80-bit, 128-bit, and 256-bit security levels, the circuits for the corresponding stream ciphers require about 1743.5, 2771.5, and 5607.5 NAND gates respectively. For the 80-bit and 128-bit security levels, the gate count estimates compare quite well to the famous ciphers Trivium and Grain-128a respectively, while for the 256-bit security level, we do not know of any other stream cipher design with such a low gate count.
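For readers unfamiliar with the model, a minimal Python sketch of a nonlinear filter generator (toy LFSR length, taps and filtering function of our choosing, not the functions constructed in the paper):

    # Minimal nonlinear filter generator: an LFSR whose state is filtered
    # each clock by a Boolean function.
    def keystream(state, taps, filter_pos, f, n_out):
        state = list(state)
        out = []
        for _ in range(n_out):
            out.append(f(*(state[i] for i in filter_pos)))
            fb = 0
            for t in taps:                 # linear feedback over GF(2)
                fb ^= state[t]
            state = state[1:] + [fb]       # shift out oldest bit, shift in feedback
        return out

    # a simple nonlinear filtering function (degree 2, for illustration only)
    maj = lambda a, b, c: (a & b) ^ (a & c) ^ (b & c)

    # 7-bit LFSR with feedback polynomial x^7 + x + 1 (a primitive trinomial)
    bits = keystream([1, 0, 0, 1, 0, 1, 1], taps=[0, 1],
                     filter_pos=[0, 3, 6], f=maj, n_out=20)
    print("".join(map(str, bits)))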
Designated-Verifier SNARGs with One Group Element
We revisit the question of minimizing the proof length of designated-verifier succinct non-interactive arguments (dv-SNARGs) in the generic group model. Barta et al. (Crypto 2020) constructed such dv-SNARGs with inverse-polynomial soundness in which the proof consists of only two group elements. For negligible soundness, all previous constructions required a super-constant number of group elements.
We show that one group element suffices for negligible soundness. Concretely, we obtain dv-SNARGs (in fact, dv-SNARKs) with negligible soundness error in which proofs consist of a single element of a generic group plus a small number of additional bits. In particular, the proof length in group elements is constant even for negligible soundness error.
In more concrete terms, compared to the best known SNARGs using bilinear groups, we get dv-SNARGs with substantially shorter proofs at a comparable security level. We are not aware of any practically feasible proof systems that achieve similar succinctness, even fully interactive or heuristic ones.
Our technical approach is based on a novel combination of techniques for trapdoor hash functions and group-based homomorphic secret sharing with linear multi-prover interactive proofs.
A Complete Characterization of One-More Assumptions In the Algebraic Group Model
One-more problems like One-More Discrete Logarithm (OMDL) and One-More Diffie–Hellman (OMDH) have found wide use in cryptography, due to their ability to naturally model security definitions for interactive primitives like blind signatures and oblivious PRFs. Furthermore, a generalization of OMDH called Threshold OMDH (TOMDH) has proven useful for building threshold versions of interactive protocols. However, due to their complexity it is often unclear how hard such problems actually are, leading cryptographers to analyze them in idealized models like the Generic Group Model (GGM) and the Algebraic Group Model (AGM). In this work we give a complete characterization of known group-based one-more problems in the AGM, using the q-DL hierarchy of assumptions defined in the work of Bauer, Fuchsbauer and Loss (CRYPTO '20).
1. Regarding (T)OMDH, we show that (T)OMDH is part of the q-DL hierarchy in the AGM; in particular, each OMDH problem is equivalent to a corresponding q-DL problem. Along the way we find and repair a flaw in the original GGM hardness proof of TOMDH, thereby giving the first correct proof that TOMDH is hard in the GGM.
2. Regarding OMDL, we show that the q-OMDL problems constitute an infinite hierarchy of problems in the AGM that is incomparable to the q-DL hierarchy; that is, q-OMDL is separate from q'-OMDL whenever q and q' differ, and is also separate from q'-DL except for specific parameter choices.
Compiled Nonlocal Games from any Trapdoor Claw-Free Function
A recent work of Kalai et al. (STOC 2023) shows how to compile any multi-player nonlocal game into a protocol with a single computationally-bounded prover. Subsequent works have built on this to develop new cryptographic protocols, where a completely classical client can verify the validity of quantum computations done by a quantum server — classical verification of quantum computations, for short. Their compiler relies on the existence of quantum fully homomorphic encryption.
In this work, we propose a new compiler for transforming nonlocal games into single-prover protocols.
Our compiler is based on a novel blind classical delegation of quantum computation protocol, constructed within the framework of measurement-based quantum computation. The latter construction combines a new blind remote state preparation protocol with a generalization of the universal blind quantum computation protocol by Broadbent, Fitzsimons, and Kashefi (FOCS 2009).
These two constructions may also be of independent interest, as the techniques employed could enable the blind construction of more specific quantum states, and the generalized framework allows for the use of blind quantum computation in a broader range of applications, particularly in quantum cryptographic protocols.
The compiler can be instantiated assuming the existence of any plain trapdoor claw-free function.
Leveraging results by Natarajan and Zhang (FOCS 2023) on compiled nonlocal games, our work implies the existence of new protocols for classically verifying quantum computations based on a broader range of computational assumptions, potentially weaker than those previously known.
One-way multilinear functions of the second order with linear shifts
We introduce and analyze a novel class of binary operations on finite-dimensional vector spaces over a field F, defined by second-order multilinear expressions with linear shifts. These operations generate polynomials whose degree increases linearly with each iterated application, while the number of distinct monomials grows combinatorially. We demonstrate that, despite being non-associative and non-commutative in general, these operations exhibit power associativity and internal commutativity when iterated on a single vector. This allows for well-defined exponentiation x^n. Crucially, the absence of a simple closed-form expression for x^n suggests a one-way property: computing x^n from x and n is straightforward, but recovering n from x and x^n (the Discrete Iteration Problem) appears computationally hard. We propose a Diffie–Hellman-like key exchange protocol utilizing these properties over finite fields, defining an Algebraic Diffie–Hellman Problem (ADHP). The proposed structures are of interest for cryptographic primitives, algebraic dynamics, and computational algebra.
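The claimed linear degree growth is easy to reproduce symbolically. Below is a sympy sketch with an arbitrary second-order multilinear operation with linear shifts on a 2-dimensional space (all coefficients invented for illustration; this does not reproduce the paper's specific operations or their power associativity):

    from sympy import symbols, Poly, expand

    x0, x1 = symbols("x0 x1")

    def op(a, b):
        # c_k = sum_{i,j} T[k][i][j] * a_i * b_j + linear shifts in a and b
        T = [[[1, 2], [3, 1]], [[2, 1], [1, 4]]]
        L = [[1, 0], [0, 1]]
        R = [[0, 1], [1, 1]]
        return [
            expand(sum(T[k][i][j] * a[i] * b[j]
                       for i in range(2) for j in range(2))
                   + sum(L[k][i] * a[i] + R[k][i] * b[i] for i in range(2)))
            for k in range(2)
        ]

    v = [x0, x1]
    p = v
    for step in range(1, 5):
        p = op(p, v)                    # iterated application x^(n+1) = x^n o x
        print(step + 1, Poly(p[0], x0, x1).total_degree())
    # prints degrees 2, 3, 4, 5: one extra degree per iteration, as claimed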
Orient Express: Using Frobenius to Express Oriented Isogenies
In this paper we study supersingular elliptic curves primitively oriented by an imaginary quadratic order, where the orientation is determined by an endomorphism that factors through the Frobenius isogeny. In this way, we partly recycle one of the main features of CSIDH, namely the fact that the Frobenius orientation can be represented for free. This leads to the most efficient family of ideal-class group actions in a range where the discriminant is significantly larger than the field characteristic p. Moreover, if we orient with a non-maximal order and we assume that it is feasible to compute the ideal-class group of the maximal order, then the ideal-class group of the non-maximal order is also known, and we recover the central feature of SCALLOP-like constructions.
We propose two variants of our scheme. In the first one, the orientation is by a suborder whose conductor is coprime to the discriminant, so this is similar to SCALLOP. In the second one, inspired by the work of Chenu and Smith, the orientation is by an order determined by a square-free integer that is not a multiple of the field characteristic. We give practical ways of generating parameters, together with a proof-of-concept SageMath implementation of both variants, which shows the effectiveness of our construction.
A Quasi-polynomial Time Algorithm for the Extrapolated Dihedral Coset Problem over Power-of-Two Moduli
The Learning With Errors (LWE) problem, introduced by Regev (STOC'05), is one of the fundamental problems in lattice-based cryptography, believed to be hard even for quantum adversaries. Regev (FOCS'02) showed that LWE reduces to the quantum Dihedral Coset Problem (DCP). Later, Brakerski, Kirshanova, Stehlé and Wen (PKC'18) showed that LWE reduces to a generalization known as the Extrapolated Dihedral Coset Problem (EDCP). We present a quasi-polynomial time quantum algorithm for the EDCP problem over power-of-two moduli using a quasi-polynomial number of samples, which also applies to the SLWE problem defined by Chen, Liu, and Zhandry (Eurocrypt'22). Our EDCP algorithm can be viewed as a provable variant of the "Simon-meets-Kuperberg" algorithm introduced by Bonnetain and Naya-Plasencia (Asiacrypt'18), adapted to the EDCP setting. We stress that our algorithm does not affect the security of LWE with standard parameters, as the reduction from standard LWE to EDCP limits the number of samples to be polynomial.
Silent Circuit Relinearisation: Sublinear-Size (Boolean and Arithmetic) Garbled Circuits from DCR
We introduce a general template for building garbled circuits with low communication under the decisional composite residuosity (DCR) assumption. For layered Boolean circuits, we can garble a circuit of size s with communication sublinear in s, plus an additive factor that is polynomial in the security parameter. For layered arithmetic circuits with B-bounded integer computation, we obtain a similar result: the garbled arithmetic circuit likewise has size sublinear in the circuit size, up to an additive term polynomial in the security parameter λ. These are the first constructions of general-purpose garbled circuits with sublinear size that do not rely on heavy tools like indistinguishability obfuscation or attribute-based and fully homomorphic encryption.
To achieve these results, our main technical tool is a new construction of a form of homomorphic secret sharing where some of the inputs are semi-private, that is, known to one of the evaluating parties. Through a new relinearisation technique that allows performing arbitrary additions and multiplications on semi-private shares, we build an HSS scheme that supports evaluating any function in which an arbitrary polynomially-sized circuit is applied to the semi-private input x, while a restricted-multiplication (i.e., NC1) circuit is applied to the private input y. This significantly broadens the expressiveness of prior HSS constructions.
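For background, the DCR-based homomorphism underlying such constructions can be sketched with textbook Paillier encryption (toy primes, g = n + 1 variant; this is standard Paillier, not the paper's HSS or garbling scheme):

    import math, random

    p, q = 1009, 1013                 # toy primes; real use needs ~1024-bit primes
    n = p * q
    n2 = n * n
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)              # valid since L(g^lam) = lam mod n for g = n+1

    def enc(m):
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

    def dec(c):
        return ((pow(c, lam, n2) - 1) // n) * mu % n

    c1, c2 = enc(123), enc(456)
    assert dec((c1 * c2) % n2) == (123 + 456) % n   # additive homomorphism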
Privacy and Security of FIDO2 Revisited
We revisit the privacy and security analyses of FIDO2, a widely deployed standard for passwordless authentication on the Web. We discuss previous works and conclude that each of them has at least one of the following limitations:
(i) impractical trusted setup assumptions,
(ii) security models that are inadequate in light of the state of the art in practical attacks,
(iii) not analyzing FIDO2 as a whole, especially for its privacy guarantees.
Our work addresses these gaps and proposes revised security models for privacy and authentication. Equipped with our new models, we analyze FIDO2 modularly and focus on its component protocols, WebAuthn and CTAP2, clarifying their exact security guarantees. In particular, our results, for the first time, establish privacy guarantees for FIDO2 as a whole. Furthermore, we suggest minor modifications that can help FIDO2 provably meet stronger privacy and authentication definitions and withstand known and novel attacks.
Fully Homomorphic Encryption for Cyclotomic Prime Moduli
This paper presents a Generalized BFV (GBFV) fully homomorphic encryption scheme that encrypts plaintext spaces of the form Z[x]/(Phi_m(x), t(x)), with Phi_m(x) the m-th cyclotomic polynomial and t(x) an arbitrary polynomial. GBFV encompasses both BFV, where t is a constant, and the CLPX scheme (CT-RSA 2018), where t(x) is a linear polynomial. The latter can encrypt a single huge integer, has much lower noise growth than BFV, but is not known to be efficiently bootstrappable.
We show that, by a clever choice of m and a higher-degree polynomial t(x), our scheme combines the SIMD capabilities of BFV with the low noise growth of CLPX, whilst still being efficiently bootstrappable. Moreover, we present parameter families that natively accommodate packed plaintext spaces defined by a large cyclotomic prime, such as the Fermat prime 2^16 + 1 and the Goldilocks prime 2^64 - 2^32 + 1. These primes are often used in homomorphic encryption applications and zero-knowledge proof systems.
Due to the lower noise growth, GBFV can evaluate much deeper circuits than native BFV in the same ring dimension. As a result, we can evaluate either larger circuits or work with smaller ring dimensions. In particular, we can natively bootstrap GBFV at 128-bit security in a ring dimension that is too small for native BFV bootstrapping, which was impossible before. We implemented the GBFV scheme on top of the SEAL library and achieve a latency of only 2 seconds to bootstrap a ciphertext encrypting up to 8192 elements modulo the Fermat prime.
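To make the plaintext algebra concrete, here is a toy Python sketch of negacyclic multiplication modulo x^n + 1 over the Fermat prime (tiny dimension for illustration; no encryption or bootstrapping is modeled):

    t = 2**16 + 1                 # Fermat prime plaintext modulus
    n = 8                         # toy ring dimension (real schemes use 2^13..2^16)

    def negacyclic_mul(a, b):
        # multiply in Z_t[x] / (x^n + 1): x^n wraps around with a sign flip
        res = [0] * n
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                k = i + j
                if k < n:
                    res[k] = (res[k] + ai * bj) % t
                else:
                    res[k - n] = (res[k - n] - ai * bj) % t
        return res

    a = [1, 2, 3, 4, 0, 0, 0, 0]
    b = [5, 6, 7, 8, 0, 0, 0, 0]
    print(negacyclic_mul(a, b))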
Finding and Protecting the Weakest Link - On Side-Channel Attacks on y in Masked ML-DSA
NIST has standardized ML-KEM and ML-DSA as post-quantum key-establishment and digital-signature schemes. Both schemes have already seen analysis with respect to side channels, and the first fully masked implementations of ML-DSA have been published. Previous attacks focused on unprotected implementations or assumed only hiding countermeasures to be in place. Thus, in contrast to ML-KEM, the threat of side-channel attacks against protected ML-DSA implementations is mostly unclear.
In this work, we analyze the side-channel vulnerability of masked ML-DSA implementations. We first systematically assess the vulnerability of several potential points of attack in different leakage models using information theory. Then, we explain how an adversary could launch first-, second-, and higher-order attacks using a recently presented framework for side-channel information in lattice-based schemes. In this context, we propose a filtering technique that allows the framework to solve for the secret key from a large number of hints; this had previously been prevented by numerical instabilities. We simulate the presented attacks and discuss their relation to the information-theoretic analysis.
Finally, we carry out the relevant attacks on physical devices, discuss recent masked implementations, and instantiate a countermeasure against the most threatening attacks. The countermeasure mitigates the most noise-tolerant attacks while incurring very little overhead. The results on physical devices validate our simulations.
Unlocking Mix-Basis Potential: Geometric Approach for Combined Attacks
This paper explores the possibility of using different bases in Beyne's geometric approach, a flexibility that was theoretically proposed in Beyne's doctoral thesis but has not been adopted in real cryptanalytic attacks despite its potential to unify multiple attack paradigms.
We revisit three bases from previous geometric-approach papers and extend them with four extra ones determined by simple rules. With the resulting seven bases, we can obtain a variety of basis-based attacks in higher-order spaces, where the order is defined as the number of messages used in one sample during the attack. All these attacks can be studied with unified automatic search methods.
We provide several demonstrative applications of this framework.
First, we show that by choosing an alternative pair of bases, the divisibility property analyzed by Beyne and Verbauwhede with ultrametric integral cryptanalysis (ASIACRYPT 2024) can be interpreted as a single element of the transition matrix rather than as a linear combination of its elements; thus, the property can be studied in the same unified way as other geometric-approach applications.
Second, we revisit the multiple-of-8 property (EUROCRYPT 2017) under our new framework and present new multiple-of distinguishers for SKINNY-64 that surpass the state-of-the-art results, from the perspectives of both first-order and second-order attacks.
Finally, we give a closed formula for differential-linear approximations without any assumptions, even confirming that the two differential-linear approximations of Simeck-32 and Simeck-48 found by Hadipour et al. are deterministic, independently of concrete key values.