All papers in 2025 (Page 4 of 604 results)
Lattice-based Cryptography: A survey on the security of the lattice-based NIST finalists
This survey, mostly written in the years 2022-2023, is meant to be as short a description as possible of the current state-of-the-art lattice attacks on lattice-based cryptosystems, without losing the essence of the matter.
The main focus is the security of the NIST finalists and alternatives that are based on lattices, namely CRYSTALS-Kyber, CRYSTALS-Dilithium and Falcon. Instead of going through these cryptosystems case by case, this survey considers attacks on the underlying hardness assumptions: in the case of the mentioned lattice-based schemes, these are (variants of) LWE (Learning With Errors) and NTRU.
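To fix intuition for the LWE assumption this survey centers on, here is a toy sampler; the parameters and the ternary error set are illustrative only, far below cryptographic size, and real schemes like Kyber use structured module variants.

```python
import random

def lwe_samples(n=8, q=97, m=16, seed=1):
    """Toy decision-LWE sampler: outputs pairs (a_i, b_i) with
    b_i = <a_i, s> + e_i mod q for a fixed secret s and small error e_i.
    The assumption is that such pairs look uniform without s."""
    rng = random.Random(seed)
    s = [rng.randrange(q) for _ in range(n)]          # secret vector
    samples = []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]      # uniform row
        e = rng.choice([-1, 0, 1])                    # small ternary error
        b = (sum(x * y for x, y in zip(a, s)) + e) % q
        samples.append((a, b))
    return s, samples

secret, samples = lwe_samples()
print(samples[0])   # one (a, b) pair; recovering `secret` from many is LWE
```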
Asynchronous Algorand: Reaching Agreement with Near Linear Communication and Constant Expected Time
The celebrated Algorand protocol solves validated Byzantine agreement in a scalable manner in the synchronous setting. In this paper, we study the feasibility of similar solutions in the asynchronous setting. Our main result is an asynchronous validated Byzantine agreement protocol that we call Asynchronous Algorand. As with Algorand, it terminates in an expected constant number of rounds, and honest parties send an expected near-linear number of bits in the number of parties. The protocol is resilient to a fully-asynchronous weakly-adaptive adversary that can corrupt a near-optimal number of parties, and requires just a VRF setup and secure erasures.
A key innovation in Asynchronous Algorand is a rather simple but surprisingly effective method to do committee-based role assignment for asynchronous verifiable secret sharing in the YOSO (You Only Speak Once) model. This method achieves near-optimal resilience and near-linear communication complexity while relying solely on a verifiable random function (VRF) setup and secure erasures.
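As a rough illustration of VRF-based self-selection in the YOSO spirit (my sketch, not the paper's actual protocol): each party privately evaluates a pseudorandom value on the round and role, and joins the committee if it falls below a threshold. SHA-256 stands in for the VRF here; a real VRF additionally yields a publicly verifiable proof of correct evaluation, and the role names and threshold below are hypothetical.

```python
import hashlib

def role_assignment(secret_key: bytes, round_id: int, role: str,
                    committee_fraction: float) -> bool:
    """Toy committee self-selection: hash a secret key together with the
    (round, role) pair, map the digest to [0, 1), and join the committee
    if it falls below the target fraction."""
    digest = hashlib.sha256(secret_key + round_id.to_bytes(8, "big")
                            + role.encode()).digest()
    x = int.from_bytes(digest, "big") / 2**256    # pseudorandom in [0, 1)
    return x < committee_fraction

# Each party checks locally, speaks once if selected, then erases its state.
print(role_assignment(b"party-42-sk", round_id=7, role="avss-dealer",
                      committee_fraction=0.05))
```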
FHE-SNARK vs. SNARK-FHE: From Analysis to Practical Verifiable Computation
Verifiable Computation over encrypted data (VC) faces a critical dilemma between two competing paradigms: SNARK-FHE (applying SNARKs to prove FHE operations) and FHE-SNARK (homomorphically evaluating SNARK proofs). Two interesting questions remain open in resolving this dilemma: 1) Are they identical in terms of security? 2) How efficient can they be in practice? This work answers these questions through the following results:
1) We establish a formal security analysis between VC and secure two-party computation (2PC). We investigate VC with server inputs and show the following: a) VC with server input has an exact 1-bit security loss compared to 2PC; b) SNARK-FHE aligns with 2PC, while FHE-SNARK naturally falls in the VC category; c) existing FHE-SNARK works are vulnerable in the VC-with-server-input setting, for which we formalize an input-dependent attack.
2) We design an FHE-friendly SNARK with: a) 3× lower multiplicative depth than FRI-based SNARKs; b) compatibility with FHE SIMD operations. Based on this novel SNARK, we construct an FHE-SNARK scheme that has: a) stronger security: resistance against input-dependent attacks; b) an 8× speedup: 3.6-hour proof generation on a single-core CPU (vs. 29 hours for the state of the art); c) practical verification: 65.3 MB proofs with 2.7 seconds verification (single core).
Making Protocol FSU Revocable
This paper examines whether a revocation function can be added to protocol FSU, an asymmetric-pairing variant of a protocol adopted in the international standard ISO/IEC 11770-3. Protocol FSU is an identity-based authenticated key-exchange protocol whose security is based on the asymmetric gap bilinear Diffie--Hellman (GBDH) problem.
To make protocol FSU revocable, we apply a generic technique that converts an identity-based encryption scheme into a revocable one by introducing a symmetric-key encryption scheme. In addition, to make the resulting revocable identity-based authenticated key-exchange protocol efficient, we reduce the ephemeral information exchanged in the protocol and introduce an additional parameter to the master public key whose secret information need not be included in the master secret key.
We discuss the security of the resultant protocol, and prove that it is rid-eCK secure under the asymmetric GBDH assumption.
Pseudorandom Functions with Weak Programming Privacy and Applications to Private Information Retrieval
Although privately programmable pseudorandom functions (PPPRFs) are known to have numerous applications, so far the only known constructions rely on Learning with Errors (LWE) or indistinguishability obfuscation. We show how to construct a relaxed PPPRF from only one-way functions (OWF). The resulting PPPRF satisfies a relaxed security notion and works for polynomially sized input domains. Using it, we obtain new results for preprocessing Private Information Retrieval (PIR) that improve the state of the art. Specifically, we show that, relying only on OWF, we can get a 2-server preprocessing PIR with polylogarithmic bandwidth and client/server space bounds parameterized by an arbitrarily small constant. In the 1-server setting, we get a preprocessing PIR from OWF that achieves polylogarithmic online bandwidth at the cost of higher offline bandwidth, while preserving the same client and server space as before. Our result, in combination with the lower bound of Ishai, Shi, and Wichs (CRYPTO'24), establishes a tight understanding of the bandwidth and client-space tradeoff for 1-server preprocessing PIR from Minicrypt assumptions. Interestingly, we are also the first to show non-trivial ways to combine client-side and server-side preprocessing to get improved results for PIR.
(Un)breakable curses - re-encryption in the Fujisaki-Okamoto transform
The Fujisaki-Okamoto transform (FO) is the go-to method for achieving chosen-ciphertext (CCA) security for post-quantum key encapsulation mechanisms (KEMs). An important step in FO is augmenting the decryption/decapsulation algorithm with a re-encryption step: the decrypted message is re-encrypted to check whether the correct encryption randomness was used. While solving a security problem (ciphertext malleability), re-encryption has turned out to introduce side-channel vulnerabilities and is computationally expensive, which has led designers to search for alternatives. In this work, we perform a comprehensive study of such alternatives. We formalize a central security property, computational rigidity, and show that it is sufficient for obtaining CCA security. We present a framework for analyzing algorithms that can replace re-encryption and still achieve rigidity, and analyze existing proposals in this framework.
Along the way, we obtain a novel QROM security statement for explicitly rejecting KEMs based on deterministic PKE schemes, something that was previously only possible when requiring a hard-to-ensure quantum property of the base PKE scheme.
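For context, the re-encryption step under discussion looks roughly as follows in an FO-style decapsulation (a schematic sketch: `encrypt`/`decrypt` stand for the base PKE algorithms, and SHA-256 stands in for the random oracles G and H).

```python
import hashlib

def fo_decapsulate(sk, pk, ct, decrypt, encrypt):
    """Schematic FO decapsulation. The decrypted message m is re-encrypted
    with randomness re-derived as G(m); a mismatch means the ciphertext is
    malformed and gets rejected (explicit rejection shown here).
    `encrypt(pk, m, r)` must be deterministic given the randomness r."""
    m = decrypt(sk, ct)
    r = hashlib.sha256(b"G" + m).digest()          # re-derive randomness
    if encrypt(pk, m, r) != ct:                    # the re-encryption check
        return None                                # reject invalid ciphertext
    return hashlib.sha256(b"H" + m + ct).digest()  # session key
```

The paper's question is what cheaper check can replace the `encrypt(pk, m, r) != ct` comparison while still guaranteeing rigidity.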
Stateless Hash-Based Signatures for Post-Quantum Security Keys
The U.S. National Institute of Standards and Technology
recently standardized the first set of post-quantum cryptography algo-
rithms. These algorithms address the quantum threat, but also present
new challenges due to their larger memory and computational footprint.
Three of the four standardized algorithms are lattice based, offering good
performance but posing challenges due to complex implementation and
intricate security assumptions. A more conservative choice for quantum-
safe authentication are hash-based signature systems. However, due to
large signature sizes and low signing speeds, hash-based systems have
only found use in niche applications. The first NIST standardized, state-
less hash-based signature system is the SPHINCS+-based SLH-DSA.
In this work we combine different approaches to show that SPHINCS+
can be optimized in its parameters and implementation, to be high per-
forming, even when signing in an embedded setting. We demonstrate
this in the context of user authentication using hardware security keys
within FIDO. Our SPHINCS+-based implementation can even outper-
form lattice-based solutions while remaining highly portable. Due to con-
servative security assumptions, our solution does not require a hybrid
construction and can perform authentication on current security keys.
For reproducibility and to encourage further research we publish our
Cortex M4-based implementation.
Practical Zero-Trust Threshold Signatures in Large-Scale Dynamic Asynchronous Networks
Threshold signatures have become a critical tool in cryptocurrency systems, offering enhanced security by distributing the signing process among multiple signers. In this work, we distribute this process between a client and a permissionless decentralized blockchain, and present novel protocols for ECDSA and EdDSA/Schnorr signatures in this setting. Typical threshold access architectures used by trusted custodians suffer from the honeypot problem: the more assets the custodian holds, the greater the incentive to compromise it.
Implementing threshold signatures over permissionless blockchains poses a few challenges.
First, existing networks typically work over an asynchronous reliable broadcast communication channel. Accordingly, our protocol is implemented over such a channel. As a result, it also benefits from identifiable abort, public verifiability, and guaranteed output delivery, and the client benefits from censorship resistance of blockchain systems.
Second, upon signing each block, the participating quorum may dynamically change and is post-determined.
Therefore, we design a fluid protocol that supports a post-determined dynamic quorum in each communication round, thereby complying with existing broadcast-channel implementations. Third, in permissionless networks, parties may join, leave, and change their stake. Therefore, we offer protocols for network reconfiguration, with complexity independent of the number of clients in the system, and our protocol efficiently supports a weighted threshold access structure for the network. Specifically, the complexity of distributed key generation and presigning depends only on the number of parties and not on the overall weight, and the amortized cost of signing depends only on the individual weight.
Furthermore, our protocol introduces key improvements, including the removal of zero-knowledge proofs towards the client, and presigns with a non-interactive client. For Schnorr, the presigns are client-independent, and can be collected by the blockchain in a common pool, available for all clients in the system. These optimizations reduce communication overhead and improve the system's ability to handle traffic spikes during high-demand periods.
Our protocol is UC-secure, and is therefore natively designed for multiple clients to use the system in parallel. Notably, we propose a novel assumption, Slightly-Enhanced ECDSA Unforgeability, offering concrete security for 256-bit elliptic curves for threshold ECDSA with support for parallel execution of presigns.
In addition to securing cryptocurrency wallets, we demonstrate how our protocol enables various cross-chain applications, such as decentralized bridges, future transactions, and wallet transfer. Our system is designed for interoperability across multiple blockchains, enhancing security, scalability, and flexibility for decentralized finance (DeFi) ecosystems.
DFS: Delegation-friendly zkSNARK and Private Delegation of Provers
Zero-Knowledge Succinct Non-interactive Arguments of Knowledge (zkSNARKs) lead to proofs that can be succinctly verified but require huge computational resources to generate. Prior systems outsource proof generation either through public delegation, which reveals the witness to the third party, or, more preferably, private delegation that keeps the witness hidden using multiparty computation (MPC). However, current private delegation schemes struggle with scalability and efficiency due to MPC inefficiencies, poor resource utilization, and suboptimal design of zkSNARK protocols.
In this paper, we introduce DFS, a new zkSNARK that is delegation-friendly in both public and private scenarios. Prior work focused on optimizing MPC protocols for existing zkSNARKs, while DFS co-designs the MPC and the zkSNARK so that the protocol is efficient both for distributed computing and within MPC. In particular, DFS achieves linear prover time and logarithmic verification cost in the non-delegated setting. For private delegation, DFS introduces a scheme with zero communication overhead in MPC and achieves malicious security for free, which results in logarithmic overall communication, while prior work required linear communication. Our evaluation shows that DFS is as efficient as state-of-the-art zkSNARKs in public delegation; when used for private delegation, it scales better than previous work: the total communication of DFS remains in the kilobyte range where prior work incurs gigabytes, linear in the circuit size. Additionally, we identify and address a security flaw in prior work, EOS (USENIX'23).
Stationary Syndrome Decoding for Improved PCGs
Syndrome decoding (SD), and equivalently Learning Parity with Noise (LPN), is a fundamental problem in cryptography, which states that for a field F, some compressing public matrix A, and a secret sparse vector e sampled from some noise distribution, A·e is indistinguishable from uniform. Recently, SD has gained significant interest due to its use in pseudorandom correlation generators (PCGs).
In pursuit of better efficiency, we propose a new assumption called Stationary Syndrome Decoding (SSD). In SSD, we consider correlated noise vectors and associated instances where the noise vectors are restricted to having their non-zero entries in the same small subset of positions S. That is, for all i in S, the i-th entry of each noise vector is uniformly random, while all other entries are zero.
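A toy sampler for the stationary structure just described (dimensions and weight are illustrative; over F_2 the entries on S are random bits):

```python
import random

def ssd_instances(k=3, n=64, m=32, weight=4, seed=0):
    """Sample k toy syndrome decoding instances (A_i, b_i = A_i * e_i mod 2)
    whose sparse noise vectors e_i all have support inside the SAME random
    subset S of positions -- the 'stationary' correlation of SSD."""
    rng = random.Random(seed)
    S = set(rng.sample(range(n), weight))               # shared support
    instances = []
    for _ in range(k):
        A = [[rng.randrange(2) for _ in range(n)] for _ in range(m)]
        e = [rng.randrange(2) if j in S else 0 for j in range(n)]
        b = [sum(A[i][j] * e[j] for j in range(n)) % 2 for i in range(m)]
        instances.append((A, b))
    return S, instances
```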
Although naively reusing the noise vector renders SD and LPN insecure via simple Gaussian elimination, we observe that known attacks do not extend to our correlated noise. We show SSD is unconditionally secure against so-called linear attacks, e.g., advanced information set decoding and representation techniques (Esser and Santini, Crypto 2024). We further adapt the state-of-the-art nonlinear attack (Briaud and Oygarden, Eurocrypt 2023) to SSD and demonstrate, both theoretically and experimentally, resistance to the attack.
We apply SSD to PCGs to amortize the cost of the noise-generation protocol. For OT and VOLE generation, each instance requires substantially less communication than before. For suggested parameters, we observe improvements in running time and a reduction in communication by a factor of 6 or more. For Beaver triple generation using Ring LPN, our techniques have the potential for substantial amortization due to the high concrete overhead of Ring LPN noise generation.
Neo: Lattice-based folding scheme for CCS over small fields and pay-per-bit commitments
This paper introduces Neo, a new lattice-based folding scheme for CCS, an NP-complete relation that generalizes R1CS, Plonkish, and AIR. Neo's folding scheme can be viewed as adapting the folding scheme in HyperNova (CRYPTO'24), which assumes elliptic-curve based linearly homomorphic commitments, to the lattice setting. Unlike HyperNova, Neo can use “small” prime fields (e.g., over the Goldilocks prime). Additionally, Neo provides plausible post-quantum security.
Prior to Neo, folding schemes in the lattice setting, notably LatticeFold (ePrint 2024/257), worked with constraint systems defined over a cyclotomic polynomial ring. This structure allows packing a fixed batch of constraint systems over a small prime field into a single constraint system over a polynomial ring. However, it introduces significant overheads, both for committing to witnesses (e.g., the commitment scheme cannot take advantage of bit-width of values), and within the folding protocol itself (e.g., the sum-check protocol is run over cyclotomic polynomial rings). Additionally, the required ring structure places restrictions on the choice of primes (e.g., LatticeFold is not compatible with the Goldilocks field).
Neo addresses these problems by drawing inspiration from both HyperNova and LatticeFold. A key contribution is a folding-friendly instantiation of Ajtai's commitments with "pay-per-bit" commitment costs, i.e., the commitment costs scale with the bit-width of the scalars (e.g., committing to a vector of bits is cheaper than committing to a vector of full-width values). This scheme commits to vectors over a small prime field. It does so by transforming the provided vector into a matrix and committing to that matrix. We prove that this commitment scheme provides the desired linear homomorphism for building a folding scheme. Additionally, like HyperNova, Neo runs a single invocation of the sum-check protocol, where in HyperNova it is over the scalar field of an elliptic curve and in Neo it is over an extension of a small prime field.
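A rough sketch of the two ingredients named here, far from Neo's actual construction: an Ajtai-style commitment C = A·M mod q is linear in the committed matrix M, and reshaping a vector into a matrix whose shape reflects entry width is what lets cost track bits. How Neo keeps the linear homomorphism compatible with the packing is the paper's contribution and is not captured below; all parameters are illustrative.

```python
import random

q = 2**31 - 1                          # illustrative modulus
rng = random.Random(0)

def ajtai_commit(A, M):
    """C = A * M mod q: linear in M, so commitments of M1 and M2 add up to
    a commitment of M1 + M2 -- the homomorphism a folding scheme needs."""
    return [[sum(A[i][k] * M[k][j] for k in range(len(M))) % q
             for j in range(len(M[0]))] for i in range(len(A))]

def pack(v, rows):
    """Reshape vector v into a `rows`-row matrix (zero-padded). With narrow
    (e.g., 1-bit) entries, more of them fit per committed column, so the
    commitment cost can track total bits rather than field-element count."""
    cols = -(-len(v) // rows)
    v = v + [0] * (rows * cols - len(v))
    return [[v[r * cols + c] for c in range(cols)] for r in range(rows)]

A = [[rng.randrange(q) for _ in range(4)] for _ in range(2)]   # 2x4 key
C = ajtai_commit(A, pack([1, 0, 1, 1, 0, 1, 0, 0], rows=4))    # 2x2 commitment
```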
Anamorphic-Resistant Encryption; Or Why the Encryption Debate is Still Alive
Ever since the introduction of encryption, society has been divided over whether the government (or law enforcement agencies) should have the capability to decrypt private messages (with or without a warrant) of its citizens. From a technical viewpoint, the folklore belief is that semantic security always enables some form of steganography. Thus, adding backdoors to semantically secure schemes is pointless: it only weakens the security of the “good guys”, while “bad guys” can easily circumvent censorship, even if forced to hand over their decryption keys.
In this paper we put a dent in this folklore. We formalize three worlds: Dictatoria (“dictator wins”: no convenient steganography, no user cooperation needed), Warrantland (“checks-and-balances”: no convenient steganography, but the user's cooperation is needed) and Privatopia (“privacy wins”: built-in, high-rate steganography, even when giving away the decryption key). We give strong evidence that all these worlds are possible, thus reopening the encryption debate on a technical level.
Our main novelty is the definition and design of special encryption schemes we call anamorphic-resistant (AR). In contrast to the so-called “anamorphic schemes” studied in the literature, which form the basis of Privatopia, any attempt to steganographically communicate over an AR encryption scheme will be either impossible or hugely slow (depending on the definitional details).
Tight Lower Bounds and New Upper Bounds For Evolving CDS
Komargodski et al. defined evolving secret-sharing schemes with an unbounded number of parties. In this model, parties arrive one after the other, and the number of parties that will arrive is not known in advance.
Another cryptographic primitive related to secret sharing is conditional disclosure of secrets (CDS), defined by Gertner et al.
A CDS protocol for a Boolean function f involves k servers and a referee. Each server holds a common secret s, a common random string r, and a private input x_i; using these, each server locally computes one message and sends it to the referee. The referee, knowing the inputs and the messages, should be able to compute s if f(x_1, ..., x_k) = 1; otherwise, the referee should learn no information about the secret. In a sense, this is a non-monotone version of secret sharing.
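To make the definition concrete, here is the classic one-round CDS for the equality predicate f(x1, x2) = [x1 = x2] over a prime field (a standard textbook example, not from this paper): each server speaks once, and the referee recovers the secret exactly when the predicate holds.

```python
import random

p = 2**61 - 1                              # illustrative prime field

def cds_equality(x1, x2, secret, seed=0):
    """Two-server CDS for f(x1, x2) = [x1 == x2] over F_p. Servers share
    randomness (r, r2) and the secret; each sends ONE message. The referee
    computes m2 - m1 = secret + r*(x2 - x1): if x1 == x2 this is the secret;
    otherwise it is masked by r*(x2 - x1), which is uniform over F_p."""
    rng = random.Random(seed)                  # shared random string
    r, r2 = rng.randrange(p), rng.randrange(p)
    m1 = (r * x1 + r2) % p                     # message of server 1
    m2 = (r * x2 + r2 + secret) % p            # message of server 2
    return (m2 - m1) % p                       # referee's computation

print(cds_equality(5, 5, secret=42))   # 42: predicate holds
print(cds_equality(5, 6, secret=42))   # uniform value: predicate fails
```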
Peter (ISC '23) defines evolving CDS, implementing in particular evolving Boolean predicates (he handles somewhat more general predicates over larger input domains, but generalizing to other input domains is not hard, and we focus on Boolean predicates). In this setting, the number of parties is unbounded, and the parties arrive in sequential order. Each party, when it arrives, sends a single message to the referee, depending on its input, the secret, and common randomness. He also devises evolving CDS protocols for a general evolving predicate via a black-box reduction to an evolving secret-sharing scheme for a related access structure.
He analyzes this construction for general access structures, as well as for other classes of functions, deriving message-complexity bounds for the worst-case predicates.
In this work we provide new upper and lower bounds for evolving CDS.
Observing that CDS has the potential for improved efficiency, as it is not restricted to monotone operations, we devise a new evolving general CDS construction.
In particular, our construction relies on representing the evolving predicate via an infinite branching program (LINBP), generalizing the monotone infinite branching program based construction of evolving secret-sharing schemes by Alon et al.
We obtain nontrivial message complexity for branching programs of larger width than in Alon et al.'s construction (even when restricting attention to monotone predicates), as well as improving upon Peter's construction in certain (but not all) parameter regimes.
Indeed, we prove that our construction, as well as Peter's (ISC '23), is tight for a certain evolving predicate: as for evolving secret sharing, (so-called strong) evolving CDS also requires large share complexity. This is unlike the state of affairs in the finite setting, where the best known CDS constructions are much more efficient than the best known secret-sharing schemes (for the hardest monotone functions).
The latter bound is proved via an adaptation of Mazor's lower bound (in turn based on Csirmaz's lower-bounding technique) to the CDS setting. It relies on a reduction from secret sharing for a certain class of infinite access structures, the so-called partite access structures, to evolving CDS for a related (not necessarily monotone) function. A partite evolving access structure is then crafted using the Csirmaz-type argument.
A Note on Adaptive Security in Hierarchical Identity-Based Encryption
We present the first construction of adaptively secure HIBE that does not rely on bilinear pairings or random-oracle heuristics. Notably, we design an adaptively secure HIBE from any selectively secure IBE system in the standard model. Combined with known results, this gives the first adaptively secure HIBE system from a wide variety of standard assumptions such as CDH/Factoring/LWE/LPN. We also extend our adaptively secure HIBE system to satisfy full anonymity, giving the first adaptively secure anonymous HIBE under the CDH/LWE assumptions. All our HIBE systems support unbounded-length identities as well as an unbounded number of recursive delegation operations.
Dynamic Decentralized Functional Encryption: Generic Constructions with Strong Security
Dynamic Decentralized Functional Encryption (DDFE) is a generalization of Functional Encryption which allows multiple users to join the system dynamically without interaction and without relying on a trusted third party. Users can independently encrypt their inputs for a joint evaluation under functions embedded in functional decryption keys; and they keep control on these functions as they all have to contribute to the generation of the functional keys.
In this work, we present new generic compilers which, when instantiated with existing schemes from the literature, improve over the state of the art in terms of security, computational assumptions, and functionality. Specifically, we obtain the first adaptively secure DDFE schemes for inner products, in both the standard and the stronger function-hiding setting, which guarantees privacy not only for messages but also for the evaluated functions. Furthermore, we present the first DDFE for inner products whose security can be proven under the LWE assumption in the standard model. Finally, we give the first construction of a DDFE for the attribute-weighted-sums functionality with attribute-based access control (with some limitations). All prior constructions guarantee only selective security, rely on group-based assumptions on pairings, and cannot provide access control.
Significantly Improved Cryptanalysis of Salsa20 With Two-Round Criteria
Over the past decade and a half, cryptanalytic techniques for Salsa20 have been increasingly refined, largely following the overarching concept of Probabilistically Neutral Bits (PNBs) by Aumasson et al. (FSE 2008). In this paper, we present a novel criterion for choosing key-IV pairs using certain 2-round criteria, and connect it with clever tweaks of existing techniques related to probabilistically independent bits (earlier used for ARX ciphers, but not for Salsa20) and the well-studied PNBs. Through a detailed examination of the state matrix after the initial rounds of Salsa20, we introduce a cryptanalysis of Salsa20 reaching more rounds than any previous work. The sharpness of our attack is highlighted by a significant improvement, in both time and data complexity, over the best-known result of Coutinho et al. (Journal of Cryptology, 2023). Refinements related to backward biases for PNBs are also instrumental in achieving these improvements. We also provide instances of how these ideas improve the cryptanalysis of the smaller-key versions. In the process, a few critical points are raised about some existing state-of-the-art works in this direction; in those cases, their estimates of time and data are revisited and the complexities corrected.
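For reference, the ARX building block underlying all of this is the standard Salsa20 quarter-round (from the public specification; this is background, not the paper's attack): PNB-style analyses look for key bits whose flips barely influence such updates after a few rounds.

```python
def rotl32(x, n):
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def quarterround(a, b, c, d):
    """Salsa20's add-rotate-xor (ARX) quarter-round, per the spec."""
    b ^= rotl32((a + d) & 0xFFFFFFFF, 7)
    c ^= rotl32((b + a) & 0xFFFFFFFF, 9)
    d ^= rotl32((c + b) & 0xFFFFFFFF, 13)
    a ^= rotl32((d + c) & 0xFFFFFFFF, 18)
    return a, b, c, d

print(quarterround(0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567))
```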
How to Securely Implement Cryptography in Deep Neural Networks
The wide adoption of deep neural networks (DNNs) raises the question of how we can equip them with a desired cryptographic functionality (e.g., to decrypt an encrypted input, to verify that this input is authorized, or to hide a secure watermark in the output). The problem is that cryptographic primitives are typically designed to run on digital computers that use Boolean gates to map sequences of bits to sequences of bits, whereas DNNs are a special type of analog computer that uses linear mappings and ReLUs to map vectors of real numbers to vectors of real numbers. This discrepancy between the discrete and continuous computational models raises the question of what is the best way to implement standard cryptographic primitives as DNNs, and whether DNN implementations of secure cryptosystems remain secure in the new setting, in which an attacker can ask the DNN to process a message whose "bits" are arbitrary real numbers.
In this paper we lay the foundations of this new theory, defining the meaning of correctness and security for implementations of cryptographic primitives as ReLU-based DNNs. We then show that the natural implementations of block ciphers as DNNs can be broken in linear time by using such nonstandard inputs. We tested our attack on full-round AES-128 and succeeded in finding randomly chosen keys. Finally, we develop a new method for implementing any desired cryptographic functionality as a standard ReLU-based DNN in a provably secure and correct way. Our protective technique has very low overhead (a constant number of additional layers and a linear number of additional neurons), and is completely practical.
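A tiny illustration of the discrete/continuous mismatch (my example, not the paper's attack): XOR has an exact ReLU implementation on bits, yet it happily accepts "bits" like 0.5 and returns plausible-looking outputs, which is exactly the nonstandard-input surface such attacks exploit.

```python
def relu(x):
    return max(0.0, x)

def xor_dnn(a, b):
    """XOR as a tiny ReLU network: XOR(a, b) = a + b - 2*ReLU(a + b - 1).
    Exact on bits {0, 1} -- but nothing forces a DNN input to be a bit."""
    return a + b - 2.0 * relu(a + b - 1.0)

print(xor_dnn(0, 1), xor_dnn(1, 1))   # 1.0 0.0: correct on real bits
print(xor_dnn(0.5, 0.5))              # 1.0: a "bit" of 0.5 slips through
```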
A reduction from Hawk to the principal ideal problem in a quaternion algebra
In this article we present a non-uniform reduction from rank-2 module-LIP over Complex Multiplication fields to a variant of the Principal Ideal Problem in some fitting quaternion algebra. This reduction is classical, deterministic, and polynomial-time in the size of the inputs. The quaternion algebra in which we need to solve the variant of the principal ideal problem depends on the parameters of the module-LIP problem, but not on the problem's instance. Our reduction requires the knowledge of some special elements of this quaternion algebra, which is why it is non-uniform.
In some particular cases, these elements can be computed in polynomial time, making the reduction uniform. This is the case for the Hawk signature scheme: we show that breaking Hawk is no harder than solving a variant of the principal ideal problem in a fixed quaternion algebra (and this reduction is uniform).
Verifiable Computation for Approximate Homomorphic Encryption Schemes
We address the problem of proving the validity of computation on ciphertexts of homomorphic encryption (HE) schemes, a feature that enables outsourcing of data and computation while ensuring both data privacy and integrity.
We propose a new solution that handles computations in RingLWE-based schemes, particularly the CKKS scheme for approximate arithmetic. Our approach efficiently handles ciphertext arithmetic in the polynomial ring without emulation overhead, and manages ciphertext maintenance operations, such as modulus switching, key switching, and rescaling, at small cost.
Our main result is a succinct argument that efficiently handles arithmetic computations and range checks over the underlying polynomial ring. To build this argument system, we construct new polynomial interactive oracle proofs (PIOPs) and multilinear polynomial commitments supporting polynomials over such rings, unlike prior work, which focused on finite fields. We validate the concrete complexity of our approach through implementation and experimentation. Compared to the current state of the art on verifiable HE for RNS schemes, we achieve similar performance for small circuits while efficiently scaling to larger ones, which was a major challenge for previous constructions as it requires verifying procedures such as relinearization.
MicroCrypt Assumptions with Quantum Input Sampling and Pseudodeterminism: Constructions and Separations
We investigate two natural relaxations of quantum cryptographic assumptions. First, we examine primitives such as pseudorandom generators (PRGs) and pseudorandom states (PRSs) extended with quantum input sampling, where the input is sampled via a quantum algorithm rather than uniformly at random. The second relaxation, pseudodeterminism, allows the generator to fail on an inverse-polynomial fraction of inputs.
We demonstrate an equivalence between (bounded-query) logarithmic-sized versions of these quantum-input-sampling primitives; notably, such an equivalence remains unknown for the uniform key-sampling versions of the same primitives. Furthermore, we establish that the quantum-input-sampling variants can be constructed from pseudodeterministic PRGs.
To further justify our exploration, we present two separation results. First, we examine the relationship between the pseudodeterministic notions and their deterministic counterparts. We show that there is no black-box construction of a one-way state generator from a pseudodeterministic PRG, indicating that pseudodeterministic primitives may be inherently weaker than their deterministic counterparts. Second, we explore the distinction between quantum and uniform input sampling. We prove that there is no black-box construction of a pseudodeterministic primitive with uniform input sampling from its quantum-input-sampling variant, suggesting that primitives relying on quantum input sampling may be weaker than those using traditional uniform sampling. Given the broad cryptographic applicability of PRGs and PRSs, these separation results yield numerous new insights into the hierarchy of primitives within MicroCrypt.
S2DV: Scalable and Secure DAO Voting
A Decentralized Autonomous Organization (DAO) operates without a central entity, being owned and governed collectively by its members. In such an organization, routine decisions are carried out automatically through smart contracts, while members vote on unforeseen issues. Scalability in decision-making through voting on proposals is essential to accommodate a growing number of members without sacrificing security. This paper addresses this challenge by introducing a scalable and secure DAO voting system that ensures security through Groth16 zk-SNARKs and the exponential ElGamal encryption algorithm, while achieving scalability by verifiably delegating heavy computations to untrusted entities. Off-chain computation over the additively homomorphic exponential ElGamal scheme reduces the computational cost on the blockchain, while Groth16 keeps the off-chain calculation robust without revealing any further details. Specifically, the Groth16 proof guarantees that (i) the encrypted votes accurately reflect the voter's voting power, ensuring no unauthorized weight manipulation; (ii) only valid non-negative vote values are encrypted, preventing unintended or malicious vote tampering; and (iii) the homomorphic summation is performed correctly. The implementation shows that the proofs are verified remarkably fast, making the S2DV protocol highly suitable for scalable DAO voting while preserving the security of the election.
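A minimal sketch of the additively homomorphic core (textbook exponential ElGamal; the group parameters below are illustrative toys, and a production system would use a standard elliptic-curve group):

```python
import random

p = 2**127 - 1            # a Mersenne prime; demo parameters only
g = 3

sk = random.randrange(2, p - 1)
h = pow(g, sk, p)                          # public key

def enc(m, r=None):
    """Enc(m) = (g^r, g^m * h^r): additively homomorphic in m."""
    r = random.randrange(2, p - 1) if r is None else r
    return (pow(g, r, p), (pow(g, m, p) * pow(h, r, p)) % p)

def add(c1, c2):
    """Componentwise product of ciphertexts encrypts the SUM of votes."""
    return ((c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p)

def dec_small(c, bound=1000):
    """Decryption yields g^m; m itself needs a small-range discrete-log
    search, which is fine for vote tallies."""
    gm = (c[1] * pow(c[0], p - 1 - sk, p)) % p   # g^m = c2 / c1^sk
    return next(m for m in range(bound) if pow(g, m, p) == gm)

print(dec_small(add(enc(3), enc(4))))            # 7
```

The componentwise product tallies encrypted votes without decrypting any ballot; the Groth16 proofs described above would then attest that each ciphertext encrypts a valid vote and that the summation was performed correctly.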
Honest Majority MPC with Communication in Minicrypt
In this work, we consider the communication complexity of MPC protocols in the honest-majority setting achieving malicious security, in both the information-theoretic and the computational setting. On the one hand, we study the possibility of efficiently basing honest-majority MPC protocols on the oblivious linear evaluation (OLE)-hybrid model with information-theoretic security. More precisely, we instantiate the preprocessing phase of the recent work Sharing Transformation (Goyal, Polychroniadou, and Song, CRYPTO 2022) assuming random OLE correlations. Notably, we are able to prepare packed Beaver triples with malicious security at the best known amortized cost, measured in field elements and OLE correlations per packed Beaver triple. To efficiently prepare the random OLE correlations themselves, we resort to IKNP-style OT extension protocols (Ishai et al., CRYPTO 2003) in the random oracle model.
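For readers unfamiliar with the object being prepared: a Beaver triple is a preprocessed sharing of (a, b, ab) that turns a secret-shared multiplication into two openings plus local work. A minimal, non-packed sketch follows; shares are summed in the clear here, where a real protocol would have parties broadcast the differences.

```python
import random

p = 2**61 - 1          # illustrative prime field

def share(x, n=3):
    """Additive sharing of x among n parties."""
    parts = [random.randrange(p) for _ in range(n - 1)]
    return parts + [(x - sum(parts)) % p]

def beaver_mul(x_sh, y_sh, a_sh, b_sh, c_sh):
    """Multiply secret-shared x, y with one triple (a, b, c = a*b):
    open d = x - a and e = y - b, then locally output shares of
    x*y = c + d*b + e*a + d*e."""
    d = (sum(x_sh) - sum(a_sh)) % p          # opened (public) values
    e = (sum(y_sh) - sum(b_sh)) % p
    z_sh = [(ci + d * bi + e * ai) % p
            for ai, bi, ci in zip(a_sh, b_sh, c_sh)]
    z_sh[0] = (z_sh[0] + d * e) % p          # one party adds the constant
    return z_sh

a, b = random.randrange(p), random.randrange(p)
triple = (share(a), share(b), share(a * b % p))
x, y = 12345, 67890
print(sum(beaver_mul(share(x), share(y), *triple)) % p == x * y % p)  # True
```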
On the other hand, we derive a communication lower bound for preparing OLE correlations in the information-theoretic setting based on negative results due to Damgård, Larsen, and Nielsen (CRYPTO 2019).
Combining our positive result with the work of Goyal, Polychroniadou, and Song (CRYPTO 2022), we derive a maliciously secure MPC protocol in the random oracle model whose amortized communication per gate is a small number of field elements, depending on the field-element length and the security parameter.
Transistor: a TFHE-friendly Stream Cipher
Fully Homomorphic Encryption (FHE) allows computations on encrypted data without requiring decryption, ensuring data privacy during processing. However, FHE introduces a significant expansion of ciphertext sizes compared to plaintexts, which results in higher communication. A practical solution to mitigate this issue is transciphering, where only the master key is homomorphically encrypted, while the actual data is encrypted using a symmetric cipher, usually a stream cipher. The server then homomorphically evaluates the stream cipher to convert the encrypted data into a homomorphically encrypted form.
We introduce Transistor, a stream cipher specifically designed for efficient homomorphic evaluation within the TFHE scheme, a widely used FHE framework known for its fast bootstrapping and its ability to handle low-precision data. Transistor operates over a small prime field chosen to optimize TFHE performance. Its components are carefully engineered both to control noise growth and to provide strong security guarantees. First, a simple TFHE-friendly implementation technique for LFSRs allows us to use such components to cheaply increase the state size. At the same time, a small Finite State Machine is the only part of the state updated non-linearly, each non-linear operation corresponding in TFHE to a rather expensive Programmable Bootstrapping. This update is done using an AES-round-like transformation. In contrast to other stream ciphers like SNOW or LEX, our construction comes with information-theoretic security arguments proving that an attacker cannot obtain any information about the secret key from three or fewer consecutive keystream outputs. These information-theoretic arguments are then combined with a thorough analysis of potential correlations to bound the minimal keystream length required for recovering the secret key.
Our implementation of Transistor significantly outperforms the state of the art of TFHE transciphering, achieving a throughput of over 60 bits/s on a standard CPU, all while avoiding the need for an expensive initialization process.
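As a flavor of the cheap linear component mentioned above, here is a generic LFSR over a small prime field; F_17 and the taps are my illustrative choices, not Transistor's actual parameters. In TFHE, such a purely linear update needs no programmable bootstrapping, which is reserved for the small non-linear FSM.

```python
q, taps = 17, [3, 1, 0, 5]      # illustrative field and feedback taps

def lfsr(state, steps):
    """Fibonacci-style LFSR over F_q: each step outputs one element and
    shifts in a linear combination of the state -- a purely linear
    (hence bootstrapping-free, in TFHE) state update."""
    out = []
    for _ in range(steps):
        out.append(state[0])
        fb = sum(t * s for t, s in zip(taps, state)) % q
        state = state[1:] + [fb]
    return out

print(lfsr([1, 0, 4, 9], steps=8))
```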
Securely Instantiating 'Half Gates' Garbling in the Standard Model
Garbling is a fundamental cryptographic primitive with numerous theoretical and practical applications. Since the first construction by Yao (FOCS'82, '86), a line of work has concerned itself with reducing the communication and computational complexity of that construction. One of the most efficient garbling schemes at present is the 'Half Gates' scheme of Zahur, Rosulek, and Evans (Eurocrypt'15). Despite its widespread adoption, the provable security of this scheme has been based on assumptions whose only instantiations are in idealized models. For example, in their original paper, Zahur, Rosulek, and Evans showed that hash functions satisfying a notion called circular correlation robustness (CCR) suffice for this task, and then proved that CCR-secure hash functions can be instantiated in the random permutation model. In this work, we show how to securely instantiate the Half Gates scheme in the standard model. To this end, we first show how this scheme can be securely instantiated given a (family of) weak CCR hash functions, a notion that we introduce. Furthermore, we show how a weak CCR hash function can be used to securely instantiate other efficient garbling schemes, namely those of Rosulek and Roy (Crypto'21) and Heath (Eurocrypt'24), so we believe this notion to be of independent interest. Finally, we construct such weak CCR hash functions using indistinguishability obfuscation and one-way functions. The security proof of this construction constitutes our main technical contribution. While our construction is not practical, it serves as a proof of concept supporting the soundness of these garbling schemes, which we regard as particularly important given the recent initiative by NIST to standardize garbling and the potential adoption of the optimizations in Half Gates.
Cryptanalysis of rank-2 module-LIP: a single real embedding is all it takes
The rank-2 module-LIP problem was introduced in cryptography by (Ducas, Postlethwaite, Pulles, van Woerden, Asiacrypt 2022) to construct the highly performant HAWK scheme. A first cryptanalytic work by (Mureau, Pellet--Mary, Pliatsok, Wallet, Eurocrypt 2024) showed a heuristic polynomial-time attack against the rank-2 module-LIP problem over totally real number fields. While mathematically interesting, this attack focuses on number fields that are not relevant for cryptography. The main families of fields used in cryptography are the highly predominant cyclotomic fields (used for instance in the HAWK scheme), as well as the NTRU Prime fields, used for instance in the eponymous NTRU Prime scheme (Bernstein, Chuengsatiansup, Lange, van Vredendaal, SAC 2017).
In this work, we generalize the attack of Mureau et al. against rank-2 module-LIP to the family of all number fields with at least one real embedding, which contains the NTRU Prime fields. We present three variants of our attack: first, a heuristic one that runs in quantum polynomial time. Second, under the extra assumption that the defining polynomial of the field has a suitably transitive Galois group (which is the case for the NTRU Prime fields), we give a provable attack that runs in quantum polynomial time. Third, with the same transitivity assumption, we give a heuristic attack that runs in classical polynomial time. For the latter we use a generalization of the Gentry--Szydlo algorithm to any number field, which might be of independent interest.
Context-Dependent Threshold Decryption and its Applications
We initiate the study of high-threshold public-key decryption, along with an enhanced security feature called context-dependent decryption.
Our study includes definitions, constructions, security proofs, and applications.
The notion of high-threshold decryption has received almost no attention in the literature. The enhanced security feature of context-dependent decryption is entirely new, and plays an important role in many natural applications of threshold decryption.
New Techniques for Random Probing Security and Application to Raccoon Signature Scheme
The random probing model formalizes a leakage scenario where each wire in a circuit leaks with some fixed probability. This model holds practical relevance due to its reduction to the noisy leakage model, which is widely regarded as the appropriate formalization for power and electromagnetic side-channel attacks.
In this paper, we present new techniques for designing efficient masking schemes that achieve tighter random probing security with lower complexity. First, we introduce the notion of cardinal random probing composability (Cardinal-RPC), offering a new trade-off between complexity and security for composing masking gadgets. Next, we propose a novel refresh technique based on a simple iterative process: randomly selecting and updating two shares with fresh randomness. While not perfectly secure in the standard probing model, this method achieves arbitrary cardinal-RPC security, making it a versatile tool for constructing random-probing-secure circuits. Using this refresh, we develop additional basic gadgets (e.g., linear multiplication, addition, and copy) that satisfy the cardinal-RPC notion. Despite the increased complexity, the gains in security significantly outweigh the overhead, with the number of iterations offering useful flexibility.
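The iterative refresh just described is simple enough to state in a few lines (a sketch over an illustrative prime modulus; the paper's substantive part is the analysis of how many iterations reach a given cardinal-RPC level):

```python
import random

p = 2**61 - 1          # illustrative prime modulus

def iterative_refresh(shares, iterations):
    """The refresh from the abstract: repeatedly pick two random share
    indices and re-randomize them with fresh randomness, preserving the
    sum (i.e., the masked secret). More iterations buy more security,
    at linear extra cost."""
    shares = list(shares)
    for _ in range(iterations):
        i, j = random.sample(range(len(shares)), 2)
        r = random.randrange(p)
        shares[i] = (shares[i] + r) % p
        shares[j] = (shares[j] - r) % p
    return shares

x = [5, 11, 2, 7]
assert sum(iterative_refresh(x, 10)) % p == sum(x) % p   # secret unchanged
```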
To showcase our techniques, we apply them to lattice-based signatures. Specifically, we introduce a new random-probing composable gadget for sampling small noise, a key component in various post-quantum algorithms. To assess security in this context, we generalize the random probing security model to address auxiliary inputs and public outputs. We apply our findings to Raccoon, a masking-friendly signature scheme originally designed for standard probing security. We prove the secure composition of our new gadgets for key generation and signature computation, and show that our masking scheme achieves a superior security-performance tradeoff compared to previous approaches based on random probing expansion. To our knowledge, this is the first fully secure instantiation of a post-quantum algorithm in the random probing model.
Tighter Control for Distributed Key Generation: Share Refreshing and Expressive Reconstruction Policies
The secure management of private keys is a fundamental challenge, particularly for the general public, as losing these keys can result in irreversible asset loss. Traditional custodial approaches pose security risks, while decentralized secret sharing schemes offer a more resilient alternative by distributing trust among multiple parties. In this work, we extend an existing decentralized, verifiable, and extensible cryptographic key recovery scheme based on Shamir's secret sharing. We introduce a refresh phase that ensures proactive security, preventing long-term exposure of secret shares. Our approach explores three distinct methods for refreshing shares, analyzing and comparing their security guarantees and computational complexity. Additionally, we extend the protocol to support more complex access structures, with a particular focus on threshold access trees, enabling fine-grained control over key reconstruction.
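A minimal sketch of the proactive-refresh idea in the Shamir setting (the standard add-a-sharing-of-zero approach; whether this matches any of the paper's three methods is not specified in the abstract):

```python
import random

p = 2**31 - 1          # illustrative prime; share points are party ids 1..n

def eval_poly(coeffs, x):
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

def refresh(shares, t):
    """Proactive refresh for a (t, n)-Shamir sharing: add shares of a fresh
    random degree-t polynomial with constant term 0. Old shares become
    useless to a mobile attacker, while the secret f(0) is unchanged."""
    zero_poly = [0] + [random.randrange(p) for _ in range(t)]
    return {i: (s + eval_poly(zero_poly, i)) % p for i, s in shares.items()}

# Example: shares of secret 42 under f(x) = 42 + 5x (t = 1), parties 1..3.
shares = {i: (42 + 5 * i) % p for i in (1, 2, 3)}
new_shares = refresh(shares, t=1)
```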
Finding and Protecting the Weakest Link: On Side-Channel Attacks on Masked ML-DSA
NIST has standardized ML-KEM and ML-DSA as replacements for pre-quantum key exchanges and digital signatures. Both schemes have already seen analysis with respect to side-channels, and first fully masked implementations of ML-DSA have been published. Previous attacks have focused on unprotected implementations or assumed only hiding countermeasures to be in-place. Thus, in contrast to ML-KEM, the threat of side-channel attacks for protected implementations of ML-DSA is mostly unclear.
In this work, we analyze the side-channel vulnerability of masked ML-DSA implementations. We first systematically assess the vulnerability of several potential points of attacks in different leakage models using information theory. Then, we explain how an adversary could launch first, second, and higher-order attacks using a recently presented framework for side-channel information in lattice-based schemes. In this context, we propose a filtering technique that allows the framework to solve for the secret key from a large number of hints; this had previously been prevented by numerical instabilities. We simulate the presented attacks and discuss the relation to the information-theoretic analysis.
Finally, we carry out relevant attacks on physical devices, discuss recent masked implementations, and instantiate a countermeasure against the most threatening attacks. The countermeasure mitigates the attacks with the highest noise-tolerance while having very little overhead. The results on the physical devices validate our simulations.
Error-Simulatable Sanitization for TFHE and Applications
We show that the randomized TFHE bootstrapping technique of Bourse and Izabachène provides a form of sanitization which is error-simulatable. This means that the randomized bootstrap can be used not only for sanitization of ciphertexts (i.e., to hide the function that has been computed), but also in server-assisted threshold decryption. Thus we extend the server-assisted threshold decryption method of Passelègue and Stehlé (ASIACRYPT '24) to FHE schemes with a small ciphertext modulus (such as TFHE). In addition, the error-simulatable sanitization enables us to obtain FuncCPA security for TFHE essentially for free.
Post-Quantum Blind Signatures from Matrix Code Equivalence
We construct a novel code-based blind signature scheme using the Matrix Equivalence Digital Signature (MEDS) group action. The scheme is built using ideas similar to the Schnorr blind signature scheme and CSI-Otter, but uses additional public-key and commitment information to overcome the difficulties that the MEDS group action faces: lack of module structure (present in Schnorr), lack of a quadratic twist (present in CSI-Otter), and non-commutativity of the acting group. We address security concerns related to public-key validation, and prove the security of our protocol in the random oracle model, using the security framework of Kastner, Loss, and Xu, under a variant of the Inverse Matrix Code Equivalence problem and a mild heuristic assumption.
Clustering Approach for Higher-Order Deterministic Masking
We present a novel scheme for securely computing the AND operation, without requiring additional online randomness. Building on the work of Nikova et al., our construction extends security beyond the first order while ensuring a uniform output distribution and resilience against glitches up to a specified threshold. This result addresses a longstanding open problem in side-channel-resistant masking schemes.
Our approach is based on a new method of share clustering, inspired by finite affine geometry, enabling simultaneous consideration of both security and uniformity. Furthermore, we demonstrate how this clustering-based framework can be applied to higher-order protection of ciphers like Ascon under a fully deterministic masking regime.
By eliminating the need for online randomness within the protected circuit, our work expands the practical scope of efficient higher-order masking schemes for resource-constrained applications.
X-Transfer: Enabling and Optimizing Cross-PCN Transactions
Blockchain interoperability solutions allow users to hold and transfer assets among different chains, reaping the benefits of each. To fully realize the benefits of multi-chain financial operations, it is paramount to support interoperability and cross-chain transactions also on Layer-2 networks, in particular payment channel networks (PCNs). Nevertheless, existing Layer-2 interoperability solutions still involve on-chain events, which limits their scalability and throughput. In this work, we present X-Transfer, the first secure, scalable, and fully off-chain protocol that allows payments across different PCNs. We formalize and prove the security of X-Transfer against rational adversaries with a game-theoretic analysis. To boost efficiency and scalability, X-Transfer also performs transaction aggregation to increase channel liquidity and transaction throughput while simultaneously minimizing payment routing fees. Our empirical evaluation shows that X-Transfer achieves at least twice the throughput of the baseline without transaction aggregation, confirming X-Transfer's efficiency.
Unconditional foundations for supersingular isogeny-based cryptography
In this paper, we prove that the supersingular isogeny problem (Isogeny), endomorphism ring problem (EndRing) and maximal order problem (MaxOrder) are equivalent under probabilistic polynomial time reductions, unconditionally.
Isogeny-based cryptography is founded on the presumed hardness of these problems, and their interconnection is at the heart of the design and analysis of cryptosystems like the SQIsign digital signature scheme. Previously known reductions relied on unproven assumptions such as the generalized Riemann hypothesis. In this work, we present unconditional reductions, and extend this network of equivalences to the problem of computing the lattice of all isogenies between two supersingular elliptic curves (HomModule).
For cryptographic applications, one requires computational problems to be hard on average, for random instances. It is well known that if Isogeny is hard (in the worst case), then it is hard for random instances. We extend this result by proving that if any of the above-mentioned classical problems is hard in the worst case, then all of them are hard on average. In particular, if there exist hard instances of Isogeny, then all of Isogeny, EndRing, MaxOrder and HomModule are hard on average.
A Decomposition Approach for Evaluating Security of Masking
Masking is a common countermeasure against side-channel attacks that encodes secrets into multiple shares, each of which may be subject to leakage. A key question is under what leakage conditions, and to what extent, does increasing the number of shares actually improve the security of these secrets. Although this question has been studied extensively in low-SNR regimes, scenarios where the adversary obtains substantial information—such as on low-noise processors or through static power analysis—have remained underexplored.
In this paper, we address this gap by deriving necessary and sufficient noise requirements for masking security in both standalone encodings and linear gadgets. We introduce a decomposition technique that reduces the relationship between an extended-field variable and its leakage into subproblems involving linear combinations of the variable’s bits. By working within binary subfields, we derive optimal bounds and then lift these results back to the extended field.
Beyond binary fields, we also present a broader framework for analyzing masking security in other structures, including prime fields. As an application, we prove a conjecture by Dziembowski et al. (TCC 2016), which characterizes, in terms of an additive group and its largest subgroup, the noise level at which noisy leakage guarantees that masking enhances the security of the secret.
10-Party Sublinear Secure Computation from Standard Assumptions
Secure computation enables mutually distrusting parties to jointly compute a function on their secret inputs, while revealing nothing beyond the function output. A long-running challenge is understanding the required communication complexity of such protocols – in particular, when communication can be sublinear in the circuit representation size of the desired function. While several techniques have demonstrated the viability of sublinear secure computation in the two-party setting, known methods for the corresponding multi-party setting rely either on fully homomorphic encryption, non-standard hardness assumptions, or are limited to a small number of parties. In this work, we expand the study of multi-party sublinear secure computation by demonstrating sublinear-communication 10-party computation from various combinations of standard hardness assumptions. In particular, our contributions show:
– 8-party homomorphic secret sharing under the hardness of (DDH or DCR), the superpolynomial hardness of LPN, and the existence of constant-depth pseudorandom generators;
– A general framework for achieving (N + M)-party sublinear secure computation using M-party homomorphic secret sharing for NC1 and correlated symmetric PIR.
Together, our constructions imply the existence of a 10-party MPC protocol with sublinear communication. At the core of our techniques lies a novel series of computational approaches based on homomorphic secret sharing.
ω(1/λ)-Rate Boolean Garbling Scheme from Generic Groups
Garbling schemes are a fundamental cryptographic tool for enabling private computations and ensuring that nothing leaks beyond the output. As a widely studied primitive, significant efforts have been made to reduce their size. Until recently, all such schemes followed the Lindell and Pinkas paradigm for Boolean circuits (JoC 2009), where each gate is represented as a set of ciphertexts computed using only symmetric-key primitives. However, this approach is inherently limited to O(λ) bits per gate, where λ is the security parameter. Recently, it has been shown that smaller garbled circuit size is achievable under stronger assumptions, such as variants of Learning with Errors (LWE) or Indistinguishability Obfuscation (iO). In addition to requiring high-end cryptography, none of these constructions is black-box in the underlying cryptographic primitives, a key advantage of prior work. In this paper, we present the first approach to garbling Boolean circuits that makes black-box use of a group and uses o(λ) bits per gate.
Building on a novel application of the Reverse Multiplication-Friendly Embeddings (RMFE) paradigm (Cascudo et al., CRYPTO 2018), we introduce a new packing mechanism for garbling schemes that packs Boolean values into integers and leverages techniques for arithmetic garbling over integer rings. Our results comprise two new succinct schemes that improve the rate by a factor of √log λ while retaining black-box usage. (1) Our first scheme is proven in the Generic Group Model (GGM) for circuits of width Ω(√log λ), obtaining a garbled circuit size of λ · |C| / √log λ. (2) Our second scheme is proven in the plain model under the Power-DDH assumption, attaining a garbled circuit size of λ · (|C| / √log λ + poly(λ) · depth(C)), but is restricted to layered circuits. Our schemes are the first to achieve sublinear (in λ) cost per gate under assumptions that do not imply fully homomorphic encryption; in addition, our scheme is also the first to achieve this while making black-box use of cryptography.
Authentication and sole control at a high level of assurance on widespread smartphones with threshold signatures
How can one be assured that a user entered their PIN on their smartphone? The question is especially relevant when deploying remotely secured services such as mobile wallets for digital identity and banking, which typically deploy a server side backed by a hardware security module (HSM). As long as the server can be trusted, authentication can be performed with high assurance, but it is challenging to guarantee sole control. This report defines an approach in terms of an abstract security problem and a concrete solution based on threshold signatures. It can be applied to use cases such as HSM-backed mobile identity wallets and other identification means.
Memory-Efficient BKW Algorithm for Solving the LWE Problem
The study of attack algorithms for the Learning with Errors (LWE) problem is crucial for the cryptanalysis of LWE-based cryptosystems. The BKW algorithm has gained significant attention as an important combinatorial attack for solving LWE. However, its exponential time and memory requirements severely limit its practical applicability, even for medium-sized parameters. In this paper, we present a memory-efficient BKW algorithm for LWE, which extends Bogos's work [Asiacrypt'16] on the Learning Parity with Noise (LPN) problem. While their work improved efficiency, it overlooked the high memory demands of the BKW algorithm. We address this with two key improvements. First, we propose an efficient reduction technique for low-memory regimes that combines the sum technique with Parallel Collision Search (PCS) to achieve a better time-memory trade-off. Second, we present an improved memory-optimized finite automaton for our optimized BKW algorithm, incorporating several efficient memory-saving reduction techniques and pruning potentially high-memory paths. Our algorithm, using graphs as a meta tool, can automatically identify the optimal reduction path within the graph, aiming to reduce both time and memory complexity. Compared to the state-of-the-art coded-BKW in the lattice-estimator, our algorithm improves the time complexity while also reducing the memory complexity.
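For orientation, here is one plain BKW reduction step for LPN, whose bucket table is exactly the memory bottleneck the paper targets (a textbook sketch with coefficient vectors packed into integers, not the paper's optimized reduction):

```python
def bkw_reduce(samples, block_bits):
    """One BKW reduction step for LPN: bucket samples (a, b) by the value
    of the last `block_bits` coordinates of a, and XOR colliding pairs to
    zero those coordinates out. Noise doubles per step, and the bucket
    table of size up to 2**block_bits drives the memory cost."""
    buckets, out = {}, []
    for a, b in samples:                         # a packed as an integer
        key = a & ((1 << block_bits) - 1)
        if key in buckets:
            a2, b2 = buckets.pop(key)
            out.append((a ^ a2, b ^ b2))         # last block is now zero
        else:
            buckets[key] = (a, b)
    return out

samples = [(0b1011, 1), (0b0011, 0), (0b1110, 1), (0b0110, 0)]
print(bkw_reduce(samples, block_bits=2))   # pairs colliding on low 2 bits
```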
White-Box Watermarking Signatures against Quantum Adversaries and Its Applications
Software watermarking for cryptographic functionalities enables embedding an arbitrary message (a mark) into a cryptographic function. An extraction algorithm, when provided with a (potentially unauthorized) circuit, retrieves either the embedded mark or a special symbol unmarked indicating the absence of a mark. It is difficult to modify or remove the embedded mark without destroying the functionality of a marked function. Previous works have primarily employed black-box extraction techniques, where the extraction algorithm requires only input-output access to the circuit rather than its internal descriptions (white-box extraction). Zhandry (CRYPTO 2021) identified several challenges in watermarking public-key encryption (PKE) with black-box extraction and introduced the notion of privacy for white-box watermarking against classical adversaries. Kitagawa and Nishimaki (Journal of Cryptology 37(3)) extended watermarking techniques to pseudorandom functions (PRFs) and PKE in the presence of quantum adversaries, enabling extraction from pirate quantum circuits but failing to achieve privacy.
In this work, we investigate white-box watermarking for digital signatures secure against quantum adversaries. Our constructions enable the extraction of embedded marks from the description of a pirate quantum circuit that produces valid signatures, while ensuring that black-box access to a marked signing function does not reveal information about the embedded mark. We define and construct white-box watermarking signatures that are secure against quantum adversaries, leveraging the learning with errors (LWE) assumption and quantum fully homomorphic encryption. Furthermore, we highlight that privacy concerns are even more critical in the context of signatures than in PKE. We also present a compelling practical application of white-box watermarking signatures.
Additionally, we explore the concept of universal copy protection for signatures. We define universal copy protection as a mechanism that transforms any quantumly secure signature scheme into a copy-protected variant without altering the verification key or verification algorithm. This approach is preferable to developing specific copy-protected signature schemes, as it allows existing schemes to be secured without modifying their published verification keys. We demonstrate that universal copy protection for all quantum secure signatures is impossible by leveraging our white-box watermarking signatures secure against quantum adversaries.
Dazzle: Improved Adaptive Threshold Signatures from DDH
The adaptive security of threshold signatures considers an adversary that adaptively corrupts users to learn their secret key shares and states. Crites, Komlo, and Maller (Crypto 2023) proposed Sparkle, the first threshold signature scheme in the pairing-free discrete-log setting to be proved adaptively secure. However, its proof of full adaptive security requires the algebraic group model (AGM) and is based on an interactive assumption. Bacho, Loss, Tessaro, Wagner, and Zhu (Eurocrypt 2024) proposed Twinkle, whose full adaptive security can be based on the standard DDH assumption only.
We propose Dazzle and Dazzle-T, adaptively secure threshold signature schemes based on DDH without the AGM, the same assumption and model as Twinkle. Our schemes improve upon Twinkle in signature size, round complexity, and/or security tightness. In particular, Dazzle and Dazzle-T both have signatures that are shorter than Twinkle by one group element. Regarding the round complexity and tightness, Twinkle is three-round and non-tight. Our Dazzle is two-round and has the same security loss as Twinkle, while Dazzle-T is three-round and fully tight.
We achieve our improvements by optimizing the underlying single-party signature scheme and showing that the single-party scheme can be transformed to a threshold scheme by a simpler transformation than that of Twinkle.
Transparent SNARKs over Galois Rings
Recently, there has been a growing need for SNARKs to operate over a broader range of algebraic structures, and one important such structure is the Galois ring. We present transparent SNARK schemes over arbitrary Galois rings. Compared with the Rinocchio scheme of Ganesh et al. (J Cryptol 2023), our SNARK schemes do not require a trusted third party to establish a structured reference string (SRS).
In this paper, we present an expander code over arbitrary Galois rings, which can be encoded in linear time. Using this expander code, we extend the Brakedown commitment scheme of Golovnev et al. (CRYPTO 2023) to Galois rings. Combining it with the Libra framework of Xie et al. (CRYPTO 2019), we obtain a transparent SNARK for log-space uniform circuits over Galois rings; combining it with HyperPlonk of Chen et al. (EUROCRYPT 2023), we obtain a transparent SNARK for NP circuits over Galois rings, in both cases with concrete bounds on prover time, proof size, and verifier time.
PKE and ABE with Collusion-Resistant Secure Key Leasing
Secure key leasing (SKL) is an advanced encryption functionality that allows a secret key holder to generate a quantum decryption key and securely lease it to a user. Once the user returns the quantum decryption key (or provides a classical certificate confirming its deletion), they lose their decryption capability. Previous works on public key encryption with SKL (PKE-SKL) have only considered the single-key security model, where the adversary receives at most one quantum decryption key. However, this model does not accurately reflect real-world applications of PKE-SKL. To address this limitation, we introduce collusion-resistant security for PKE-SKL (denoted as PKE-CR-SKL). In this model, the adversary can adaptively obtain multiple quantum decryption keys and access a verification oracle which validates the correctness of queried quantum decryption keys. Importantly, the size of the public key and ciphertexts must remain independent of the total number of generated quantum decryption keys. We present the following constructions:
- A PKE-CR-SKL scheme based on the learning with errors (LWE) assumption.
- An attribute-based encryption scheme with collusion-resistant SKL (ABE-CR-SKL), also based on the LWE assumption.
- An ABE-CR-SKL scheme with classical certificates, relying on multi-input ABE with polynomial arity.
HasteBoots: Proving FHE Bootstrapping in Seconds
Fully Homomorphic Encryption (FHE) enables computations on encrypted data, ensuring privacy for outsourced computation. However, verifying the integrity of FHE computations remains a significant challenge, especially for bootstrapping, the most computationally intensive operation in FHE. Prior approaches, including zkVM-based solutions and general-purpose SNARKs, suffer from inefficiencies, with proof generation times ranging from several hours to days. In this work, we propose HasteBoots, a succinct argument tailored for FHE operations. By designing customized polynomial interactive oracle proofs and optimized polynomial commitment schemes, HasteBoots achieves proof generation in a few seconds for FHE bootstrapping, significantly outperforming existing methods. Our approach demonstrates the potential for scalable and efficient verifiable FHE, paving the way for practical, privacy-preserving computations.
Quantum Security Evaluation of ASCON
Grover's algorithm, which reduces the search complexity of symmetric-key ciphers and hash functions, poses a significant security challenge in cryptography. Recent research has focused on estimating Grover's search complexity and assessing post-quantum security. This paper analyzes a quantum circuit implementation of ASCON, including ASCON-AEAD, hash functions, and ASCON-80pq, in alignment with NIST’s lightweight cryptography standardization efforts. We place particular emphasis on circuit depth, which directly impacts execution time, and analyze the quantum resource costs associated with Grover’s algorithm-based key recovery and collision attacks. Additionally, we estimate the resources required to assess the quantum-resistant security strength of ASCON, based on security levels and the latest research trends.
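For scale, the generic Grover iteration count for a k-bit key is roughly floor((π/4)·2^(k/2)); the short calculation below (the standard formula, not a figure from the paper) prints it for ASCON's 128-bit keys and ASCON-80pq's 160-bit key:

    import math

    for k in (128, 160):   # ASCON-AEAD key size and ASCON-80pq key size
        iters = (math.pi / 4) * 2 ** (k / 2)
        print(f"{k}-bit key: about 2^{math.log2(iters):.1f} Grover iterations")

Each iteration must run the whole cipher as a reversible circuit, which is why the per-iteration circuit depth analyzed in the paper dominates the real attack cost.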
Improved Resultant Attack against Arithmetization-Oriented Primitives
In the last decade, the introduction of advanced cryptographic protocols operating on large finite fields has raised the need for efficient cryptographic primitives in this setting, commonly referred to as Arithmetization-Oriented (AO). The cryptanalysis of AO hash functions is essentially done through the study of the CICO problem on the underlying permutation. Two recent works at Crypto 2024 and Asiacrypt 2024 managed to solve the CICO problem much more efficiently than traditional Gröbner basis methods, using respectively advanced Gröbner basis techniques and resultants.
In this paper, we propose an attack framework based on resultants that applies to a wide range of AO permutations and improves significantly upon these two recent works. Our improvements mainly come from an efficient reduction procedure that we propose and rigorously analyze, taking advantage of fast multivariate multiplication. We present the most efficient attacks on Griffin, Arion, Anemoi, and Rescue, and we show that most variants of Griffin, Arion and Anemoi fail to reach their claimed security levels. For the first time, we successfully break a parameter set of Rescue. The presented theory and complexity estimates are backed up with experimental attacks: notably, we practically find CICO solutions for more rounds of Griffin, Anemoi, and Rescue than the previous best practical attacks.
MPC with Publicly Identifiable Abort from Pseudorandomness and Homomorphic Encryption
Publicly identifiable abort is a critical feature for ensuring accountability in outsourced computations using secure multiparty computation (MPC). Despite its importance, no prior work has specifically addressed identifiable abort in the context of outsourced computations. In this paper, we present the first MPC protocol that supports publicly identifiable abort with minimal overhead for external clients. Our approach minimizes client-side computation by requiring only a few pseudorandom function evaluations per input. On the server side, the verification process involves lightweight linear function evaluations using homomorphic encryption. This results in verification times of a few nanoseconds per operation for servers, with client overhead being approximately two orders of magnitude lower. Additionally, the public verifiability of our protocol reduces client input/output costs compared to SPDZ-based protocols, on which we base our protocol. For example, in secure aggregation use cases, our protocol achieves over twice the efficiency during the offline phase and up to an 18% speedup in the online phase, significantly outperforming SPDZ.
TFHE Gets Real: an Efficient and Flexible Homomorphic Floating-Point Arithmetic
Floating-point arithmetic plays a central role in computer science and is used in various domains where precision and computational scale are essential. One notable application is in machine learning, where Fully Homomorphic Encryption (FHE) can play a crucial role in safeguarding user privacy. In this paper, we focus on TFHE and develop novel homomorphic operators designed to enable the construction of precise and adaptable homomorphic floating-point operations. Integrating floating-point arithmetic within the context of FHE is particularly challenging due to constraints such as small message space and the lack of information during computation. Despite these challenges, we were able to determine parameters for common precisions (e.g., 32-bit, 64-bit) and achieve remarkable computational speeds, with 32-bit floating-point additions completing in 2.5 seconds and multiplications in approximately 1 second in a multi-threaded environment. These metrics provide empirical evidence of the efficiency and practicality of our proposed methods, which significantly outperform previous efforts. Our results demonstrate a significant advancement in the practical application of FHE, making it more viable for real-world scenarios and bridging the gap between theoretical encryption techniques and practical usability.
Inaccessible Entropy for Watermarking Generative Agents
In this work, we construct distortion-free and unforgeable watermarks for language models and generative agents. The watermarked output cannot be forged by an adversary, nor removed by the adversary without significantly degrading model output quality. That is, the watermarked output is distortion-free: the watermarking algorithm does not noticeably change the quality of the model output, and without the public detection key, no efficient adversary can distinguish watermarked outputs from outputs that are not. The core of the watermarking schemes involves embedding a message and a publicly-verifiable digital signature in the generated model output. The message and signature can be extracted during the detection phase and verified by any authorized entity that has the public key. We show that, assuming the standard cryptographic assumption of one-way functions, we can construct distortion-free and unforgeable watermarking schemes. Our framework relies on analyzing the inaccessible entropy of the watermarking schemes, based on computational entropy notions derived from the existence of one-way functions.
Tighter Security Notions for a Modular Approach to Private Circuits
To counteract side-channel attacks, a masking scheme splits each intermediate variable into shares and transforms each elementary operation (e.g., field addition and multiplication) into a masked counterpart called a gadget, such that intrinsic noise in the leakages renders secret recovery infeasible in practice. A simple and efficient security notion is the probing model, which ensures that any bounded set of shares is distributed independently of the secret input. One requirement of the probing model is that the noise in the leakages must increase with the number of shares, largely restricting side-channel security in the low-noise scenario. Another security notion for masking, called the random probing model, allows each variable to leak with some probability. While this model reflects the physical reality of side channels much better, it brings significant overhead. At Crypto 2018, Ananth et al. proposed a modular approach that can provide random probing security for any security level by recursively expanding small base gadgets, such that the tolerable leakage probability decreases with the gadget size while the security increases exponentially with the recursion depth of the expansion. Belaïd et al. then provided a formal security definition called Random Probing Expandability (RPE) and an explicit framework using the modular approach to construct masking schemes at Crypto 2020.
In this paper, we investigate how to tighten the RPE definition by allowing dependent failure probabilities of multiple inputs, which results in a new definition called related RPE. It can be directly used for the expansion of multiplication gates, reducing the complexity of the base multiplication gadget proposed at Asiacrypt 2021 while maintaining the same security level. Furthermore, we describe a method to expand arbitrary gates (rather than only multiplication) with the related RPE gadgets.
Besides, we introduce another new RPE definition, called multiple-inputs RPE, used for the expansion of multiple-input gates composed of arbitrary gates. Utilizing these methods, we reduce the complexity of the 3-share circuit compiler, measured in the size of the unprotected circuit for a given protection failure probability of the global circuit; our complexity is lower than that of the state-of-the-art work proposed at Eurocrypt 2021 for the same failure probability. Additionally, we provide the construction of a 5-share circuit compiler with improved complexity.
Garbled Lookup Tables from Homomorphic Secret Sharing
Garbled Circuit (GC) is a fundamental tool in cryptography, especially in secure multiparty computation. Most garbling schemes follow a gate-by-gate paradigm, where the communication cost is proportional to the circuit size times the security parameter λ.
Recently, Heath, Kolesnikov and Ng (Eurocrypt 2024) partially transcend the circuit size barrier by considering large gates. To garble an arbitrary -input -output gate, their scheme requires bits of communication. The security relies on circular correlation robust hash functions (CCRH).
We further improve the communication cost, removing the exponential term. The computation cost is dominated by exponentiations. Our construction is built upon recent progress in DCR-based Homomorphic Secret Sharing (HSS), so it additionally relies on the decisional composite residuosity (DCR) assumption.
As an intermediate step, we construct programmable distributed point functions with decomposable keys, relying on the DCR assumption. Previously, such a primitive could only be constructed from multilinear maps or sub-exponential lattice assumptions.
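For contrast, the trivial information-theoretic DPF below (a baseline sketch, not the paper's scheme) uses keys as large as the whole domain; the point of programmable DPFs with decomposable keys is to shrink these keys dramatically:

    import secrets

    def trivial_dpf_gen(n, alpha, beta):
        # Keys are full-domain tables whose XOR is the point function f_{alpha,beta}.
        k0 = [secrets.randbits(32) for _ in range(n)]
        k1 = list(k0)
        k1[alpha] ^= beta
        return k0, k1

    k0, k1 = trivial_dpf_gen(8, alpha=3, beta=0xBEEF)
    assert [a ^ b for a, b in zip(k0, k1)] == [0, 0, 0, 0xBEEF, 0, 0, 0, 0]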
Adaptively Secure IBE from Lattices with Asymptotically Better Efficiency
Current adaptively secure identity-based encryption (IBE) constructions from lattices are unable to achieve a good balance among the master public key size, secret key size, modulus and reduction loss. All existing lattice-based IBE schemes share a common restriction: the modulus is quadratic in the trapdoor norm.
In this work, we remove this restriction and present a new adaptively secure IBE scheme from lattices in the standard model, which improves the state-of-the-art construction proposed by Abla et al. (TCC 2021) and achieves asymptotically better efficiency. More precisely, we achieve the asymptotically minimal number of public vectors among all the existing schemes, along with a significantly smaller modulus compared to the scheme by Abla et al. (TCC 2021). Furthermore, our scheme enjoys the smallest Gaussian width of the secret key among all existing schemes and has the same tightness as Abla et al.'s scheme.
We propose a novel cross-multiplication design for our IBE scheme, along with several novel tools and techniques, including: (a) a homomorphic computation algorithm that outputs BGG+-style encoding with two distinct-norm trapdoors; (b) a sampling algorithm with hybrid Gaussian outputs; and (c) a partial rerandomization algorithm. These new tools and techniques are general and could find rich applications in lattice-based cryptography.
Chiplet-Based Techniques for Scalable and Memory-Aware Multi-Scalar Multiplication
This paper presents a high-performance architecture for accelerating Multi-Scalar Multiplication (MSM) on ASIC platforms, targeting cryptographic applications with high throughput demands. Unlike prior MSM accelerators that focus solely on efficient processing elements (PEs), our chiplet-based design optimally balances area, power, and computational throughput. We identify a mixed window configuration of 12- and 13-bit windows that enables an efficient multi-PE integration of 10 PEs per chiplet. Our single-PE design achieves a 1.37x speedup and 1.3x area reduction over prior works, while the multi-PE chiplet design improves the area-time product by 2.2x, offering scalability, lower production costs, and higher manufacturing yields.
Verifiable Streaming Computation and Step-by-Step Zero-Knowledge
We propose a new incrementally computable proof system, called Incrementally Verifiable streaming Computation (IVsC). IVsC enables computing incremental proofs of correct execution for any RAM program on a streaming input. An input is called streaming if it is only available on-the-fly as part of an ongoing data generation/streaming process, and not available all at once. We also propose a new notion of zero-knowledge for IVsC that guarantees the proof can be incrementally verified w.r.t. an encrypted digest, where the proof and digest hide the full data stream. We design zero-knowledge IVsC from a wide variety of standard falsifiable assumptions (such as decision-linear/sub-exponential DDH/LWE).
We also introduce a new notion of non-interactive zero-knowledge proofs that we call step-by-step zero-knowledge protocols. Such protocols have strong zero-knowledge guarantees, wherein the prover's entire internal memory is simulatable at any point during proof generation. That is, unlike standard zero-knowledge proofs, where only the final proof is simulatable, we can also simulate the prover's state at every step of the computation. Such proof systems will be useful in settings where an adversary could corrupt an honest prover even before it generates the full proof. We show that a zero-knowledge IVsC system can be used (almost) as a black box to design step-by-step zero-knowledge proof systems, which are therefore secure under standard assumptions.
The Round Complexity of Black-Box Post-Quantum Secure Computation
We study the round complexity of secure multi-party computation (MPC) in the post-quantum regime, where honest parties and communication channels are classical but the adversary can be a quantum machine. Our focus is on the black-box setting, where both the construction and the security reduction are black-box in nature. In this context, Chia, Chung, Liu, and Yamakawa [FOCS'22] demonstrated the infeasibility of achieving standard simulation-based security within constant rounds, unless widely believed complexity-theoretic conjectures fail. This outcome leaves crucial feasibility questions unresolved. Specifically, it remains unknown whether black-box constructions are achievable within polynomial rounds; additionally, the existence of constant-round constructions with respect to ε-simulation, a relaxed yet useful alternative to the standard simulation notion, remains unestablished.
This work provides positive answers to the aforementioned questions. We introduce the first black-box construction for post-quantum MPC in polynomial rounds, from the minimal assumption of post-quantum semi-honest oblivious transfers. In the two-party scenario, our construction requires even fewer rounds. These results have already found application in the oracle separation concerning classical-communication quantum MPC in the recent work of Kretschmer, Qian, and Tal [STOC'25].
As for ε-simulation, Chia, Chung, Liang, and Yamakawa [CRYPTO'22] resolved the issue for the two-party setting, leaving the general multi-party setting as an open question. We complete the picture by presenting the first black-box and constant-round construction in the multi-party setting. Our construction can be instantiated using various standard post-quantum primitives, including lossy public-key encryption, linearly homomorphic public-key encryption, or dense cryptosystems.
En route, we obtain a black-box and constant-round post-quantum commitment that achieves a weaker version of the standard 1-many non-malleability, from the minimal assumption of post-quantum one-way functions. Besides its utility in our post-quantum MPC construction, this commitment scheme also reduces the assumption used in the lower bound of quantum parallel repetition recently established by Bostanci, Qian, Spooner, and Yuen [STOC'24]. We anticipate that it will find more applications in the future.
cuFalcon: An Adaptive Parallel GPU Implementation for High-Performance Falcon Acceleration
The rapid advancement of quantum computing has ushered in a new era of post-quantum cryptography, urgently demanding quantum-resistant digital signatures to secure modern communications and transactions. Among NIST-standardized candidates, Falcon—a compact lattice-based signature scheme—stands out for its suitability in size-sensitive applications. In this paper, we present cuFalcon, a high-throughput GPU implementation of Falcon that addresses its computational bottlenecks through adaptive parallel strategies. At the operational level, we optimize Falcon's key components for GPU architectures through memory-efficient FFT, adaptive parallel ffSampling, and a compact computation mode. For signature-level optimization, we implement three versions of cuFalcon: the raw key version, the expanded key version, and the balanced version, which achieves a trade-off between efficiency and memory usage. Additionally, we design batch processing, streaming mechanisms, and memory pooling to handle multiple signature tasks efficiently. Ultimately, performance evaluations show significant improvements, with the raw key version achieving 172k signatures per second and the expanded key version reaching 201k. Compared to the raw key version, the balanced version achieves a 7% improvement in throughput, while compared to the expanded key version, it reduces memory usage by 70%. Furthermore, our raw key version implementation outperforms the reference implementation by 36.75× and achieves a 2.94× speedup over the state-of-the-art GPU implementation.
New Exchanged Boomerang Distinguishers for 5-Round AES
In block ciphers, the attacker should not be able to distinguish a block cipher from a random permutation, making the existence of a distinguisher important. Cryptanalysis of the reduced-round variants of block ciphers is also important in cryptographic design. AES is the most widely used block cipher, and currently, the best-known distinguisher for 5-round AES has a data and time complexity of with a success probability of 55%. In this paper, we propose the fully active exchanged boomerang and multiple exchanged boomerang distinguishers for 5-round AES, based on the retracing boomerang key-recovery attack. The fully active exchanged boomerang distinguisher utilizes the probability that either each byte of the diagonal of the returned plaintext pair is fully active, or the diagonal is inactive for all diagonals. This probability is very high, but we enhance it using the friends pair technique to distinguish a block cipher from a random permutation. The multiple exchanged boomerang distinguisher utilizes the fact that there are three trails where the probability of one diagonal of the returned plaintext pair being inactive is higher than the random probability, and one trail where it is equal to the random probability. This 5-round distinguisher has a complexity of and a success probability of 82%, which represents a new best-known result for the secret-key distinguisher on 5-round AES.
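Schematically, a boomerang-style distinguishing experiment has the shape below (a Python skeleton; `enc`, `dec`, the differences and the acceptance test stand in for the paper's specific trails and exchanged/friends-pair refinements):

    import secrets

    def xor(x, y):
        return bytes(a ^ b for a, b in zip(x, y))

    def boomerang_hits(enc, dec, d_in, d_out, n_queries, returned_pair_ok):
        hits = 0
        for _ in range(n_queries):
            p1 = secrets.token_bytes(16)          # AES block size
            p2 = xor(p1, d_in)                    # plaintext pair with input difference
            c3, c4 = xor(enc(p1), d_out), xor(enc(p2), d_out)
            p3, p4 = dec(c3), dec(c4)             # the "returned" plaintext pair
            if returned_pair_ok(xor(p3, p4)):     # e.g., a chosen diagonal is inactive
                hits += 1
        return hits   # compare against the expectation for a random permutation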
LatticeFold+: Faster, Simpler, Shorter Lattice-Based Folding for Succinct Proof Systems
Folding is a technique for building efficient succinct proof systems. Many existing folding protocols rely on the discrete-log based Pedersen commitment scheme, and are therefore not post-quantum secure and require a large (256-bit) field. Recently, Boneh and Chen constructed LatticeFold, a folding protocol using lattice-based commitments which is plausibly post-quantum secure and can operate with small (64-bit) fields. For knowledge soundness, LatticeFold requires the prover to provide a range proof on all the input witnesses using bit-decomposition, and this slows down the prover. In this work we present LatticeFold+, a very different lattice-based folding protocol that improves on LatticeFold in every respect: the prover is five to ten times faster, the verification circuit is simpler, and the folding proofs are shorter. To do so we develop two novel lattice techniques. First, we develop a new purely algebraic range proof which is much more efficient than the one in LatticeFold, and may be of independent interest. We further shrink the proof using double commitments (commitments of commitments). Second, we show how to fold statements about double commitments using a new sumcheck-based transformation.
Towards Optimal Early Stopping Agreement Protocols
Early stopping agreement protocols guarantee termination based on the actual number of malicious parties encountered during execution, rather than assuming the worst-case number of corruptions. A lower bound on the round complexity of such protocols, as a function of the actual number of corruptions, is known. In this work, we substantially improve the state of the art for cryptographic early stopping protocols in the honest majority setting. In this regime, the best known protocol, due to Elsheimy, Loss, and Papamanthou (ASIACRYPT '24), achieves a round complexity within a constant factor of optimal. We begin by introducing an improvement to the Elsheimy et al. approach, which yields the first polynomial-time early stopping agreement protocols in this corruption regime with improved round complexity. We then instantiate our generic framework with refined components to obtain a very concretely efficient protocol with low round complexity and polynomial communication complexity. Finally, we show that it is possible to come within one round of the optimal round complexity by presenting a broadcast protocol based on the exponential information gathering approach, albeit with exponential communication complexity.
Silent Circuit Relinearisation: Sublinear-Size (Boolean and Arithmetic) Garbled Circuits from DCR
We introduce a general template for building garbled circuits with low communication under the decisional composite residuosity (DCR) assumption. For layered Boolean circuits, we can garble a circuit with communication sublinear in its size, plus an additive term polynomial in the security parameter. For layered arithmetic circuits with bounded integer computation, we obtain a similar sublinear-size result. These are the first constructions of general-purpose garbled circuits with sublinear size that do not rely on heavy tools like indistinguishability obfuscation or attribute-based and fully homomorphic encryption.
To achieve these results, our main technical tool is a new construction of a form of homomorphic secret sharing where some of the inputs are semi-private, that is, known to one of the evaluating parties. Through a new relinearisation technique that allows performing arbitrary additions and multiplications on semi-private shares, we build an HSS scheme that supports evaluating any function combining a polynomially-sized circuit applied to the semi-private input with a restricted-multiplication (i.e., NC1) circuit applied to the private input. This significantly broadens the expressiveness of prior known HSS constructions.
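A minimal illustration of why semi-private inputs are easier to handle in secret sharing (a toy sketch, not the paper's relinearisation): if x is known in the clear to both evaluators while y is additively shared, shares of x·y can be computed locally with no interaction.

    import secrets

    M = 2**61 - 1                  # toy modulus
    y = 1234567                    # private input
    y0 = secrets.randbelow(M)
    y1 = (y - y0) % M              # additive shares of y

    x = 42                         # semi-private input: visible to both evaluators
    z0, z1 = (x * y0) % M, (x * y1) % M
    assert (z0 + z1) % M == (x * y) % M   # local shares of the product

The paper's relinearisation technique is what makes it possible to keep going, i.e., to multiply and add such shares repeatedly without the sharing degrading.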
Provable Speedups for SVP Approximation Under Random Local Blocks
We point out that if every local block appearing in the slide reduction algorithms [ALNS20] is assumed to be `random' (as is usual in the cryptographic setting), then combining the slide reduction algorithms [ALNS20] with Pouly and Shen's algorithm [PoSh24] yields exponentially faster, provably correct algorithms for approximate SVP in the approximation-factor regime most relevant for cryptography.
K-Linkable Ring Signatures and Applications in Generalized Voting
Rational Secret Sharing with Competition
The rational secret sharing (RSS) problem considers incentivizing rational parties to share their received information so as to reconstruct a correctly shared secret. Halpern and Teague (STOC'04) demonstrated that solving the RSS problem deterministically with explicitly bounded runtime is impossible if parties prefer learning the secret to not learning it, and prefer that fewer other parties learn.
To overcome this impossibility result, we propose RSS with competition. We consider a slightly different yet sensible preference profile: each party prefers to learn the secret early and prefers that fewer parties learn it before them. This preference profile changes the information-hiding dynamics among parties found in prior works: first, those who have learned the secret are indifferent towards, or even prefer, informing others later; second, the competition to learn the secret earlier among different access groups in the access structure facilitates information sharing inside an access group. As a result, we are able to construct the first deterministic RSS algorithm that terminates in at most two rounds. Additionally, our construction does not employ any cryptographic machinery (being fully game-theoretic and using the underlying secret-sharing scheme as a black box), nor does it require knowledge of the parties' exact utility functions. Furthermore, we consider general access structures.
IBE-IBE: Intent-Based Execution through Identity-Based Encryption and Auctions
This paper introduces a decentralized and leaderless sealed bid auction model for dynamic pricing of intents across blockchain networks. We leverage Multi-Party Computation (MPC) and Identity-Based Encryption (IBE) to improve pricing while ensuring fairness and decentralization. By addressing the vulnerabilities of current centralized or static pricing mechanisms, our approach fosters transparent, secure, and competitive price discovery. We further enhance the confidentiality of intents through Multi-Party Computation (MPC), Fully Homomorphic Encryption (FHE), and Trusted Execution Environments (TEE). Our novel methodology mitigates the risks of frontrunning and centralization while preserving the rapid settlement times essential to decentralized finance (DeFi).
Robust Non-Interactive Zero-Knowledge Combiners
A robust non-interactive zero-knowledge (NIZK) combiner is a construction that, given access to a number of candidate instantiations of a NIZK for some language, itself implements a NIZK for the same language. Moreover, the combiner is secure assuming at least a threshold of the given candidates are secure.
In this work, we provide the first definition of combiners for NIZK, and prove that robust NIZK combiners do not exist in certain parameter regimes (unless the polynomial hierarchy collapses).
On the positive side, we provide different constructions of robust NIZK combiners outside those regimes. In particular, we show how to obtain:
1) A black-box combiner working for a special class of {\em homomorphic} languages, with polynomially many candidates.
2) A non-black-box combiner working for any language, with constantly many candidates.
3) A non-black-box combiner working for any language, with polynomially many candidates.
DART: Decentralized, Anonymous, and Regulation-friendly Tokenization
We introduce DART, a fully anonymous, account-based payment system designed to address a comprehensive set of real-world considerations, including regulatory compliance, while achieving constant transaction size. DART supports multiple asset types, enabling users to issue on-chain assets such as tokenized real-world assets. It ensures confidentiality and anonymity by concealing asset types, transaction amounts, balances, and the identities of both senders and receivers, while guaranteeing unlinkability between transactions. Our design provides a mechanism for asset-specific auditing. Issuers can designate asset-specific auditors for the assets they issue, with the system preserving the privacy of the auditor’s identity to achieve asset type privacy. Only the designated auditor is authorized to decrypt transactions related to their associated asset, and users efficiently prove the association between the (hidden) asset type and the (hidden) designated auditor in their transactions.
DART supports non-interactive payments, allowing an online sender to submit a transaction even when the receiver is offline, while still incorporating a receiver affirmation mechanism that captures the real-world compliance consideration where the receiver must confirm (or deny) an incoming transaction. To the best of our knowledge, this is the first scheme of this kind in the permissionless setting. To accommodate all eventualities, DART also incorporates a reversibility mechanism, enabling senders to reclaim funds from pending transactions if the receiver’s affirmation is not yet provided. Finally, it offers a privacy-preserving proof of balance (per asset type) mechanism.
Our system achieves full anonymity while supporting concurrent incoming and outgoing transactions, resolving a common issue that plagues many account-based anonymous systems. We further demonstrate how our system supports multi-party transactions, allowing payment to multiple receivers in one transaction efficiently. We provide a full formal model in the Universal Composition (UC) setting, as well as a UC protocol realization.
On the Power of Polynomial Preprocessing: Proving Computations in Sublinear Time, and More
Cryptographic proof systems enable a verifier to be convinced of a computation's correctness without re-executing it; common efficiency requirements include both succinct proofs and fast verification.
In this work we put forth the general study of cryptographic proof systems with sublinear proving time (after preprocessing).
Prior work has achieved sublinear proving only for limited computational settings (e.g., vector commitments and lookup arguments), relying on specific assumptions or through non-black-box use of cryptographic primitives. In this work we lift many of these limitations through the systematic study of a specific object: polynomial commitments (PC) with sublinear proving time, a choice motivated by the crucial role that PC play in the design of efficient cryptographic schemes.
Our main result is a simple construction of a PC with sublinear prover based on any vector commitment scheme (VC) and any preprocessing technique for fast polynomial evaluation. We prove that this PC satisfies evaluation binding, which is the standard security notion for PC, and show how to expand our construction to achieve the stronger notion of knowledge soundness (extractability).
The first application of our result is a construction of "index-efficient" SNARKs meaning that the prover is sublinear, after preprocessing, in the size of the index (i.e., the NP-relation describing the proven statement). Our main technical contribution is a method to transform a class of standard Polynomial Interactive Oracle Proofs (PIOPs) into index-efficient PIOPs. Our construction of index-efficient SNARKs makes black-box use of such index-efficient PIOPs and a PC with sublinear prover.
As a corollary, this yields the first lookup argument for unstructured tables in which the prover is sublinear in the size of the table, while making only black-box use of a VC and thus allowing instantiations from generic assumptions such as collision-resistant hash functions. Prior lookup arguments with sublinear provers were only known with non-black-box use of cryptographic primitives, or from pairings. Finally, our last application is a transformation that builds UC-secure SNARKs from simulation-extractable ones, with an approximately linear overhead in proving time (as opposed to quadratic in prior work).
UC-Security of Encrypted Key Exchange: A Tutorial
Password-Authenticated Key Exchange (PAKE) is a family of key exchange protocols secure against man-in-the-middle adversaries in the setting where the two parties only agree upon a low-entropy "password" in advance. The first and arguably most well-studied PAKE protocol is Encrypted Key Exchange (EKE) (Bellovin and Merritt, 1992), and the standard security notion for PAKE is in the Universal Composability (UC) framework (Canetti et al., 2005). While the UC-security of EKE has been "folklore" knowledge for many years, a satisfactory formal proof has long been elusive.
In this work, we present a UC-security proof for the most common instantiation of EKE, which is based on hashed Diffie–Hellman. Our proof is in the random oracle + ideal cipher models, and under the computational Diffie–Hellman assumption. We thoroughly discuss the UC-security definition for PAKE, subtleties and pitfalls in the security proof, how to write a UC proof, and flaws in existing works; along the way we also present some philosophical discussions on security definitions and security proofs in general. In this way, we hope to draw attention to several understudied, underexplained or underappreciated aspects of the UC-security of EKE.
This tutorial can be viewed as a simplified version of the recent work by Januzelli, Roy and Xu (2025); however, we completely rewrite most of the materials there to make them much more approachable to beginners who have just learned the UC framework.
Diamond iO: A Straightforward Construction of Indistinguishability Obfuscation from Lattices
Indistinguishability obfuscation (iO) has seen remarkable theoretical progress, yet it remains impractical due to its high complexity and inefficiency. A common bottleneck in recent iO schemes is the reliance on bootstrapping techniques from functional encryption (FE) into iO, which requires recursively invoking the FE encryption algorithm for each input bit—creating a significant barrier to practical iO schemes.
In this work, we propose diamond iO, a new lattice-based iO construction that replaces the costly recursive encryption process with lightweight matrix operations. Our construction is proven secure under the learning with errors (LWE) and evasive LWE assumptions, as well as our new assumption—all-product LWE—in the pseudorandom oracle model. By leveraging the FE scheme for pseudorandom functionalities introduced by Agrawal et al. (ePrint’24) in a non-black-box manner, we remove the reliance on prior FE-to-iO bootstrapping techniques and thereby significantly reduce complexity. A remaining challenge is to reduce our new assumption to standard assumptions such as LWE, further advancing the goal of a practical and sound iO construction.
Doubly Efficient Cryptography: Commitments, Arguments and RAM MPC
Can a sender commit to a long input without even reading all of it? Can a prover convince a verifier that an NP statement holds without even reading the entire witness? Can a set of parties run a multiparty computation (MPC) protocol in the RAM model, without necessarily even reading their entire inputs? We show how to construct such "doubly efficient" schemes in a setting where parties can preprocess their input offline, but subsequently they can engage in many different protocol executions over this input in sublinear online time. We do so in the plain model, without any common setup. Our constructions rely on doubly efficient private information retrieval (DEPIR) as a building block and can be instantiated based on Ring LWE.
In more detail, we begin by constructing doubly efficient (interactive) commitments, where the sender preprocesses the input offline, and can later commit to this input to arbitrary receivers in sublinear online time. Moreover, the sender can open individual bits of the committed input in sublinear time. We then use these commitments to implement doubly succinct (interactive) arguments, where the prover preprocesses the statement/witness offline, and can subsequently run many proof protocols to convince arbitrary verifiers of the statement's validity in sublinear online time. Furthermore, we augment these to get a doubly efficient "commit, prove and locally open" protocol, where the prover can commit to a long preprocessed input, prove that it satisfies some global property, and locally open individual bits, all in sublinear time. Finally, we leverage these tools to construct a RAM-MPC with malicious security in the plain model. Each party individually preprocesses its input offline, and can then run arbitrary MPC executions over this input with arbitrary other parties. The online run-time of each MPC execution is only proportional to the RAM run-time of the underlying program, that can be sublinear in the input size.
Merkle Mountain Ranges are Optimal: On witness update frequency for cryptographic accumulators
We study append-only set commitments with efficient updates and inclusion proofs, i.e., cryptographic accumulators. In particular, we examine how often the inclusion proofs (or witnesses) for individual items must change as new items are added to the accumulated set. Using a compression argument, we show unconditionally that any construction with a succinct commitment must induce a super-linear total number of witness updates as items are sequentially added, and in a certain regime we strengthen this bound further. These lower bounds hold not just in the worst case, but with overwhelming probability over a random choice of the accumulated set. Our results show that a close variant of the Merkle Mountain Range, an elegant construction that has become popular in practice, is essentially optimal.
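The upper-bound side can be sanity-checked with a small simulation (a sketch under the simplifying assumption that each merge of two equal-size peaks lengthens the proof of every leaf below it by one hash):

    import math

    def total_witness_updates(n):
        peaks, updates = [], 0                 # peak subtree sizes, update counter
        for _ in range(n):                     # append items one at a time
            peaks.append(1)
            while len(peaks) >= 2 and peaks[-1] == peaks[-2]:
                merged = peaks.pop() + peaks.pop()
                updates += merged              # every leaf below the new node updates
                peaks.append(merged)
        return updates

    for n in (2**10, 2**16):
        print(n, total_witness_updates(n), round(n * math.log2(n)))

The two printed columns track each other, i.e., the total number of witness updates for this construction grows like n·log n.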
Anamorphic Resistant Encryption: the Good, the Bad and the Ugly
Anamorphic encryption (AE), introduced by Persiano, Phan and Yung at Eurocrypt '22, allows users to establish secure communication in scenarios where they might be forced to hand over their decryption keys to some hostile authority. Over the last few years, several works have improved our understanding of the primitive by proposing novel realizations, new security notions, and studies of its inherent limitations.
This work makes progress, mainly, on this last line of research.
We show concrete realizations of public key encryption schemes that, provably, cannot be turned anamorphic. These were called Anamorphic Resistant Encryption (ARE, for short) in a recent work of Dodis and Goldin.
We also show that, under certain conditions, anamorphic encryption is equivalent to algorithm substitution attacks. This allows us to positively reinterpret our AREs as PKE schemes provably resistant to subversion attacks. To the best of our knowledge, these seem to be the first IND-CPA secure schemes achieving subversion resistance without trust assumptions or non-black-box decomposition techniques.
Our two AREs heavily rely, among other things, on a direct usage of extremely lossy functions: here the lossiness property is used in the constructions themselves, rather than just in the proofs. The first construction is in the public parameters model and also requires iO. The second construction eliminates the need for both public parameters and iO, but is in the random oracle model and relies on the novel concept of robust extremely lossy functions with group structure, a primitive that we define and (show how to) realize in this paper.
Authenticated BitGC for Actively Secure Rate-One 2PC
In this paper, we present a constant-round actively secure two-party computation protocol with small communication, based on the ring learning with errors (RLWE) assumption with key-dependent message security. Our result builds on the recent BitGC protocol by Liu, Wang, Yang, and Yu (Eurocrypt 2025), which communicates one bit per gate for semi-honest security. First, we achieve a different manner of distributed garbling, where the global correlation is secret-shared between the two parties. The garbler always and only holds the garbled labels corresponding to the wire values when all inputs are zero, while the evaluator holds the labels corresponding to the real evaluation. In the second phase, we run an authentication protocol that requires some extra communication, allowing the two parties to check the correct computation of each gate by treating the ciphertexts as commitments, now that the global key is distributed. For layered circuits, the extra communication for authentication is a small additive term per gate. For generic circuits, the extra communication cost can be one bit per gate in the worst case, and thus the total communication cost would be 2 bits per gate.
NoIC: PAKE from KEM without Ideal Ciphers
We show a generic compiler from KEM to (Universally Composable) PAKE in the Random Oracle Model (ROM) and without requiring an Ideal Cipher. The compiler is akin to Encrypted Key Exchange (EKE) by Bellovin-Merritt, but following the work of McQuoid et al. it uses only a 2-round Feistel to password-encrypt a KEM public key. The resulting PAKE incurs only insignificant cost overhead over the underlying KEM, and it is a secure UC PAKE if KEM is secure and key-anonymous under the Plaintext-Checking Attack (PCA).
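Concretely, a 2-round Feistel over the byte encoding of the public key looks as follows (a sketch in the random-oracle idiom with illustrative hashing and lengths; the actual protocol fixes domains and encodings carefully):

    import hashlib

    def H(tag, pw, data, outlen):
        return hashlib.shake_256(tag + pw + data).digest(outlen)

    def feistel_enc(pw, pk):                       # "password-encrypt" pk = L || R
        L, R = pk[:len(pk) // 2], pk[len(pk) // 2:]
        R2 = bytes(a ^ b for a, b in zip(R, H(b"1", pw, L, len(R))))
        L2 = bytes(a ^ b for a, b in zip(L, H(b"2", pw, R2, len(L))))
        return L2 + R2

    def feistel_dec(pw, ct):                       # invert the two rounds
        L2, R2 = ct[:len(ct) // 2], ct[len(ct) // 2:]
        L = bytes(a ^ b for a, b in zip(L2, H(b"2", pw, R2, len(L2))))
        R = bytes(a ^ b for a, b in zip(R2, H(b"1", pw, L, len(R2))))
        return L + R

    pk = bytes(range(32))                          # stand-in for a KEM public key
    assert feistel_dec(b"hunter2", feistel_enc(b"hunter2", pk)) == pk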
Several KEM-to-PAKE compilers were shown recently, secure under the OW-PCA and ANO-PCA assumptions on the KEM, but all used an Ideal Cipher in addition to the ROM. While there are techniques for emulating the ROM against quantum attackers, it is currently unknown how to extend many such techniques to the Ideal Cipher Model. Consequently, doing without the Ideal Cipher in protocol design makes the resulting construction a more plausible candidate for post-quantum secure PAKE, if instantiated with a post-quantum PCA-secure and anonymous KEM, such as the ML-KEM standard itself.
Our construction and proofs build on many of the ideas underlying the KEM-to-PAKE compiler using a 2-round Feistel given by McQuoid et al., but our protocol is more efficient and our proofs address limitations in the analysis therein.
Privately Constrained PRFs from DCR: Puncturing and Bounded Waring Rank
A privately constrained pseudorandom function (pCPRF) is a PRF with the additional property that one can derive a constrained key that allows evaluating the PRF only on inputs satisfying a constraint predicate, without revealing the predicate itself or leaking information about the PRF's output on inputs that do not satisfy the constraint.
Existing privately constrained PRFs face significant limitations: either (1) they rely on assumptions known to imply fully-homomorphic encryption or indistinguishability obfuscation, (2) they support only highly restricted classes of constraints—for instance, no known group-based pCPRF even supports the simple class of puncturing constraints (where the constrained key permits evaluation on all but one point while hiding the punctured point), or (3) they are limited to polynomial-size input domains. A long-standing open question has been whether one can construct a privately constrained PRF from group-based assumptions for more expressive classes of constraints. In this work, we present a pCPRF based on the decisional composite residuosity (DCR) assumption that supports a highly expressive class of predicates, namely constraints with polynomially bounded Waring rank, which notably includes puncturing.
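For intuition, the Waring rank of a polynomial is the least number of powers of linear forms needed to express it as a sum; for instance, multiplication itself has Waring rank 2 over the rationals, via the standard identity xy = ((x+y)^2 - (x-y)^2)/4. Predicates expressible with few such powered linear forms therefore fall within the supported class, and puncturing is one of them.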
From a technical perspective, our work follows the general template of Couteau, Meyer, Passelègue, and Riahinia (Eurocrypt'23), who constructed a pCPRF from group-based homomorphic secret-sharing but were limited to inner-product constraints in the constraint-hiding setting. Leveraging novel techniques for computing with distributed discrete logarithms (DDLog), we enable the non-interactive authentication of powers of linear combinations of shares of some value. This, in turn, allows us to express constraints with polynomially bounded Waring rank.
Our construction is single-key, selectively secure, and supports an exponential-size domain.
ETK: External-Operations TreeKEM and the Security of MLS in RFC 9420
The Messaging Layer Security protocol MLS is standardized in IETF's RFC 9420 and allows a group of parties to securely establish and evolve group keys even if the servers are malicious. Its core mechanism is based on the TreeKEM protocol, but it has gained many additional features and modifications during the development of the MLS standard. Over the last few years, several partial security analyses of incomplete drafts of the protocol have appeared. One of the major additions to the TreeKEM design in MLS RFC 9420 (the final version of the standard) are the external operations, i.e., external commits and proposals, which interact deeply with the core TreeKEM protocol. These operations have not been considered in any previous security analysis, leaving their impact on the protocol's overall security unclear.
In this work, we formalize ETK: External-Operations TreeKEM, which includes external commits and proposals. We develop a corresponding ideal functionality and prove that ETK indeed realizes it.
Our work is the first cryptographic analysis that considers both the final changes to the standard's version of TreeKEM as well as external proposals and external commits. Compared to previous works that considered MLS draft versions, our ETK protocol is by far the closest to the final MLS RFC 9420 standard. Our analysis implies that the core of MLS's TreeKEM variant as defined in RFC 9420 is an ETK protocol that realizes this ideal functionality when used with an SUF-CMA secure signature scheme, such as the IETF variant of Ed25519. We show that, contrary to previous claims, MLS does not realize the ideal functionality of [Crypto2022] when used with signature schemes that only guarantee EUF-CMA, such as ECDSA.
Moreover, we show that the security of the protocol can be further strengthened by adding a functionality to insert PSKs, allowing another form of healing, and we give a corresponding construction, ETK-PSK, together with its ideal functionality.
Network agnostic consensus in constant time
Network agnostic protocols (Blum, Katz, Loss TCC '19) are consensus or MPC protocols that strike a balance between purely synchronous and asynchronous protocols. Given suitable corruption thresholds for the two network types, they have the unique property of remaining secure against an adversary that either (1) corrupts up to the larger threshold of parties in a synchronous execution, where all messages are delivered within a known bound, or (2) corrupts up to the smaller threshold in an asynchronous execution, where messages can be delayed arbitrarily. All existing network agnostic protocols follow a design pattern which first attempts to run a synchronous path, and then switches to an asynchronous path as a fallback option if the synchronous path times out due to the network being asynchronous. Unfortunately, the timeout has to be set conservatively to account for the possibility that the synchronous path might take an unusually long time even when the network is synchronous. As a result, for various basic tasks including Byzantine Agreement and MPC, no existing network agnostic protocol is able to terminate for all honest parties within constant expected time in \emph{all possible executions}.
In this work, we introduce a new paradigm to construct network agnostic consensus protocols that, for the first time, overcome this barrier. Using this new design pattern, we first present simple protocols for reliable broadcast (RB) and binary agreement (BA) that are \emph{responsive} when no more than the asynchronous threshold of parties are corrupted and run in expected constant time regardless of the network conditions.\footnote{The latter was already possible for RB, which by design requires only constantly many rounds, but is not guaranteed to terminate for all parties.} We then extend our results to asynchronous common subset (ACS) and MPC. Notably, our approach \emph{reverses the order} of the synchronous and asynchronous paths by designing protocols that are first and foremost asynchronous and only fall back to the synchronous execution path when more than the asynchronous threshold of parties are corrupted.
Two Is All It Takes: Asymptotic and Concrete Improvements for Solving Code Equivalence
The Linear Code Equivalence (LCE) problem asks, for two given linear codes, to find a monomial map sending one into the other. Algorithms for solving LCE crucially rely on a (heuristic) subroutine which recovers the secret monomial from a number of pairs of suitably related codewords. We greatly improve on this known bound by giving a constructive (heuristic) algorithm that recovers the secret monomial from any \emph{two} pairs of such codewords. We then show that this reduction in the number of required pairs enables the design of a more efficient algorithm for solving the LCE problem. Our asymptotic analysis shows that this algorithm outperforms previous approaches for a wide range of parameters, including all parameters proposed across the literature. Furthermore, our concrete analysis reveals significant bit security reductions for suggested parameters. Most notably, in the context of the LESS signature scheme, a second-round contender in the ongoing NIST standardization effort for post-quantum secure digital signatures, we obtain bit security reductions of up to 24 bits.
Improved Subfield Curve Search For Specific Field Characteristics
Isogeny-based cryptography bases its security on the hardness of the supersingular isogeny problem: finding an isogeny between two supersingular curves defined over a quadratic field.
The Delfs-Galbraith algorithm is the most efficient procedure for solving the supersingular isogeny problem. Its bottleneck is the so-called subfield curve search (i.e., finding an isogenous supersingular elliptic curve defined over the base field), which determines the overall time complexity.
For efficiency, most recent isogeny-based constructions propose using finite fields whose characteristic has a special form, parameterized by a few positive integers.
This work focuses on primes of that particular form, and presents two new algorithms for finding subfield curves with improved time complexity and memory complexity polynomial in $\log p$. These algorithms exploit the existence of large smooth-order torsion points and extend the subfield root detection algorithm of Santos, Costello, and Shi (Crypto 2022) to our case study. In addition, it is worth highlighting that these algorithms easily extend to primes of closely related forms that include an additional small integer factor.
This study also examines the usage of radical isogenies in combination with the proposed extended subfield root detection algorithm. In this context, the results indicate that the radical isogeny approach is competitive with state-of-the-art algorithms.
“Check-Before-you-Solve”: Verifiable Time-lock Puzzles
Time-lock puzzles are cryptographic primitives that guarantee to the generator that the puzzle cannot be solved in fewer than $T$ sequential computation steps. They have recently found numerous applications, e.g., in fair contract signing and sealed-bid auctions. However, solvers have no a priori guarantee about the solution they will reveal, e.g., about its ``usefulness'' within a certain application scenario. In this work, we propose verifiable time-lock puzzles (VTLPs) that address this by having the generator publish a succinct proof that the solution satisfies certain properties (without revealing anything else about it). Hence, solvers are now motivated to ``commit'' resources into solving the puzzle. We propose VTLPs that support proving arbitrary NP relations about the puzzle solution.
At a technical level, to overcome the performance hurdles of the ``naive'' approach of simply solving the puzzle within a SNARK that also checks the relation, our scheme combines the ``classic'' RSA time-lock puzzle of Rivest, Shamir, and Wagner with novel building blocks for ``offloading'' expensive modular group exponentiations and multiplications from the SNARK circuit. We then propose a second VTLP specifically for checking RSA-based signatures and verifiable random functions (VRFs). Our second scheme does not rely on a SNARK and can have several applications, e.g., in the context of distributed randomness generation. Along the way, we propose new constant-size proofs for modular exponent relations over hidden-order groups that may be of independent interest. Finally, we experimentally evaluate the performance of our schemes and report the findings and comparisons with prior approaches.
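For reference, the RSW time-lock puzzle the first scheme builds on works as follows: the generator, knowing phi(n), computes the T-fold squaring cheaply, while a solver must perform T sequential squarings. A toy Python sketch (tiny parameters, nowhere near secure):

import random

p, q = 1000003, 1000033        # toy primes; real puzzles use large RSA primes
n, phi = p * q, (p - 1) * (q - 1)
T = 10_000                      # number of sequential squarings
a = random.randrange(2, n)      # assume gcd(a, n) == 1, true w.h.p. here

# Generator shortcut: reduce the exponent 2^T modulo phi(n).
b = pow(a, pow(2, T, phi), n)

# Solver: T sequential modular squarings; no known shortcut without phi(n).
x = a % n
for _ in range(T):
    x = x * x % n
assert x == b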
Lightweight Single-Server PIR with Communication
It is well-known that any single-server PIR scheme with sublinear communication necessitates public-key cryptography. Several recent studies, which we collectively refer to as lightweight PIR, demonstrate that this limitation can be circumvented to some extent. However, all such schemes require at least $O(\sqrt{n})$ communication per query, where $n$ is the size of the database. Indeed, the celebrated result of Ishai et al. (Crypto '24) implies that, with solely symmetric-key cryptography, achieving per-query communication below $\sqrt{n}$ necessitates more than $\sqrt{n}$ client storage. Whether this barrier can be overcome with limited use of public-key cryptography remains an open question. In this paper, we tackle this question by presenting the first lightweight single-server PIR scheme with per-query communication below the $\sqrt{n}$ barrier while allowing arbitrary (non-zero) client storage.
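To convey the flavor of hint-based lightweight PIR (this is the generic offline/online template the area builds on, not this paper's construction): the client streams the database once and stores subset parities, after which a query costs roughly the subset size. A deliberately leaky Python toy:

import random

n = 16
db = [random.randrange(2) for _ in range(n)]

# Offline phase: one hint = a random subset together with its XOR parity.
subset = set(random.sample(range(n), n // 2))
hint = 0
for j in subset:
    hint ^= db[j]

# Online phase: to read index i in the subset, send only subset - {i};
# the server returns the parity of the punctured set. (A real scheme
# hides i's membership; this sketch omits that.)
i = next(iter(subset))
answer = 0
for j in subset - {i}:
    answer ^= db[j]

assert db[i] == hint ^ answer   # recover db[i] from the hint and answer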
Building Hard Problems by Combining Easy Ones: Revisited
We establish the following theorem: let the oracles be independent random functions between fixed domains. For every polynomial-query-bounded distinguisher making a bounded number of queries to each oracle, there exists a poly-time oracle simulator and a constant for which the corresponding distinguishing probability is negligible.
A Robust Variant of ChaCha20-Poly1305
The ChaCha20-Poly1305 AEAD scheme is widely used as an alternative to AES-GCM on platforms without AES hardware instructions. Although recent analysis by Degabriele et al. shows that ChaCha20-Poly1305 provides adequate security in the conventional multi-user model, the construction is totally broken when a single nonce is repeated – a real-world scenario that can occur due to faulty implementations or the desire to use random nonces.
We present a new nonce-misuse resistant and key-committing authenticated encryption scheme, called ChaCha20-Poly1305-PSIV, that is based on carefully combining the ChaCha20-Poly1305 building blocks into the NSIV paradigm proposed by Peyrin and Seurin (CRYPTO 2016) without performance loss. We analyze the security of the underlying mode PSIV in the multi-user faulty-nonce model assuming that the underlying permutation is ideal, and prove its key-committing security in the cmt-1 model. Rust and C implementations are provided, and benchmarks confirm that performance is comparable to the ChaCha20-Poly1305 implementation in libsodium.
In terms of security and efficiency (without hardware support), our proposal compares favorably to AES-GCM-SIV. Since we reuse the ChaCha20-Poly1305 building blocks, we expect ChaCha20-Poly1305-PSIV to benefit from existing analysis and to be easy to deploy in practice.
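The SIV idea underlying such nonce-misuse-resistant designs can be sketched generically: the authentication tag is a PRF over (nonce, AD, message) and doubles as the IV, so a repeated nonce leaks only equality of identical inputs. A Python sketch with hash-based stand-ins (not PSIV itself, and not ChaCha20/Poly1305):

import hmac, hashlib

def _keystream(key, iv, length):
    # Toy keystream from a hash in counter mode (stand-in for ChaCha20).
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def siv_encrypt(mac_key, enc_key, nonce, ad, msg):
    tag = hmac.new(mac_key, nonce + ad + msg, hashlib.sha256).digest()[:16]
    ks = _keystream(enc_key, tag, len(msg))
    return tag, bytes(m ^ k for m, k in zip(msg, ks))

def siv_decrypt(mac_key, enc_key, nonce, ad, tag, ct):
    msg = bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, tag, len(ct))))
    expect = hmac.new(mac_key, nonce + ad + msg, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(tag, expect):
        raise ValueError("authentication failed")
    return msg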
Uniformly Most Powerful Tests for Ad Hoc Transactions in Monero
We introduce a general, low-cost, low-power statistical test for transactions in transaction protocols with small anonymity set authentication (TPSASAs), such as Monero. The test classifies transactions as ad hoc (spontaneously constructed to spend a deterministically selected key) or self-churned (constructed from a probability distribution very close to that of the default wallet software, and with the same sender and receiver). The test is a uniformly most powerful (UMP) likelihood ratio test (LRT) derived from the Neyman-Pearson Lemma, and makes no assumptions about user behavior. We extend these tests to exploit prior information about user behavior. We discuss test parameterization, as well as how anonymity set cardinality and user behavior impact test performance. We also describe a maximum-likelihood de-anonymization attack on Monero based on our test.
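A Neyman-Pearson test of this kind reduces to comparing a likelihood ratio against a threshold calibrated to a target false-positive rate. A minimal Python sketch (the two distributions below are illustrative placeholders, not Monero's actual decoy-selection distributions):

def lrt(observation, p_adhoc, p_selfchurn, threshold):
    # Classify as "ad hoc" when the likelihood ratio exceeds the threshold.
    return p_adhoc(observation) / p_selfchurn(observation) > threshold

ring_size = 16
uniform = lambda x: 1.0 / ring_size                    # toy ad hoc model
skewed = lambda x: 2.0 * (x + 1) / (ring_size * (ring_size + 1))  # toy wallet
print(lrt(0, uniform, skewed, threshold=1.0))          # -> True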
The Quantum Decoherence Model: Everlasting Composable Secure Computation and More
Quantum cryptography makes it possible to achieve security goals that are unobtainable using classical cryptography alone: it offers the promise of everlasting privacy. That is, an adversary trying to attack a protocol must succeed during the run of the protocol; after the protocol has terminated, security holds unconditionally.
In this work, we initiate the study of a new model which we call the quantum decoherence model (QDM). In a nutshell, this model captures adversaries that are computationally bounded during the run of a protocol (and some time after), but become computationally unbounded long after the protocol terminates. Importantly, once the adversary becomes computationally unbounded, he can only remember a bounded number of qubits from before the computational bound was lifted.
We provide a variant of the Universal Composability framework which captures the new notion of quantum decoherence and augment it with quantum random oracles. As our main contribution, we construct a non-interactive commitment scheme achieving unconditional and statistical security against malicious senders and everlasting security against malicious receivers under our new security notion. Such commitments imply general secure multiparty computation with everlasting security.
Finally, we show that our core technique can be applied to a broader spectrum of problems: it gives rise to everlasting public-key encryption and OT in the QDM. We also consider the weaker notion of incompressible encryption in the setting of quantum decoherence, and show that post-quantum IND-CPA secure public-key encryption is sufficient to realize this notion without resorting to random oracles.
At the technical core of our constructions is a new, conceptually simple yet powerful reverse entropic uncertainty relation.
Slot a la carte: Centralization Issues in Ethereum's Proof-of-Stake Protocol
In this paper, we demonstrate that Ethereum's current proof-of-stake (PoS) consensus mechanism poses a significant threat to decentralisation. Our research focuses on the manipulability of distributed randomness beacons (DRBs) in leader selection. Specifically, we show that RANDAO (Ethereum's DRB) is seriously vulnerable to manipulation in its current form. For example, if a lucrative slot is foreseen, there is a risk that staking entities may temporarily collude to control a sufficient share of the validators, enabling them to execute a series of RANDAO manipulation attacks that secure the target slot with high probability. The effectiveness of our method stems from the fact that we work with a significantly richer model of the possible attacks compared to previous works. Our manipulative strategies work by removing blocks from the canonical chain, either by withholding blocks in the adversary's own slots or by forking out blocks proposed by others. We argue that while PoS can pave the way forward for blockchains, Ethereum's current DRB implementation has to be replaced with a more secure mechanism.
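The underlying manipulation mechanic can be sketched as follows: RANDAO folds each proposer's revealed value into an accumulator, so an adversary controlling the last k proposers of an epoch can enumerate all 2^k publish/withhold patterns and keep the most favorable mix. A toy Python sketch (illustrative scoring function, simplified mixing, not Ethereum's exact construction):

import hashlib

def mix(seed, reveals):
    # Fold each revealed value into the accumulator, RANDAO-style.
    for r in reveals:
        seed = bytes(a ^ b for a, b in zip(seed, hashlib.sha256(r).digest()))
    return seed

def best_mix(seed, honest, adversarial, score):
    # Try every withhold/publish pattern over the adversary's tail slots.
    best = None
    for pattern in range(2 ** len(adversarial)):
        chosen = [r for bit, r in enumerate(adversarial)
                  if (pattern >> bit) & 1]
        candidate = mix(seed, honest + chosen)
        if best is None or score(candidate) > score(best):
            best = candidate
    return best

seed = b"\x00" * 32
print(best_mix(seed, [b"h1", b"h2"], [b"a1", b"a2"],
               score=lambda m: m[0]).hex()[:16])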
LSM Trees in Adversarial Environments
The Log Structured Merge (LSM) Tree is a popular choice for key-value stores that focus on optimized write throughput while maintaining performant, production-ready read latencies. To optimize read performance, LSM stores rely on a probabilistic data structure called the Bloom Filter (BF). In this paper, we focus on adversarial workloads that lead to a sharp degradation in read performance by impacting the accuracy of the BFs used within the LSM store. Our evaluation shows a substantial increase in the read latency of lookups for popular LSM stores. We define adversarial models and security definitions for LSM stores. We implement adversary resilience into two popular LSM stores, LevelDB and RocksDB, and use our implementations to demonstrate how performance degradation under adversarial workloads can be mitigated.
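The attack surface is easy to demonstrate: absent keys that happen to be Bloom-filter false positives force the store into futile disk lookups, and an adversary can simply search for such keys offline. A self-contained Python toy (parameters chosen to make false positives plentiful):

import hashlib

class Bloom:
    def __init__(self, m=256, k=3):
        self.m, self.k, self.bits = m, k, 0
    def _idx(self, key):
        h = hashlib.sha256(key.encode()).digest()
        return [int.from_bytes(h[4 * i:4 * i + 4], "big") % self.m
                for i in range(self.k)]
    def add(self, key):
        for i in self._idx(key):
            self.bits |= 1 << i
    def maybe_contains(self, key):
        return all(self.bits >> i & 1 for i in self._idx(key))

bf = Bloom()
for i in range(60):
    bf.add(f"key{i}")

# Adversarial workload: harvest absent keys the filter wrongly accepts.
fp = [f"probe{j}" for j in range(100_000)
      if bf.maybe_contains(f"probe{j}")][:10]
print(fp)   # each of these lookups would hit disk for nothing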
Assumption-Free Fuzzy PSI via Predicate Encryption
We present the first protocol for efficient Fuzzy Private Set Intersection (PSI) that achieves linear communication complexity, does not depend on restrictive assumptions on the distribution of party inputs, and abstains from inefficient fully homomorphic encryption. Specifically, our protocol enables two parties to compute all pairs of elements from their respective sets that are within a given Hamming distance, without constraints on how these sets are structured.
Our key insight is that securely computing the (threshold) Hamming distance between two inputs can be reduced to securely computing their inner product. Leveraging this reduction, we construct a Fuzzy PSI protocol using recent techniques for inner-product predicate encryption. To enable the use of predicate encryption in our setting, we establish that these predicate encryption schemes satisfy a weak notion of simulation security and demonstrate how their internal key derivation can be efficiently distributed without a trusted third party.
As a result, our Fuzzy PSI on top of predicate encryption features not only asymptotically optimal linear communication complexity but is also concretely practical.
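The reduction at the heart of the protocol is the standard identity HD(x, y) = wt(x) + wt(y) - 2<x, y> for binary vectors, so a (threshold) Hamming-distance comparison reduces to one inner product; a two-line Python check:

def hamming_via_inner_product(x, y):
    ip = sum(a * b for a, b in zip(x, y))          # <x, y>
    return sum(x) + sum(y) - 2 * ip                # wt(x) + wt(y) - 2<x, y>

x, y = [1, 0, 1, 1, 0, 1], [1, 1, 0, 1, 0, 1]
assert hamming_via_inner_product(x, y) == sum(a != b for a, b in zip(x, y))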
Practical Circuit Privacy/Sanitization for TFHE
Fully homomorphic encryption (FHE) enables the computation of arbitrary circuits over encrypted data. A widespread application of FHE is a simple two-party computation (2PC) protocol, where the server evaluates a circuit over the client's encrypted data and its private inputs. However, while the security of FHE guarantees that the client's data is protected from the server, there is no inherent support for the privacy of the server's input and the circuit.
One effective solution to this problem is an additional algorithm for FHE called sanitization, introduced by Ducas and Stehlé (Eurocrypt 2016). Roughly speaking, a sanitization algorithm removes any meaningful information contained in the ciphertext, including previous evaluations of circuits. Following their definition, several constructions for sanitization have been proposed, particularly for TFHE. However, all of these methods were impractical, requiring several bootstrappings or an excessive amount of randomized evaluations.
In this work, we construct a novel sanitization algorithm for TFHE that overcomes these issues. Our method only adds two lightweight randomization steps to the original TFHE bootstrapping, without any modifications to the core algorithms. As a result, our algorithm achieves sanitization with a single bootstrapping and minimal randomization, bringing sanitization closer to practicality.
To empirically evaluate the efficiency of our method, we provide concrete benchmark results based on our proof-of-concept implementation. Our algorithm sanitizes a single TFHE ciphertext in 35.88 ms, which is only 3.4% (1.18 ms) slower than the original TFHE bootstrapping with the same parameters. When directly compared to previous works, our method achieves a speedup by a factor of 4.82 to 209.03.
A note on the genus of the HAWK lattice
The cryptographic scheme and NIST candidate HAWK makes use of a particular module lattice and relies for its security on the assumption that finding module lattice isomorphisms (module LIP) is hard. To support this assumption, we compute the mass of the HAWK lattice, which gives a lower bound on the number of isometry classes of module lattices that cannot be distinguished from the HAWK lattice by an easily computed invariant called the genus. This number turns out to be so large that an attack based on the genus alone seems infeasible.
Rejected Challenges Pose New Challenges: Key Recovery of CRYSTALS-Dilithium via Side-Channel Attacks
Rejection sampling is a crucial security mechanism in lattice-based signature schemes that follow the Fiat-Shamir with aborts paradigm, such as ML-DSA/CRYSTALS-Dilithium. This technique transforms secret-dependent signature samples into ones that are statistically close to a secret-independent distribution (in the random oracle model). While many side-channel attacks have directly targeted sensitive data such as nonces, secret keys, and decomposed commitments, fewer studies have explored the potential leakage associated with rejection sampling. Notably, Karabulut~et~al. showed that leakage from rejected challenges can undermine, but not entirely break, the security of the Dilithium scheme.
Motivated by the above, we convert the problem of key recovery (from the leakage of rejection sampling) into an integer linear programming (ILP) problem, where rejected responses of unique Hamming weights set upper/lower constraints on the product between the challenge and the private key. We formally study the worst-case complexity of the problem and empirically confirm the practicality of the rejected-challenge attack. For all three security levels of Dilithium-2/3/5, our attack recovers the private key in seconds or minutes with a 100% success rate (SR).
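The shape of the resulting constraint system can be illustrated with a toy: assume each leaked rejected pair pins the challenge-secret product into a known interval, yielding linear inequalities on the key coefficients, which an ILP solver then searches (exhaustive search below; dimensions and bounds are illustrative, not Dilithium's):

from itertools import product

ETA = 2                       # secret coefficients lie in [-ETA, ETA]
secret = (1, -2, 0, 2)        # unknown to the attacker
constraints = []              # (challenge vector, lower, upper)

for c in [(1, 0, -1, 1), (0, 1, 1, -1), (1, 1, 0, 0), (-1, 0, 1, 1)]:
    v = sum(ci * si for ci, si in zip(c, secret))
    constraints.append((c, v, v))    # exact here; intervals in the attack

# Stand-in for the ILP solver: brute-force the small feasible region.
candidates = [s for s in product(range(-ETA, ETA + 1), repeat=4)
              if all(lo <= sum(ci * si for ci, si in zip(c, s)) <= hi
                     for c, lo, hi in constraints)]
print(candidates)             # shrinks to the true key as constraints accrue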
Our attack leverages knowledge of the rejected challenge and response, and thus we propose methods to extract this information by exploiting side-channel leakage from Number Theoretic Transform (NTT) operations. We demonstrate the practicality of this rejected challenge attack by using real side-channel leakage on a Dilithium-2 implementation running on an ARM Cortex-M4 microcontroller. To the best of our knowledge, it is the first efficient side-channel key recovery attack on ML-DSA/Dilithium that targets the rejection sampling procedure. Furthermore, we discuss some countermeasures to mitigate this security issue.
An Innovative Lightweight Symmetric Encryption Algorithm Integrating NeoAlzette ARX S-box and XCR CSPRNG
This paper introduces "Little OaldresPuzzle_Cryptic," a novel lightweight symmetric encryption algorithm.
At the core of this algorithm are two main cryptographic components: the NeoAlzette permutation S-box based on ARX (Addition-Rotation-XOR) primitives and the innovative pseudo-random number generator XorConstantRotation (XCR), used exclusively in the key expansion process. The NeoAlzette S-box, a non-linear function for 32-bit pairs, is meticulously designed for both encryption strength and operational efficiency, ensuring robust security in resource-constrained environments. During the encryption and decryption processes, a pseudo-randomly selected mixed linear diffusion function, distinct from XCR, is applied, enhancing the complexity and unpredictability of the encryption.
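Since the NeoAlzette specification is not reproduced here, the ARX principle can only be illustrated generically: modular addition supplies nonlinearity, while rotations and XORs diffuse it across a 32-bit pair. A Python sketch with hypothetical rotation amounts and constant (not NeoAlzette's actual parameters):

MASK = 0xFFFFFFFF

def rotl(x, r):
    return ((x << r) | (x >> (32 - r))) & MASK

def arx_box(x, y, c):
    # Addition, Rotation, XOR, plus round-constant injection.
    for r in (31, 17, 24, 16):            # hypothetical rotation schedule
        x = (x + rotl(y, r)) & MASK       # Addition (nonlinear mod 2^32)
        y ^= rotl(x, (r * 5 + 3) % 32)    # Rotation + XOR (diffusion)
        x ^= c                            # constant injection
    return x, y

print([hex(v) for v in arx_box(0x12345678, 0x9ABCDEF0, 0xB7E15162)])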
We comprehensively explore the various technical aspects of the Little OaldresPuzzle_Cryptic algorithm.
Its design aims to balance speed and security in the encryption process, particularly for high-speed data transmission scenarios. Recognizing that resource efficiency and execution speed are crucial for lightweight encryption algorithms, and that neither may come at the expense of security, we conducted a series of statistical tests to validate the cryptographic security of our algorithm. These tests included assessments of resistance to linear and differential cryptanalysis, among other measures.
By combining the NeoAlzette S-box with sophisticated key expansion using XCR, and integrating the pseudo-randomly selected mixed linear diffusion function in its encryption and decryption processes, our algorithm significantly enhances its capability to withstand advanced cryptographic analysis techniques while maintaining lightweight and efficient operation. Our test results demonstrate that the Little OaldresPuzzle_Cryptic algorithm effectively supports the encryption and decryption needs of high-speed data, ensuring robust security and making it an ideal choice for various modern cryptographic application scenarios.
Keywords: Symmetric Encryption Algorithm, Lightweight Cryptography, ARX Primitives, PRNG, NeoAlzette S-boxes, XorConstantRotation, Diffusion and Confusion Layers, Cryptographic Security, Statistical Tests, Resource-Constrained Environments.
Constructing Quantum Implementations with the Minimal T-depth or Minimal Width and Their Applications
With the rapid development of quantum computers, optimizing the quantum implementations of symmetric-key ciphers, which constitute the primary components of the quantum oracles used in quantum attacks based on Grover's and Simon's algorithms, has become an active topic in the cryptography community. In this field, a central challenge is to construct quantum circuits that require the least amount of quantum resources. In this work, we aim to address the problem of constructing quantum circuits with the minimal T-depth or width (number of qubits) for nonlinear components, thereby enabling implementations of symmetric-key ciphers with the minimal T-depth or width. Specifically, we propose several general methods for obtaining quantum implementations of generic vectorial Boolean functions and multiplicative inversions in GF(2^n), achieving the minimal T-depth and low costs across other metrics. As an application, we present a highly compact T-depth-3 Clifford+T circuit for the AES S-box. Compared to the T-depth-3 circuits presented in previous works (ASIACRYPT 2022, IEEE TC 2024), our circuit has significant reductions in T-count, full depth, and Clifford gate count. Compared to the state-of-the-art T-depth-4 circuits, our circuit not only achieves the minimal T-depth but also exhibits reduced full depth and closely comparable width. This leads to lower DW-cost and T-DW-cost. Additionally, we propose two methods for constructing minimal-width implementations of vectorial Boolean functions. As applications, for the first time, we present a 9-qubit Clifford+T circuit for the AES S-box, a 16-qubit Clifford+T circuit for a pair of AES S-boxes, and a 5-qubit Clifford+T circuit for the chi function of SHA3. These circuits can be used to derive quantum circuits that implement AES or SHA3 without ancilla qubits.
Prior-Based Label Differential Privacy via Secure Two-Party Computation
Differential privacy (DP) is a fundamental technique used in machine learning (ML) training for protecting the privacy of sensitive individual user data. In the past few years, a new approach for combining prior-based Local Differential Privacy (LDP) mechanisms with a relaxed DP criterion, known as Label DP, has shown great promise in increasing the utility of the final trained model without compromising on the DP privacy budget. In this work, we identify a crucial privacy gap in the current implementations of these prior-based LDP mechanisms, namely the leakage of sensitive priors. We address the challenge of implementing such LDP mechanisms without leaking any information about the priors while preserving the efficiency and accuracy of the current insecure implementations. To that end, we design simple and efficient secure two-party computation (2PC) protocols for addressing this challenge, implement them, and perform end-to-end testing on standard datasets such as MNIST and CIFAR-10. Our empirical results indicate that the added security benefit essentially comes almost for free, in the sense that the gap between the current insecure implementations and our proposed secure version, in terms of run-time overhead and accuracy degradation, is minimal. E.g., for CIFAR-10 with a strong DP privacy parameter, the additional runtime over WAN due to 2PC and the accuracy decrease relative to an insecure (non-2PC) approach are both small.
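In the spirit of such prior-based mechanisms, randomized response can be restricted to the labels most likely under the prior; the sensitive prior is exactly what the paper's 2PC protocols keep hidden. A toy Python sketch (the parameters and the out-of-top-k handling are illustrative simplifications, not the paper's exact mechanism):

import math, random

def rr_with_prior(true_label, prior, k, epsilon):
    # Restrict the output alphabet to the k most likely labels under the
    # prior, then run epsilon-LDP randomized response inside that set.
    top_k = sorted(range(len(prior)), key=lambda j: -prior[j])[:k]
    if true_label not in top_k:
        return random.choice(top_k)       # simplified out-of-set handling
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_true:
        return true_label
    return random.choice([j for j in top_k if j != true_label])

print(rr_with_prior(true_label=2, prior=[0.1, 0.2, 0.5, 0.2],
                    k=2, epsilon=1.0))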
Practical Keyword Private Information Retrieval from Key-to-Index Mappings
This paper introduces practical schemes for keyword Private Information Retrieval (keyword PIR), enabling private queries on public databases using keywords. Unlike standard index-based PIR, keyword PIR presents greater challenges, since the query's position within the database is unknown and the domain of keywords is vast. Our key insight is to construct an efficient and compact key-to-index mapping, thereby reducing the keyword PIR problem to standard PIR. To achieve this, we propose three constructions incorporating several new techniques. The high-level approach involves (1) encoding the server's key-value database into an indexable database with a key-to-index mapping and (2) invoking standard PIR on the encoded database to retrieve specific positions based on the mapping. We conduct comprehensive experiments, with results showing substantial improvements over the state-of-the-art keyword PIR scheme ChalametPIR (CCS '24), with reductions in communication and improvements in runtime that depend on database size and entry length. Our constructions are practical, executing keyword PIR in just 47 ms for a database containing 1 million 32-byte entries.
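The key-to-index idea can be sketched in a few lines: both parties derive the same keyword-to-position mapping, the server encodes its key-value pairs into an indexable array, and the client then queries that position with any standard index PIR. A toy Python sketch (collision handling and the PIR invocation itself are elided; real encodings are more involved):

import hashlib

def key_to_index(key, num_buckets):
    # Both parties compute the same keyword -> position mapping.
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest, "big") % num_buckets

def build_indexable_db(kv_pairs, num_buckets):
    # Server side: encode the key-value store into an indexable array;
    # real encodings resolve hash collisions, which this toy ignores.
    table = [b""] * num_buckets
    for key, value in kv_pairs:
        table[key_to_index(key, num_buckets)] = value
    return table

table = build_indexable_db([("alice", b"1"), ("bob", b"2")], num_buckets=8)
idx = key_to_index("alice", 8)   # client derives the index locally...
print(table[idx])                # ...and fetches it via standard index PIR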
NovaTEE: Private Clearing and Settlement on Trusted Execution Hardware
NovaTEE is a novel private multilateral settlement network designed to address critical inefficiencies in both traditional financial markets and cryptocurrency trading. The current clearing landscape suffers from fragmented capital allocation, restrictive prime brokerage relationships, and prolonged settlement timeframes in traditional finance, while cryptocurrency markets face challenges with over-collateralization, siloed lending pools, and security risks from centralized exchanges.
We introduce a settlement system that leverages Trusted Execution Environments (TEEs) and threshold cryptography to enable secure, private, and efficient settlement of obligations between multiple parties. The system utilizes a distributed key generation model and novel clearing mechanisms to optimize capital efficiency through multilateral netting, while maintaining strong privacy guarantees and regulatory compliance capabilities. By combining TEE-based security with advanced cryptographic protocols, including zero-knowledge proofs and sparse Merkle trees for data verification, our solution enables efficient cross-venue and cross-chain settlement while protecting sensitive trading information. This approach significantly reduces capital requirements for market participants, optimizes transaction costs, and provides institutional-grade clearing infrastructure without compromising on security or privacy. The system's architecture ensures that no single party has complete access to transaction details while maintaining auditability through a distributed backup network, offering a practical solution for institutional adoption of on-chain settlement.
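The capital-efficiency claim rests on multilateral netting: obligations between many parties collapse to one net position each. A minimal Python illustration (toy data):

from collections import defaultdict

def multilateral_netting(obligations):
    # Net (debtor, creditor, amount) triples to one position per party.
    net = defaultdict(int)
    for debtor, creditor, amount in obligations:
        net[debtor] -= amount
        net[creditor] += amount
    return dict(net)

print(multilateral_netting([("A", "B", 10), ("B", "C", 10), ("C", "A", 7)]))
# {'A': -3, 'B': 0, 'C': 3}: three gross transfers settle as one net flow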
Reductions Between Code Equivalence Problems
In this paper we present two reductions between variants of the Code Equivalence problem. We give polynomial-time Karp reductions from Permutation Code Equivalence (PCE) to both Linear Code Equivalence (LCE) and Signed Permutation Code Equivalence (SPCE). Along with a Karp reduction from SPCE to the Lattice Isomorphism Problem (LIP) proved in a paper by Bennett and Win (2024), our second result implies a reduction from PCE to LIP.
Efficient Mixed Garbling from Homomorphic Secret Sharing and GGM-Tree
We present new techniques for garbling mixed arithmetic and boolean circuits, utilizing the homomorphic secret sharing scheme introduced by Roy \& Singh (Crypto 2021), along with the half-tree protocol developed by Guo et al. (Eurocrypt 2023). Compared to some two-party interactive protocols, our mixed garbling requires only a few times more communication.
We construct bit decomposition/composition gadgets with low communication cost for bounded integers, using GGM-tree-based computations. Our approach is compatible with constant-rate multiplication protocols, and the cost decreases as the integer bit-length increases. Even for small bit-lengths, the concrete cost per decomposition/composition remains a modest number of bits. In addition, we develop efficient gadgets for modular reduction and unsigned truncation based on bit decomposition and composition.
We construct efficient arithmetic gadgets over various domains. For bounded integers, we improve the multiplication rate of Meyer et al. (TCC 2024). We propose new garbling schemes over other domains via bounded integers with our modular-reduction and truncation gadgets, which are more efficient than previous constructions. For suitable rings, addition and multiplication can be garbled with a communication cost comparable to our bit decomposition. For a general finite field, particularly one of large characteristic and extension degree, we garble addition and multiplication at comparable cost. For applications to real numbers, we introduce an ``error-based'' truncation that makes the cost of multiplication depend solely on the desired precision.
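The GGM tree used by these gadgets is standard: one root seed expands, via a length-doubling PRG, into exponentially many pseudorandom leaves, and punctured variants reveal all leaves but one. A Python sketch with an HMAC-based stand-in PRG:

import hmac, hashlib

def prg(seed):
    # Length-doubling PRG from HMAC-SHA256 (stand-in primitive).
    return (hmac.new(seed, b"L", hashlib.sha256).digest(),
            hmac.new(seed, b"R", hashlib.sha256).digest())

def ggm_expand(seed, depth):
    # Expand one root seed into 2^depth leaves, level by level.
    level = [seed]
    for _ in range(depth):
        level = [half for s in level for half in prg(s)]
    return level

leaves = ggm_expand(b"\x00" * 32, depth=4)
print(len(leaves))   # 16 pseudorandom leaves from a single 32-byte seed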
Revisiting the Differential-Linear Attacks on ChaCha from IEEE TIT and INDOCRYPT 2024 (Extended Abstract)
The ChaCha stream cipher has become one of the best-known ARX-based ciphers because of its widespread use in several systems, such as TLS and SSH. In this paper, we find some errors in the attacks on ChaCha256 from IEEE TIT and INDOCRYPT 2024, and then give corrected cryptanalytic attacks on ChaCha256. However, the corrected attacks have extremely large time and data complexities. The corrected results show that the technique proposed in IEEE TIT may not be able to yield improved differential-linear attacks on ChaCha.
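For orientation, the ARX core that differential-linear attacks on ChaCha approximate is the quarter-round of RFC 8439, whose only nonlinear operations are the 32-bit additions:

MASK32 = 0xFFFFFFFF

def rotl32(x, r):
    return ((x << r) | (x >> (32 - r))) & MASK32

def quarter_round(a, b, c, d):
    a = (a + b) & MASK32; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK32; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 7)
    return a, b, c, d

# Test vector from RFC 8439, Section 2.1.1.
assert quarter_round(0x11111111, 0x01020304, 0x9b8d6f43, 0x01234567) == \
       (0xea2a92f4, 0xcb1cf8ce, 0x4581472e, 0x5881c4bb)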
Addressing Scalability Issues of Blockchains with Hypergraph Payment Networks
Payment channels are promising candidates among layer-2 solutions to reduce the number of on-chain transactions on traditional blockchains and increase transaction throughput. To construct payment channels, peers lock funds on 2-of-2 multisig addresses and open channels between one another to transact via instant peer-to-peer transactions. Transactions between peers without a direct channel are made possible by routing the payment over a series of adjacent channels. In certain cases, this can lead to relatively low transaction success rates and high transaction fees. In this work, we introduce pliability to payment-channel construction and graft edges with more than two endpoints into the payment graph. We refer to these constructions as hyperedges. We present hyperedge-based topologies that form hypergraphs and compare them to Bitcoin's Lightning Network and other state-of-the-art solutions. The results demonstrate that hyperedge-based implementations can both increase the transaction success rate and decrease the network cost by more than 50% compared to the Lightning Network.
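A hyperedge can be pictured as one jointly funded channel shared by more than two peers, so any pair inside it transacts without multi-hop routing. A toy Python model (balances only; funding, signing, and dispute logic omitted):

class HyperChannel:
    # One n-of-n funded channel shared by several peers.
    def __init__(self, deposits):
        self.balance = dict(deposits)          # peer -> locked funds
    def pay(self, sender, receiver, amount):
        if self.balance[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balance[sender] -= amount
        self.balance[receiver] += amount

ch = HyperChannel({"A": 5, "B": 5, "C": 5})
ch.pay("A", "C", 3)        # no multi-hop route needed, unlike 2-party channels
print(ch.balance)          # {'A': 2, 'B': 5, 'C': 8}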