Papers updated in last 183 days (1754 results)
Diamond iO: A Straightforward Construction of Indistinguishability Obfuscation from Lattices
Indistinguishability obfuscation (iO) has seen remarkable theoretical progress, yet it remains impractical due to its high complexity and inefficiency. A common bottleneck in recent iO schemes is the reliance on bootstrapping techniques from functional encryption (FE) into iO, which requires recursively invoking the FE encryption algorithm for each input bit—creating a significant barrier to practical iO schemes.
In this work, we propose diamond iO, a new lattice-based iO construction that replaces the costly recursive encryption process with lightweight matrix operations. Our construction is proven secure under the learning with errors (LWE) and evasive LWE assumptions, as well as our new assumption—all-product LWE—in the pseudorandom oracle model. By leveraging the FE scheme for pseudorandom functionalities introduced by Agrawal et al. (ePrint’24) in a non-black-box manner, we remove the reliance on prior FE-to-iO bootstrapping techniques and thereby significantly reduce complexity. We further show that a variant of the all-product LWE assumption reduces to standard LWE, and we argue that known attacks on evasive LWE do not threaten our construction.
DewTwo: a transparent PCS with quasi-linear prover, logarithmic verifier and 4.5KB proofs from falsifiable assumptions
We construct the first polynomial commitment scheme (PCS) for multilinear polynomials that has a transparent setup, quasi-linear prover time, and logarithmic verifier time and proof size. Concretely, we have the smallest proof size among transparent PCS, with proofs of under 4.5 KB. We prove that our scheme is secure entirely under falsifiable assumptions about groups of unknown order. The scheme significantly improves on the prior work of Dew (PKC 2023), which has super-cubic prover time and relies on the Generic Group Model (a non-falsifiable assumption). Along the way, we make several contributions that are of independent interest: PoKEMath, a protocol for efficiently proving that an arbitrary predicate over committed integer vectors holds; SIPA, a bulletproofs-style inner product argument in groups of unknown order; and we distill out what prior work required from the Generic Group Model and frame it as a falsifiable assumption.
Somewhat Homomorphic Encryption from Linear Homomorphism and Sparse LPN
We construct somewhat homomorphic encryption from the sparse learning-parities-with-noise problem, along with any assumption that implies linearly homomorphic encryption (e.g., the decisional Diffie-Hellman or decisional composite residuosity assumptions). Our resulting schemes support an a priori bounded number of homomorphic operations: a bounded number of multiplications followed by poly(λ) additions, where λ is a security parameter. These schemes have compact ciphertexts: before and after homomorphic evaluation, the bit length of each ciphertext is a fixed polynomial in the security parameter λ, independent of the number of homomorphic operations that the scheme supports. This gives the first constructions of somewhat homomorphic encryption that can evaluate the class of bounded-degree polynomials without relying on lattice assumptions or bilinear maps.
Our new encryption schemes are conceptually simple: much as in Gentry, Sahai, and Waters’ fully homomorphic encryption scheme, ciphertexts in our scheme are matrices, homomorphic addition is matrix addition, and homomorphic multiplication is matrix multiplication. Moreover, when encrypting many messages at once and performing many homomorphic evaluations at once, the bit length of the ciphertexts in (some of) our schemes can be made arbitrarily close to the bit length of the plaintexts. The main limitation of our schemes is that they require a large evaluation key, whose size scales with the complexity of the homomorphic computation performed, though this key can be re-used across any polynomial number of encryptions and evaluations. Our construction builds on recent work of Dao, Ishai, Jain, and Lin, who construct a homomorphic secret-sharing scheme from the sparse-LPN assumption.
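The "ciphertexts are matrices" template can be seen in a deliberately insecure toy (entirely ours, not the paper's scheme): encode the message on the diagonal of a 2x2 matrix, with a random off-diagonal entry standing in for noise; matrix addition then adds messages and matrix multiplication multiplies them.

```python
# Deliberately insecure toy (ours, NOT the paper's scheme): messages sit on
# the diagonal of a 2x2 matrix and a random off-diagonal entry stands in
# for noise. Matrix addition adds messages; matrix multiplication
# multiplies them, mirroring the "ciphertexts are matrices" template.
import random

Q = 2**16  # toy modulus

def encrypt(m):
    r = random.randrange(Q)  # stand-in for key-dependent noise
    return [[m % Q, r], [0, m % Q]]

def add(c1, c2):
    return [[(c1[i][j] + c2[i][j]) % Q for j in range(2)] for i in range(2)]

def mul(c1, c2):
    return [[sum(c1[i][k] * c2[k][j] for k in range(2)) % Q
             for j in range(2)] for i in range(2)]

def decrypt(c):
    return c[0][0]  # a real scheme needs the secret key here

assert decrypt(add(encrypt(3), encrypt(5))) == 8
assert decrypt(mul(encrypt(3), encrypt(5))) == 15
```

The real constructions replace this with sparse-LPN noise, a secret key, and the large evaluation key described above; the toy only shows why matrix addition and multiplication track plaintext addition and multiplication.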
Conjunctive Searchable Symmetric Encryption from Hard Lattices
Searchable Symmetric Encryption (SSE) supports efficient keyword searches over encrypted outsourced document collections while minimizing information leakage. All practically efficient SSE schemes supporting conjunctive queries rely crucially on quantum-broken cryptographic assumptions (such as discrete-log hard groups) to achieve compact storage and fast query processing. On the other hand, quantum-safe SSE schemes based on purely symmetric-key crypto-primitives either do not support conjunctive searches, or are practically inefficient. In particular, there exists no quantum-safe yet practically efficient conjunctive SSE scheme from lattice-based hardness assumptions.
We resolve this open problem by proposing Oblivious Post-Quantum Secure Cross Tags (OQXT) – the first lattice-based practically efficient and highly scalable conjunctive SSE scheme. The technical centerpiece of OQXT is a novel oblivious cross-tag generation protocol with provable security guarantees derived from lattice-based hardness assumptions. We prove the post-quantum simulation security of OQXT with respect to a rigorously defined and thoroughly analyzed leakage profile. We then present a prototype implementation of OQXT and experimentally validate its practical efficiency and scalability over extremely large real-world databases. Our experiments show that OQXT has competitive end-to-end search latency when compared with the best (quantum-broken) conjunctive SSE schemes.
MultiReg-FE: Registered FE for Unbounded Inner-Product and Attribute-Weighted Sums
Francati et al. (Asiacrypt 2023) provided the first registered functional encryption (Reg-FE) beyond predicates. Reg-FE addresses the key escrow problem in functional encryption by allowing users to generate their own key pairs, effectively replacing the traditional private-key generator with a key curator. The key curator holds no secret information and runs deterministic algorithms to generate the master public key for encryption and helper keys for decryption. However, existing Reg-FE schemes under standard assumptions require fixed data sizes, which limits their practicality in real-world applications.
In this work, we introduce Multi-Function Registered Functional Encryption for Inner-Product (MultiReg-FE for IP), a novel extension of Reg-FE. It enables users to register multiple functions under a single public key. With MultiReg-FE, we achieve both Reg-FE for Unbounded Inner-Product (Unbounded IP), which removes the need to predetermine vector lengths, and Reg-FE for Attribute-Weighted Sums (with Inner-Product) (AWSw/IP), allowing computations over arbitrary numbers of attribute-value pairs.
Specifically, we present:
· MultiReg-FE for Inner-Product, which supports an unbounded number of function vectors from each user and achieves adaptive-IND-security;
· Reg-FE for Unbounded Inner-Product, which removes the need for preset vector lengths and achieves adaptive-IND-security;
· Reg-FE for AWSw/IP in the public-key setting, which achieves very selective IND-security.
Adaptive Garbled Circuits and Garbled RAM from Non-Programmable Random Oracles
Garbled circuit techniques that are secure in the adaptive setting -- where inputs are chosen after the garbled program is sent -- are motivated by practice, but they are notoriously difficult to achieve. Prior adaptive garbling is either impractically expensive or encrypts the entire garbled program with the output of a programmable random oracle (PRO), a strong assumption.
We present a simple framework for proving adaptive security of garbling schemes in the non-programmable random oracle (NPRO) model. NPRO is a milder assumption than PRO, and it is close to the assumption required by the widely used Free XOR extension. Our framework is applicable to a number of existing GC techniques, which are proved adaptively secure without modification.
As our main application of the framework, we construct and prove adaptively secure a garbling scheme for tri-state circuits (TSCs), a recently proposed circuit model that captures both Boolean circuits and RAM programs (Heath et al., Crypto 2023). For a given TSC C, our garbling of C is at most O(|C| · κ) bits long, where κ is the security parameter. This implies both an adaptively secure garbled Boolean circuit scheme, as well as an adaptively secure garbled RAM scheme in which the garbling of T steps of a RAM program has size O(T · polylog(T) · κ) bits.
Our scheme is concretely efficient: its Boolean circuit handling matches the performance of half-gates, and it is adaptively secure from NPRO.
Adaptive Distributional Security: A Framework for Input-Adaptive Cryptography
It is often desirable to break cryptographic primitives into two components: an input-independent offline component, and a cheap online component used when inputs arrive. Security of such online/offline primitives must be proved in the input-adaptive setting: the adversary chooses its input adaptively, based on what it sees in the offline phase. Proving security in the input-adaptive setting can be difficult, particularly when one wishes to achieve simulation security and avoid idealized objects like a random oracle (RO).
This work proposes a framework for reasoning about input-adaptive primitives: adaptive distributional security (ADS). Roughly, an ADS primitive provides security when it is used with inputs drawn from one of two distributions that are themselves hard to distinguish. ADS is useful as a framework for the following reasons:
- An ADS definition can often circumvent impossibility results imposed on the corresponding simulation-based definition. This allows us to decrease the online cost of primitives, albeit by using a weaker notion of security.
- With care, one can typically upgrade an ADS-secure object into a simulation-secure object (by increasing cost in the online phase).
- ADS is robust, in the sense that (1) it enables a form of composition and (2) interesting ADS primitives are highly interconnected in terms of which objects imply which other objects.
- Many useful ADS-secure objects are plausibly secure from straightforward symmetric-key cryptography.
We start by defining the notion of an ADS encryption (ADE) scheme.
A notion of input-adaptive encryption can easily be achieved from an RO, and the ADE definition can be understood as capturing the concrete property provided by an RO that is sufficient to achieve input-adaptivity. From there, we use ADE to achieve ADS variants of garbled circuits and oblivious transfer, to achieve simulation-secure garbled circuits, oblivious transfer, and two-party computation, and to prove the interconnectedness of these primitives. In sum, this results in a family of objects with extremely cheap online cost.
Consensus Under Adversary Majority Done Right
A specter is haunting consensus protocols—the specter of adversary majority. Dolev and Strong in 1983 showed an early possibility for up to 99% adversaries. Yet, other works show impossibility results for adversaries above 50% under synchrony, seemingly the same setting as Dolev and Strong's. What gives? It is high time that we pinpoint a key culprit for this ostensible contradiction: the modeling details of clients. Are the clients sleepy or always-on? Are they silent or communicating? Can validators be sleepy too? We systematize models for consensus across four dimensions (sleepy/always-on clients, silent/communicating clients, sleepy/always-on validators, and synchrony/partial-synchrony), some of which are new, and tightly characterize the achievable safety and liveness resiliences with matching possibilities and impossibilities for each of the sixteen models. To this end, we unify folklore and earlier results, and fill gaps left in the literature with new protocols and impossibility theorems.
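The four dimensions above induce the sixteen models; a few lines of Python (labels ours, following the abstract's wording) make the grid explicit:

```python
# Enumerate the sixteen consensus models from the four dimensions named in
# the abstract (labels are ours, following its wording).
from itertools import product

dimensions = [
    ("sleepy clients", "always-on clients"),
    ("silent clients", "communicating clients"),
    ("sleepy validators", "always-on validators"),
    ("synchrony", "partial synchrony"),
]

models = list(product(*dimensions))
assert len(models) == 16
for model in models:
    print(" / ".join(model))
```

Each of the sixteen tuples corresponds to one setting for which the paper characterizes the achievable safety and liveness resiliences.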
Proof of Exponentiation: Enhanced Prover Efficiency for Algebraic Statements
Recent years have seen the widespread adoption of zkSNARKs constructed over small fields, including but not limited to the Goldilocks field, small Mersenne prime fields, and towers of binary fields. Their appeal stems primarily from their efficacy in proving computations with small bit widths, which facilitates efficient proving of general computations and yields remarkably fast proving for tasks such as proof of knowledge of hash preimages. Nevertheless, employing these SNARKs to prove algebraic statements (e.g., RSA or ECDSA signature verification) presents efficiency challenges.
To address this problem, we first define a new circuit model: arithmetic circuits with additional \textit{exponentiation gates}. These gates serve as fundamental building blocks for establishing more intricate algebraic relations. Then we present a \textit{Hash-committed Commit-and-Prove (HCP)} framework to construct Non-interactive Zero-knowledge (NIZK) proofs for the satisfiability of these circuits. Specifically, when proving knowledge of group exponentiations in discrete logarithm hard groups and RSA groups, compared to verifying complex group exponentiations within SNARK circuits, our approach requires proving only more lightweight computations within the SNARK, such as zk-friendly hash functions (e.g., Poseidon hash function). The number of these lightweight computations depends solely on the security parameter. This differentiation leads to substantial speedups for the prover relative to direct SNARK methods, while maintaining competitive proof size and verification cost.
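A minimal evaluator for the proposed circuit model, arithmetic gates augmented with exponentiation gates, might look as follows (the wire representation, gate names, and modulus are our illustrative choices, not the paper's):

```python
# Minimal sketch (ours) of an arithmetic circuit extended with
# exponentiation gates, evaluated modulo a prime.
P = 2**61 - 1  # a Mersenne prime, for illustration

def eval_circuit(gates, inputs):
    """gates: list of (op, a, b) over wire indices; inputs occupy the
    first wires, and each gate appends one output wire."""
    wires = list(inputs)
    for op, a, b in gates:
        x, y = wires[a], wires[b]
        if op == "add":
            wires.append((x + y) % P)
        elif op == "mul":
            wires.append((x * y) % P)
        elif op == "exp":  # exponentiation gate: wires[a] ** wires[b]
            wires.append(pow(x, y, P))
        else:
            raise ValueError(op)
    return wires[-1]

# Compute g^(x0 + x1) with g on wire 2: one add gate, one exp gate.
circuit = [("add", 0, 1), ("exp", 2, 3)]
assert eval_circuit(circuit, [2, 3, 7]) == pow(7, 5, P)
```

The point of the model is that a single exp gate stands for a whole group exponentiation, so the SNARK only needs to handle the lightweight glue (hashing and field arithmetic) rather than the exponentiation itself.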
Starfish: A high throughput BFT protocol on uncertified DAG with linear amortized communication complexity
Current DAG-based BFT protocols face a critical trade-off: certified DAGs provide strong security guarantees but require additional rounds of communication to progress the DAG construction, while uncertified DAGs achieve lower latency at the cost of either reduced resistance to adversarial behaviour or higher communication costs.
This paper presents Starfish, a partially synchronous DAG-based BFT protocol that achieves the security properties of certified DAGs, the efficiency of uncertified approaches, and linear amortized communication complexity. The key innovation is Encoded Cordial Dissemination, a push-based dissemination strategy that combines Reed-Solomon erasure coding with Data Availability Certificates (DACs). Each validator disseminates complete transaction data for its own blocks while distributing encoded shards for others' blocks, enabling efficient data reconstruction from a threshold number of shards. Building on the previous uncertified DAG BFT commit rule, Starfish extends it to efficiently verify data availability through committed leader blocks serving as DACs. For large enough transaction data, this design allows Starfish to achieve linear amortized communication complexity per committed transaction byte. The average and worst-case end-to-end latencies of Starfish are rigorously proven to be bounded by constant multiples of the actual network delay in the steady state.
Experimental evaluation against state-of-the-art DAG BFT protocols demonstrates Starfish's robust performance under steady-state and Byzantine scenarios.
Our results show that strong Byzantine fault tolerance, high performance, and low communication complexity can coexist in DAG BFT protocols, making Starfish particularly suitable for large-scale distributed ledger deployments.
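The Reed-Solomon erasure-coding step behind Encoded Cordial Dissemination can be sketched with a toy code over a prime field (parameters and field are illustrative, not Starfish's): a block becomes n evaluations of a polynomial, and any k of them reconstruct it.

```python
# Toy Reed-Solomon erasure code over a prime field: k data symbols define
# a degree-(k-1) polynomial, shards are its evaluations, and any k shards
# reconstruct the data. Parameters are ours, not Starfish's.
P = 2**31 - 1  # Mersenne prime

def lagrange_eval(points, x):
    """Evaluate the unique polynomial through `points` at x, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(data, n):
    """Shard i carries f(i+1), where f(1..k) are the data symbols."""
    k = len(data)
    pts = list(zip(range(1, k + 1), data))
    return [(x, data[x - 1] if x <= k else lagrange_eval(pts, x))
            for x in range(1, n + 1)]

def reconstruct(shards, k):
    """Recover the data from any k of the shards."""
    pts = shards[:k]
    return [lagrange_eval(pts, x) for x in range(1, k + 1)]

data = [7, 13, 42]
shards = encode(data, 6)
# Any three of the six shards suffice, e.g. after three erasures:
assert reconstruct([shards[1], shards[4], shards[5]], 3) == data
```

In the protocol, each validator holds one shard per block, so complete data can be rebuilt from a threshold of honest validators without every validator rebroadcasting the full block.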
Succinct Witness Encryption for Batch Languages and Applications
Witness encryption allows one to encrypt a message with respect to an NP relation and a statement; the corresponding decryption key is any valid witness for the statement. In a succinct witness encryption scheme, we require that the size of the ciphertext be sublinear in the size of the relation. Currently, all realizations of succinct witness encryption for NP rely on strong assumptions such as pseudorandom obfuscation, extractable witness encryption, or differing-inputs obfuscation. Notably, none of these notions are known from standard assumptions.
In this work, we consider a relaxation of succinct witness encryption for NP to the batch setting. Here, one encrypts to an NP relation together with a collection of statements. In the basic version, one can decrypt if they have a witness for all statements. The succinctness requirement is that the size of the ciphertext should be sublinear in the number of instances, but is allowed to grow with the size of the relation (i.e., the size of a single instance). More generally, we can also impose a (monotone) policy over the instances. In this case, decryption is possible only if one holds witnesses for a subset of the statements that satisfies the policy.
In this work, we initiate a systematic study of succinct witness encryption for batch languages. We start with two simple constructions that support CNF and DNF policies based on plain witness encryption in conjunction with a somewhere statistically sound batch argument for NP or a function-binding hash function. Then, using indistinguishability obfuscation, we show how to support policies that can be computed by read-once bounded-space Turing machines. The latter construction is in fact a unique witness map for monotone-policy batch NP, and as such, also gives a SNARG for monotone-policy batch NP where the size of the common reference string is sublinear in the number of instances.
Finally, we demonstrate some immediate applications of succinct witness encryption for batch languages. We construct new succinct computational secret sharing schemes for CNFs, DNFs, and weighted threshold policies. We also show how to build distributed monotone-policy encryption, a notion that generalizes recent trustless primitives like distributed broadcast encryption and threshold encryption with silent setup.
On the Adaptive Security of Key-Unique Threshold Signatures
In this work, we investigate the security assumptions required to prove the adaptive security of threshold signatures. Adaptive security is a strong notion of security that allows an adversary to corrupt parties at any point during the execution of the protocol, and is of practical interest due to recent standardization efforts for threshold schemes. Towards this end, we give two different impossibility results.
We begin by formalizing the notion of a key-unique threshold signature scheme, where public keys have a unique correspondence to secret keys and there is an efficient algorithm for checking that public keys are well-formed. Key-uniqueness occurs in many threshold schemes that are compatible with standard, single-party signatures used in practice, such as BLS, ECDSA, and Schnorr signatures.
Our first impossibility result demonstrates that it is impossible to prove the adaptive security of any key-unique threshold signature scheme under any non-interactive computational assumption for a broad class of reductions, for a range of corruption thresholds t_c, where n is the total number of parties, t_c is the number of corrupted parties, and t is the signing threshold. We begin by ruling out full adaptive security (i.e., t_c = t − 1 corruptions) for key-unique threshold signatures under non-interactive computational assumptions, including, but not limited to, the discrete logarithm (DL), computational Diffie-Hellman (CDH), and q-Strong Diffie-Hellman (q-SDH) assumptions. We then generalize this impossibility result to a wider range of t_c.
Our second impossibility result applies specifically to key-unique threshold Schnorr signatures, currently an active area of research. We demonstrate that, even under the interactive One-More Discrete Logarithm (OMDL) and Algebraic OMDL (AOMDL) assumptions, it is impossible to prove adaptive security for a range of corruption thresholds in the programmable ROM with rewinding.
Taken together, our results underscore the difficulty of achieving adaptive security for key-unique threshold signatures. However, we believe this work can open a new line of research, by indicating assumptions and properties to aim for when constructing adaptively secure threshold schemes.
Faster VOLEitH Signatures from All-but-One Vector Commitment and Half-Tree
Post-quantum digital signature schemes have recently received increased attention due to the NIST standardization project for additional signatures. MPC-in-the-Head and VOLE-in-the-Head are general techniques for constructing such signatures from zero-knowledge proof systems. A common theme between the two is an all-but-one vector commitment scheme which internally uses GGM trees. This primitive is responsible for a significant part of the computational time during signing and verification.
A more efficient technique for constructing GGM trees is the half-tree technique, introduced by Guo et al. (Eurocrypt 2023). Our work builds an all-but-one vector commitment scheme from the half-tree technique, and further generalizes it to open all but a set of positions. Crucially, our work avoids the random oracle assumption in an important step, which means our binding proof is non-trivial and instead relies on a random permutation oracle. Since this oracle can be instantiated using fixed-key AES, which has hardware support, we achieve faster signing and verification times.
We integrate our vector commitment scheme into FAEST (faest.info), a round one candidate in the NIST standardization process, and demonstrate its performance with a prototype implementation. Our implementation is faster across all parameter sets.
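To make the GGM-tree mechanics concrete, here is a hash-based sketch (ours, not FAEST code) of leaf derivation and an all-but-one opening: SHA-256 stands in for the fixed-key-AES-based PRG, and the verifier recomputes every leaf except the hidden one from the revealed sibling seeds.

```python
# Hash-based sketch (ours, not FAEST code) of a GGM tree and an
# all-but-one opening: SHA-256 stands in for the fixed-key-AES PRG.
import hashlib

def prg(seed):
    h = hashlib.sha256(seed).digest()
    return h[:16], h[16:]  # (left child seed, right child seed)

def leaves(seed, depth):
    level = [seed]
    for _ in range(depth):
        level = [child for s in level for child in prg(s)]
    return level

def open_all_but_one(seed, depth, hide):
    """Reveal the sibling seed at each level of the path to leaf `hide`."""
    out, s = [], seed
    for lvl in range(depth):
        bit = (hide >> (depth - 1 - lvl)) & 1
        left, right = prg(s)
        out.append(right if bit == 0 else left)  # sibling off the path
        s = left if bit == 0 else right          # continue down the path
    return out

def expand_opening(siblings, depth, hide):
    """Recompute every leaf seed except `hide` from the revealed seeds."""
    known = {}
    for lvl, sib in enumerate(siblings):
        path_node = hide >> (depth - 1 - lvl)   # path node index at level lvl+1
        base = (path_node ^ 1) << (depth - 1 - lvl)
        for i, leaf in enumerate(leaves(sib, depth - 1 - lvl)):
            known[base + i] = leaf
    return known

root = b"0123456789abcdef"
depth, hide = 3, 5
rec = expand_opening(open_all_but_one(root, depth, hide), depth, hide)
assert set(rec) == set(range(2**depth)) - {hide}
assert all(rec[i] == leaves(root, depth)[i] for i in rec)
```

The half-tree optimization in the paper halves the PRG calls per level; the sketch above only shows the baseline GGM structure that both variants share.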
Zero-knowledge Authenticator for Blockchain: Policy-private and Obliviously Updateable
Transaction details and participant identities on the blockchain are often publicly exposed. In this work, we posit that blockchain's transparency should not come at the cost of privacy. To that end, we introduce zero-knowledge authenticators (zkAt), a new cryptographic primitive for privacy-preserving authentication on public blockchains. zkAt utilizes zero-knowledge proofs to enable users to authenticate transactions, while keeping the underlying authentication policies private.
Prior solutions for such policy-private authentication required the use of threshold signatures, which can only hide the threshold access structure itself. In comparison, zkAt provides privacy for arbitrarily complex authentication policies, and offers a richer interface even within the threshold access structure by, for instance, allowing for the combination of signatures under distinct signature schemes.
In order to construct zkAt, we design a compiler that transforms the popular Groth16 non-interactive zero knowledge (NIZK) proof system into a NIZK with equivocable verification keys, a property that we define in this work. Then, for any zkAt constructed using proof systems with this new property, we show that all public information must be independent of the policy, thereby achieving policy-privacy.
Next, we give an extension of zkAt, called zkAt+ wherein, assuming a trusted authority, policies can be updated obliviously in the sense that a third-party learns no new information when a policy is updated by the policy issuer. We also give a theoretical construction for zkAt+ using recursive NIZKs, and explore the integration of zkAt into modern blockchains. Finally, to evaluate their feasibility, we implement both our schemes for a specific threshold access structure. Our findings show that zkAt achieves comparable performance to traditional threshold signatures, while also attaining privacy for significantly more complex policies with very little overhead.
Computationally Secure Aggregation and Private Information Retrieval in the Shuffle Model
The shuffle model has recently emerged as a popular setting for differential privacy, where clients can communicate with a central server using anonymous channels or an intermediate message shuffler. This model was also explored in the context of cryptographic tasks such as secure aggregation and private information retrieval (PIR). However, this study was almost entirely restricted to the stringent notion of information-theoretic security.
In this work, we study computationally secure aggregation protocols and PIR in the shuffle model. Our starting point is the insight that the previous technique of shuffling additive shares can be improved in the computational setting. We show that this indeed holds under the standard learning parity with noise (LPN) assumption, but even better efficiency follows from plausible conjectures about the multi-disjoint syndrome decoding (MDSD) problem and its binary error variant (beMDSD) that we introduce and study in this work.
We leverage the above towards improving the efficiency of secure aggregation and PIR in the shuffle model. For secure aggregation of long vectors, our protocols require significantly less communication than the previous information-theoretic solutions. Our PIR protocols enjoy the simplicity and concrete efficiency benefits of multi-server PIR while only requiring a single server to store the database. Under the MDSD assumption, they improve over recent single-server PIR constructions by up to two orders of magnitude.
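The "shuffling additive shares" baseline can be sketched in a few lines (a toy split-and-mix, with parameters ours): each client splits its input into random additive shares, the anonymizing shuffler mixes all shares together, and the server learns only the sum.

```python
# Toy "split and mix" aggregation in the shuffle model (parameters ours):
# each client splits its input into random additive shares mod Q, the
# anonymizing shuffler mixes every share together, and the server learns
# only the sum of all inputs.
import random

Q = 2**32

def share(x, k):
    parts = [random.randrange(Q) for _ in range(k - 1)]
    parts.append((x - sum(parts)) % Q)  # shares sum to x mod Q
    return parts

inputs = [5, 17, 100]
shuffled = [s for x in inputs for s in share(x, k=4)]
random.shuffle(shuffled)  # the shuffler hides which share came from whom
assert sum(shuffled) % Q == sum(inputs) % Q
```

The paper's computational improvements reduce how many shares per client this mix needs; the toy above is only the information-theoretic baseline it starts from.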
On the (in)security of Proofs-of-Space based Longest-Chain Blockchains
The Nakamoto consensus protocol underlying the Bitcoin blockchain uses proof of work as a voting mechanism. Honest miners who contribute hashing power towards securing the chain try to extend the longest chain they are aware of. Despite its simplicity, Nakamoto consensus achieves meaningful security guarantees assuming that at any point in time, a majority of the hashing power is controlled by honest parties. This also holds under ``resource variability'', i.e., if the total hashing power varies greatly over time.
Proofs of space (PoSpace) have been suggested as a more sustainable replacement for proofs of work. Unfortunately, no construction of a ``longest-chain'' blockchain based on PoSpace, that is secure under dynamic availability, is known. In this work, we prove that without additional assumptions no such protocol exists. We exactly quantify this impossibility result by proving a bound on the length of the fork required for double spending as a function of the adversarial capabilities. This bound holds for any chain selection rule, and we also show a chain selection rule (albeit a very strange one) that almost matches this bound.
Concretely, we consider a security game in which the honest parties at any point control a constant factor more space than the adversary. The adversary can change the honest space by a bounded factor with every block (dynamic availability), and ``replotting'' the space (which allows answering two challenges using the same space) takes as much time as some fixed number of blocks.
We prove that, no matter what chain selection rule is used, in this game the adversary can create a fork, of length depending on these parameters, that will be picked as the winner by the chain selection rule.
We also provide an upper bound that matches this lower bound up to a small factor: there exists a chain selection rule (albeit a very strange one) which in the above game requires forks of essentially the above length.
Our results show the necessity of additional assumptions to create a secure PoSpace-based longest-chain blockchain. The Chia network, for example, uses a verifiable delay function in addition to PoSpace; our bounds show that some such additional primitive is necessary.
Achieving "beyond CCA1" security for linearly homomorphic encryption, without SNARKs?
In the wake of Manulis and Nguyen's Eurocrypt'24 paper, new CCA security notions, vCCA and vCCAD, and associated construction blueprints have been proposed to leverage either CPA or CPAD secure FHE beyond the CCA1 security barrier. These two notions are the strongest CCA security notions so far achievable, respectively, by correct and approximate homomorphic schemes. However, the only known construction strategies inherently require advanced SNARK machinery, undermining their practicality. In this context, this paper is an attempt to achieve these advanced CCA security notions in the restricted case of linearly homomorphic encryption, without resorting to SNARKs. To do so, we investigate the relationship between the Linear-Only Homomorphism (LOH) assumption, an assumption that has been used for more than a decade at the core of several proof-of-knowledge constructions, and these two recent security notions (vCCA and vCCAD). On the bright side, when working under the correctness assumption, we establish that the LOH property is sufficient to achieve vCCA security in both the private- and public-key settings. In the public-key setting, we further show that a surprisingly simple and previously known Paillier-based construction also achieves this level of security, at only twice the cost of the baseline scheme. We then turn our attention to LWE-based schemes, for which the Pandora's box of decryption errors opens up. In the private-key setting, we are able to achieve CPAD and vCCAD security, but only in a fairly restrictive non-adaptive setting, in which vCCAD collapses onto a weak relaxation of CCA1. Finally, we achieve adaptive vCCAD security provided that the number of ciphertexts given to the adversary is suitably restricted. While bridging the gap towards credible practicality requires further work, this is a first step towards obtaining linearly homomorphic schemes achieving these recent CCA security notions using only relatively lightweight machinery.
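For readers unfamiliar with linearly homomorphic encryption, a compact Paillier sketch (toy primes, ours) shows the additive homomorphism the paper builds on: the product of two ciphertexts decrypts to the sum of the plaintexts.

```python
# Compact Paillier sketch (toy primes, ours) of the linearly homomorphic
# baseline: multiplying ciphertexts adds plaintexts.
from math import gcd
import random

p, q = 1_000_003, 1_000_033  # toy primes; real deployments use huge primes
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def enc(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic addition: multiply ciphertexts.
assert dec((enc(20) * enc(22)) % n2) == 42
```

This is exactly the kind of scheme the paper's Paillier-based vCCA construction wraps; the sketch omits all of the CCA machinery and uses parameters far too small to be secure.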
Secret-Key PIR from Random Linear Codes
Private information retrieval (PIR) allows a client to privately read a chosen bit from an n-bit database with sublinear communication. Lin, Mook, and Wichs (STOC 2023) showed that by preprocessing the database into an encoded form, it suffices to access only a small number of bits of the encoding per query. However, this requires an encoding that is larger than the database, and an even larger server circuit size.
We consider an alternative preprocessing model (Boyle et al. and Canetti et al., TCC 2017), where the encoding depends on a client's short secret key. In this secret-key PIR (sk-PIR) model we construct a protocol with communication n^ε, for any constant ε > 0, from the Learning Parity with Noise assumption in a parameter regime not known to imply public-key encryption. This is evidence against public-key encryption being necessary for sk-PIR.
Under a new conjecture related to the hardness of learning a hidden linear subspace with noise, we construct sk-PIR with similar communication and encoding size, in which the server is implemented by a small Boolean circuit. This is the first candidate PIR scheme with such a low circuit complexity.
Special Genera of Hermitian Lattices and Applications to HAWK
In its decisional form, the module-Lattice Isomorphism Problem (decision module-LIP) first received attention in a paper by Ling, Liu and Mendelsohn. The authors gave a polynomial-time algorithm to distinguish spinor genera within the genus of a quadratic binary lattice over a ring assumed to be a principal ideal domain. However, this algorithm would not impact cryptographic schemes based on decision module-LIP for lattices such as those employed in HAWK, i.e., for binary lattices over a cyclotomic ring of integers equipped with a Hermitian form. Motivated by HAWK's framework, we investigate a concept that serves as an analogue of the spinor genus for Hermitian lattices, called the special genus. This notion was studied by Shimura, who provided a complete set of invariants describing special genera. Building on this result, we propose an algorithm to determine whether two Hermitian lattices belong to the same special genus. Specifically, for HAWK's lattice and its siblings, our algorithm runs in classical polynomial time. Nevertheless, we provide numerical evidence suggesting that the ability to distinguish special genera does not, in practice, constitute a significant advantage for solving decision module-LIP.
On the security of one certificateless aggregate signature scheme with dynamic revocation in vehicular ad-hoc networks
We show that the certificateless signature scheme [Veh. Commun. 47: 100763 (2024)] is insecure, because an adversary can launch a forgery attack for any message. The signer's certificateless public key is not tightly bound to the system public key. This inherent flaw means that the adversary can find an efficient signing algorithm functionally equivalent to the valid signing algorithm. The findings in this note could be helpful for newcomers who are not familiar with the design techniques for certificateless signatures.
Black-Box (and Fast) Non-Malleable Zero Knowledge
Non-malleable zero-knowledge (NMZK), originally introduced in the seminal work of Dolev, Dwork, and Naor (STOC 91), is a fundamental concept for modeling the security of proof systems against man-in-the-middle attacks.
Recently, Kim, Liang, and Pandey (CRYPTO 2022) presented the first efficient constant-round NMZK argument system based solely on symmetric-key cryptography. Their construction relies on a non-black-box use of the involved cryptographic primitives and on multiple executions of Ligero (CCS 2017), which affect both the round complexity and the computational efficiency of their protocol. Their work left open the natural and important challenge of achieving NMZK using the underlying primitives only in a black-box fashion (regardless of the number of rounds and actual efficiency).
In this paper, we solve the aforementioned open problem by presenting the first NMZK argument system based on the black-box use of cryptographic primitives. Our work is optimal in the use of primitives since we only need one-way functions, and asymptotically optimal in the number of rounds since we only require a constant number of rounds. Our argument system is non-malleable with respect to the strong "simulation-extractability" flavor of non-malleability.
Furthermore, we also show that our construction can be efficiently instantiated in Minicrypt, significantly improving upon the work of Kim et al., both in terms of round complexity and computational efficiency.
PSYLOCKE: Provably Secure Logic Locking with Practical Efficiency
Logic locking is an obfuscation technique designed to protect the intellectual property of hardware designs and has attracted considerable attention for over a decade. However, most logic locking schemes have been developed heuristically, leading the field into a cat-and-mouse game between attackers and defenders. Indeed, several proposed schemes have already been broken. While recent works have introduced provably secure logic locking, they often incur impractical overhead or fail to support the ASIC design paradigm while offering strong theoretical security guarantees.
In this work, we propose PSYLOCKE, a provably secure and practically efficient logic locking scheme that balances formal security guarantees with implementation feasibility. We introduce a new security paradigm that formalizes logic locking under predetermined allowable leakage, such as circuit topology, and we provide refined definitions of resilience against specific attacks. Our analysis bridges general security definitions and attack resilience, quantifying how leakage impacts the success of real-world attacks. PSYLOCKE is provably secure under topology leakage and achieves significant efficiency improvements compared to existing provably secure logic locking schemes. Alongside our theoretical analysis, we demonstrate through quantitative evaluations using widely-used benchmark circuits that PSYLOCKE strikes a favorable balance between practical performance and security. Concretely, PSYLOCKE reduces the Area-Power-Delay overhead by an order of magnitude while achieving the same security level, compared to the existing provably secure logic locking scheme.
A proof of P≠NP (New symmetric encryption algorithm against any linear attacks and differential attacks)
The P vs. NP problem is the most important unresolved problem in the field of computational complexity. Its impact has penetrated all aspects of algorithm design, especially in the field of cryptography. The security of cryptographic algorithms based on short keys depends on whether P equals NP. In fact, Shannon strictly proved that the one-time-pad system meets unconditional security, but because the one-time-pad system requires the key to be at least as long as the plaintext, how to transfer the key is a troublesome problem that restricts the use of the one-time-pad system in practice. Cryptographic algorithms used in practice are all based on short keys, and the security of the short-key mechanism is ultimately based on a one-way assumption. In fact, the existence of one-way functions directly leads to the important conclusion P≠NP.
In this paper, we construct a short-key block cipher algorithm. The core feature of this algorithm is that, when any number of plaintext-ciphertext pairs are known, the problem of cracking its key is equivalent to the problem of two independent encryption systems cracking their keys separately when only their respective ciphertexts are known. In addition, for these two independent encryption systems, verifying a candidate plaintext also has exponential computational complexity when only the ciphertext is known.
Based on the above feature, we construct a problem and theoretically prove that the problem satisfies the properties of one-way functions, thereby solving the problem of the existence of one-way functions, that is, directly proving that P≠NP.
Scalable Time-Lock Puzzles
Time-Lock Puzzles (TLPs) enable a client to lock a message such that a server can unlock it only after a specified time. They have diverse applications, such as scheduled payments, secret sharing, and zero-knowledge proofs. In this work, we present a scalable TLP designed for real-world scenarios involving a large number of puzzles, where clients or servers may lack the computational resources to handle high workloads. Our contributions are both theoretical and practical. From a theoretical standpoint, we formally define the concept of a "Delegated Time-Lock Puzzle (D-TLP)", establish its fundamental properties, and introduce an upper bound for TLPs, addressing a previously overlooked aspect. From a practical standpoint, we introduce the "Efficient Delegated Time-Lock Puzzle" (ED-TLP) protocol, which implements the D-TLP concept. This protocol enables both the client and server to securely outsource their resource-intensive tasks to third-party helpers. It enables real-time verification of solutions and guarantees their delivery within predefined time limits by integrating an upper bound and a fair payment algorithm. ED-TLP allows combining puzzles from different clients, enabling a solver to process them sequentially, significantly reducing computational resources, especially for a large number of puzzles or clients. ED-TLP is the first protocol of its kind. We have implemented ED-TLP and conducted a comprehensive analysis of its performance for up to 10,000 puzzles. The results highlight its significant efficiency in TLP applications, demonstrating that ED-TLP securely delegates 99% of the client's workload and 100% of the server's workload with minimal overhead.
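The sequential-squaring puzzle of Rivest, Shamir, and Wagner underlies essentially all TLPs of this kind; a minimal sketch, with toy parameters chosen here purely for illustration (not the paper's actual construction):

```python
# Toy Rivest-Shamir-Wagner time-lock puzzle: the client, knowing phi(N),
# locks a message so that opening it takes T *sequential* squarings mod N.
p, q = 10007, 10009            # toy primes; real puzzles use ~1024-bit primes
N, phi = p * q, (p - 1) * (q - 1)
T = 10_000                     # the "time" parameter: number of squarings
msg = 424242

# Lock (cheap, using the trapdoor phi): mask = 2^(2^T) mod N
mask = pow(2, pow(2, T, phi), N)
puzzle = (msg + mask) % N

# Unlock (no trapdoor): T sequential squarings, inherently hard to parallelize
x = 2
for _ in range(T):
    x = x * x % N
assert (puzzle - x) % N == msg
```

A delegated TLP in the paper's sense outsources exactly this unlocking loop to a third-party helper.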
XS-circuits in Block Ciphers
XS-circuits describe block ciphers that utilize two operations: X) bitwise modulo-2 addition of binary words and S) substitution of words using key-dependent S-boxes with possibly complicated internal structure. We propose a model of XS-circuits which, despite its simplicity, covers a rather wide range of block ciphers. In our model, several instances of a simple round circuit, which contains only one S operation, are linked together and form a compound circuit called a cascade. S operations of a cascade are interpreted as independent round oracles. We deal with diffusion characteristics of cascades. These characteristics are related to the cryptographic strength of the corresponding block ciphers. We obtain results on invertibility, transitivity and 2-transitivity of mappings induced by round circuits and their cascades. We provide estimates on the first and second activation times, where the i-th activation time is the minimum number of rounds which guarantees that at least i round oracles get different queries while processing two different cascade inputs. The activation times are related to differential cryptanalysis. We introduce the similarity and duality relations between round circuits. Cascades of related circuits have the same or dual diffusion characteristics. We find canonical representatives of classes of similar circuits and show that the duality between circuits is related to the duality between differential and linear attacks against the corresponding block ciphers. We discuss families of circuits with a growing number of inputs. Such families can be used to build wide-block ciphers.
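As an illustration of this model (not code from the paper), a Feistel-style round circuit with one X and one S operation, chained into a cascade of independent round oracles, can be sketched as:

```python
import random

# XS model: X is bitwise XOR of words, S is a key-dependent S-box treated as
# an independent round oracle. A Feistel-style round circuit
#   (a, b) -> (b, a XOR S(b))
# uses exactly one S operation; chaining several rounds forms a cascade.
W = 8                                   # word size in bits

def keyed_sbox(key):
    rnd = random.Random(key)            # key-dependent permutation of words
    perm = list(range(2 ** W))
    rnd.shuffle(perm)
    return perm

def cascade(state, sboxes):
    a, b = state
    for S in sboxes:                    # each round queries its own oracle
        a, b = b, a ^ S[b]
    return a, b

def cascade_inv(state, sboxes):
    a, b = state
    for S in reversed(sboxes):
        a, b = b ^ S[a], a              # inverts one round; note that S
    return a, b                         # itself need not be inverted

sboxes = [keyed_sbox(k) for k in [11, 22, 33, 44]]
x = (0x3A, 0xC5)
assert cascade_inv(cascade(x, sboxes), sboxes) == x
```

This round circuit is invertible even when the round oracles are not, which is one flavor of the invertibility results the abstract refers to.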
LeOPaRd: Towards Practical Post-Quantum Oblivious PRFs via Interactive Lattice Problems
In this work, we introduce a more efficient post-quantum oblivious PRF (OPRF) design, called LeOPaRd. Our proposal is round-optimal and supports verifiability and partial obliviousness, all of which are important for practical applications. The main technical novelty of our work is a new method for computing samples of MLWE (Module Learning With Errors) in a two-party setting. To do this, we introduce a new family of interactive lattice problems, called interactive MLWE with re-use (iMLWE-RU). We rigorously study the hardness of iMLWE-RU and reduce it (under Gaussian error sampling) to a more standard MLWE-like problem where the adversary is additionally given access to a randomized MLWE PRF oracle. We believe the iMLWE-RU problem family can be of independent interest for other interactive protocols.
LeOPaRd exploits an iMLWE-RU assumption to realize a lattice-based OPRF design without relying on heavy machinery such as noise flooding and fully homomorphic encryption used in earlier works. LeOPaRd can feature around 136 KB total communication, compared to 300+ KB in earlier works. We also identify gaps in some existing constructions and models, and propose appropriate fixes.
Attacking Poseidon via Graeffe-Based Root-Finding over NTT-Friendly Fields
This paper explores the algebraic structure of the Poseidon and Poseidon2 permutations over NTT-friendly finite fields, with a focus on preimage recovery via root-finding techniques. We introduce an algorithm for efficiently identifying single roots of high-degree univariate polynomials that emerge from these constructions, based on the Graeffe transform and the tangent Graeffe method. Our approach is evaluated on reduced-round bounty instances of these permutations at various security levels, as proposed by the Ethereum Foundation, demonstrating practical effectiveness. These results yield new insights into the security of permutation-based cryptographic primitives instantiated over NTT-friendly prime fields.
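As background, the classical Graeffe transform squares the roots of a polynomial; a minimal sketch over a prime field (the prime chosen here is illustrative, not necessarily one used in the paper):

```python
# Graeffe transform over a prime field: from f, build g with g(x^2) = f(x)f(-x),
# so the roots of g are the squares of the roots of f. Iterating k times maps a
# root r to r^(2^k), which is the engine behind Graeffe-based root-finding.
P = 2**31 - 2**27 + 1                  # an NTT-friendly prime (illustrative)

def poly_mul(f, g, p):                 # coefficients low-to-high
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return out

def graeffe(f, p):
    f_neg = [(-c if i % 2 else c) % p for i, c in enumerate(f)]  # f(-x)
    prod = poly_mul(f, f_neg, p)       # f(x)*f(-x) has only even powers
    assert all(c == 0 for c in prod[1::2])
    return prod[0::2]                  # g, with g(x^2) = f(x)f(-x)

def ev(f, x, p):
    return sum(c * pow(x, i, p) for i, c in enumerate(f)) % p

f = poly_mul([(-2) % P, 1], [(-3) % P, 1], P)   # (x-2)(x-3): roots 2 and 3
g = graeffe(f, P)
assert ev(g, 4, P) == 0 and ev(g, 9, P) == 0    # g has roots 4 and 9
```

After enough iterations a single root can be isolated because its image lands in a small multiplicative subgroup; the tangent variant additionally tracks first-order information to recover the root itself.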
Justvengers: Batched VOLE ZK Disjunctions in Communication
Recent progress on zero-knowledge proofs (ZKPs) based on vector oblivious linear evaluation (VOLE) offers a promising paradigm for scaling ZKPs over extremely large statements. In particular, VOLE-based ZK is currently the best choice in terms of end-to-end execution time. However, VOLE-based ZK incurs high communication overhead — it usually scales linearly with the circuit size.
To mitigate this, existing literature considers VOLE-based ZK over structured statements. In this work, we focus on the batched disjunctive statement — and agree on fan-in circuits over a field ; each circuit is of size with inputs. 's goal is to demonstrate the knowledge of witnesses , for each s.t. where neither nor is revealed. Batched disjunctive statements are effective, e.g., in emulating the CPU execution inside ZK. Note that the naïve solution results in a circuit of size .
To prove such a statement using VOLE-based ZK, the prior state-of-the-art protocol (Weng et al., CCS'22) incurred communication by additionally relying on AHE, whereas (Yang et al., CCS'23) achieved communication using only VOLE.
In this work, we combine these two protocols non-trivially and present a novel protocol — targeting the batched disjunctive statement — that incurs only communication and computation for prover, using AHE and VOLE.
Admissible Parameters for the Crossbred Algorithm and Semi-regular Sequences over Finite Fields
Multivariate public key cryptography (MPKC) is one of the most promising alternatives for building quantum-resistant signature schemes, as evidenced in NIST's call for additional post-quantum signature schemes. The main assumption in MPKC is the hardness of the Multivariate Quadratic (MQ) problem, which seeks a common root of a system of quadratic polynomials over a finite field. Although the Crossbred algorithm is among the most efficient algorithms for solving MQ over small fields, its complexity analysis stands on shaky ground. In particular, it is not clear for which parameters it works and under what assumptions.
In this work, we provide a rigorous analysis of the Crossbred algorithm over any finite field. We provide a complete explanation of the series of admissible parameters proposed in previous literature and explicitly state the regularity assumptions required for its validity. Moreover, we show that the series does not tell the whole story, hence we propose an additional condition for Crossbred to work. Additionally, we define and characterize a notion of regularity for systems over a small field, which is one of the main building blocks in the series of admissible parameters.
CL-SCA: A Contrastive Learning Approach for Profiled Side-Channel Analysis
Side-channel analysis (SCA) based on machine learning, particularly neural networks, has gained considerable attention in recent years. However, previous works predominantly focus on establishing connections between labels and the related profiled traces. These approaches primarily capture label-related features and often overlook the connections between traces of the same label, resulting in the loss of some valuable information. In addition, the attack traces also contain valuable information that can be used in the training process to assist model learning. In this paper, we propose a profiled SCA approach based on contrastive learning, named CL-SCA, to address these issues. This approach extracts features by emphasizing the similarities among traces, thereby improving the effectiveness of key recovery while maintaining the advantages of the original SCA approach. Through experiments on different datasets from different platforms, we demonstrate that CL-SCA significantly outperforms other approaches. Moreover, by incorporating attack traces into the training process using our approach, we can further enhance its performance. This extension can improve the effectiveness of key recovery, which is fully verified through experiments on different datasets.
Side-channel safe conditional moves and swaps
Constant-time implementations are a cornerstone of secure cryptographic systems, particularly in the context of key exchange protocols and digital signature schemes. These implementations are designed to eliminate timing side-channel vulnerabilities by ensuring that the program's execution time is independent of secret data. A fundamental building block for achieving constant-time behavior is the conditional move operation. Unlike traditional branching constructs (such as if statements), which may introduce data-dependent timing variations, conditional moves allow developers to write logic that behaves identically at the hardware level regardless of input values. As a result, they are widely used in cryptographic libraries and standards to ensure both functional correctness and resistance to timing attacks. In this work, we describe our efforts to implement elliptic curve cryptography with some immunity against certain power leakage side-channel attacks, using standard C and Rust code.
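The mask-based trick behind such conditional moves can be illustrated as follows (Python carries no timing guarantees, so this only demonstrates the branch-free logic that C, Rust, or assembly implementations rely on):

```python
# The mask trick behind constant-time conditional moves: stretch a 0/1
# condition into an all-zeros / all-ones mask, then blend with AND/OR/XOR so
# that no branch depends on secret data.
MASK64 = (1 << 64) - 1

def ct_select(cond, a, b):
    """Return a if cond == 1 else b, with no branch on cond."""
    mask = (-cond) & MASK64            # 1 -> 0xFFFF...FF, 0 -> 0
    return (a & mask) | (b & ~mask & MASK64)

def ct_cswap(swap, a, b):
    """Swap (a, b) when swap == 1, branch-free (as in Montgomery ladders)."""
    mask = (-swap) & MASK64
    t = (a ^ b) & mask
    return a ^ t, b ^ t

assert ct_select(1, 7, 9) == 7 and ct_select(0, 7, 9) == 9
assert ct_cswap(1, 5, 6) == (6, 5) and ct_cswap(0, 5, 6) == (5, 6)
```

In real C or Rust code the same blend is written over fixed-width integers, and care is needed to keep the compiler from reintroducing a branch.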
Diving Deep Into UC: Uncovering and Resolving Issues in Universal Composability
Introduced by Canetti in 2001, Universal Composability (UC) is a widely adopted security model that enables the specification and proof of security for a broad range of protocols, offering strong security guarantees.
At its core lies the universal composition theorem (UC theorem), which ensures that protocols proven secure within the framework remain secure even when deployed in real-world environments with multiple instances of them.
In this work, we present two key contributions. First, we identify several problems with the UC framework, in particular the UC Theorem. They include counterexamples, limitations that make it unusable for important classes of protocols, and weaknesses in its proof. These problems reveal flaws in nearly all the fundamental concepts of UC.
Second, we update the main concepts of UC to address these problems. Although these revisions are nontrivial, our updated definitions are intended to stay as closely aligned with the original model as possible, while providing greater simplicity overall. To ensure the validity of these updates, we present a proof of the updated UC theorem, which is more detailed and modular than the original.
Fast elliptic curve scalar multiplications in SN(T)ARK circuits
Proof systems for arbitrary computations have found many applications in recent years. However, the proving algorithm has considerable complexity tied to the size of the computation being proved. Thus, proving large computations is quite inefficient. One such large computation is scalar multiplication over an elliptic curve. In this work, we provide new techniques for reducing the time needed to prove a scalar multiplication, using integer lattice reduction or a (half) extended Euclidean algorithm in a ring of integers. We investigate optimizations in the case of small (complex multiplication) discriminant curves, and their generalization to multi-scalar multiplications as used in signature verification. We provide an optimized Golang implementation for different elliptic curves in different proof system settings. The speed-up in proving time is between 22% and 53% compared to the previous state of the art.
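A sketch of the kind of scalar decomposition that a half extended Euclidean algorithm enables (the classic GLV technique; the modulus and eigenvalue below are toy stand-ins for a CM curve's group order and endomorphism eigenvalue, not the paper's parameters):

```python
from fractions import Fraction
import math

# GLV-style scalar decomposition via a half (partial) extended Euclidean
# algorithm: write k = k1 + k2*lam (mod n) with k1, k2 of size about sqrt(n),
# roughly halving the scalar length inside the circuit.

def short_basis(n, lam):
    # run Euclid on (n, lam) until the remainder drops below sqrt(n);
    # every pair (r_i, -t_i) satisfies r_i - t_i*lam = 0 (mod n)
    r0, r1, t0, t1 = n, lam, 0, 1
    while r1 * r1 >= n:
        q = r0 // r1
        r0, r1, t0, t1 = r1, r0 - q * r1, t1, t0 - q * t1
    v1 = (r1, -t1)
    q = r0 // r1                     # one extra step; keep the shorter neighbour
    v2 = min((r0, -t0), (r0 - q * r1, -(t0 - q * t1)),
             key=lambda v: v[0] * v[0] + v[1] * v[1])
    return v1, v2

def decompose(k, n, lam):
    (a1, b1), (a2, b2) = short_basis(n, lam)
    det = a1 * b2 - a2 * b1          # equals +/-n when (v1, v2) is a basis
    c1 = round(Fraction(k * b2, det))
    c2 = round(Fraction(-k * b1, det))
    return k - c1 * a1 - c2 * a2, -c1 * b1 - c2 * b2

n, lam, k = 1000003, 123457, 876543  # toy values, for illustration only
k1, k2 = decompose(k, n, lam)
assert (k1 + k2 * lam - k) % n == 0  # the decomposition is correct mod n
assert max(abs(k1), abs(k2)) < 2 * math.isqrt(n)  # both halves are short
```

On a real CM curve, lam is the eigenvalue of an efficient endomorphism, so the two short half-length scalars replace one full-length scalar multiplication.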
A Formal Analysis of Apple’s iMessage PQ3 Protocol
We present the formal verification of Apple’s iMessage PQ3, a highly performant, device-to-device messaging protocol offering strong security guarantees even against an adversary with quantum computing capabilities. PQ3 leverages Apple’s identity services together with a custom, post-quantum secure initialization phase and afterwards it employs a double ratchet construction in the style of Signal, extended to provide post-quantum, post-compromise security.
We present a detailed formal model of PQ3, a precise specification of its fine-grained security properties, and machine-checked security proofs using the TAMARIN prover. Particularly novel is the integration of post-quantum secure key encapsulation into the relevant protocol phases and the detailed security claims along with their complete formal analysis. Our analysis covers both key ratchets, including unbounded loops, which was believed by some to be out of scope of symbolic provers like TAMARIN (it is not!).
Integral cryptanalysis in characteristic
Integral and ultrametric integral cryptanalysis are generalized to finite rings of prime characteristic that are isomorphic to a product of fields. This extends, for instance, the complete state of the art in integral cryptanalysis from to , for all prime powers . A compact representation of transition matrices, based on convex polyhedra, is introduced to ensure that the proposed methods are computationally efficient even for large .
Automated tools are developed and applied to a few generic and several concrete primitives. The analysis shows that previous degree estimates for Feistel-GMiMC, HadesMiMC, AES-Prime, small-pSquare and mid-pSquare are overly optimistic. Furthermore, except for AES-Prime, these primitives do not meet their design criteria unless their number of rounds is substantially increased.
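The basic integral property over a prime field, which such degree estimates build on, can be checked directly (toy parameters, not taken from the paper):

```python
# The basic integral/balancedness property over F_p: the sum of x^i over all
# of F_p vanishes unless (p-1) divides i (with i > 0). Hence any function of
# algebraic degree below p-1 sums to zero over the whole field -- the
# characteristic-p analogue of the property integral attacks exploit.
p = 101

def field_sum(f):
    return sum(f(x) for x in range(p)) % p

assert all(field_sum(lambda x, i=i: pow(x, i, p)) == 0 for i in range(1, p - 1))
assert field_sum(lambda x: pow(x, p - 1, p)) == p - 1          # sums to -1

# a toy cube-map cipher: r rounds of x -> (x + k)^3 have degree at most 3^r,
# so while 3^r < p - 1 the sum over all inputs remains 0 (a distinguisher)
def rounds(x, keys):
    for k in keys:
        x = pow((x + k) % p, 3, p)
    return x

assert field_sum(lambda x: rounds(x, [7, 13, 29, 42])) == 0    # 3^4 = 81 < 100
```

An overly optimistic degree estimate means this zero-sum distinguisher survives more rounds than the designers intended, which is the kind of gap the abstract reports.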
CTng: Secure Certificate and Revocation Transparency
We present CTng, an evolutionary and practical PKI design that efficiently addresses multiple key challenges faced by deployed PKI systems. CTng ensures strong security properties, including guaranteed transparency of certificates and guaranteed, unequivocal revocation, achieved under NTTP-security, i.e., without requiring trust in any single CA, logger, or relying party. These guarantees hold even in the presence of arbitrary corruptions of these entities, assuming only a known bound (f) of corrupt monitors (e.g., f=8), with minimal performance impact.
CTng also enables offline certificate validation and preserves relying-party privacy, while providing scalable and efficient distribution of revocation updates. Furthermore, CTng is post-quantum ready, maintaining efficiency even with high-overhead quantum-secure signature schemes.
These properties significantly improve upon current PKI designs. In particular, while Certificate Transparency (CT) aims to eliminate single points of trust, the existing specification still assumes benign loggers. Addressing this through log redundancy is possible, but rather inefficient, limiting deployed configurations to f ≤ 2.
We present a security analysis and an evaluation of our open-source CTng prototype, showing that it is efficient and scalable under realistic deployment conditions.
Practical Zero-Trust Threshold Signatures in Large-Scale Dynamic Asynchronous Networks
Threshold signatures have become a critical tool in cryptocurrency systems, offering enhanced security by distributing the signing process among multiple signers. In this work, we distribute this process between a client and a permissionless decentralized blockchain, and present novel protocols for ECDSA and EdDSA/Schnorr signatures in this setting. Typical threshold access architectures used by trusted custodians suffer from the honeypot problem, wherein the more assets the custodian holds, the greater the incentive of compromising it.
Implementing threshold signatures over permissionless blockchains poses a few challenges.
First, existing networks typically work over an asynchronous reliable broadcast communication channel. Accordingly, our protocol is implemented over such a channel. As a result, it also benefits from identifiable abort, public verifiability, and guaranteed output delivery, and the client benefits from censorship resistance of blockchain systems.
Second, upon signing each block, the participating quorum may dynamically change and is post-determined.
Therefore, we design a fluid protocol that supports a post-determined dynamic quorum in each communication round, thereby complying with existing broadcast channel implementations. Third, in permissionless networks, parties may join, leave, and change their stake. Therefore, we offer protocols for network reconfiguration, with complexity independent of the number of clients in the system, and our protocol efficiently supports a weighted threshold access structure for the network. Specifically, the complexity of distributed key generation and presign depends only on the number of parties and not on the overall weight, and the amortized cost of sign depends only on the individual weight.
Furthermore, our protocol introduces key improvements, including the removal of zero-knowledge proofs towards the client, and presigns with a non-interactive client. For Schnorr, the presigns are client-independent, and can be collected by the blockchain in a common pool, available for all clients in the system. These optimizations reduce communication overhead and improve the system's ability to handle traffic spikes during high-demand periods.
Our protocol is UC-secure, and is therefore natively designed for multiple clients to use the system in parallel. Notably, we propose a novel assumption, Slightly-Enhanced ECDSA Unforgeability, offering concrete security for 256-bit elliptic curves for threshold ECDSA with support for parallel execution of presigns.
In addition to securing cryptocurrency wallets, we demonstrate how our protocol enables various cross-chain applications, such as decentralized bridges, future transactions, and wallet transfer. Our system is designed for interoperability across multiple blockchains, enhancing security, scalability, and flexibility for decentralized finance (DeFi) ecosystems.
Lightning Fast Secure Comparison for 3PC PPML
Privacy-preserving machine learning (PPML) techniques have gained significant popularity in the past years. Those protocols have been widely adopted in many real-world security-sensitive machine-learning scenarios. Secure comparison is one of the most important non-linear operations in PPML.
In this work, we focus on maliciously secure comparison in the 3-party MPC over ring setting. In particular, we propose a novel constant-round sign-bit extraction protocol in the preprocessing model. The communication of its semi-honest version is only 12.5% of that of the state-of-the-art (SOTA) constant-round semi-honest comparison protocol by Zhou et al. (Bicoptor, S&P 2023); the communication complexity of its malicious version is approximately 25% of that of the SOTA by Patra and Suresh (BLAZE, NDSS 2020), for .
Finally, the resulting ReLU protocol outperforms the SOTA secure ReLU evaluation solution (Bicoptor, S&P 2023) by in the semi-honest setting and in the malicious setting, respectively.
On Gaussian Sampling for -ary Lattices and Linear Codes with Lee Weight
We show that discrete Gaussian sampling for a -ary lattice is equivalent to codeword sampling for a linear code over with the Lee weight. This insight allows us to derive the theta series of a -ary lattice from the Lee weight distribution of the associated code. We design a novel Gaussian sampler for -ary lattices assuming an oracle that computes the symmetrized weight enumerator of the associated code.
We apply this sampler to well-known lattices, such as the , Barnes-Wall, and Leech lattices, highlighting both its advantages and limitations, which depend on the underlying code properties. For certain root lattices, we show that the sampler is indeed efficient, forgoing the need to assume an oracle. We also discuss applications of our results to digital signature schemes and the Lattice Isomorphism Problem. In many cases, our sampler achieves a significant speed-up compared to state-of-the-art sampling algorithms in cryptographic applications.
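The correspondence between lattice norms and Lee weights can be checked on a toy Construction-A example (parameters chosen here purely for illustration):

```python
from itertools import product

# Construction A attaches to a code C in (Z/q)^n the lattice L = C + q*Z^n.
# Per coordinate, the shortest representative of c_i + qZ has absolute value
# min(c_i, q - c_i), i.e. the Lee weight, so the code's Lee weight
# distribution determines the norm profile (theta series) of L.
q, n = 5, 2
C = {(0, 0), (1, 2), (2, 4), (3, 1), (4, 3)}     # toy linear code over Z/5

def lee_sq(c):
    return sum(min(x, q - x) ** 2 for x in c)    # squared Lee weight

# minimal nonzero squared norm of L, predicted from the code alone:
pred = min(min(lee_sq(c) for c in C if any(c)), q * q)  # q^2 from 0 + q*e_i

# brute-force check over all lattice points in a small box
pts = [v for v in product(range(-q, q + 1), repeat=n)
       if tuple(x % q for x in v) in C and any(v)]
assert min(sum(x * x for x in v) for v in pts) == pred == 5
```

The same per-coordinate reasoning, carried out for all codewords rather than only the minimum, is what lets the theta series of the lattice be read off from the code's (symmetrized) weight enumerator.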
Towards Building Scalable Constant-Round MPC from Minimal Assumptions via Round Collapsing
In this work, we study the communication complexity of constant-round secure multiparty computation (MPC) against a fully malicious adversary, considering both the honest majority setting and the dishonest majority setting. In the (strong) honest majority setting (where for a constant ), the best-known result without relying on FHE is given by Beck et al. (CCS 2023), based on the LPN assumption, which achieves communication, where is the security parameter and the achieved communication complexity is independent of the number of participants.
In the dishonest majority setting, the best-known result is achieved by Goyal et al. (ASIACRYPT 2024), which requires bits of communication and is based on the DDH and LPN assumptions.
In this work, we achieve the following results: (1) For any constant , we give the first constant-round MPC in the dishonest majority setting for corruption threshold with communication assuming random oracles and oblivious transfers, where is the circuit depth. (2) We give the first constant-round MPC in the standard honest majority setting (where ) with communication only assuming random oracles.
Unlike most of the previous constructions of constant-round MPCs that are based on multiparty garbling, we achieve our result by letting each party garble its local computation in a non-constant-round MPC that meets certain requirements. We first design a constant-round MPC that achieves communication assuming random oracles in the strong honest majority setting of . Then, we combine the party virtualization technique and the idea of MPC-in-the-head to boost the corruption threshold to for any constant assuming oblivious transfers to achieve our first result. Finally, our second result is obtained by instantiating oblivious transfers using a general honest-majority MPC and the OT extension technique built on random oracles.
Multivalued Broadcast with Optimal Length
A multi-valued broadcast protocol allows a sender to broadcast an -bit message to recipients.
For all relevant models, multi-valued broadcast protocols with asymptotically optimal communication complexity have been published.
Despite their very low communication complexity, these protocols perform poorly in modern networks.
Even if the network allows all parties to send messages at the same time, the execution time of the protocols is proportional to (instead of ).
Even if the network allows to use all bilateral channels at the same time, the execution time is still proportional to (instead of ).
We ask the natural question of whether multi-valued broadcast protocols exist which take time proportional to if parties can simultaneously send messages, and even time proportional to if the bilateral channels can be used simultaneously. We provide a complete characterization of multi-valued broadcast with a two-fold answer:
On the negative side, we prove that for , multi-valued broadcast requires time proportional to if parties can simultaneously send messages, and time proportional to if bilateral channels can be used simultaneously.
On the positive side, we prove that for (for any fixed ), multi-valued broadcast in time proportional to (when parties can send messages simultaneously), and proportional to (if bilateral channels can be used simultaneously), is possible. We provide such protocols both with cryptographic security and with statistical security.
SEEC: Memory Safety Meets Efficiency in Secure Two-Party Computation
Secure Multi-Party Computation (MPC) allows multiple parties to perform privacy-preserving computation on their secret data. MPC protocols based on secret sharing have high throughput which makes them well-suited for batch processing, where multiple instances are evaluated in parallel.
So far, practical implementations of secret sharing-based MPC protocols mainly focus on runtime and communication efficiency, so the memory overhead of protocol implementations is often overlooked. Established techniques to reduce the memory overhead for constant-round garbled circuit protocols cannot be directly applied to secret sharing-based protocols because they would increase the round complexity. Additionally, state-of-the-art implementations of secret sharing-based MPC protocols are implemented in C/C++ and may exhibit memory unsafety and memory leaks, which could lead to undefined behavior.
In this paper, we present SEEC: SEEC Executes Enormous Circuits, a framework for secret sharing-based MPC with a novel approach to address memory efficiency and safety without compromising on runtime and communication efficiency. We realize SEEC in Rust, a language known for memory-safety at close-to-native speed. To reduce the memory footprint, we develop an in-memory representation for sub-circuits. Thus, we never inline sub-circuit calls during circuit evaluation, a common issue that blows up memory usage in MPC implementations.
We compare SEEC with the state-of-the-art secret sharing-based MPC frameworks ABY (NDSS'15), MP-SPDZ (CCS'20), and MOTION (TOPS'22) w.r.t. runtime, memory, and communication efficiency. Our results show that our reliable and memory-safe implementation has competitive or even better performance.
The DROP Protocol: Dispute Resolution via Observation in Public for Verifiable, In-Person Voting
Dispute resolution has been a significant challenge in verifiable election protocols since such protocols were first proposed more than forty years ago. This work explores the problem from a new perspective and offers strong dispute resolution for in-person voting by depending on observers.
It proposes a simple definition of dispute resolution as a property of a voting protocol---a definition that is independent of any other security goal. It also presents the DROP protocol, a verifiable, in-person voting protocol that runs in the presence of observers who will always reach a correct conclusion in the case of a dispute without ever being able to compromise privacy or facilitate coercion.
REACTIVE: Rethinking Effective Approaches Concerning Trustees in Verifiable Elections
For more than forty years, two principal questions have been asked when designing verifiable election systems: how will the integrity of the results be demonstrated, and how will the privacy of votes be preserved? Many approaches have been taken towards answering the first question, such as the use of mixnets and homomorphic tallying. But, in the case of large-scale elections, the second question has always been answered in the same way: decryption capabilities are divided amongst multiple independent "trustees" so that a collusion is required to compromise privacy.
In practice, however, this approach can be fairly challenging to deploy. Even if multiple human trustees are chosen, they typically use software and often also hardware provided by a single voting system supplier, with few options to verify its quality even when they have the technical expertise to do so. As a result, we observe that trustees are rarely in a position to exercise independent judgment to maintain privacy.
This paper looks at several aspects of the trustee experience. It begins by surveying and discussing various cryptographic protocols that have been used for key generation in elections, explores their impact on the role of trustees, and notes that even the theory of proper use of trustees is more challenging than it might seem. This is illustrated by showing that one of the only references defining a full threshold distributed key generation (DKG) for elections defines an insecure protocol. Belenios, a broadly used open-source voting system, claims to rely on that reference for its DKG and security proof. Fortunately, it does not inherit the same vulnerability, and we offer a security proof for the Belenios DKG.
The paper then discusses various practical contexts, in terms of humans, software, and hardware, and their impact on the practical deployment of a trustee-based privacy model.
HAWK: Having Automorphisms Weakens Key
The search rank-2 module Lattice Isomorphism Problem (smLIP), over a cyclotomic ring of degree a power of two, can be reduced to an instance of the Lattice Isomorphism Problem (LIP) of at most half the rank if an adversary knows a nontrivial automorphism of the underlying integer lattice. Knowledge of such a nontrivial automorphism speeds up the key recovery attack on HAWK at least quadratically, which would halve the number of security bits.
Luo et al. (ASIACRYPT 2024) recently found an automorphism that breaks omSVP, the initial underlying hardness assumption of HAWK. The team of HAWK amended the definition of omSVP to include this so-called symplectic automorphism in their submission to the second round of NIST's standardization of additional signatures. This work provides confidence in the soundness of this updated definition, assuming smLIP is hard, since there are plausibly no more trivial automorphisms that allow winning the omSVP game easily.
Although this work does not affect the security of HAWK, it opens up a new attack avenue involving the automorphism group that may be theoretically interesting on its own.
Enhancing Meme Token Market Transparency: A Multi-Dimensional Entity-Linked Address Analysis for Liquidity Risk Evaluation
Meme tokens represent a distinctive asset class within the cryptocurrency ecosystem, characterized by high community engagement, significant market volatility, and heightened vulnerability to market manipulation. This paper introduces an innovative approach to assessing liquidity risk in meme token markets using entity-linked address identification techniques. We propose a multi-dimensional method integrating fund flow analysis, behavioral similarity, and anomalous transaction detection to identify related addresses. We develop a comprehensive set of liquidity risk indicators tailored for meme tokens, covering token distribution, trading activity, and liquidity metrics. Empirical analysis of tokens like BabyBonk, NMT, and BonkFork validates our approach, revealing significant disparities between apparent and actual liquidity in meme token markets. The findings of this study provide significant empirical evidence for market participants and regulatory authorities, laying a theoretical foundation for building a more transparent and robust meme token ecosystem.
Fast Fuzzy PSI from Symmetric-Key Techniques
Private set intersection (PSI) enables a sender holding a set X and a receiver holding a set Y to securely compute the intersection X ∩ Y. Fuzzy PSI (FPSI) is a PSI variant where the receiver learns the items y ∈ Y for which there exists x ∈ X with dist(x, y) ≤ δ with respect to some distance metric dist. Recently, Gao et al. (ASIACRYPT 2024) proposed the first FPSI protocols for the L∞ and Lp distances with linear complexity. They decomposed their FPSI construction into two steps: fuzzy mapping and fuzzy matching. However, their realizations of the two steps heavily rely on public-key operations, namely DH key exchange and additively homomorphic encryption, resulting in low efficiency.
In this work, we propose new FPSI protocols for the L∞ and Lp distances, primarily leveraging symmetric-key primitives.
We revisit the definition of fuzzy mapping and rigorously redefine it as a cryptographic scheme. We further introduce a consistency property for fuzzy mapping schemes, which simplifies the fuzzy matching step into plain PSI.
We then demonstrate how to execute the fuzzy mapping step while satisfying consistency. We also propose several new techniques that avoid the extensive use of computationally expensive public-key operations inherent in existing solutions.
We implement our FPSI protocols and compare them with the state-of-the-art FPSI protocols. Experiments show that our protocols outperform the state of the art under all the parameters we tested. Specifically, our protocols achieve a speedup in running time and a reduction in communication cost, depending on set sizes, dimension, and distance threshold.
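The plaintext functionality computed by an FPSI protocol (ignoring all security machinery) can be sketched as follows, here for the L∞ metric; the function names and the threshold parameter `delta` are illustrative, not the paper's.

```python
# Plaintext sketch of the fuzzy PSI functionality (not the protocol itself):
# the receiver learns the items y for which some sender item x lies within
# distance delta under the L-infinity metric.

def linf_dist(x, y):
    """L-infinity distance between two equal-length integer points."""
    return max(abs(a - b) for a, b in zip(x, y))

def fuzzy_intersection(sender_set, receiver_set, delta):
    """Receiver items that match some sender item within delta."""
    return [y for y in receiver_set
            if any(linf_dist(x, y) <= delta for x in sender_set)]

sender = [(10, 20), (100, 200)]
receiver = [(12, 19), (50, 50), (98, 203)]
print(fuzzy_intersection(sender, receiver, delta=5))  # [(12, 19), (98, 203)]
```

Plain PSI is the degenerate case delta = 0, which is why reducing fuzzy matching to plain PSI (as the consistency property above enables) is natural.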
Polocolo: A ZK-Friendly Hash Function Based on S-boxes Using Power Residues (Full Version)
Conventional hash functions are often inefficient in zero-knowledge proof settings, which has led to the design of several ZK-friendly hash functions. On the other hand, lookup arguments have recently been incorporated into zero-knowledge protocols, allowing for more efficient handling of ``ZK-unfriendly'' operations, and hence ZK-friendly hash functions based on lookup tables.
In this paper, we propose a new ZK-friendly hash function, dubbed Polocolo, that employs an S-box constructed using power residues. Our approach reduces the number of gates required for table lookups, in particular when combined with Plonk, allowing one to use such nonlinear layers over multiple rounds. We also propose a new MDS matrix for the linear layer of Polocolo. In this way, Polocolo requires fewer Plonk gates than state-of-the-art ZK-friendly hash functions. For example, Polocolo requires fewer Plonk gates than Anemoi, currently the most efficient ZK-friendly hash function, and fewer Plonk gates than Reinforced Concrete, one of the recent lookup-based ZK-friendly hash functions.
Symphony of Speeds: Harmonizing Classic McEliece Cryptography with GPU Innovation
The Classic McEliece key encapsulation mechanism (KEM), a candidate in the fourth-round post-quantum cryptography (PQC) standardization process by the National Institute of Standards and Technology (NIST), stands out for its conservative design and robust security guarantees. Leveraging the code-based Niederreiter cryptosystem, Classic McEliece delivers high-performance encapsulation and decapsulation, making it well-suited for various applications. However, there has been no systematic implementation of Classic McEliece on GPU platforms. This paper presents the first high-performance implementation of Classic McEliece on NVIDIA GPUs. First, we present a GPU-based implementation of Classic McEliece that uses a ``CPU-GPU'' heterogeneous approach and a kernel-fusion strategy, significantly reducing global memory accesses and optimizing memory access patterns. This yields encapsulation and decapsulation performance of 28,628,195 ops/s and 3,051,701 ops/s, respectively, for McEliece348864. Second, core operations such as the Additive Fast Fourier Transform (AFFT) and the Transpose AFFT (TAFFT) are optimized: we introduce the concept of the (T)AFFT stepping chain and propose two universal schemes, the Memory Access Stepping Strategy (MASS) and the Layer-Fused Memory Access Stepping Strategy (LFMASS), which achieve speedups of 30.56% and 38.37%, respectively, over the native GPU-based McEliece6960119 implementation. Third, extensive experiments on the NVIDIA RTX 4090 show significant performance gains, achieving up to 344× higher encapsulation and 125× higher decapsulation throughput compared to the official CPU-based AVX implementation, decisively outperforming existing ARM Cortex-M4 and FPGA implementations.
SCMAC and LOL2.0: An AEAD Design Framework and A New Version of LOL Stream Cipher Design Framework
In this paper, we introduce SCMAC, a general framework that transforms large-memory stream ciphers into AEAD schemes. It represents an intermediate design paradigm between Encrypt-then-MAC and dedicated single-pass AEAD, partially integrating encryption and authentication mechanisms while mitigating the risk of state leakage associated with immediate absorption and squeezing. Consequently, this approach harmonizes high performance with enhanced security. Additionally, we propose LOL2.0, an enhanced version of the blockwise stream cipher design framework LOL. This new framework improves security through modifications to the FSM update and output functions, and increases flexibility in constructing LFSR components. Based on SCMAC and LOL2.0, we present two AEAD ciphers, LOL2.0-Mini and LOL2.0-Double, which support both stream cipher and AEAD modes. These ciphers are tailored to Beyond 5G/6G environments, offering a 256-bit key length and resistance to known cryptanalysis methods, including differential, linear, and integral attacks. They also provide 128-bit security against forgery attacks in the nonce-respecting setting. Due to their compatibility with AES-NI and SIMD instructions, LOL2.0-Mini and LOL2.0-Double achieve software performance of 90 Gbps and 144 Gbps in stream cipher mode, respectively. In AEAD mode, they perform at 59 Gbps and 110 Gbps, significantly faster than their predecessor's Encrypt-then-MAC versions.
Card-Based Protocol Counting Connected Components of Graphs
Card-based cryptography is a research area that realizes cryptographic functionality, such as secure multiparty computation and zero-knowledge proofs, using a deck of physical cards and/or other non-electrical tools. Motivated by zero-knowledge proofs for solutions of pencil puzzles, a recent line of work studies card-based protocols that verify the connectivity of a set of cells or edges on lattice-shaped boards. In this paper, we generalize the problem to counting the connected components of subsets of any graph, and propose a card-based protocol for this problem.
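The functionality that such a protocol computes securely, counting connected components of an induced subgraph, can be sketched in plain code with union-find; all names here are illustrative, and this is the cleartext computation only, not the card-based realization.

```python
# Plaintext reference for the functionality the card-based protocol computes
# securely: the number of connected components of the subgraph induced by a
# chosen vertex subset, via union-find.

def count_components(vertices, edges, subset):
    """Connected components of the subgraph induced by `subset`."""
    subset = set(subset)
    parent = {v: v for v in subset}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for u, v in edges:
        if u in subset and v in subset:
            parent[find(u)] = find(v)
    return len({find(v) for v in subset})

edges = [(0, 1), (1, 2), (3, 4)]
print(count_components(range(5), edges, subset=[0, 1, 3, 4]))  # 2
```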
SPECK: Signatures from Permutation Equivalence of Codes and Kernels
The ongoing search for secure post-quantum cryptographic primitives has led to the development of numerous novel digital signature schemes. In this paper we introduce SPECK, a new signature protocol based on the similarities between the Permuted Kernel Problem (PKP) and the Permutation Code Equivalence Problem (PCE). At its core, SPECK is built on the permutation version of LESS, but introduces a key modification to the commitment step. Instead of committing to an entire permuted code, the prover commits to a random relaxed instance (of a problem we call Permutation Equivalence of Codes and Kernels) by randomly choosing a codeword from a random permutation of the initial code. In this sense, the secret key is used as a trapdoor to solve the committed instance. The new approach allows for faster verification that does not involve Gaussian elimination, while maintaining roughly the same signature size as LESS. We present the protocol in detail and provide a deep analysis of the security of the newly introduced assumptions.
This work introduces a Hypercube-Wise Optimized polynomial commitment scheme based on Lattices over ring Fields.
The scheme achieves succinctness in proof size and verifier time, along with linear prover cost. It supports both univariate and multilinear polynomials under a unified framework.
Inspired by the two-dimensional tensor structure employed in \cite{golovnev2021brakedown} to achieve sublinear efficiency, we generalize the idea to a k-dimensional tensor (hypercube) structure and design a k-round recursive proof protocol in which each round performs a dimensionality reduction. By setting k appropriately, our scheme achieves near-optimal asymptotic performance.
The scheme is fully transparent and relies only on the standard lattice assumption over rings. In terms of concrete efficiency, for polynomials with coefficients, our scheme yields proof sizes smaller than the lattice-based scheme of \cite{cini2024polynomial} (Crypto'24) and than \cite{albrecht2024slap} (Eurocrypt'24). Compared to \cite{nguyen2024greyhound} (Crypto'24), our proof size is of similar magnitude, while achieving significantly lower verifier time.
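The dimensionality-reduction idea, as we read the abstract, can be illustrated with a toy tensor-folding loop: view N = n^k coefficients as a k-dimensional tensor and collapse one axis per round with a verifier-chosen linear combination. This is only a structural sketch over the integers, not the lattice-based protocol.

```python
# Toy sketch (assumed reading of the abstract): each of k rounds collapses
# one axis of an n x ... x n tensor, shrinking the claim by a factor n,
# until a single scalar claim remains after k rounds.

import numpy as np

def fold_once(tensor, challenge):
    """Collapse the first axis: sum_i challenge[i] * tensor[i, ...]."""
    return np.tensordot(challenge, tensor, axes=(0, 0))

n, k = 4, 3
tensor = np.arange(n**k, dtype=np.int64).reshape((n,) * k)
for round_idx in range(k):
    challenge = np.random.randint(0, 97, size=tensor.shape[0])
    tensor = fold_once(tensor, challenge)
print(tensor.shape)  # () : after k rounds a single scalar claim remains
```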
SQIsign2D : New SQIsign2D Variant by Leveraging Power Smooth Isogenies in Dimension One
In this paper, we propose SQIsign2D , a novel digital signature scheme within the SQIsign2D family. Unlike other SQIsign2D variants, SQIsign2D employs the prime as the field characteristic, where , and . By leveraging accessible -isogenies, SQIsign2D significantly reduces the degree requirements for two-dimensional isogeny computations, thereby lowering the overall computational overhead compared to other SQIsign2D variants.
We also provide a proof-of-concept implementation of SQIsign2D , and give an efficiency comparison between SQIsign2D and other SQIsign2D variants. In particular, the experimental results demonstrate that the key generation and signing phases of SQIsign2D are more than twice as fast as those of SQIsign2D-East at the NIST-I security level. Additionally, the verification performance of SQIsign2D exhibits marginally improved efficiency.
Rep3 Reloaded: On the Cost of Function-Dependent Preprocessing in Semi-Honest 3PC with Honest Majority
Rep3 denotes the implementation of semi-honest three-party computation with an honest majority in MP-SPDZ (CCS'20). It uses replicated secret sharing with one message per multiplication and party as proposed by Araki et al. (CCS'16). This approach is rivaled by Astra (CCSW'19) and Trio (PETS'25), which use function-dependent preprocessing. The latter is more involved than, e.g., Beaver triples which can be used as a commodity.
In this work, we present a full implementation of Astra and Trio in MP-SPDZ, and we evaluate the costs of the different approaches. We show the equivalence of the schemes, which implies that a protocol in any of the schemes can be translated to one in another with the same overall communication cost. We also present an improvement to two important building blocks for privacy-preserving computation, namely secure comparison and probabilistic truncation used in fixed-point arithmetic. To evaluate our implementation, we have benchmarked machine learning training and inference in all three schemes, improving on Keller and Sun (ICML'22) by over 30%. Our implementation also highlights the large storage requirements of function-dependent preprocessing as it runs the two phases separately. To the best of our knowledge, this is the first implementation to do so.
The Accidental Computer: Polynomial Commitments from Data Availability
In this paper, we show two simple variations of a data availability scheme which enable it to act as a multilinear polynomial commitment scheme over the data in a block. The first variation enables commitments over all of the block's data with zero prover overhead: the data availability construction simply serves both purposes. The second variation allows commitments over subsets of data with nonzero but still concretely small proving costs, since most work is already done during data encoding. This works especially well for blockchains with a high degree of data parallelism, as data-parallel computation is particularly amenable to efficient GKR proofs. Since, in GKR, opening the polynomial commitment contributes significantly to prover costs, our construction enables the prover to reuse work already done by the data availability scheme, reducing—or wholly removing—work associated with the polynomial commitment scheme.
Jagged Polynomial Commitments (or: How to Stack Multilinears)
Modern SNARK constructions, almost ubiquitously, rely on a polynomial commitment scheme (PCS) --- a method by which a prover can commit to a large polynomial and later provide evaluation proofs of the form "P(x)=y" to the verifier.
In the context of zkVMs (i.e., proof systems for general-purpose RAM computations), the common design is to represent the computation trace as a sequence of tables, one per CPU instruction, and to commit to these tables, or even their individual columns, as separate polynomials. Committing separately to these polynomials incurs a large overhead in verification costs, especially in hash-based systems.
In this work we drastically reduce this cost via a new construction which we call the jagged PCS. This PCS enables the prover to commit to the entire computation trace as a single polynomial, but then allows for the verifier to emulate access to the individual table or column polynomials, so that the arithmetization can proceed in the usual manner. The jagged PCS may be thought of as a sparse PCS for a very particular form of sparsity - namely, a "jagged" matrix in which each column has a different height.
Our construction of the jagged PCS is highly performant in practice. In contrast to existing sparse PCS constructions for general sparse polynomials, the jagged PCS does not require the prover to commit to any additional oracles and the prover cost is dominated by 5 finite field multiplications per element in the trace area. Furthermore, we implement the verifier as an arithmetic circuit that depends only on the total trace area - thereby significantly reducing the problem of "combinatorial explosion" often encountered in zkVM recursion.
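The jagged layout itself, a matrix whose columns have different heights, flattened into a single vector, can be illustrated as follows; the offset-table representation is our own illustrative choice, not necessarily the paper's.

```python
# Minimal sketch of a "jagged" matrix layout: columns of different heights
# are stacked into one flat vector, and a prefix-sum offset table translates
# (column, row) positions into indices of the single flat vector, so a
# verifier can emulate access to individual columns.

from itertools import accumulate

def jagged_offsets(heights):
    """Prefix sums: offsets[c] is where column c starts in the flat vector."""
    return [0] + list(accumulate(heights))

def flat_index(offsets, col, row):
    assert row < offsets[col + 1] - offsets[col], "row exceeds column height"
    return offsets[col] + row

columns = [[1, 2, 3], [4], [5, 6]]      # three columns, heights 3, 1, 2
flat = [v for col in columns for v in col]
offsets = jagged_offsets([len(c) for c in columns])
print(flat[flat_index(offsets, col=2, row=1)])  # 6
```

In the actual scheme the flat vector is the evaluation table of one committed polynomial; the point of the construction is that this index translation can be verified without committing to extra oracles.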
Automated Verification of Consistency in Zero-Knowledge Proof Circuits
Circuit languages like Circom and Gnark have become essential tools for programmable zero-knowledge cryptography, allowing developers to build privacy-preserving applications. These domain-specific languages (DSLs) encode both the computation to be verified (as a witness generator) and the corresponding arithmetic circuits, from which the prover and verifier can be automatically generated. However, for these programs to be correct, the witness generator and the arithmetic circuit need to be mutually consistent in a certain technical sense, and inconsistencies can result in security vulnerabilities. This paper formalizes the consistency requirement for circuit DSLs and proposes the first automated technique for verifying it. We evaluate the method on hundreds of real-world circuits, demonstrating its utility for both automated verification and uncovering errors that existing tools are unable to detect.
LightCROSS: A Secure and Memory Optimized Post-Quantum Digital Signature CROSS
CROSS is a code-based post-quantum digital signature scheme based on a zero-knowledge (ZK) framework. It is a second-round candidate of the National Institute of Standards and Technology’s additional call for standardizing post-quantum digital signatures. The memory footprint of this scheme is prohibitively large, especially for small embedded devices. In this work, we propose various techniques to reduce the memory footprint of the key generation, signature generation, and verification by as much as 50%, 52%, and 74%, respectively, on an ARM Cortex-M4 device. Moreover, our memory-optimized implementations adapt the countermeasure against the recently proposed (ASIACRYPT-24) fault attacks against the ZK-based signature schemes.
Rock and a Hard Place: Attack Hardness in Neural Network-assisted Side Channel Analysis
Side-channel analysis (SCA) has become a crucial area in ensuring the security of hardware implementations against potential vulnerabilities. With advancements in machine learning (ML) and artificial intelligence (AI), neural network (NN)-assisted techniques for SCA have demonstrated significant effectiveness. However, a fundamental question remains unanswered: are traces corresponding to different subkeys equally hard to attack? This paper addresses this issue by leveraging explainable AI techniques to analyze the hardness levels and, consequently, the root cause of hardness. To systematically investigate this problem, we reintroduce hardness metrics known to the ML community into SCA. These metrics include query hardness (QH), log odds (LO), and matrix-based Rényi entropy (MRE). The challenge in this regard is that virtually all hardness metrics in ML cannot be adopted as they are, because ML and SCA metrics have conflicting goals, namely boosting accuracy and rank, respectively. Through careful experimentation, we identify the shortcomings of QH and LO in SCA and recommend MRE as a suitable hardness metric for SCA.
We also study how hardness has been viewed in SCA, where recent work has suggested an influence of class “labels” on attack difficulty. Employing rigorous evaluation, our paper demonstrates that no statistically significant evidence can be found to support this claim. This leads us to the question of how much traces’ time samples affect the inherent hardness in distinguishing key candidates. Our novel explainable AI (XAI) approach not only answers this, but also links hardness to rank, the common performance metric in SCA. Our findings indicate that hardness values are influenced mainly by time samples rather than specific key labels. Furthermore, we examine whether hardness captures intrinsic properties of the traces, specifically their lack of separability in feature space due to their inherent similarities. To this end, we introduce, for the first time in the context of SCA, the use of maximum mean discrepancy (MMD) as a principled metric. MMD effectively links trace hardness with attack difficulty by quantifying distributional differences induced by traces’ time samples. In addition to visualization techniques showing the root cause of hardness based on MMD, we employ XAI to explain the connection between MMD and attack hardness. Our proposed methodology enhances the understanding of attack difficulty in SCA and contributes to developing more robust evaluation metrics for profiling attacks.
Structure-Preserving Compressing Primitives: Vector Commitments and Accumulators and Applications
Compressing primitives, such as accumulators and vector commitments, allow one to represent large data sets with a compact, ideally constant-size value. Moreover, they support operations like proving membership or non-membership with minimal, ideally also constant-size, storage and communication overhead. In recent years, these primitives have found numerous practical applications, with many constructions based on various hardness assumptions. So far, however, it has been elusive to construct these primitives in a strictly structure-preserving setting, i.e., in a bilinear group in such a way that messages, commitments, and openings are all elements of the two source groups. Interestingly, backed by existing impossibility results, not even conventional commitments with such constraints are known in this setting. However, in many practical applications it would be convenient or even required for these primitives to be structure-preserving, e.g., to commit to or accumulate group elements.
In this paper we investigate whether strictly structure-preserving compressing primitives can be realized. We close this gap by presenting the first strictly structure-preserving commitment that is shrinking (and in particular constant-size). We circumvent existing impossibility results by employing a more structured message space, i.e., a variant of the Diffie-Hellman message space. Our main results are constructions of structure-preserving vector commitments as well as structure-preserving accumulators. We first discuss generic constructions and then present concrete constructions under the Diffie-Hellman Exponent assumption. To demonstrate the usefulness of our constructions, we discuss various applications.
Most notably, we present the first entirely practical constant-size ring signature scheme in bilinear groups (i.e., the discrete logarithm setting). Concretely, using the popular BLS12-381 pairing-friendly curve, our ring signatures achieve a size of roughly bits.
On the Fiat–Shamir Security of Succinct Arguments from Functional Commitments
We study the security of a popular paradigm for constructing SNARGs, closing a key security gap left open by prior work. The paradigm consists of two steps: first, construct a public-coin succinct interactive argument by combining a functional interactive oracle proof (FIOP) and a functional commitment scheme (FC scheme); second, apply the Fiat–Shamir transformation in the random oracle model. Prior work neither considered this generalized setting nor proved the security of this second step (even in special cases).
We prove that the succinct argument obtained in the first step satisfies state-restoration security, thereby ensuring that the second step does in fact yield a succinct non-interactive argument. This is provided the FIOP satisfies state-restoration security and the FC scheme satisfies a natural state-restoration variant of function binding (a generalization of position binding for vector commitment schemes).
Moreover, we prove that notable FC schemes satisfy state-restoration function binding, allowing us to establish, via our main result, the security of several SNARGs of interest (in the random oracle model). This includes a security proof of Plonk, in the ROM, based on ARSDH (a falsifiable assumption).
Improved differential cryptanalysis of SPEEDY
SPEEDY is a family of lightweight block ciphers designed by Leander et al. Several differential attacks have been reported on the SPEEDY variants. However, nearly all of these attacks are based on differential characteristics with probabilities that differ from their reported values. These discrepancies arise from incorrect calculations of the (key-averaged) probability, particularly in consecutive steps within one round without intermediate key addition. In this paper, we revisit all reported differential characteristics and accurately calculate their key-averaged probabilities using quasidifferential trails. We extend this to also estimate the fixed-key probability. Our analysis reveals several characteristics with zero or significantly altered probability, invalidating several proposed attacks. We further implement a search algorithm and find a 5.5-round differential distinguisher that can be used to mount a full-round key recovery attack with a data complexity of and a time complexity of . The memory complexity varies: in the chosen-plaintext setting, it is , whereas in the chosen-ciphertext setting, it is .
Tweakable Permutation-based Luby-Rackoff Constructions
Liskov, Rivest, and Wagner, in their seminal work, formulated tweakable blockciphers and proposed two blockcipher-based design paradigms, LRW1 and LRW2, where the basic design strategy is to xor the masked tweak into the input and output of a blockcipher. The 2-round cascaded LRW2 and the 4-round cascaded LRW1 have been proven secure up to queries, but n-bit optimal security still remains elusive for these designs. Liskov et al. also posed the open challenge of embedding the tweak directly in the blockcipher, and to address this, Goldenberg et al. introduced the tweakable Luby-Rackoff (LR) constructions. They showed that if the internal primitives are random functions, then for tweaks with blocks, the construction needs rounds to be optimally n-bit CPA secure and rounds to be optimally n-bit CCA secure, where respectively and of those rounds are required to process the tweaks. Since blockciphers can be designed much more efficiently than pseudorandom functions, in many practical applications the internal primitives of LR ciphers are instantiated with blockciphers; this, however, incurs a birthday-bound loss, which is not ideal for, say, lightweight cryptography.
This paper addresses the following two key questions affirmatively: (1) Can Goldenberg et al.'s results be extended to LR constructions with random permutations as internal primitives without compromising the optimal n-bit security? (2) Can the number of rounds required for handling long tweaks be reduced?
We formally define TLR-compatible functions for processing the tweak which, when composed with 4 rounds and 5 rounds of the LR construction with random permutations as internal primitives, yield n-bit CPA and CCA secure tweakable permutations, respectively. For the security analysis, we prove a general Mirror Theory result for three permutations. We also propose instantiating TLR-compatible functions with one round of LR, where a permutation (resp., two AXU hash functions) is used to mask single-block tweaks (resp., variable-length tweaks), thus obtaining n-bit CPA (resp., CCA) secure tweakable permutation candidates using (resp., ) LR rounds, a significant reduction from the tweak-length-dependent results of Goldenberg et al. As a corollary, we also show n-bit CPA (resp., CCA) security of the -round (resp., -round) permutation-based LR construction, which is quite an improvement over the existing -bit security proved by Guo et al.
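As background for the LR (Feistel) structure discussed above, here is a toy 4-round Luby-Rackoff network whose round primitives are permutations rather than functions. It is a structural sketch only: the tweak processing, masking, and all security-relevant parameters of the paper are omitted, and the 8-bit half size and seeded permutations are our own illustrative choices.

```python
# Toy 4-round Luby-Rackoff (Feistel) network over 8-bit halves, with random
# permutations as the internal round primitives. Each round maps
# (L, R) -> (R, L xor P(R)); decryption runs the rounds in reverse.

import random

def random_permutation(seed, size=256):
    rng = random.Random(seed)
    table = list(range(size))
    rng.shuffle(table)
    return table

ROUNDS = [random_permutation(i) for i in range(4)]

def feistel_encrypt(left, right):
    for perm in ROUNDS:
        left, right = right, left ^ perm[right]
    return left, right

def feistel_decrypt(left, right):
    for perm in reversed(ROUNDS):
        left, right = right ^ perm[left], left
    return left, right

l, r = feistel_encrypt(0x12, 0x34)
print(feistel_decrypt(l, r))  # (18, 52), i.e. (0x12, 0x34)
```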
Side-Channel Analysis of Integrate-and-Fire Neurons within Spiking Neural Networks
Spiking neural networks gain attention due to low power properties and event-based operation, making them suitable for usage in resource constrained embedded devices. Such edge devices allow physical access opening the door for side-channel analysis. In this work, we reverse engineer the parameters of a feed-forward spiking neural network implementation with correlation power analysis. Localized measurements of electro-magnetic emanations enable our attack, despite inherent parallelism and the resulting algorithmic noise of the network. We provide a methodology to extract valuable parameters of integrate-and-fire neurons in all layers, as well as the layer sizes.
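As context, correlation power analysis as used above can be sketched generically: correlate a hypothesized leakage model against measured traces and keep the hypothesis with the highest Pearson correlation. Everything below (the Hamming-weight model, the synthetic traces, and the parameter being recovered) is an illustrative placeholder, not the paper's actual measurement setup.

```python
# Generic correlation power analysis (CPA) sketch: rank parameter hypotheses
# by the absolute Pearson correlation between a leakage model and the traces.

import numpy as np

rng = np.random.default_rng(0)
true_weight = 7
inputs = rng.integers(0, 16, size=500)

def hamming_weight(v):
    return np.array([bin(int(x)).count("1") for x in v])

# Synthetic traces: Hamming-weight leakage of input * weight plus noise.
traces = hamming_weight(inputs * true_weight) + rng.normal(0, 0.5, size=500)

def cpa_recover(inputs, traces, candidates):
    best, best_corr = None, -1.0
    for w in candidates:
        model = hamming_weight(inputs * w)
        corr = abs(np.corrcoef(model, traces)[0, 1])
        if corr > best_corr:
            best, best_corr = w, corr
    return best

print(cpa_recover(inputs, traces, range(1, 16)))  # 7
```

In the paper's setting the "traces" are localized electromagnetic measurements and the recovered parameters are neuron weights and thresholds, with algorithmic noise from the network's parallelism playing the role of the Gaussian noise here.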
Hidden Number Problems in Fiat-Shamir based Post-Quantum Signatures
Schnorr and (EC)DSA signatures famously become completely insecure once a few bits of the random nonce are revealed to an attacker. In this work, we investigate whether post-quantum signature schemes allow for similar attacks. Specifically, we consider three Fiat-Shamir based signature schemes: Dilithium (lattices), LESS (codes) and CSI-FiSh (isogenies). Analogous to the attacks on Schnorr and (EC)DSA, we assume knowledge of some bits of the commitment randomness used in the underlying ID protocol. Such attacks belong to the broader class of Hidden Number Problems.
In the case of LESS, we give an efficient attack that requires knowledge of a single bit of the randomness in roughly signatures to completely recover the secret key. Our attack is based on the observation that knowledge of a single bit in the randomness is enough to distinguish secret key entries from random ones. Since for proposed parameters there are at most candidates for each entry, we can enumerate all candidates and use the distinguisher to recover the secret key one entry at a time.
For Dilithium we are able to recover the whole secret key using at most 1792 signatures when either 3 or 4 least significant bits of the randomness are known (depending on the parameter set). Here the attack is based on the simple observation that knowledge of the least significant bits of the randomness immediately yields a linear relation in the secret key.
For CSI-FiSh, on the other hand, we obtain only an inefficient attack that requires exponentially many signatures. However, we prove that this attack is essentially optimal. Thus, our results show that CSI-FiSh comes equipped with an inherent leakage resilience. The argument easily extends to a wider class of signature schemes, but notably does not apply to the predecessor SeaSign.
GAPP: Generic Aggregation of Polynomial Protocols
We construct a new bivariate polynomial commitment scheme, bPCLB, with succinct verification, O(m+n) sized public parameters, and O(m + n) cryptographic operations to generate evaluation proofs for bivariate polynomials of degree (n, m). bPCLB commits to polynomials using their Lagrange coefficients. This is in contrast to existing bivariate schemes, which either incur O(mn)-sized public parameters and O(mn) cost for evaluation proofs, or do not natively support committing to polynomials in the Lagrange basis. We present the idea of a packing gadget that allows packing a non-constant number of univariate polynomials into one bivariate polynomial. We implement the packing gadget with bPCLB to achieve the following results.
We propose a generic framework called GAPP for aggregation of polynomial protocols. This allows proving n instances of a polynomial protocol using a single aggregate proof that has O(log n) size, and can be verified using O(log n) operations. We construct an information-theoretic protocol for proving the satisfiability of the bivariate polynomial identity, which is then compiled using any bivariate polynomial commitment scheme (PCS) to yield an argument of knowledge for the aggregation relation. GAPP can be applied to several popular SNARKs over bilinear groups that are modeled as polynomial protocols in a black-box way. Using bPCLB yields an efficient instantiation of GAPP for aggregating PLONK proofs where the prover only performs sublinear cryptographic operations in the evaluation proof.
We illustrate the packing property of bPCLB in two applications. First, we construct a new lookup argument which supports tables of m-tuples. Second, we show a generalized grand-product protocol over a non-constant number (say m) of vectors. While existing approaches for both applications incur O(m) proof size and verification complexity, our constructions achieve O(log m) dependence. We leverage these applications together with our GAPP framework to design an à la carte proof system for non-uniform computations.
High-Throughput Secure Multiparty Computation with an Honest Majority in Various Network Settings
In this work, we present novel protocols over rings for semi-honest secure three-party computation (3PC) and malicious four-party computation (4PC) with one corruption. While most existing works focus on improving total communication complexity, challenges such as network heterogeneity and computational complexity, which impact MPC performance in practice, remain underexplored.
Our protocols address these issues by tolerating multiple arbitrarily weak network links between parties without any substantial decrease in performance. Additionally, they significantly reduce computational complexity by requiring up to half the number of basic instructions per gate compared to related work. These improvements lead to up to twice the throughput of state-of-the-art protocols in homogeneous network settings and up to eight times higher throughput in real-world heterogeneous settings. These advantages come at no additional cost: Our protocols maintain the best-known total communication complexity per multiplication, requiring 3 elements for 3PC and 5 elements for 4PC.
We implemented our protocols alongside several state-of-the-art protocols (Replicated 3PC, ASTRA, Fantastic Four, Tetrad) in a novel open-source C++ framework optimized for high throughput. Five out of six implemented 3PC and 4PC protocols achieve more than one billion 32-bit multiplications or over 32 billion AND gates per second using our implementation in a 25 Gbit/s LAN environment. This represents the highest throughput achieved in 3PC and 4PC so far, outperforming existing frameworks like MP-SPDZ, ABY3, MPyC, and MOTION by two to three orders of magnitude.
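As a baseline for what such protocols optimize, the textbook semi-honest replicated-3PC multiplication (costing 3 ring elements of communication, the figure quoted for 3PC) can be simulated in a single process; this is a generic sketch over Z_2^64, not the paper's protocol, with fresh zero-sharings standing in for correlated randomness.

```python
import random

MASK = (1 << 64) - 1  # arithmetic modulo 2^64

def share(x, rng):
    """Replicated 3-party sharing: x = x1+x2+x3; party i holds (x_i, x_{i+1})."""
    x1, x2 = rng.getrandbits(64), rng.getrandbits(64)
    x3 = (x - x1 - x2) & MASK
    xs = [x1, x2, x3]
    return [(xs[i], xs[(i + 1) % 3]) for i in range(3)]

def zero_sharing(rng):
    a, b = rng.getrandbits(64), rng.getrandbits(64)
    return [a, b, (-a - b) & MASK]

def multiply(xsh, ysh, rng):
    """One multiplication: local cross terms plus rerandomization, then
    one ring element sent around the ring to restore the replicated format."""
    alpha = zero_sharing(rng)
    z = [(xsh[i][0] * ysh[i][0]
          + xsh[i][0] * ysh[i][1]
          + xsh[i][1] * ysh[i][0]
          + alpha[i]) & MASK for i in range(3)]
    # the nine cross terms x_i*y_j are covered exactly once across parties
    return [(z[i], z[(i + 1) % 3]) for i in range(3)]

def reconstruct(sh):
    return (sh[0][0] + sh[1][0] + sh[2][0]) & MASK

rng = random.Random(0)
xsh, ysh = share(123456789, rng), share(987654321, rng)
print(reconstruct(multiply(xsh, ysh, rng)) == (123456789 * 987654321) & MASK)
```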
Enforcing arbitrary constraints on Bitcoin transactions
The challenge of enforcing constraints on Bitcoin transactions has recently gained a lot of attention. The current approach to this problem falls short in certain aspects, such as privacy and programmability. We design a new solution that leverages zkSNARKs and allows enforcing arbitrary constraints on Bitcoin transactions while keeping some information private. Our approach also bypasses the non-Turing-completeness of Bitcoin Script, allowing the enforcement of unbounded constraints, namely constraints that repeat a certain operation an unbounded number of times.
Fuzzy Private Set Intersection from VOLE
Private set intersection (PSI) is a well-researched cryptographic primitive that allows two parties to compute the intersection of their input sets without revealing any information about items outside of the intersection. Fuzzy private set intersection is a relatively new variant of PSI, where items are not matched exactly but ``fuzzily''. Most commonly, items are points in -dimensional integer space and a point is a fuzzy match to another if it lies within a ball of radius centered at this point, with respect to some distance metric.
Previous works either only support the infinity distance metric and standard PSI functionality, or support general Minkowski distance metrics and realize richer functionalities but rely on expensive homomorphic encryption. Our work aims to bridge this gap by giving the first construction of a fuzzy PSI protocol for general Minkowski distance metrics that relies on significantly cheaper operations during the online phase.
Our main building block is a novel fuzzy matching protocol based on an oblivious pseudorandom function (OPRF), which can be realized very efficiently from vector oblivious linear evaluation (VOLE). Our protocol is able to preserve the asymptotic complexity as well as the simplicity of the fuzzy matching protocol from van Baarsen and Pu (Eurocrypt '24), while being much more concretely efficient. Additionally, we achieve several asymptotic improvements by representing intervals succinctly. Finally, we present the first fuzzy PSI protocol for infinity distance that places no assumptions on the sets of points, while maintaining asymptotic complexities comparable to the state-of-the-art fuzzy PSI protocol.
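The underlying fuzzy-matching functionality is easy to state in the clear; the sketch below implements the ideal (non-private) functionality for the infinity and general Minkowski metrics, which protocols like the above realize obliviously.

```python
def minkowski_match(q, w, delta, p=None):
    """Is w a fuzzy match for q, i.e. dist(q, w) <= delta?

    p=None means the infinity metric (max of coordinate distances);
    otherwise the Minkowski L_p distance for integer p >= 1.
    """
    diffs = [abs(a - b) for a, b in zip(q, w)]
    if p is None:                       # L-infinity
        return max(diffs) <= delta
    return sum(d ** p for d in diffs) <= delta ** p

def fuzzy_intersection(A, B, delta, p=None):
    """Ideal fuzzy PSI: points of A that fuzzily match some point of B."""
    return [a for a in A if any(minkowski_match(a, b, delta, p) for b in B)]

A = [(0, 0), (10, 10), (5, 3)]
B = [(1, 2), (50, 50)]
print(fuzzy_intersection(A, B, delta=2))        # → [(0, 0)]   (L-inf ball)
print(fuzzy_intersection(A, B, delta=2, p=2))   # → []         (Euclidean ball)
```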
Efficient Modular Multiplication Hardware for Number Theoretic Transform on FPGA
In this paper, we present a comprehensive analysis of various modular multiplication methods for the Number Theoretic Transform (NTT) on FPGA. NTT is a critical and time-intensive component of Fully Homomorphic Encryption (FHE) applications, while modular multiplication consumes a significant portion of the design resources in an NTT implementation. We study the existing modular reduction approaches from the literature and implement selected methods on FPGA, specifically Word-Level Montgomery (WLM) for NTT-friendly primes [19] and K2 RED [3]. For improvements, we explore the trade-offs between the number of available primes in special forms and the hardware cost of the reduction methods. We develop a DSP-multiplication-optimized version of WLM, which we call WLM-Mixed. We also introduce a subclass of Proth primes, referred to as Proth-𝑙 primes, characterized by a low and fixed signed Hamming weight. This special class of primes allows us to design multiplication-free shift-add versions of K2 RED and naive Montgomery reduction [20], referred to as K2 RED-Shift and Montgomery-Shift. We provide in-depth evaluations of these five reduction methods in an NTT architecture on FPGA. Our results indicate that WLM-Mixed is highly resource-efficient, utilizing only 3 DSP multiplications for 64-bit coefficient moduli. On the other hand, K2 RED-Shift and Montgomery-Shift offer DSP-free alternatives, which can be beneficial in specific scenarios.
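As a reference point for the reduction methods compared here, textbook Montgomery reduction (the basis of both WLM and the Montgomery-Shift variant) can be sketched as follows; the modulus is illustrative, a 62-bit odd prime-like value, not one of the paper's exact moduli.

```python
def montgomery_setup(q, k):
    """Precompute constants for Montgomery reduction with R = 2^k."""
    R = 1 << k
    assert q % 2 == 1 and q < R
    q_inv_neg = (-pow(q, -1, R)) % R     # q' = -q^{-1} mod R
    return R, q_inv_neg

def montgomery_reduce(T, q, k, q_inv_neg):
    """REDC: return T * R^{-1} mod q for 0 <= T < q*R.

    The division by R is a k-bit shift -- this is what makes the method
    attractive in hardware, where R is a power of two.
    """
    R_mask = (1 << k) - 1
    m = (T * q_inv_neg) & R_mask         # m = T * q' mod R
    t = (T + m * q) >> k                 # T + m*q is divisible by R
    return t - q if t >= q else t        # single conditional subtraction

q = (1 << 62) - (1 << 16) + 1            # an illustrative odd modulus
k = 64
R, q_inv_neg = montgomery_setup(q, k)
a, b = 123456789123456789 % q, 987654321987654321 % q
# REDC(a*b) equals a*b*R^{-1} mod q; multiplying by R^2 mod q and
# reducing again yields the plain product a*b mod q.
ab = montgomery_reduce(montgomery_reduce(a * b, q, k, q_inv_neg) * (R * R % q),
                       q, k, q_inv_neg)
print(ab == (a * b) % q)  # → True
```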
OAE-RUP: A Strong Online AEAD Security Notion and its Application to SAEF
Release of unverified plaintexts (RUP) security is an important target for robustness in AE schemes. It is also crucial for lightweight (LW) implementations of online AE schemes on memory-constrained devices. Surprisingly, very few online AEAD schemes come with provable guarantees for RUP integrity, and none with a well-defined notion of RUP confidentiality.
In this work, we first propose a new strong security notion for online AE schemes called OAE-RUP that captures security under blockwise processing of both encryption (which includes nonce-misuse) and decryption (which includes RUP). Formally, OAE-RUP combines the standard RUP integrity notion INT-RUP with a new RUP confidentiality notion sOPRPF (strong Online PseudoRandom Permutation followed by a pseudorandom Function). sOPRPF is based on the concept of "strong online permutations" and can be seen as an extension of the well-known CCA3 notion (Abed et al., FSE 2014) that captures arbitrary-length inputs.
An OAE-RUP-secure scheme is resistant to nonce misuse as well as to leakage of unverified plaintexts: integrity remains unaffected, and the confidentiality of any encrypted plaintext is preserved up to the longest common prefix with the leaked plaintexts and up to the length of the longest common prefix with nonce-repeating ciphertexts.
We then prove the OAE-RUP security of the SAEF mode. SAEF is a ForkAE mode (Asiacrypt 2019) that is optimized for authenticated encryption of short messages and processes the message blocks sequentially and in an online manner. At SAC 2020, it was shown that SAEF is also an online nonce misuse-resistant AE (OAE), offering enhanced security against adversaries that make blockwise adaptive encryption queries. It has remained an open question if SAEF also resists attacks against blockwise adaptive decryption adversaries or, more generally, when the decrypted plaintext is released before verification (RUP).
Our proofs are conducted using the coefficients H technique, and they show that, without any modifications, SAEF is OAE-RUP secure up to the birthday bound, i.e., up to processed data blocks, where is the block size of the forkcipher.
Robust Threshold ECDSA with Online-Friendly Design in Three Rounds
Threshold signatures, especially ECDSA, enhance key protection by addressing the single-point-of-failure issue. Threshold signing can be divided into offline and online phases, based on whether the message is required. Schemes with low-cost online phases are referred to as ``online-friendly''. Another critical aspect of threshold ECDSA for real-world applications is robustness, which guarantees the successful completion of each signing execution whenever a threshold of semi-honest participants is available, even in the presence of misbehaving signatories.
The state-of-the-art online-friendly threshold ECDSA without robustness was developed by Doerner et al. in S&P '24, requiring only three rounds. Recent work by Wong et al. in NDSS '23 (WMY23) and NDSS '24 (WMC24) achieves robustness but demands additional communication rounds (7 and 4, respectively) or incurs costly operations in the online phase, such as computations over a homomorphic encryption scheme.
This paper presents the first three-round threshold ECDSA scheme with both robustness and an online-friendly design. The online phase of our scheme relies solely on several elliptic-curve group operations, which are 2 to 3 orders of magnitude less computationally intensive than those based on linearly homomorphic encryption schemes. We implement our protocol and conduct a comprehensive comparison with WMY23 and WMC24. Benchmark results show that the online phase of our scheme is faster than that of WMY23 and hundreds of times faster than that of WMC24. Lastly, we demonstrate that our techniques can be extended to construct an online-friendly and robust three-round threshold BBS+ scheme.
Energy Consumption Framework and Analysis of Post-Quantum Key-Generation on Embedded Devices
The emergence of quantum computing and Shor's algorithm necessitates an imminent shift from current public key cryptography techniques to post-quantum robust techniques. NIST has responded by standardising Post-Quantum Cryptography (PQC) algorithms, with ML-KEM (FIPS-203) slated to replace ECDH (Elliptic Curve Diffie-Hellman) for key exchange. A key practical concern for PQC adoption is energy consumption. This paper introduces a new framework for measuring the PQC energy consumption on a Raspberry Pi when performing key generation. The framework uses both available traditional methods and the newly standardised ML-KEM algorithm via the commonly utilised OpenSSL library.
SubLogarithmic Linear Time SNARKs from Compressed Sum-Check
We leverage recently proposed multilinear polynomial commitment schemes with linear-time prover and constant proof size to reduce the communication complexity of the classical sum-check protocol for multivariate polynomials. Specifically, for a degree multivariate polynomial in variables which can be decomposed into multilinear polynomials, we exhibit a sum-check protocol with prover cost and communication for . This enables us to break the barrier for this ubiquitous primitive used in multilinear SNARKs and implies the first construction of prover-efficient (with proving cost) SNARKs using multilinear polynomials with proof size. Currently, lower communication is only obtained by SNARKs based on univariate polynomials, which incur prover complexity due to inherent polynomial multiplication.
Concretely, using compressed sum-check in HyperPlonk protocol allows us to reduce the proof size from about KB KB to only about KB, making it competitive with univariate SNARKs like PLONK.
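For reference, the classical sum-check protocol whose communication this line of work compresses can be written out in the simplest (multilinear) case; the sketch below is the standard degree-1-per-round interaction over a toy prime field, not the compressed variant proposed here.

```python
import random

P = 2**61 - 1  # a toy prime field

def sumcheck(table, n, rng):
    """Sum-check for a multilinear f given by its evaluations on {0,1}^n.

    The prover claims sum_{x in {0,1}^n} f(x); each round it sends the
    degree-1 restriction of f via its values at 0 and 1, and the verifier
    folds the table with a random challenge. Returns (claimed_sum, accepted).
    """
    claim = sum(table) % P
    total = claim
    for _ in range(n):
        half = len(table) // 2
        g0 = sum(table[:half]) % P          # g(0): current variable = 0
        g1 = sum(table[half:]) % P          # g(1): current variable = 1
        if (g0 + g1) % P != claim:          # verifier's consistency check
            return total, False
        r = rng.randrange(P)                # random challenge
        claim = (g0 + r * (g1 - g0)) % P    # new claim: g(r)
        table = [(table[i] + r * (table[half + i] - table[i])) % P
                 for i in range(half)]      # fold f on this variable = r
    # final check: the fully folded table is f(r_1, ..., r_n)
    return total, table[0] % P == claim

rng = random.Random(7)
n = 3
table = [rng.randrange(P) for _ in range(2**n)]
print(sumcheck(table, n, random.Random(42)))
```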
Samaritan: Linear-time Prover SNARK from New Multilinear Polynomial Commitments
We study linear-time prover SNARKs and make the following contributions:
We provide a framework for transforming a univariate polynomial commitment scheme into a multilinear polynomial commitment scheme. Our transformation is generic, can be instantiated with any univariate scheme and improves on prior transformations like Gemini (EUROCRYPT 2022) and Virgo (S&P 2020) in all relevant parameters: proof size, verification complexity, and prover complexity. Instantiating the above framework with the KZG univariate polynomial commitment scheme, we get SamaritanPCS – the first multilinear polynomial commitment scheme with constant proof size and linear-time prover. The proof size is just 368 bytes, which is the smallest among all multilinear polynomial commitment schemes. Our scheme also has excellent batching properties, wherein proving k evaluations over the hypercube of size n incurs O(n + k√n) cryptographic work, resulting in substantially amortized prover work over several evaluations.
We construct LogSpartan – a new multilinear PIOP for R1CS based on recent techniques for lookup arguments. Compiling this PIOP using SamaritanPCS gives Samaritan – a SNARK in the universal and updatable SRS setting. Samaritan has linear-time prover, logarithmic verification and logarithmic proof size. Concretely, its proof size is one of the smallest among other known linear-time prover SNARKs without relying on
concretely expensive proof recursion techniques. For an R1CS instance with 1 million constraints, Samaritan (over the BLS12-381 curve) has a proof size of 6.2KB.
We compare Samaritan with other linear-time prover SNARKs in the updatable setting. We asymptotically improve on the log^2 n proof size of Spartan. Unlike Libra (CRYPTO 2019), the argument size of Samaritan is independent of the circuit depth. Compared to Gemini (EUROCRYPT 2022), Samaritan achieves 3X smaller argument size at 1 million constraints. We are competitive with the very recently proposed MicroSpartan (S&P 2025) and linear-time SNARKs for the Plonkish constraint system such as HyperPlonk (EUROCRYPT 2023).
New Framework for Structure-Aware PSI From Distributed Function Secret Sharing
Private set intersection (PSI) allows two parties to jointly compute the intersection of their private sets without revealing any additional information. Structure-aware PSI (sa-PSI), introduced by Garimella et al. (Crypto'22), is a variant where Alice's input set has a publicly known structure and Bob's input set remains unstructured, enabling new applications like fuzzy PSI. Their construction relies solely on lightweight cryptographic primitives such as symmetric-key primitives and oblivious transfer (OT) extension. Since then, there has been active research on sa-PSI based on lightweight cryptography. Notably, recent work by Garimella et al. (Crypto'24) achieves sa-PSI with both communication and computation costs only scaling with the description size of Alice's set, rather than its potentially large cardinality. However, this line of work remains largely theoretical, lacking efficient concrete implementations.
In this work, we close this gap by presenting a new framework for sa-PSI that achieves practical efficiency. We identify and eliminate a hidden multiplicative overhead proportional to the security parameter (e.g., 128) in prior symmetric-key-based sa-PSI constructions. A key building block of our new framework is a distributed Function Secret Sharing (dFSS) key generation protocol tailored to the structure of Alice's set, which may be of independent interest. To demonstrate the practicality of our framework, we extend our dFSS protocol to support incremental evaluation, introduce new techniques for spatial hashing, and develop several new optimization techniques, including reducing the exponential dependence on dimension and enabling load balancing between the two parties.
We instantiate our framework for structured sets defined by unions of -dimensional balls, and implement our protocols using only lightweight symmetric-key primitives and OT extension. Our experiments show concrete performance improvements of up to speedup in computation and reduction in communication in low-dimensional, large-radius settings compared to existing public-key-based fuzzy PSI protocols by van Baarsen & Pu (Eurocrypt'24) and Gao et al. (Asiacrypt'24).
Covert Attacks on Machine Learning Training in Passively Secure MPC
Secure multiparty computation (MPC) allows data owners to train machine learning models on combined data while keeping the underlying training data private. The MPC threat model either considers an adversary who passively corrupts some parties without affecting their overall behavior, or an adversary who actively modifies the behavior of corrupt parties. It has been argued that in some settings, active security is not a major concern, partly because of the potential risk of reputation loss if a party is detected cheating.
In this work we show explicit, simple, and effective attacks that an active adversary can run on existing passively secure MPC training protocols, while keeping essentially zero risk of the attack being detected. The attacks we show can compromise both the integrity and privacy of the model, including attacks reconstructing exact training data.
Our results challenge the belief that a threat model that does not include malicious behavior by the involved parties may be reasonable in the context of PPML, motivating the use of actively secure protocols for training.
Authenticated Key Exchange Protocol with Remote Randomness
A conventional Authenticated Key Exchange (AKE) protocol consumes fresh random coins from a local random source. However, recent studies of bad randomness expose the vulnerability of some AKE protocols to small-subgroup attacks when the random coins are manipulated or repeated. It is important to ensure that bad randomness from one random source does not affect the security of the AKE protocol as a whole.
Thus, we introduce the notion of remote randomness by adding ephemeral keys generated by a fresh remote random source to the AKE protocol. In this paper, we argue that the rise of cloud computing encourages high-speed internal data transfer within server clusters and virtual machines, including the entropy-pool data used in our remote-randomness AKE protocols. We present a remote-randomness modification to the HMQV protocol to demonstrate its resilience under manipulation of local random sources. We then provide a new security model that takes remote randomness into consideration and show that the modified AKE protocol is secure under our new model.
The Security of ML-DSA against Fault-Injection Attacks
Deterministic signatures are often used to mitigate the risks associated with poor-quality randomness, where the randomness in the signing process is generated by a pseudorandom function that takes a message as input. However, some studies have shown that such signatures are vulnerable to fault-injection attacks. To strike a balance, recent signature schemes often adopt "hedged" randomness generation, where the pseudorandom function takes both a message and a nonce as input. Aranha et al. (EUROCRYPT 2020) investigated the security of hedged Fiat-Shamir signatures against 1-bit faults and demonstrated security for certain types of bit-tampering faults. Grilo et al. (ASIACRYPT 2021) extended this proof to the quantum random oracle model. Last year, NIST standardized the lattice-based signature scheme ML-DSA, which adopts the hedged Fiat-Shamir with aborts. However, existing security proofs against bit-tampering faults do not directly apply, as Aranha et al. left this as an open problem. To address this gap, we analyze the security of ML-DSA against multi-bit fault-injection attacks. We provide a formal proof of security for a specific class of intermediate values, showing that faults at these points cannot be exploited. Furthermore, to highlight the infeasibility of stronger fault resilience, we present key-recovery attacks that exploit signatures generated under fault injection at the other intermediate values.
A Generic Framework for Practical Lattice-Based Non-interactive Publicly Verifiable Secret Sharing
Non-interactive publicly verifiable secret sharing (PVSS) schemes enable the decentralized (re-)sharing of secrets in adversarial environments, allowing anyone to verify the correctness of distributed shares. Such schemes are essential for large-scale decentralized applications, including committee-based systems that require both transparency and robustness. However, existing PVSS schemes rely on group-based cryptography, rendering them vulnerable to quantum attacks and limiting their suitability for post-quantum applications.
In this work, we propose the first practical, fully lattice-based, non-interactive PVSS scheme, grounded on standard lattice assumptions for post-quantum security. At the heart of our design is a generic framework that transforms vector commitments and linear encryption schemes into efficient PVSS protocols. We enhance vector commitments by incorporating functional hiding and proof of smallness, ensuring that encrypted shares are both verifiable and privacy-preserving. Our construction introduces two tailored lattice-based encryption schemes, each supporting efficient proofs of decryption correctness. This framework provides strong verifiability guarantees while maintaining low proof sizes and computational efficiency, making it suitable for systems with large numbers of participants.
Dyna-hinTS: Silent Threshold Signatures for Dynamic Committees
The works of Garg et al. [S&P'24] (aka hinTS) and Das et al. [CCS'23] introduced the notion of silent threshold signatures (STS) - where a set of signers silently perform local computation to generate a public verification key. To sign a message, any set of signers sign the message non-interactively and these are aggregated into a constant-sized signature. This paradigm avoids performing expensive Distributed Key Generation procedure for each set of signers while keeping the public verification key constant-sized.
In this work, we propose the notion of committee-based silent threshold signature (c-STS) scheme. In a c-STS scheme, a set of signers initially perform a one-time setup to generate the verification key, and then a subset of signers are randomly chosen for an epoch to perform the threshold signing while the other signers are not authorized to sign during that epoch. This captures existing systems like Ethereum Altair and Dfinity where only a specific committee is authorized to sign in a designated epoch. The existing STS schemes cannot be extended to the committee setting because the signature verification only attests to the number of signing parties, not which committee they belong to.
So, we upgrade hinTS to the committee setting by proposing Dyna-hinTS. It is a c-STS scheme that requires a one-time silent setup and generates a one-time public verification key that does not vary with the committee. Assuming a set of 1024 signers (with 682 corrupt signers), hinTS generates an aggregated signature in 1.7s whereas Dyna-hinTS generates it in s within a committee of signers. This yields a improvement over hinTS for signature generation at the cost of increasing signature verification time by over hinTS. Dyna-hinTS supports general access structures and weighted signatures, and improves upon existing multiverse threshold signatures.
Homomorphic Signature-based Witness Encryption and Applications
Signature-based witness encryption (SWE) schemes recently emerged as a viable alternative to instantiate timed-release cryptography in the honest majority setting. In particular, assuming threshold trust in a set of parties that release signatures at a specified time, one can ``encrypt to the future'' using an SWE scheme. Applications of SWE schemes include voting, auctions, distributed randomness beacons, and more. However, the lack of homomorphism in existing schemes reduces efficiency and hinders deployment. In this work, we introduce the notion of homomorphic SWE (HSWE) to improve the practicality of timed-release encryption schemes. We show one can build HSWE using a pair of encryption and signature schemes where the uniqueness of the signature is required when the encryption relies on injective one-way functions. We then build three HSWE schemes in various settings using BLS, RSA, and Rabin signatures and show how to achieve a privacy-preserving variant that only allows extracting the homomorphically aggregated result while keeping the individual plaintexts confidential.
MicroCrypt Assumptions with Quantum Input Sampling and Pseudodeterminism: Constructions and Separations
We investigate two natural relaxations of quantum cryptographic primitives. The first involves quantum input sampling, where inputs are generated by a quantum algorithm rather than sampled uniformly at random. Applying this to pseudorandom generators ( s) and pseudorandom states ( s) leads to the notions denoted as and , respectively. The second relaxation, -pseudodeterminism, relaxes the determinism requirement by allowing the output to be a special symbol on an inverse-polynomial fraction of inputs.
We demonstrate an equivalence between bounded-query logarithmic-size , logarithmic-size , and . Moreover, we establish that can be constructed from - s, which in turn were built from logarithmic-size . Interestingly, these relations remain unknown in the uniform key setting.
To further justify these relaxed models, we present black-box separations. Our results suggest that -pseudodeterministic primitives may be weaker than their deterministic counterparts, and that primitives based on quantum input sampling may be inherently weaker than those using uniform sampling.
Together, these results provide numerous new insights into the structure and hierarchy of primitives within MicroCrypt.
Exclusive Ownership of Fiat-Shamir Signatures: ML-DSA, SQIsign, LESS, and More
Exclusive ownership (EO) security is a feature of signature schemes that prevents adversaries from "stealing" an honestly generated signature by finding a new public key which verifies said signature. It is one of the beyond-unforgeability features (BUFF) which were declared to be desirable features by NIST. The BUFF transform allows one to generically achieve exclusive ownership (and other properties) at the cost of an increased signature size. In this work, we study the EO security of (different variants of) Fiat-Shamir signatures. As our main result, we show that the commonly used variant of Fiat-Shamir signatures (where signatures consist of challenge-response tuples) with λ-bit challenges can achieve about λ-bit EO security through its implicit usage of the BUFF transform; this is a significant improvement over existing results, which only provide λ/2-bit EO security. This benefit comes without an increase in signature size. For other variants of Fiat-Shamir signatures, we show worse bounds, which nevertheless improve upon existing results. Finally, we apply our results to several signature schemes: SQIsign and LESS (both round-2 NIST candidates); ML-DSA (NIST standard); CSI-FiSh; and Schnorr signatures. This shows that all these schemes achieve significantly better bounds on their EO security than previously known.
A Study of Blockchain Consensus Protocols
When Nakamoto invented Bitcoin, the first generation of cryptocurrencies followed it in applying the POW (Proof of Work) consensus mechanism; due to its excessive energy consumption and heavy carbon footprint, new mechanisms evolved, such as Proof of Space and POS (Proof of Stake), with many variants of each. Furthermore, the emergence of more blockchain applications and kinds beyond cryptocurrencies required consensus mechanisms optimized to fit the requirements of each application or blockchain kind; examples range from IoT (Internet of Things) blockchains for sustainability applications, which often use variants of the BFT (Byzantine Fault Tolerance) algorithm, to the consensus needed to relay transactions and/or assets between different blockchains in interoperability solutions. Previous studies concentrated on surveying and/or proposing different blockchain consensus rules, on a specific consensus issue such as attacks or randomization, or on deriving theoretical results. Starting from a discussion of the most important theoretical results, this paper gathers and organizes the significant existing material about consensus in the blockchain world, explaining design challenges, tradeoffs, and research areas. We realize that the topic could fill a complete textbook, so we summarize the basic concepts and support them with tables and appendices. We then highlight case examples from interoperability solutions to show how flexible and wide the design space is, fitting both general- and special-purpose systems. The aim is to provide researchers with a comprehensive overview of the topic, along with links to go deeper into every detail.
Improved Noise Bound in BFV Homomorphic Encryption and Its Application to Multiplication
Fully Homomorphic Encryption (FHE) enables computations on encrypted data without requiring decryption. However, each computation increases the noise level, which can eventually cause decryption failures once a certain threshold is reached. In particular, homomorphic multiplication significantly amplifies ciphertext noise. In this work, we revisit the Ring Learning With Errors (RLWE) based encryption scheme proposed by Fan et al. and present an optimized noise-growth analysis by swapping the sample spaces of the secret-key and error distributions. We then revisit the BFV homomorphic multiplication proposed by Kim et al. (ASIACRYPT'21) and present an optimized noise bound. Finally, we empirically check the hardness of the proposed scheme using the lattice estimator. Our analysis demonstrates that the proposed method achieves more than 128-bit security and a lower noise bound than existing techniques.
MAESTRO: Multi-party AES using Lookup Tables
Secure multi-party computation (MPC) enables multiple distrusting parties to jointly compute a function while keeping their inputs private. Computing the AES block cipher in MPC, where the key and/or the input are secret-shared among the parties, is important for various applications, particularly threshold cryptography.
In this work, we propose a family of dedicated, high-performance MPC protocols to compute the non-linear S-box part of AES in the honest majority setting. Our protocols come in both semi-honest and maliciously secure variants.
The core technique is a combination of lookup table protocols based on random one-hot vectors and the decomposition of finite field inversion in GF(2^8) into multiplications and inversion in the smaller field GF(2^4), taking inspiration from ideas used for hardware implementations of AES.
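The one-hot lookup idea can be sketched in the clear (a plain-Python illustration of the masking identity only; in the actual protocol the mask r and its one-hot vector u would be secret-shared among the parties):

```python
import random

N = 256                                        # table size, e.g. the AES S-box domain
T = [random.randrange(N) for _ in range(N)]    # a public lookup table

# Preprocessing: a random mask r and its one-hot encoding u.
r = random.randrange(N)
u = [1 if i == r else 0 for i in range(N)]

# Online phase: to look up a secret x, the parties open only d = x XOR r,
# which reveals nothing about x since r is uniformly random.
x = 0x53
d = x ^ r

# The lookup becomes a dot product of the one-hot vector with a rotated table:
# sum_i u[i] * T[d XOR i] = T[d XOR r] = T[x].
result = sum(u[i] * T[d ^ i] for i in range(N))
assert result == T[x]
```

The dot product is linear in u, which is what makes it cheap to evaluate on secret-shared one-hot vectors.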
We also apply and improve the analysis of a batch verification technique for checking inner products with logarithmic communication.
This allows us to obtain malicious security with almost no communication overhead, and we use it to obtain new, maliciously secure table lookup protocols with low communication relative to the table size, which may be useful in other applications.
Our protocols offer different trade-offs, such as a round complexity similar to the previous state of the art by Chida et al. [WAHC'18] but with lower bandwidth costs, or fewer rounds together with lower bandwidth costs.
An experimental evaluation in various network conditions using three-party replicated secret sharing shows substantial throughput improvements in the semi-honest setting. For malicious security, we improve throughput in both LAN and WAN settings, owing to sublinear batch verification.
Permutation-Based Hashing Beyond the Birthday Bound
It is known that the sponge construction is tightly indifferentiable from a random oracle up to around 2^{c/2} queries, where c is the capacity. In particular, it cannot provide generic security better than half of the underlying permutation size. In this paper, we aim to achieve hash function security beating this barrier. We present a hashing mode based on two permutations, named the double sponge. The double sponge can be seen as the sponge embedded within the double-block-length hashing paradigm, making two permutation calls in parallel, interleaved with an efficient mixing function. Similarly to the sponge, the permutation size is split into a rate and a capacity c, and the underlying compression function absorbs one rate-sized block at a time. We prove that the double sponge is indifferentiable from a random oracle beyond 2^{c/2} queries. This means that the double sponge achieves security beyond the birthday bound in the capacity. In addition, for suitable parameter choices, the double sponge beats the birthday bound in the primitive size; to our knowledge it is the first permutation-based hashing mode that accomplishes this feature.
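For reference, the plain (single) sponge that the double sponge builds on can be sketched as follows (toy permutation and toy byte-level parameters, purely illustrative of the mode; a real design uses a strong permutation such as Keccak-f or Ascon's):

```python
# Minimal sponge hash: the state is split into rate and capacity parts.
R_BYTES, C_BYTES = 4, 4          # rate and capacity in bytes (state = rate + capacity)

def toy_permutation(state: bytes) -> bytes:
    # Invertible toy mixing only -- NOT cryptographic.
    s = list(state)
    n = len(s)
    for rnd in range(8):
        for i in range(n):
            s[i] = (s[i] + s[(i + 1) % n] + rnd) % 256
        s = s[1:] + s[:1]
    return bytes(s)

def sponge_hash(msg: bytes, out_len: int) -> bytes:
    state = bytes(R_BYTES + C_BYTES)
    # Pad with 0x80 0x00* so the message fills whole rate-sized blocks.
    msg += b"\x80" + bytes(-(len(msg) + 1) % R_BYTES)
    # Absorb: XOR each block into the rate part, then apply the permutation.
    for i in range(0, len(msg), R_BYTES):
        block = msg[i:i + R_BYTES]
        rate = bytes(a ^ b for a, b in zip(state[:R_BYTES], block))
        state = toy_permutation(rate + state[R_BYTES:])
    # Squeeze: output the rate part, permuting between output blocks.
    out = b""
    while len(out) < out_len:
        out += state[:R_BYTES]
        state = toy_permutation(state)
    return out[:out_len]

print(sponge_hash(b"hello", 8).hex())
```

Generic security is governed by the capacity: the attacker only ever sees and controls the rate part, which is exactly the barrier the double sponge is designed to push past.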
Sonikku: Gotta Speed, Keed! A Family of Fast and Secure MACs
Message authentication codes (MACs) are fundamental symmetric key cryptographic functions used to generate a short, secret-key-dependent tag for a given message. This tag ensures both message authenticity and integrity, as computing a valid tag without the secret key is computationally infeasible, thereby revealing any unauthorized modification.
Existing MACs often rely on block ciphers (BCs) and tweakable block ciphers (TBCs). The design of these MACs involves various trade-offs regarding properties such as data processing rate, the number of secret keys, achievable security definitions and concrete margins, the necessity for pre- or post-processing, parallelization capabilities, internal state size, and performance optimization for diverse message lengths.
This work introduces Sonikku, a new family of MACs based on expanding primitives, comprising three distinct instances. The Sonikku MACs offer a compelling combination of advantages: 1) superior speed compared to state-of-the-art TBC-based MACs; 2) security beyond the birthday bound with respect to the input block size; 3) a smaller internal state than comparable contemporary MACs; and 4) design flexibility across diverse trade-offs, including pre/post-processing-free operation, parallel processing, a small resource footprint, and suitability for both short and long messages. These characteristics make them highly attractive for widespread applications, including resource-constrained environments such as IoT and embedded devices.
Performance evaluations on a Cortex-M4 32-bit microcontroller demonstrate that one Sonikku instance achieves a significant speed-up of at least 2.11x (and up to 4.36x) over the state-of-the-art ZMAC for 128-bit block sizes and messages up to 95 bytes. Similarly, the other two instances exhibit speed improvements of at least 1.93x for short messages (up to 95 bytes) and 1.48x for larger messages (up to 64KB), respectively, when benchmarked against ZMAC for both 64- and 128-bit block sizes.
Building upon the approach of ZMAC and PMAC2x, we further illustrate the potential of the Sonikku family by employing it to construct SonicAE, a highly efficient, beyond-birthday-secure, stateless, and deterministic authenticated encryption scheme.
A New Approach for LPN-based Pseudorandom Functions: Low-Depth and Key-Homomorphic
We give new constructions of pseudorandom functions (PRFs) computable in low depth from (variants of) the Learning Parity with Noise (LPN) assumption. Prior to our work, the only low-depth-computable PRF from LPN-style assumptions was due to Boyle et al. (FOCS 2020), who constructed a weak PRF from a new heuristic variant of LPN called variable-density LPN. We give the following three results:
(1) A weak PRF computable in low depth from standard LPN.
(2) A (strong) encoded-input PRF (EI-PRF) computable in low depth from sparse LPN. An EI-PRF is a PRF whose input domain is restricted to an efficiently sampleable and recognizable set. The input encoding itself can be computed in low depth, implying a low-depth strong PRF.
(3) A (strong) PRF computable in low depth from a (new, heuristic) seeded LPN assumption. In our assumption, the columns of the public LPN matrix are generated by a bounded-independence distribution. Supporting evidence for the security of the assumption is given by showing resilience to linear tests.
As a bonus, all of our PRF constructions are key-homomorphic, an algebraic property that is useful in many symmetric-cryptography applications. No previously known LPN-based PRF is key-homomorphic, even if we completely ignore depth efficiency. In fact, our constructions support key homomorphism for arbitrary linear functions of the key (not only additive shifts), a property that no previously known PRF satisfies, including ones from LWE.
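The key-homomorphic property can be demonstrated with a toy map that is linear in the key (a linear function is of course not a PRF by itself; this only shows the algebraic identity F(k1 + k2, x) = F(k1, x) + F(k2, x) that such constructions preserve):

```python
import random

# Toy "PRF-shaped" map F(k, x) = A(x) * k over GF(2), where A(x) is a
# public, input-dependent bit matrix (rows stored as integer bitmasks).
n = 16

def A(x: int) -> list[int]:
    # Deterministic public matrix derived from the input x.
    rng = random.Random(x)
    return [rng.getrandbits(n) for _ in range(n)]

def F(k: int, x: int) -> int:
    # Multiply the key bit-vector k by A(x) over GF(2): each output bit
    # is the parity of the AND of k with one matrix row.
    return sum((bin(row & k).count("1") & 1) << i for i, row in enumerate(A(x)))

k1, k2, x = random.getrandbits(n), random.getrandbits(n), 42
# Key homomorphism over GF(2): key addition is XOR, output addition is XOR.
assert F(k1 ^ k2, x) == F(k1, x) ^ F(k2, x)
```

In applications such as distributed PRF evaluation, this identity lets parties holding key shares compute output shares that combine to the full PRF value.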
Additionally, all of our PRF constructions fit nicely into the substitution-permutation network (SPN) design framework used in modern block ciphers (e.g., AES). No prior PRF construction with a reduction to a standard cryptographic assumption (let alone LPN) has an SPN-like structure.
Technically, all of our constructions leverage a new recursive derandomization technique for LPN instances, which allows us to generate LPN error terms deterministically. This technique is inspired by a related idea from the LWE literature (Kim, EUROCRYPT 2020) for which devising an LPN analogue has been an outstanding open problem.
Generalizing the Augot-Finiasz PKE to Other Code Classes
The Augot-Finiasz system is a public-key encryption (PKE) scheme based on Reed-Solomon codes and was later followed by analogous versions in the rank metric. Although these schemes were eventually broken, their fundamental idea remains exciting. Notably, these schemes are significantly different from the McEliece system as there is no need to hide the code and, as such, promise much better parameters. Further, they admit a simple description where both the public key and ciphertext are just corrupted codewords of a public code. An interesting question is whether the general idea can be made to work, i.e., resist all known attacks, by using other code classes. This paper shows how to generalize the Augot-Finiasz system to other code families. We reduce the correctness and security of this framework to simple assertions about the code class with which it is instantiated. Specifically, its correctness is equivalent to the existence of an efficient error-erasure decoder, and its security reduces to an easily understood hardness assumption, called "supercode decoding", close to the syndrome decoding problem. We provide a negative answer for various code families by showing that solving the supercode decoding problem is easy for them. It remains an open question whether a secure choice exists.
Row Reduction Techniques for -Party Garbling
Recent advancements in maliciously secure garbling have significantly improved the efficiency of constant-round multi-party computation. Research in the field has primarily focused on reducing communication complexity through row reduction techniques and improvements to the preprocessing phase with the use of simpler correlations.
In this work, we present two contributions that reduce the communication complexity of state-of-the-art multi-party garbling with an arbitrary number of corruptions. First, we show how to achieve full row reduction for multi-party garbled circuits in HSS17-style protocols (Hazay et al., Asiacrypt'17 & JC'20) and in authenticated garbling (Yang et al., CCS'20), reducing the size of the garbled circuit per AND gate by 25% in each case. Achieving row reduction in multi-party garbling has been an open problem that was partly addressed by the work of Yang et al. for authenticated garbling. In our work, we show a full row reduction for both garbling approaches, thus addressing this open problem completely.
Second, drawing inspiration from the work of Dittmer et al. (Crypto 2022), we propose a new preprocessing protocol to obtain the required material for the garbling phase using large field triples that can be generated with sublinear communication. The new preprocessing significantly reduces the communication overhead of garbled circuits. Our optimizations result in a substantial reduction in communication compared to HSS17, as well as an improvement over the state-of-the-art authenticated garbling of Yang et al., for 3 parties in a circuit with 10 million AND gates.
Two Is All It Takes: Asymptotic and Concrete Improvements for Solving Code Equivalence
The Linear Code Equivalence (LCE) problem asks, for two given linear codes, to find a monomial transformation mapping one into the other. Algorithms solving LCE crucially rely on a (heuristic) subroutine which recovers the secret monomial from pairs of codewords that are mapped to each other under it. We greatly improve on the previously known bound by giving a constructive (heuristic) algorithm that recovers the secret monomial from any two pairs of such codewords for fields of sufficiently large prime order. We then show that this reduction in the number of required pairs enables the design of a more efficient algorithm for solving LCE. Our asymptotic analysis shows that this algorithm outperforms previous approaches for a wide range of parameters, including all parameters proposed across the literature. Furthermore, our concrete analysis reveals significant bit-security reductions for suggested parameters. Most notably, in the context of the LESS signature scheme, a second-round contender in the ongoing NIST standardization effort for post-quantum secure digital signatures, we obtain bit-security reductions of up to 24 bits.
MacaKey: Full-State Keyed Sponge Meets the Summation-Truncation Hybrid
The keyed sponge construction has benefited from various efficiency advancements over time, most notably leading to the possibility to absorb over the entire state, as in the full-state keyed sponge. However, squeezing has always remained limited to blocks smaller than the permutation size, as security is determined by the capacity c, the size of the non-squeezed state. In this work, we present Macakey, an improved version of the full-state keyed sponge that not only absorbs over the entire state but also squeezes over the entire state. The scheme combines ideas of the full-state keyed sponge with those of the summation-truncation hybrid of Gunsing and Mennink. We demonstrate that, with no sacrifice in generic security and using only c bits of extra storage, Macakey can significantly boost performance, particularly in scenarios requiring large amounts of output. For example, using the 320-bit Ascon permutation with a 256-bit capacity, Macakey outputs five times as many bits as the full-state keyed sponge.
Practical cryptanalysis of pseudorandom correlation generators based on quasi-Abelian syndrome decoding
Quasi-Abelian Syndrome Decoding (QA-SD) is a recently introduced generalization of Ring-LPN that uses multivariate polynomial rings. As opposed to Ring-LPN, it enables the use of small finite fields such as GF(3) and GF(4). It was introduced by Bombar et al. (Crypto 2023) in order to obtain pseudorandom correlation generators for Beaver triples over small fields. This theoretical work was turned into a concrete and efficient protocol called F4OLEage by Bombar et al. (Asiacrypt 2024) that allows several parties to generate Beaver triples over GF(2).
We propose efficient algorithms to solve the decoding problem underlying the QA-SD assumption. We observe that it reduces to a sparse multivariate polynomial interpolation problem over a small finite field where the adversary only has access to random evaluation points, a blind spot in the otherwise rich landscape of sparse multivariate interpolation. We develop new algorithms for this problem: using simple techniques, we interpolate polynomials with up to two monomials. By lifting the problem to the field of complex numbers and using convex optimization techniques inspired by the field of "compressed sensing", we can interpolate polynomials with more terms.
This enables us to break, in practice, parameters proposed by Bombar et al. at Crypto'23 and Asiacrypt'24 as well as by Li et al. at Eurocrypt'25 (IACR flagship conferences Grand Slam). In the case of the F4OLEage protocol, our implementation recovers all the secrets in a few hours with probability 60%. This not only invalidates the security proofs, but also yields real-life privacy attacks against multiparty protocols using the Beaver triples generated by the broken pseudorandom correlation generators.
Adaptively Secure IBE from Lattices with Asymptotically Better Efficiency
Current adaptively secure identity-based encryption (IBE) constructions from lattices are unable to achieve a good balance among the master public key size, secret key size, modulus and reduction loss. All existing lattice-based IBE schemes share a common restriction: the modulus is quadratic in the trapdoor norm.
In this work, we remove this restriction and present a new adaptively secure IBE scheme from lattices in the standard model, which improves the state-of-the-art construction proposed by Abla et al. (TCC 2021) and achieves asymptotically better efficiency. More precisely, we achieve the asymptotically minimal number of public vectors among all the existing schemes, along with a significantly smaller modulus compared to the scheme by Abla et al. (TCC 2021). Furthermore, our scheme enjoys the smallest Gaussian width of the secret key among all existing schemes and has the same tightness as Abla et al.'s scheme.
We propose a novel cross-multiplication design for our IBE scheme, along with several novel tools and techniques, including: (a) a homomorphic computation algorithm that outputs BGG+-style encoding with two distinct-norm trapdoors; (b) a sampling algorithm with hybrid Gaussian outputs; and (c) a partial rerandomization algorithm. These new tools and techniques are general and could find rich applications in lattice-based cryptography.
Adaptively Secure Blockchain-Aided Decentralized Storage Networks: Formalization and Generic Construction
This work revisits the current Decentralized Storage Network (DSN) definition to propose a novel general construction based on a UTxO-based ledger. To the best of our knowledge, this is the first adaptively secure UTxO-blockchain-aided DSN. More concretely, we revisit currently existing designs to thoroughly formalize the DSN definition and its security. Moreover, we present a general construction in which a client delegates data to a DSN that keeps custody of it during a jointly agreed period. Our newly proposed approach, leveraging the Extended UTxO (EUTxO) model, neatly allows the storage network to offer automatic verifiability, i.e., without any interaction of the data owner, via proofs published on the blockchain. In summary, this work presents a redesign of the DSN with the aid of an EUTxO-based blockchain by (1) putting forth a formal and rigorous description of a blockchain-aided DSN protocol, (2) offering a thorough description of a practical EUTxO-based DSN, and (3) detailing a security analysis showing that our protocol is adaptively secure by providing (rational) security guarantees.
SQIsign2DPush: Faster Signature Scheme Using 2-Dimensional Isogenies
Isogeny-based cryptography comprises cryptographic schemes whose security rests on the hardness of a mathematical problem called the isogeny problem; it is attracting attention as one of the candidates for post-quantum cryptography. A representative isogeny-based scheme is the signature scheme SQIsign, which was submitted to the NIST PQC standardization process for additional signatures. SQIsign has attracted much attention because it has among the shortest signature and key sizes of the NIST PQC standardization candidates. Recently, many new signature schemes using high-dimensional isogenies have been proposed, such as SQIsignHD, SQIsign2D-West, SQIsign2D-East, and SQIPrime. Last year, SQIsign advanced to Round 2 of the NIST competition and was updated to version 2.0 (we call it SQIsign-v2.0), which is based on SQIsign2D-West. SQIsign-v2.0 achieves smaller signature sizes and faster verification. However, its signing cost is relatively high. In this paper, we propose a new signature scheme, SQIsign2DPush, which has a smaller signing cost than SQIsign-v2.0 while keeping almost the same signature size and verification cost.
Efficient Pseudorandom Correlation Generators for Any Finite Field
Correlated randomness lies at the core of efficient modern secure multi-party computation (MPC) protocols. The cost of generating the correlated randomness required for the MPC online phase often constitutes a bottleneck in the overall protocol.
A recent paradigm of pseudorandom correlation generators (PCGs), initiated by Boyle et al. (CCS'18, Crypto'19), offers an appealing solution to this issue. In brief, each party is given a short PCG seed, which can be locally expanded into long correlated strings satisfying the target correlation. Among various types of correlations is oblivious linear evaluation (OLE), a fundamental and useful primitive for typical MPC protocols on arithmetic circuits.
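As a reminder of the target correlation, here is the OLE correlation itself, sampled trivially in the clear (a PCG would instead let each party expand a short seed into its half without learning the other party's values; the modulus below is an illustrative choice):

```python
import random

# An OLE correlation over a prime field F_p: party A holds (a, x),
# party B holds (b, y), such that a*b = x + y (mod p).
p = 2**31 - 1   # a convenient Mersenne prime modulus (illustrative)

def sample_ole():
    a, b, x = (random.randrange(p) for _ in range(3))
    y = (a * b - x) % p          # fix y so the correlation holds
    return (a, x), (b, y)        # A's share, B's share

(a, x), (b, y) = sample_ole()
assert (a * b) % p == (x + y) % p
```

A trusted dealer could hand out such tuples directly; the point of a PCG is to replace the dealer with short seeds plus local expansion, so the communication is sublinear in the number of correlations produced.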
Towards efficiently generating a large amount of OLE correlations, with applications to MPC protocols, we establish the following results:
(i) We propose a novel programmable PCG construction for OLE over any field. For generating OLE correlations, our construction requires low communication and computation. Previous works either have quadratic computation (Boyle et al., Crypto'19) or can only support fields of sufficiently large size (Bombar et al., Crypto'23).
(ii) We extend the above OLE construction to provide various types of correlations over any finite field. One fascinating application is an efficient PCG for two-party authenticated Boolean multiplication triples, with short seed sizes. To the best of our knowledge, such a correlation has not previously been realized with sublinear communication and quasi-linear computation.
(iii) In addition, the programmability admits efficient PCGs for multi-party Boolean triples, and thus the first efficient MPC protocol for Boolean circuits with silent preprocessing. In particular, we show that multi-party Boolean multiplication triples can be generated with low communication, whereas the state-of-the-art FOLEAGE (Asiacrypt'24) requires a broadcast channel and incurs higher communication.
(iv) Finally, we present efficient PCGs for circuit-dependent preprocessing, matrix multiplication triples, string OTs, etc. Compared to previous works, each has advantages in its own right.