Fuzzy Private Set Intersection from VOLE
Private set intersection (PSI) is a well-researched cryptographic primitive that allows two parties to compute the intersection of their input sets without revealing any information about items outside of the intersection. Fuzzy private set intersection is a relatively new variant of PSI, where items are not matched exactly but ``fuzzily''. Most commonly, items are points in $d$-dimensional integer space and a point is a fuzzy match to another if it lies within a ball of radius $\delta$ centered at this point, with respect to some distance metric.
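As a plain (non-private) illustration of this matching functionality, the predicate that such protocols compute obliviously can be written in a few lines; the symbols $d$, $\delta$, and $p$ below are the generic parameters, not this paper's specific notation.

```python
# Plaintext fuzzy-matching functionality (no privacy): a point q matches w
# if q lies within the radius-delta ball around w under the Minkowski (L_p)
# distance. Fuzzy PSI protocols compute this predicate obliviously.

def minkowski_distance(w, q, p):
    """L_p distance between two points in d-dimensional integer space."""
    if p == float("inf"):
        return max(abs(a - b) for a, b in zip(w, q))
    return sum(abs(a - b) ** p for a, b in zip(w, q)) ** (1 / p)

def is_fuzzy_match(w, q, delta, p=2):
    return minkowski_distance(w, q, p) <= delta

# Example: (3, 4, 0) lies exactly on the radius-5 L_2 ball around the origin
print(is_fuzzy_match((0, 0, 0), (3, 4, 0), delta=5, p=2))  # True
```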
Previous works either only support the infinity ($L_\infty$) distance metric and standard PSI functionality, or support general Minkowski ($L_p$, $p \geq 1$) distance metrics and realize richer functionalities but rely on expensive homomorphic encryption. Our work aims to bridge this gap by giving the first construction of a fuzzy PSI protocol for general Minkowski distance metrics relying on significantly cheaper operations during the online phase.
Our main building block is a novel fuzzy matching protocol based on an oblivious pseudorandom function (OPRF), which can be realized very efficiently from vector oblivious linear evaluation (VOLE). Our protocol is able to preserve the asymptotic complexity as well as the simplicity of the fuzzy matching protocol from van Baarsen and Pu (Eurocrypt '24), while being much more concretely efficient. Additionally, we achieve several asymptotic improvements by representing intervals succinctly. Finally, we present the first fuzzy PSI protocol for the $L_\infty$ distance that places no assumptions on the sets of points, while maintaining asymptotic complexities comparable to the state-of-the-art fuzzy PSI protocol.
Robust Threshold ECDSA with Online-Friendly Design in Three Rounds
Threshold signatures, especially ECDSA, enhance key protection by addressing the single-point-of-failure issue. Threshold signing can be divided into offline and online phases, based on whether the message is required. Schemes with low-cost online phases are referred to as ``online-friendly''. Another critical aspect of threshold ECDSA for real-world applications is robustness, which guarantees the successful completion of each signing execution whenever a threshold number of semi-honest participants is met, even in the presence of misbehaving signatories.
The state-of-the-art online-friendly threshold ECDSA without robustness was developed by Doerner et al. in S\&P'24, requiring only three rounds. Recent work by Wong et al. in NDSS'23 (WMY23) and NDSS'24 (WMC24) achieves robustness but demands additional communication rounds (7 and 4, respectively) or incurs costly operations in the online phase, such as computations over a homomorphic encryption scheme.
This paper presents the first three-round threshold ECDSA scheme with both robustness and an online-friendly design. The online phase of our scheme relies solely on several elliptic-curve group operations, which are 2 to 3 orders of magnitude less computationally intensive than those based on linearly homomorphic encryption schemes. We implement our protocol and conduct a comprehensive comparison with WMY23 and WMC24. Benchmark results show that the online phase of our scheme is faster than that of WMY23 and hundreds of times faster than that of WMC24. Lastly, we demonstrate that our techniques can be extended to construct an online-friendly and robust three-round threshold BBS+ scheme.
Energy Consumption Framework and Analysis of Post-Quantum Key-Generation on Embedded Devices
The emergence of quantum computing and Shor's algorithm necessitates an imminent shift from current public-key cryptography techniques to post-quantum robust techniques. NIST has responded by standardising Post-Quantum Cryptography (PQC) algorithms, with ML-KEM (FIPS-203) slated to replace ECDH (Elliptic Curve Diffie-Hellman) for key exchange. A key practical concern for PQC adoption is energy consumption. This paper introduces a new framework for measuring PQC energy consumption on a Raspberry Pi during key generation. The framework covers both available traditional methods and the newly standardised ML-KEM algorithm via the commonly utilised OpenSSL library.
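A minimal timing harness in this spirit might look as follows. This is a hypothetical sketch: it assumes an OpenSSL build that exposes ML-KEM key generation through genpkey (OpenSSL 3.5 or later), measures wall-clock time only, and leaves actual energy measurement to external power instrumentation.

```python
# Hypothetical benchmarking sketch: times key generation for a classical and
# a post-quantum algorithm via the OpenSSL CLI. Assumes an OpenSSL version
# that supports ML-KEM (3.5+). Energy = average power x time would require
# an external power meter; only wall-clock time is measured here.
import subprocess
import time

ALGORITHMS = {
    "ECDH P-256": ["openssl", "genpkey", "-algorithm", "EC",
                   "-pkeyopt", "ec_paramgen_curve:P-256"],
    "ML-KEM-768": ["openssl", "genpkey", "-algorithm", "ML-KEM-768"],
}

def time_keygen(cmd, iterations=100):
    start = time.perf_counter()
    for _ in range(iterations):
        subprocess.run(cmd + ["-out", "/dev/null"], check=True,
                       stderr=subprocess.DEVNULL)
    return (time.perf_counter() - start) / iterations

for name, cmd in ALGORITHMS.items():
    print(f"{name}: {time_keygen(cmd) * 1000:.2f} ms per key generation")
```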
SubLogarithmic Linear Time SNARKs from Compressed Sum-Check
We leverage recently proposed multilinear polynomial commitment schemes with linear-time prover and constant proof size to reduce the communication complexity of the classical sum-check protocol for multivariate polynomials. Specifically, for a degree-$d$ multivariate polynomial in $\mu$ variables that can be decomposed into multilinear polynomials, we exhibit a sum-check protocol with linear prover cost and sublogarithmic communication. This enables us to break the logarithmic-communication barrier for this ubiquitous primitive used in multilinear SNARKs, and implies the first construction of prover-efficient (linear proving cost) SNARKs using multilinear polynomials with sublogarithmic proof size. Currently, lower communication is only obtained by SNARKs based on univariate polynomials, which incur superlinear prover complexity due to inherent polynomial multiplication.
Concretely, using compressed sum-check in the HyperPlonk protocol allows us to substantially reduce the proof size, making it competitive with univariate SNARKs like PLONK.
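For orientation, the classical sum-check interaction being compressed here can be sketched compactly; the toy below runs the standard protocol for a multilinear polynomial over a prime field and shows the per-round communication (two field elements), not this paper's compression.

```python
# Toy sum-check for a multilinear polynomial f over GF(P), given by its
# evaluation table on the Boolean hypercube {0,1}^mu (first variable is the
# most significant index bit). Per round, the prover sends the degree-1
# round polynomial g (two field elements) -- the communication that
# compressed sum-check reduces further.
import random

P = 2**61 - 1  # a Mersenne prime field

def sumcheck(table, mu):
    claim = sum(table) % P
    for _ in range(mu):
        half = len(table) // 2
        lo, hi = table[:half], table[half:]     # f(0, rest), f(1, rest)
        g0, g1 = sum(lo) % P, sum(hi) % P       # g(0) and g(1)
        assert (g0 + g1) % P == claim           # verifier's round check
        r = random.randrange(P)                 # verifier's challenge
        claim = (g0 + r * (g1 - g0)) % P        # new claim: g(r)
        table = [(a + r * (b - a)) % P for a, b in zip(lo, hi)]  # fold
    assert table[0] == claim                    # f(r_1, ..., r_mu) check
    return True

mu = 3
table = [random.randrange(P) for _ in range(2**mu)]
print(sumcheck(table, mu))  # True: mu rounds, 2 field elements each
```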
New Framework for Structure-Aware PSI From Distributed Function Secret Sharing
Private set intersection (PSI) allows two parties to jointly compute the intersection of their private sets without revealing any additional information. Structure-aware PSI (sa-PSI), introduced by Garimella et al. (Crypto'22), is a variant where Alice's input set has a publicly known structure and Bob's input set remains unstructured, enabling new applications like fuzzy PSI. Their construction relies solely on lightweight tools such as symmetric-key primitives and oblivious transfer (OT) extension. Since then, there has been active research on sa-PSI based on lightweight cryptography. Notably, recent work by Garimella et al. (Crypto'24) achieves sa-PSI with both communication and computation costs scaling only with the description size of Alice's set, rather than its potentially large cardinality. However, this line of work remains largely theoretical, lacking efficient concrete implementations.
In this work, we close this gap by presenting a new framework for sa-PSI that achieves practical efficiency. We identify and eliminate a hidden multiplicative overhead proportional to the security parameter (e.g., 128) in prior symmetric-key-based sa-PSI constructions. A key building block of our new framework is a distributed Function Secret Sharing (dFSS) key generation protocol tailored to the structure of Alice's set, which may be of independent interest. To demonstrate the practicality of our framework, we extend our dFSS protocol to support incremental evaluation, introduce new techniques for spatial hashing, and develop several new optimization techniques, including reducing the exponential dependence on dimension and enabling load balancing between the two parties.
We instantiate our framework for structured sets defined by unions of $d$-dimensional balls, and implement our protocols using only lightweight symmetric-key primitives and OT extension. Our experiments show concrete performance improvements, with substantial speedups in computation and reductions in communication in low-dimensional, large-radius settings compared to existing public-key-based fuzzy PSI protocols by van Baarsen & Pu (Eurocrypt'24) and Gao et al. (Asiacrypt'24).
Covert Attacks on Machine Learning Training in Passively Secure MPC
Secure multiparty computation (MPC) allows data owners to train machine learning models on combined data while keeping the underlying training data private. The MPC threat model either considers an adversary who passively corrupts some parties without affecting their overall behavior, or an adversary who actively modifies the behavior of corrupt parties. It has been argued that in some settings, active security is not a major concern, partly because of the potential risk of reputation loss if a party is detected cheating.
In this work we show explicit, simple, and effective attacks that an active adversary can run on existing passively secure MPC training protocols, while keeping essentially zero risk of the attack being detected. The attacks we show can compromise both the integrity and privacy of the model, including attacks reconstructing exact training data.
Our results challenge the belief that a threat model excluding malicious behavior by the involved parties may be reasonable in the context of privacy-preserving machine learning (PPML), motivating the use of actively secure protocols for training.
Authenticated Key Exchange Protocol with Remote Randomness
A conventional Authenticated Key Exchange (AKE) protocol consumes fresh random coins from the local random source. However, recent studies of bad randomness expose the vulnerability of some AKE protocols to small subgroup attacks when the random coins are manipulated or repeated. It is important to ensure that bad randomness from one random source does not affect the security of the AKE protocol as a whole.
Thus, we introduce the notion of remote randomness: additional ephemeral keys generated by a fresh remote random source are added to the AKE protocol. In this paper, we argue that the rise of cloud computing encourages high-speed internal data transfer within server clusters and virtual machines, including the entropy-pool data used in our remote-randomness AKE protocols. We present a remote-randomness modification to the HMQV protocol to demonstrate its resilience under the manipulation of local random sources. We then provide a new security model that accounts for remote randomness and show that the modified AKE protocol is secure under our new model.
The Security of ML-DSA against Fault-Injection Attacks
Deterministic signatures are often used to mitigate the risks associated with poor-quality randomness, where the randomness in the signing process is generated by a pseudorandom function that takes a message as input. However, some studies have shown that such signatures are vulnerable to fault-injection attacks. To strike a balance, recent signature schemes often adopt "hedged" randomness generation, where the pseudorandom function takes both a message and a nonce as input. Aranha et al. (EUROCRYPT 2020) investigated the security of hedged Fiat-Shamir signatures against 1-bit faults and demonstrated security for certain types of bit-tampering faults. Grilo et al. (ASIACRYPT 2021) extended this proof to the quantum random oracle model. Last year, NIST standardized the lattice-based signature scheme ML-DSA, which adopts the hedged Fiat-Shamir with aborts. However, existing security proofs against bit-tampering faults do not directly apply, as Aranha et al. left this as an open problem. To address this gap, we analyze the security of ML-DSA against multi-bit fault-injection attacks. We provide a formal proof of security for a specific class of intermediate values, showing that faults at these points cannot be exploited. Furthermore, to highlight the infeasibility of stronger fault resilience, we present key-recovery attacks that exploit signatures generated under fault injection at the other intermediate values.
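The hedged derivation discussed here can be illustrated generically; the sketch below uses SHAKE256 as the pseudorandom function and invented parameter names, so it shows the paradigm rather than the exact ML-DSA byte layout.

```python
# Sketch of hedged randomness generation: the per-signature randomness is a
# PRF of secret key material, a fresh nonce, and the message. If the nonce
# source is broken (repeats), security degrades toward the deterministic
# case rather than failing outright. Illustrative only: the names and
# layout are NOT the exact ML-DSA encoding.
import hashlib
import os

def hedged_randomness(signing_key: bytes, message: bytes) -> bytes:
    nonce = os.urandom(32)  # possibly low-quality randomness
    h = hashlib.shake_256()
    h.update(signing_key)
    h.update(nonce)
    h.update(message)
    return h.digest(64)     # fed into the (Fiat-Shamir with aborts) signer

key = os.urandom(32)
print(hedged_randomness(key, b"msg").hex()[:16])
print(hedged_randomness(key, b"msg").hex()[:16])  # differs due to the nonce
```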
Rock and a Hard Place: Attack Hardness in Neural Network-assisted Side Channel Analysis
Side-channel analysis (SCA) has become a crucial area in ensuring the security of hardware implementations against potential vulnerabilities. With advancements in machine learning (ML) and artificial intelligence (AI), neural network (NN)-assisted techniques for SCA have demonstrated significant effectiveness. However, a fundamental question remains unanswered: are traces corresponding to different subkeys equally hard to attack? This paper addresses this issue by leveraging explainable AI techniques to analyze the hardness levels and, consequently, the root cause of hardness. To systematically investigate this problem, we reintroduce into SCA hardness metrics known from the ML community, including query hardness (QH), log odds (LO), and matrix-based Rényi entropy (MRE). The challenge in this regard is that (virtually all) hardness metrics in ML cannot be adopted as they are, because ML and SCA have conflicting goals: boosting accuracy versus improving key rank. Through careful experimentation, we identify the shortcomings of QH and LO in SCA and recommend MRE as a suitable hardness metric for SCA.
We also study how hardness has been seen in SCA, where recent work has suggested the influence of class “labels” on the attack difficulty. Employing rigorous evaluation, our paper demonstrates that no statistically significant evidence can be found to support this claim. This leads us to the question of how much traces’ time samples affect the inherent hardness in distinguishing key candidates. Our novel explainable AI (XAI) approach not only answers this, but also makes a link between hardness and rank as the common performance metric in SCA. Our findings indicate that hardness values are influenced mainly by time samples rather than specific key labels. Furthermore, we examine whether hardness captures intrinsic properties of the traces, specifically, their lack of separability in feature space due to their inherent similarities. To this end, we introduce, for the first time in the context of SCA, the use of maximum mean discrepancy (MMD) as a principled metric. MMD effectively links trace hardness with attack difficulty by quantifying distributional differences induced by traces’ time samples. In addition to visualization techniques showing the root cause of hardness based on MMD, we employ XAI to explain the connection between MMD and attack hardness. Our proposed methodology enhances the understanding of attack difficulty in SCA and contributes to developing more robust evaluation metrics for profiling attacks.
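Because maximum mean discrepancy is a standard statistical quantity, its use on sets of traces can be sketched directly; the following is a generic RBF-kernel estimator, not the authors' exact pipeline.

```python
# Generic (V-statistic) MMD^2 estimate between two sets of traces X and Y
# (rows = traces, columns = time samples) with an RBF kernel: larger MMD
# means more separable trace distributions, i.e., an easier distinguishing
# task. Generic implementation, not the paper's exact pipeline.
import numpy as np

def rbf_kernel(A, B, sigma):
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2 * rbf_kernel(X, Y, sigma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (100, 50))  # traces under one key candidate
Y = rng.normal(0.3, 1.0, (100, 50))  # traces under another
print(f"MMD^2 = {mmd2(X, Y):.4f}")
```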
On the Fiat–Shamir Security of Succinct Arguments from Functional Commitments
We study the security of a popular paradigm for constructing SNARGs, closing a key security gap left open by prior work. The paradigm consists of two steps: first, construct a public-coin succinct interactive argument by combining a functional interactive oracle proof (FIOP) and a functional commitment scheme (FC scheme); second, apply the Fiat–Shamir transformation in the random oracle model. Prior work did not consider this generalized setting nor prove the security of this second step (even in special cases).
We prove that the succinct argument obtained in the first step satisfies state-restoration security, thereby ensuring that the second step does in fact yield a succinct non-interactive argument. This is provided the FIOP satisfies state-restoration security and the FC scheme satisfies a natural state-restoration variant of function binding (a generalization of position binding for vector commitment schemes).
Moreover, we prove that notable FC schemes satisfy state-restoration function binding, allowing us to establish, via our main result, the security of several SNARGs of interest (in the random oracle model). This includes a security proof of Plonk, in the ROM, based on ARSDH (a falsifiable assumption).
A Generic Framework for Practical Lattice-Based Non-interactive Publicly Verifiable Secret Sharing
Non-interactive publicly verifiable secret sharing (PVSS) schemes enable the decentralized (re-)sharing of secrets in adversarial environments, allowing anyone to verify the correctness of distributed shares. Such schemes are essential for large-scale decentralized applications, including committee-based systems that require both transparency and robustness. However, existing PVSS schemes rely on group-based cryptography, leaving them vulnerable to quantum attacks and limiting their suitability for post-quantum applications.
In this work, we propose the first practical, fully lattice-based, non-interactive PVSS scheme, grounded on standard lattice assumptions for post-quantum security. At the heart of our design is a generic framework that transforms vector commitments and linear encryption schemes into efficient PVSS protocols. We enhance vector commitments by incorporating functional hiding and proof of smallness, ensuring that encrypted shares are both verifiable and privacy-preserving. Our construction introduces two tailored lattice-based encryption schemes, each supporting efficient proofs of decryption correctness. This framework provides strong verifiability guarantees while maintaining low proof sizes and computational efficiency, making it suitable for systems with large numbers of participants.
Exclusive Ownership of Fiat-Shamir Signatures: ML-DSA, SQIsign, LESS, and More
Exclusive ownership (EO) security is a feature of signature schemes that prevents adversaries from "stealing" an honestly generated signature by finding a new public key which verifies said signature. It is one of the beyond-unforgeability features (BUFF) which were declared to be desirable features by NIST. The BUFF transform allows one to generically achieve exclusive ownership (and other properties) at the cost of an increased signature size. In this work, we study the EO security of (different variants of) Fiat-Shamir signatures. As our main result, we show that the commonly used variant of Fiat-Shamir signatures (where signatures consist of challenge-response tuples) with λ-bit challenges can achieve about λ-bit EO security through its implicit usage of the BUFF transform—this presents a significant improvement over existing results that only provide λ/2 bits of EO security. This benefit comes without an increase in signature size. For other variants of Fiat-Shamir signatures, we show worse bounds, which nevertheless improve upon existing results. Finally, we apply our results to several signature schemes: SQIsign and LESS (both round-2 NIST candidates); ML-DSA (NIST standard); CSI-FiSh; and Schnorr signatures. This shows that all these schemes achieve significantly better bounds regarding their EO security compared to existing results.
Improved Noise Bound in BFV Homomorphic Encryption and Its Application to Multiplication
Fully Homomorphic Encryption (FHE) enables computations on encrypted data without requiring decryption. However, each computation increases the noise level, which can eventually cause decryption failures once a certain threshold is reached. In particular, homomorphic multiplication significantly amplifies noise in the ciphertext. In this work, we revisit the Ring Learning With Errors (RLWE) based encryption scheme proposed by Fan et al. and present an optimized noise-growth approach by swapping the sample spaces of the secret key and the error distribution. Thereafter, we revisit the BFV homomorphic multiplication proposed by Kim et al. (ASIACRYPT'21) and present an optimized noise bound. We then empirically check the hardness of the proposed scheme using the lattice estimator. Our analysis demonstrates that the proposed method achieves more than 128-bit security and a lower noise bound than existing techniques.
A New Approach for LPN-based Pseudorandom Functions: Low-Depth and Key-Homomorphic
We give new constructions of low-depth pseudorandom functions (PRFs) from (variants of) the Learning Parity with Noise (LPN) assumption. Prior to our work, the only low-depth PRF from LPN-style assumptions was due to Boyle et al. (FOCS 2020), who constructed a weak PRF from a new heuristic variant of LPN called variable-density LPN. We give the following three results:
(1) A low-depth weak PRF from standard LPN.
(2) A (strong) low-depth encoded-input PRF (EI-PRF) from sparse LPN. An EI-PRF is a PRF whose input domain is restricted to an efficiently sampleable and recognizable set. The input encoding itself can be computed in low depth, implying a strong PRF computable in low depth.
(3) A (strong) low-depth PRF from a (new, heuristic) seeded LPN assumption. In our assumption, columns of the public LPN matrix are generated by a $k$-wise independent distribution. Supporting evidence for the security of the assumption is given by showing resilience to linear tests.
As a bonus, all of our PRF constructions are key-homomorphic, an algebraic property that is useful in many symmetric-cryptography applications. No previously-known LPN-based PRFs are key-homomorphic, even if we completely ignore depth-efficiency. In fact, our constructions support key homomorphism for linear functions (and not only additive), a property that no previously-known PRF satisfies, including ones from LWE.
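As a worked statement of this property (our generic formulation; each construction realizes it over its own algebraic structure): a PRF family $\{F_k\}$ is key-homomorphic for linear functions if $F_{\alpha k_1 + \beta k_2}(x) = \alpha \cdot F_{k_1}(x) + \beta \cdot F_{k_2}(x)$ for all inputs $x$ and scalars $\alpha, \beta$; additive key homomorphism is the special case $\alpha = \beta = 1$.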
Additionally, all of our PRF constructions nicely fit into the substitution-permutation network (SPN) design framework used in modern block ciphers (e.g. AES). No prior PRF construction that has a reduction to a standard cryptographic assumption (let alone LPN) has an SPN-like structure.
Technically, all of our constructions leverage a new recursive derandomization technique for LPN instances, which allows us to generate LPN error terms deterministically. This technique is inspired by a related idea from the LWE literature (Kim, EUROCRYPT 2020) for which devising an LPN analogue has been an outstanding open problem.
SQIsign2DPush: Faster Signature Scheme Using 2-Dimensional Isogenies
Isogeny-based cryptography comprises cryptographic schemes whose security is based on the hardness of a mathematical problem called the isogeny problem, and is attracting attention as one of the candidates for post-quantum cryptography. A representative isogeny-based scheme is the signature scheme SQIsign, which was submitted to the NIST PQC standardization process for additional signatures. SQIsign has attracted much attention because of its very short signature and key sizes among the candidates. Recently, many new signature schemes using high-dimensional isogenies have been proposed, such as SQIsignHD, SQIsign2D-West, SQIsign2D-East, and SQIPrime. Last year, SQIsign advanced to Round 2 of the NIST competition and was updated to version 2.0 (we call it SQIsign-v2.0), which is based on SQIsign2D-West. SQIsign-v2.0 achieves smaller signature sizes and faster verification. However, its signing cost is relatively high. In this paper, we propose a new signature scheme, SQIsign2DPush, which has a smaller signing cost than SQIsign-v2.0 while keeping the signature size and verification cost almost the same.
InstaRand: Instantly Available and Instantly Verifiable On-chain Randomness
Web3 applications, such as on-chain gaming, require unbiased and publicly verifiable randomness that can be obtained quickly and cost-effectively whenever needed. Existing services, such as those based on Verifiable Random Functions (VRF), incur network delays and high fees due to their highly interactive nature. FlexiRand [CCS 2023] addressed these problems by hiding the output of the VRF and using it as a seed to derive many randomnesses locally. These randomnesses are instantly available for use. However, they cannot be verified independently (or instantly) without disclosing the seed, leaving scope for malicious actors to cheat.
To solve this problem, we introduce a new notion, called instantly-verifiable VRF (iVRF), which enables the generation of many randomnesses from one VRF output seed such that each of them is verifiable independently; this enables a solution that generates randomnesses which are instantly available and also instantly verifiable.
To instantiate this notion, we propose a generic construction called InstaRand: it combines any (possibly distributed) VRF at the server's end with another VRF at the client's end to construct an iVRF. Our specific instantiation uses the BLS-based GLOW-DVRF [Euro S&P 2021] at the server's end and the DDH-based VRF of Goldberg et al. [RFC 2023] at the client's end. We use the universal composability framework to analyze the security. Moreover, due to its generality, InstaRand can be instantiated with any post-quantum secure VRF to yield a post-quantum secure iVRF.
Our experiments demonstrate that our instantiation of InstaRand is practical. The client incurs a one-time cost to generate the seed (the server's VRF output) by querying the GLOW-DVRF servers once. Once the seed is set up, the client locally generates each pseudorandom value on demand, avoiding the client-server round-trip delay, and each value can be independently verified. This yields substantial improvements in output generation and verification cost over existing solutions.
Blinding Post-Quantum Hash-and-Sign Signatures
Blind signature schemes are essential for privacy-preserving applications such as electronic voting, digital currencies or anonymous credentials. In this paper, we revisit Fischlin's framework for round-optimal blind signature schemes and its recent efficient lattice-based instantiations. Our proposed framework compiles any post-quantum hash-and-sign signature scheme into a blind signature scheme. The resulting scheme ensures blindness by design and achieves one-more unforgeability, relying solely on the unforgeability of the underlying signature scheme and the random oracle model.
To achieve this we introduce the notion of commit-append-and-prove (CAP) systems, which generalizes traditional commit-and-prove systems by making their commitments updatable before proving. This building block allows us to overcome the technical challenges encountered when generalizing previous variants of Fischlin's framework to arbitrary hash-and-sign signature schemes. We provide efficient CAP system instantiations based on recent MPC-in-the-Head techniques.
We showcase our framework by constructing blind versions of UOV and Wave, thereby introducing the first practical blind signatures based on multivariate cryptography and code-based cryptography. Our blind UOV signatures range from 3.8 KB to 11 KB, significantly outperforming previous post-quantum blind signatures, such as the 22 KB lattice-based blind signatures, which were the most compact until now.
Achieving "beyond CCA1" security for linearly homomorphic encryption, without SNARKs?
In the wake of Manulis and Nguyen's Eurocrypt'24 paper, new CCA security notions, vCCA and vCCAD, and associated construction blueprints have been proposed to leverage either CPA or CPAD secure FHE beyond the CCA1 security barrier. These two notions are the strongest CCA security notions so far achievable, respectively, by correct and approximate homomorphic schemes. However, the only known construction strategies intimately require advanced SNARK machinery, undermining their practicality. In this context, this paper is an attempt to achieve these advanced CCA security notions in the restricted case of linearly homomorphic encryption, without resorting to SNARKs. To do so, we investigate the relationship between the Linear-Only Homomorphism (LOH) assumption, an assumption that has been used for more than a decade at the core of several proof-of-knowledge constructions, and these two recent security notions (vCCA and vCCAD). On the bright side, when working under the correctness assumption, we establish that the LOH property is sufficient to achieve vCCA security in both the private- and public-key settings. In the public-key setting, we further show that a surprisingly simple and previously known Paillier-based construction also achieves this level of security, at only twice the cost of the baseline scheme. We then turn our attention to LWE-based schemes, for which the Pandora's box of decryption errors opens up. In the private-key setting, we are able to achieve CPAD and vCCAD security, but only in a fairly restrictive non-adaptive setting, in which vCCAD collapses onto a weak relaxation of CCA1. Finally, we achieve adaptive vCCAD security provided that the number of ciphertexts given to the adversary is suitably restricted. While bridging the gap towards credible practicality requires further work, this is a first step towards obtaining linearly homomorphic schemes that achieve these recent CCA security notions by means of relatively lightweight machinery only.
MacaKey: Full-State Keyed Sponge Meets the Summation-Truncation Hybrid
The keyed sponge construction has benefited from various efficiency advancements over time, most notably leading to the possibility to absorb over the entire state, as in the full-state keyed sponge. However, squeezing has always remained limited to blocks smaller than the permutation size, as security is determined by the capacity c, the size of the non-squeezed state. In this work, we present Macakey, an improved version of the full-state keyed sponge that not only absorbs over the entire state but also squeezes over the entire state. The scheme combines ideas of the full-state keyed sponge with those of the summation-truncation hybrid of Gunsing and Mennink. We demonstrate that, with no sacrifice in generic security and using only c bits of extra storage, Macakey can significantly boost performance, particularly in scenarios requiring large amounts of output. For example, using the 320-bit Ascon permutation with a 256-bit capacity, Macakey outputs five times as many bits as the full-state keyed sponge.
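The final claim can be checked directly from the stated parameters: with permutation width $b = 320$ bits and capacity $c = 256$ bits, the full-state keyed sponge squeezes $r = b - c = 320 - 256 = 64$ bits per permutation call, whereas squeezing the entire state yields $b = 320$ bits per call, i.e., $320/64 = 5$ times as many output bits.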
Practical cryptanalysis of pseudorandom correlation generators based on quasi-Abelian syndrome decoding
Quasi-Abelian Syndrome Decoding (QA-SD) is a recently introduced generalization of Ring-LPN that uses multivariate polynomial rings. As opposed to Ring-LPN, it enables the use of small finite fields such as GF(3) and GF(4). It was introduced by Bombar et al. (Crypto 2023) in order to obtain pseudorandom correlation generators for Beaver triples over small fields. This theoretical work was turned into a concrete and efficient protocol called F4OLEage by Bombar et al. (Asiacrypt 2024) that allows several parties to generate Beaver triples over GF(2).
We propose efficient algorithms to solve the decoding problem underlying the QA-SD assumption. We observe that it reduces to a sparse multivariate polynomial interpolation problem over a small finite field where the adversary only has access to random evaluation points, a blind spot in the otherwise rich landscape of sparse multivariate interpolation. We develop new algorithms for this problem: using simple techniques, we interpolate polynomials with up to two monomials. By sending the problem to the field of complex numbers and using convex optimization techniques inspired by the field of “compressed sensing”, we can interpolate polynomials with more terms.
This enables us to break in practice the parameters proposed by Bombar et al. at Crypto'23 and Asiacrypt'24 as well as by Li et al. at Eurocrypt'25 (an IACR flagship-conference Grand Slam). In the case of the F4OLEage protocol, our implementation recovers all the secrets in a few hours with probability 60%. This not only invalidates the security proofs, but also yields real-life privacy attacks against multiparty protocols using the Beaver triples generated by the broken pseudorandom correlation generators.
Obfuscation of Unitary Quantum Programs
Program obfuscation aims to hide the inner workings of a program while preserving its functionality. In the quantum setting, recent works have obtained obfuscation schemes for specialized classes of quantum circuits. For instance, Bartusek, Brakerski, and Vaikuntanathan (STOC 2024) constructed a quantum state obfuscation scheme, which supports the obfuscation of pseudo-deterministic quantum programs, represented as quantum states, with classical inputs and outputs, in the classical oracle model.
In this work, we improve upon existing results by constructing the first quantum state obfuscation scheme for unitary (or approximately unitary) quantum programs supporting quantum inputs and outputs in the classical oracle model. At the core of our obfuscation scheme are two novel ingredients: a functional quantum authentication scheme that allows key holders to learn specific functions of the authenticated quantum state with simulation-based security, and a compiler that represents an arbitrary quantum circuit as a projective linear-plus-measurement quantum program described by a sequence of non-adaptive Clifford gates interleaved with adaptive and compatible measurements.
SPEEDY: Caught at Last
SPEEDY is a family of ultra-low-latency block ciphers designed by Leander et al. in 2021. In 2023, Boura et al. proposed a differential attack on the full 7-round variant, SPEEDY-7-192. However, shortly thereafter, Beyne and Neyt demonstrated that this attack was invalid, as the dominant differential characteristic it relied upon had probability zero. A similar issue affects another differential attack proposed the same year by Wang et al., which also targets SPEEDY-7-192 and suffers from the same flaw. As a result, although SPEEDY-7-192 was initially believed to be broken, it remained unbroken in practice, and the question of finding a valid attack on this cipher remained an open problem. In this work, we resolve this problem by presenting the first valid differential attack on SPEEDY-7-192. We verify the validity of our distinguisher using the quasidifferential framework. Moreover, our search for the differential distinguisher is significantly more rigorous than in the previous works, allowing us to explore a larger portion of the search space. We also fully exploit probabilistic extensions of the distinguisher to identify optimal parameters for the key recovery step. Our attack on SPEEDY-7-192 has data and time complexities of 2^{186.36} encryption calls and a memory complexity of 2^{84} 192-bit states. In addition, we present differential attacks on 4-round SPEEDY-5-192 and 5-round SPEEDY-6-192 which currently represent the best attacks against these smaller variants.
At the Top of the Hypercube -- Better Size-Time Tradeoffs for Hash-Based Signatures
Hash-based signatures have been studied for decades and have recently gained renewed attention due to their post-quantum security. At the core of the most prominent hash-based signature schemes, XMSS and SPHINCS+, lies a one-time signature scheme based on hash chains due to Winternitz. In this scheme, messages are encoded into vectors of positions (i.e., vertices in a hypercube) in the hash chains, and the signature contains the respective chain elements. The encoding process is crucial for the efficiency and security of this construction. In particular, it determines a tradeoff between signature size and computational costs. Researchers have been trying to improve this size-time tradeoff curve for decades, but all improvements have been arguably marginal.
In this work, we revisit the encoding process with the goal of minimizing verification costs and signature sizes. As our first result, we present a novel lower bound for the verification cost given a fixed signature size. Our lower bound is the first to directly apply to general encodings including randomized, non-uniform, and non-injective ones.
Then, we present new encodings and prove their security. Inspired by our lower bound, these encodings follow a counterintuitive approach: we map messages non-uniformly into the top layers of a much bigger hypercube than needed, but the encoding itself has (hard-to-find) collisions. With this, we get a 20% to 40% improvement in the verification cost of the signature while keeping the same security level and the same size.
Our constructions can be directly plugged into any signature scheme based on hash chains, which includes SPHINCS+ and XMSS.
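To make the chain-based mechanism concrete, here is a textbook Winternitz-style sketch with a toy encoding; real schemes (including those inside XMSS and SPHINCS+) add a checksum to the encoding to prevent forgeries, which this illustration omits.

```python
# Toy Winternitz-style one-time signature over hash chains. A message is
# encoded as positions (x_1, ..., x_v), each in [0, w-1] (a vertex of a
# hypercube); the signature reveals the x_i-th chain element and the
# verifier walks the remaining w-1-x_i steps. The encoding governs the
# size-time tradeoff. Textbook sketch WITHOUT the required checksum.
import hashlib
import os

W, V = 16, 8  # chain length w, number of chains v (4 bits per chain)

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(x: bytes, steps: int) -> bytes:
    for _ in range(steps):
        x = H(x)
    return x

def keygen():
    sk = [os.urandom(32) for _ in range(V)]
    return sk, [chain(s, W - 1) for s in sk]   # public key: chain tops

def encode(msg: bytes):                         # toy encoding: 4-bit chunks
    digest = hashlib.sha256(msg).digest()[:V // 2]
    return [b >> 4 for b in digest] + [b & 0xF for b in digest]

def sign(sk, msg):
    return [chain(s, x) for s, x in zip(sk, encode(msg))]

def verify(pk, msg, sig):
    return all(chain(s, W - 1 - x) == p
               for s, x, p in zip(sig, encode(msg), pk))

sk, pk = keygen()
print(verify(pk, b"hello", sign(sk, b"hello")))  # True
```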
Bootstrapping GBFV with CKKS
The Generalized BFV [Geelen and Vercauteren; Eurocrypt'25] is an efficient fully homomorphic encryption scheme that supports integer computations over large cyclotomic moduli. However, the only known bootstrapping approach cannot support large precision as it uses BFV linear transformation as a subroutine. In this work, we introduce a GBFV bootstrapping that relies on CKKS bootstrapping, as in the BFV bootstrapping from CKKS [Kim et al.; CCS'24]. The new bootstrapping can handle arbitrary precision, notably bootstrapping the CLPX scheme [Chen et al.; CT-RSA'18] for the first time, and handles large plaintext moduli in a matter of seconds. In addition, we introduce conversions between GBFV and CKKS and discuss their impact.
Adaptively Secure Blockchain-Aided Decentralized Storage Networks: Formalization and Generic Construction
This work revisits the current Decentralized Storage Network (DSN) definition to propose a novel general construction based on a UTxO-based ledger. To the best of our knowledge, this is the first adaptively secure UTxO blockchain-aided DSN. More concretely, we revisit currently existing designs to thoroughly formalize the DSN definition and its security. Moreover, we present a general construction in which a client delegates data to a DSN that keeps custody of it during a jointly agreed period. Our newly proposed approach, leveraging the Extended UTxO (EUTxO) model, neatly allows the storage network to offer automatic verifiability, i.e., without any interaction of the data owner, via proofs published in the blockchain. In summary, this work presents a redesign of the DSN with the aid of an EUTxO-based blockchain, by (1) putting forth a formal and rigorous description of a blockchain-aided DSN protocol, (2) offering a thorough description of a practical EUTxO-based DSN, and (3) detailing a security analysis showing that our protocol is adaptively secure by providing (rational) security guarantees.
PaCo: Bootstrapping for CKKS via Partial CoeffToSlot
We introduce PaCo, a novel and efficient bootstrapping procedure for the CKKS homomorphic encryption scheme, where PaCo stands for “(Bootstrapping via) Partial CoeffToSlot”. At a high level, PaCo reformulates the CKKS decryption equation in terms of blind rotations and modular additions. This reformulated decryption circuit is then evaluated homomorphically within the CKKS framework. Our approach makes use of the circle group in the complex plane to simulate modular additions via complex multiplication, and utilizes alternative polynomial ring structures to support blind rotations. These ring structures are enabled by a variant of the CoeffToSlot operation, which we call a partial CoeffToSlot. This yields a new bootstrapping approach within CKKS, achieving a computational complexity which is logarithmic in the number of complex slots. We further introduce a parallelized variant that enables bootstrapping over all CKKS slots with enhanced throughput, highlighting PaCo’s suitability for practical and large-scale homomorphic applications. In addition to the bootstrapping technique itself, we develop several supporting tools — particularly in the context of bit-reversing and alternative ring structures for CKKS — which can be of independent interest to the community. Finally, a proof-of-concept implementation confirms that PaCo achieves performance competitive with state-of-the-art methods for CKKS bootstrapping.
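The circle-group idea mentioned above rests on a simple identity that is easy to check numerically; the snippet below verifies the generic mathematics only and has nothing to do with the PaCo implementation itself.

```python
# Modular addition via complex multiplication on the unit circle: encoding
# a residue a mod t as e^(2*pi*i*a/t) turns multiplication of encodings
# into addition of residues mod t. Generic identity check only.
import cmath

t = 17
def enc(a):
    return cmath.exp(2j * cmath.pi * a / t)

a, b = 13, 9
assert abs(enc(a) * enc(b) - enc((a + b) % t)) < 1e-12
print("enc(13) * enc(9) == enc((13 + 9) mod 17), i.e.,", (a + b) % t)
```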
Fast Fuzzy PSI from Symmetric-Key Techniques
Private set intersection (PSI) enables a sender holding a set $X$ and a receiver holding a set $Y$ to securely compute the intersection $X \cap Y$. Fuzzy PSI (FPSI) is a PSI variant where the receiver learns the items $y \in Y$ for which there exists $x \in X$ with $\mathsf{dist}(x, y) \leq \delta$ with respect to some distance metric. Recently, Gao et al. (ASIACRYPT 2024) proposed the first FPSI protocols for $L_\infty$ and $L_p$ distance with linear complexity. They summarized their FPSI construction into two steps: fuzzy mapping and fuzzy matching. However, their realizations of the two steps heavily rely on public-key operations, namely the DH key exchange and additively homomorphic encryption, resulting in low efficiency.
In this work, we propose new FPSI protocols for $L_\infty$ and $L_p$ distances, primarily leveraging symmetric-key primitives.
We revisit the definition of fuzzy mapping and rigorously redefine it as a cryptographic scheme. We further introduce consistency for fuzzy mapping schemes, which can simplify the fuzzy matching step into plain PSI.
We then demonstrate how to execute the fuzzy mapping step while satisfying consistency. We also propose several new techniques to completely avoid the extensive use of computationally expensive public-key operations inherent in existing solutions.
We implement our FPSI protocols and compare them with the state-of-the-art FPSI protocols. Experiments show that our protocols perform better than the state of the art under all the parameters we tested, achieving substantial speedups in running time and reductions in communication cost, depending on set sizes, dimension, and distance threshold.
Cryptocurrencies enable transactions among mutually distrustful users, necessitating strong privacy, namely, concealing both transfer amounts and participants' identities, while maintaining practical efficiency. While UTXO-based cryptocurrencies offer mature solutions achieving strong privacy and supporting multi-receiver transfers, account-based cryptocurrencies currently lack practical solutions that simultaneously guarantee these properties.
With the aim of closing this gap, we propose a generic framework for account-based cryptocurrencies that achieves strong privacy and supports multi-receiver transfers, and then give a practical instantiation called \textit{Anonymous PGC}. Experimental results demonstrate that, for a 64-sized anonymity set and 8 receivers, Anonymous PGC outperforms Anonymous Zether (IEEE S\&P 2021) --- which offers limited anonymity and no multi-receiver support --- achieving 2.6$\times$ faster transaction generation, 5.1$\times$ faster verification, and 2.1$\times$ reduction in transaction size.
Along the way of building Anonymous PGC, we present two novel $k$-out-of-$n$ proofs. First, we generalize the Groth-Kohlweiss (GK) 1-out-of-$n$ proof (EUROCRYPT 2015) to the $k$-out-of-$n$ case, resolving the open problem of its natural generalization. Particularly, the obtained $k$-out-of-$n$ proof lends itself to integration with range proofs in a seamless way, yielding an efficient $k$-out-of-$n$ range proof, which demonstrates that $k$ witnesses among $n$ instances lie in specific ranges. Second, we extend the Attema-Cramer-Fehr (ACF) $k$-out-of-$n$ proof (CRYPTO 2021) to support distinct group homomorphisms, improving its expressiveness while reducing both prover and verifier complexities from quadratic to linear. We believe these two $k$-out-of-$n$ proofs are of independent interest, and will find more applications in privacy-preserving scenarios.
A Fast, Efficient, Platform-Adaptive, and AIS-20/31 Compliant PLL-Based True Random Number Generator on an SoC FPGA
Phase-locked loops (PLLs) embedded within field-programmable gate arrays (FPGAs) or system-on-chip FPGAs (SoCs) present a promising methodology for the generation of random numbers. Their widespread integration across these platforms, combined with their isolated operational characteristics and robust entropy generation, as substantiated by prior research, positions PLL-based true random number generators (PLL-TRNGs) as highly effective solutions for this purpose. The present study focuses on the implementation of PLL-TRNGs utilizing the ZC702 Rev1.1 Evaluation Board, which incorporates the Zynq-7020 SoC from Xilinx. For experimental validation, a configuration involving three such boards is employed. The parameters governing the PLL-TRNG are optimized through a backtracking algorithm. Additionally, a novel, platform-adaptive technique is proposed to enhance the rate of random data bit generation while preserving entropy characteristics. The system's performance is rigorously evaluated against the criteria established by the German Federal Office for Information Security (BSI) AIS-20/31 Tests, with a detailed account of the implementation process provided. Furthermore, the study demonstrates the minimal resource utilization of the PLL-TRNG design within a SoC, thereby underscoring its suitability for Internet-of-Things (IoT) applications, where logic resources are often highly constrained.
Leveled Homomorphic Encryption over Composite Groups
Homomorphic encryption is a powerful tool that enables computation on encrypted data without requiring decryption. While many Fully Homomorphic Encryption schemes, supporting arbitrary computations on encrypted data, have been developed using lattice-based and AGCD-based approaches, progress in composite groups has been limited to Partial Homomorphic Encryption schemes, which only support either addition or multiplication. This paper introduces the first leveled homomorphic encryption schemes over composite groups, based on the factoring problem, that achieve both multiplicative and additive homomorphic properties.
Our novel design eliminates the need for the relinearization operation, which is common in LWE-based HE schemes, and removes the requirement of the circular security assumption. For applications where the traffic must be indistinguishable as encrypted, a scrambled scheme is designed using a labeling technique. While the initial schemes offer an expanded message space, the introduction of the double-sized message technique enables the encryption of messages up to twice the size without increasing the ciphertext size. Implementation results show that our schemes significantly outperform existing solutions, particularly in multiplication operations, achieving speeds up to 1000 times faster than well-known schemes such as BFV, BGV, CKKS, and TFHE.
One-Way Homomorphic Encryption: A Composite Group Approach
Homomorphic Encryption (HE) is a fundamental Privacy-Enhancing Technology (PET) that enables computations on encrypted data without decryption. Despite its utility, designing an efficient and secure HE scheme is highly complex, requiring sophisticated cryptographic techniques. This paper introduces a novel approach to achieving homomorphic properties—supporting either one addition or one multiplication—within composite groups. While the proposed technique exhibits one-wayness, it has good potential to serve as a foundational building block for constructing indistinguishable cryptosystems. This work contributes to the advancement of homomorphic encryption by providing a new perspective on secure computation within structured algebraic settings.
Optimistic Asynchronous Dynamic-committee Proactive Secret Sharing
Dynamic-committee Proactive Secret Sharing (DPSS) has gained increased attention for its ability to dynamically update shareholder committees and refresh secret shares, even under adversaries that gradually corrupt all nodes. However, existing state-of-the-art asynchronous DPSS protocols suffer from significant message and communication complexities in the committee size $n$ and the security parameter $\lambda$.
In this paper, under the trusted setup assumption, we propose the first asynchronous DPSS protocol with reduced message complexity in all scenarios. Additionally, our protocol achieves lower communication complexity in the optimistic case, where all nodes are honest and the network is synchronous; in the worst case, as well as without a trusted setup, it preserves state-of-the-art message and communication complexities. Besides, our protocol supports batch amortization and accommodates high thresholds. For committee sizes of 4 to 400, the estimated concrete communication cost of our DPSS is 19--100x (resp., 8--14x) smaller in the optimistic case (resp., worst case) compared to the state of the art. Experiments in AWS show that our DPSS achieves a latency of 1.9--8 seconds for committee sizes from 4 to 64. Single-machine benchmarks reveal a (computational) runtime reduction of up to 44%.
Papercraft: Lattice-based Verifiable Delay Function Implemented
A verifiable delay function (VDF) requires a specified number of sequential steps to compute, yet the validity of its output can be verified efficiently, much faster than recomputing the function from scratch. VDFs are a versatile cryptographic tool with many industrial applications, such as blockchain consensus protocols, lotteries, and verifiable randomness. Unfortunately, without exception, all known practical VDF constructions are broken by quantum algorithms. In this work, we investigate the practicality of VDFs with plausible post-quantum security. We propose Papercraft, a working implementation of a VDF based entirely on lattice techniques and thus plausibly post-quantum secure. Our VDF is based on new observations on lattice-based succinct argument systems with many low-level optimisations, yielding the first lattice-based VDF that is implementable on today's hardware. As an example, our Papercraft implementation can verify a computation of almost 6 minutes in just 7 seconds. Overall, our work demonstrates that lattice-based VDFs are not just a theoretical construct, paving the way for their practical deployment.
Blockcipher-Based Key Derivation without PRP/PRF Switching
We examine the use of blockcipher-based key derivation beyond the birthday bound, arguing that the analysis step of PRP/PRF switching can be eliminated in many cases. To support this, we consider a modified ``ideal model'' for keying cryptographic applications in the multi-instance setting, where keys are chosen to be random \emph{but distinct}, rather than completely independent.
Our analysis shows that typical cryptographic applications remain secure in this model. One consequence is that it is typically safe to derive close to $2^n$ keys using an $n$-bit blockcipher in counter mode. In particular, considering the practice of nonce-derived keys for authenticated encryption, our results imply that modes such as XAES-256-GCM that use CMAC-based key derivation are safe to use with more than $2^{64}$ distinct nonces.
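A counter-mode key-derivation step of the kind analyzed here can be sketched as follows; this uses the third-party cryptography package and is an illustrative sketch, not the XAES-256-GCM specification.

```python
# Blockcipher-in-counter-mode key derivation: subkey blocks are E_K(ctr).
# Because E_K is a permutation, derived blocks are random-looking yet
# pairwise DISTINCT -- exactly the "random but distinct" multi-instance
# model discussed above, with no PRP/PRF switching step in the analysis.
# Illustrative sketch (pip install cryptography), not the XAES-256-GCM spec.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

master_key = os.urandom(32)

def derive_key(i: int) -> bytes:
    """Derive the i-th 256-bit subkey as E_K(2i) || E_K(2i + 1)."""
    enc = Cipher(algorithms.AES(master_key), modes.ECB()).encryptor()
    return b"".join(enc.update((2 * i + j).to_bytes(16, "big"))
                    for j in range(2))

k0, k1 = derive_key(0), derive_key(1)
assert k0 != k1  # the permutation property guarantees distinctness
print(k0.hex())
```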
Towards Improving Throughput and Scalability of DAG-based BFT SMR
Directed Acyclic Graph (DAG)-based BFT consensus protocols often suffer from limited throughput and scalability due to bandwidth-intensive data replication to all participants. However, it is sufficient to replicate data to a smaller sub-committee of parties that holds an honest majority with high probability.
In this work, we introduce tribe-assisted reliable broadcast, a novel primitive that ensures reliable broadcast (RBC) properties within a smaller honest-majority sub-committee—referred to as a clan—drawn from the entire network, called the tribe. Leveraging this primitive, we develop two efficient DAG-based BFT consensus protocols. First, we present a single-clan protocol, in which a single clan is elected from the tribe, and data is disseminated exclusively to this designated clan using tribe-assisted RBC. We then extend this design to a multi-clan setting, where multiple clans are elected and data is distributed within each respective clan via the same mechanism. Experimental results demonstrate that both protocols offer substantial improvements in throughput and latency over existing DAG-based BFT protocols, even at moderately large scales.
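The committee-sampling argument these protocols rest on is a simple hypergeometric tail computation; the sketch below uses illustrative parameters, not the paper's.

```python
# Probability that a uniformly sampled sub-committee ("clan") of size s,
# drawn from a tribe of n parties containing f corrupt ones, has an honest
# majority -- the event that tribe-assisted reliable broadcast relies on.
# Parameters are illustrative, not taken from the paper.
from math import comb

def honest_majority_prob(n: int, f: int, s: int) -> float:
    """P[corrupt members < s/2] under hypergeometric sampling."""
    return sum(comb(f, c) * comb(n - f, s - c)
               for c in range((s - 1) // 2 + 1)) / comb(n, s)

# Tribe of 100 parties, 20 corrupt: a clan of 30 is honest-majority w.h.p.
print(f"{honest_majority_prob(100, 20, 30):.6f}")
```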
Lower Bounds for Garbled Circuits from Shannon-Type Information Inequalities
We establish new lower bounds on the size of practical garbled circuits, which hold against any scheme satisfying the following simple properties:
(1) Its security is based on symmetric-key cryptography only. More formally, security holds in Minicrypt, a model in which a random oracle is the only available source of cryptography.
(2) The evaluation algorithm makes non-adaptive queries to the random oracle.
(3) The evaluation algorithm "knows" which of its oracle queries are made by which other input combinations.
These restrictions are reasonable for garbling single gates. In particular, unlike prior attempts at lower bounds, we make no assumptions about the internal behavior of the garbling algorithms --- i.e., how they use random oracle outputs and wire labels to compute the garbled gate, etc.
We prove separate lower bounds depending on whether the scheme uses the free-XOR technique (Kolesnikov & Schneider, ICALP 2008). In the free-XOR case, we prove that a garbled AND-gate requires $1.5\kappa$ bits, where $\kappa$ is the security parameter; thus, the garbling scheme of Rosulek & Roy (Crypto 2022) is optimal. In the non-free-XOR case, we prove that a garbled AND-gate requires $2\kappa$ bits and a garbled XOR-gate requires $\kappa$ bits; thus, the garbling scheme of Gueron, Lindell, Nof, and Pinkas (CCS 2015) is optimal.
We prove our lower bounds using tools from information theory. A garbling scheme can be characterized as a joint distribution over various quantities: wire labels, garbled gate information, random oracle responses. We show that different properties of a garbling scheme imply certain Shannon-type information inequalities about this distribution. We then use an automated theorem prover for Shannon-type inequalities to prove that our inequalities imply lower bounds on the entropy---hence, size---of the garbled gate information.
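For context, Shannon-type inequalities are the linear inequalities over joint entropies implied by the nonnegativity of conditional mutual information, for example $I(A;B \mid C) = H(A,C) + H(B,C) - H(A,B,C) - H(C) \geq 0$; a machine-found proof is a nonnegative linear combination of such elemental inequalities that implies the target entropy bound.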
Improved Cryptanalysis of an RSA Variant Based on Cubic Pell Curve
In 2024, based on the cubic Pell curve, Nitaj and Seck proposed a variant of the RSA cryptosystem where the modulus is of the form $N = pq$, and the public key $e$ and private key $d$ satisfy the equation $ed \equiv 1 \pmod{(p^2+p+1)(q^2+q+1)}$. They showed that $N$ can be factored when $d$ is less than a certain bound that depends on $e$ and $N$ in the situation $e < N^2$, an analysis which is not extendable to $e > N^2$. In this paper, we propose a cryptanalysis of this scheme in the situation $e > N^2$, and give an explicit bound on $d$ that makes the scheme insecure. The method is based on Coppersmith's method and lattice reduction.
Decentralized Multi-Authority Attribute-Based Inner-Product Functional Encryption: Noisy and Evasive Constructions from Lattices
We initiate the study of multi-authority attribute-based functional encryption for noisy inner-product functionality, and propose two new primitives: (1) multi-authority attribute-based (noisy) inner-product functional encryption (MA-AB(N)IPFE), and (2) multi-authority attribute-based evasive inner-product functional encryption (MA-ABevIPFE). The MA-AB(N)IPFE primitive generalizes the existing multi-authority attribute-based inner-product functional encryption schemes by Agrawal et al. [AGT21], by enabling approximate inner-product computation under decentralized attribute-based control. This newly proposed notion combines the approximate function evaluation of noisy inner-product functional encryption (IPFE) with the decentralized key-distribution structure of multi-authority attribute-based encryption. To better capture noisy functionalities within a flexible security framework, we formulate the MA-ABevIPFE primitive under a generic-model view, inspired by the evasive IPFE framework by Hsieh et al. [HLL24]. It shifts the focus from pairwise ciphertext indistinguishability to a more relaxed pseudorandomness-based game.
To support the above notions, we introduce two variants of lattice-based computational assumptions:
- The evasive IPFE assumption (evIPFE): it generalizes the assumption introduced in [HLL24] to the multi-authority setting and admits a reduction from the evasive LWE assumption proposed by Waters et al. [WWW22];
- The indistinguishability-based evasive IPFE assumption (IND-evIPFE): it is an indistinguishability-based variant of the evasive IPFE assumption designed to capture the stronger security guarantees required by our MA-AB(N)IPFE scheme.
We present concrete lattice-based constructions for both primitives supporting subset policies, building upon the framework of [WWW22]. Our schemes are proven to be statically secure in the random oracle model under the standard LWE assumption and the newly introduced assumptions. Additionally, we demonstrate that our MA-AB(N)IPFE scheme can be transformed, via standard modulus switching, into a noiseless MA-ABIPFE scheme that supports exact inner-product functionality consistent with the MA-IPFE syntax in [AGT21,DP23]. This yields the first lattice-based construction of such a primitive. All our schemes support arbitrary polynomial-size attribute policies and are secure in the random oracle model under lattice assumptions with a sub-exponential modulus-to-noise ratio, making them practical candidates for noise-tolerant, fine-grained access control in multi-authority settings.
Improvement of Side-Channel Attacks on Mitaka
Mitaka is a variant of Falcon, which is one of the three post-quantum signature schemes selected by NIST for standardization. Introduced in 2021, Mitaka offers a simpler, parallelizable, and maskable version of Falcon. Our article focuses on its physical security, and our results are threefold. Firstly, we enhance a known profiled side-channel attack on an unprotected Mitaka implementation by a factor of 512. Secondly, we analyze the masked implementation of Mitaka described in the reference article, which incorporates a different sampler and a protective gadget. We expose the first side-channel flaw on this sampler. This flaw enables breaking the masked implementation with a first-order side-channel attack. In this scenario, the key can be recovered using only three times more traces compared to the attack on the unprotected implementation. Finally, we discuss and recommend new countermeasures to mitigate these attacks.
Finally! A Compact Lattice-Based Threshold Signature
Threshold signatures improve upon digital signatures by splitting the trust and robustness among multiple parties. In a (T, N) threshold signature, any set of T parties can produce a signature, but no set of fewer than T parties can do so. Many such constructions are now available in the pre-quantum setting, but post-quantum threshold schemes are still running heavy, with the state of the art boasting signature sizes that are still an order of magnitude larger than those of post-quantum digital signatures.
We propose a novel, very efficient threshold signature scheme, with a signature size close to that of a single Dilithium signature for any threshold T of at most 8 users. Our construction reduces to well-studied problems (MLWE and SelfTargetMSIS) and does not need any heavy machinery, consisting essentially of just T parallel executions of the Dilithium signature scheme. Though the resulting scheme is remarkably simple, many technical difficulties, such as sharing a secret in small shares or simulating rejecting transcripts, have kept such an efficient threshold signature out of reach until now.
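To make the (T, N) access structure concrete, here is a minimal, self-contained sketch of Shamir secret sharing over a prime field. This is only the generic threshold mechanism, not the paper's lattice-based construction (which must additionally keep shares and reconstruction coefficients small); the field modulus is an arbitrary choice for illustration.

```python
# Minimal Shamir (T, N) secret sharing over a prime field.
# Illustrates the generic threshold access structure only.
import random

P = 2**127 - 1  # a Mersenne prime, used here as the field modulus

def share(secret: int, T: int, N: int):
    """Split `secret` into N shares; any T of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(T - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, N + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=42, T=3, N=5)
assert reconstruct(shares[:3]) == 42  # any 3 of the 5 shares suffice
```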
Simple and Efficient Lattice Threshold Signatures with Identifiable Aborts
We introduce simple yet efficient lattice-based threshold signatures with identifiable aborts, secure under the MLWE assumption. Central to our construction are novel Distributed Key Generation with Short Shares (sDKG) protocols over lattices, ensuring short shares, small reconstruction coefficients, and linear evaluation of honest shares. This uniquely realizes the "threshold designer's dream": signature shares double as valid signatures under the corresponding secret key shares. With two concrete instantiations (ramp and replicated secret sharings), our schemes match the compact ~10 kB size of Threshold Raccoon (del Pino et al., EUROCRYPT 2024). Further, we unveil "Death Star Detection", a new algorithm that enhances identifiable aborts by efficiently spotting short-vector adversarial correlations, of interest beyond threshold signatures.
From List-Decodability to Proximity Gaps
Proximity testing for linear codes is a fundamental problem in coding theory with critical applications in cryptographic protocols, blockchain, and distributed storage systems. This work addresses the proximity gaps for linear codes, a crucial aspect for efficiently verifying whether a batch of codewords is close to a given code. We present a general framework for deriving proximity gaps from the list-decodability properties of the underlying linear code.
Our main result shows that if a code is list-decodable up to a given radius, then, for a batch of codewords containing at least one far codeword, the probability that a random combination of the batch fails to remain far from the code can be bounded in terms of the list-decodability parameters. This result also establishes a form of (mutual) correlated agreement for linear codes, which can be used to strengthen soundness analyses in protocols that rely on proximity testing, thereby reducing query complexity and enabling practical instantiations over smaller finite fields.
In particular, we apply our main result to randomly punctured Reed–Solomon codes and folded Reed–Solomon codes—both of which are known to achieve list-decodability up to capacity—and derive linear proximity gaps for these families under the Johnson bound.
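For orientation, the generic shape of a proximity-gap statement from the standard literature is sketched below; the symbols and the error term are illustrative placeholders, not the paper's theorem.

```latex
% Generic proximity-gap template for a linear code C over F_q
% (symbols and the error term are illustrative placeholders).
\[
  \exists\, i :\; \Delta(u_i, C) > \delta
  \;\;\Longrightarrow\;\;
  \Pr_{r \leftarrow \mathbb{F}_q}\!\left[
    \Delta\!\Big(\sum_{i=0}^{m} r^{i} u_i,\; C\Big) \le \delta
  \right] \le \varepsilon(\delta, q, m),
\]
where $\Delta$ denotes relative Hamming distance and the error
$\varepsilon$ shrinks as the field size $q$ grows.
```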
One for All, All for One: Universal semi-agnostic quantum circuit for solving (Standard) Abelian Hidden Subgroup Problems
We introduce a unified approach to quantum cryptanalysis that reduces all \emph{standard abelian hidden subgroup problems} (SAHSPs), including integer factorization, discrete logarithm in finite fields (DLP), and discrete logarithm on elliptic curves, to a single core problem: the \emph{elliptic curve discrete logarithm problem} (ECDLP).
This reduction forms the foundation of a framework for quantum cryptanalysis based on ECDLP. At the core of this framework is a \emph{semi-agnostic quantum algorithm} for solving ECDLP, which requires only partial knowledge of the problem's group structure. By operating on singular curves, we can translate problems such as integer factorization and finite field DLP into the ECDLP.
Leveraging the semi-agnostic algorithm for ECDLP, we propose a \emph{universal, reconfigurable quantum circuit} design that can solve any SAHSP instance by executing the ECDLP solver with appropriate curve and field parameters, without requiring problem-specific customization.
We prove that solving ECDLP in this model is sufficient to solve all instances of SAHSPs, establishing ECDLP as a \emph{complete problem} for this class.
These results unify the structure of Shor's algorithm across all SAHSPs, eliminating the need for custom quantum circuits for each problem. This demonstrates the practical feasibility of universal quantum cryptanalysis and underscores the urgency of transitioning to cryptographic systems that remain secure against quantum-capable adversaries.
Delegated PSI from Homomorphic Encryptions
This paper presents an efficient protocol for private set intersection in a setting with multiple set owners and a semi-honest cloud server. The core idea is to reduce the intersection computation to secure operations over Bloom filters, enabling both scalability and efficiency. By leveraging this transformation, our protocols achieve strong privacy guarantees while minimizing computation and communication overhead.
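As a plaintext illustration of the Bloom-filter reduction (no encryption or cloud server involved; the actual protocol performs these operations securely), here is a minimal sketch in which set intersection is approximated by membership queries against a Bloom filter. The filter size, hash count, and salted-SHA-256 hashing are arbitrary choices for the toy.

```python
# Toy Bloom-filter set intersection (plaintext only). The paper's
# protocol evaluates such filter operations under secure computation;
# this only shows the encoding idea and its false-positive caveat.
import hashlib

M = 1 << 12          # filter size in bits
K = 4                # number of hash functions

def positions(item: bytes):
    """Derive K filter positions from an item via salted SHA-256."""
    return [int.from_bytes(hashlib.sha256(bytes([k]) + item).digest(), "big") % M
            for k in range(K)]

def build_filter(items):
    bits = [0] * M
    for it in items:
        for p in positions(it):
            bits[p] = 1
    return bits

def query(bits, item):
    return all(bits[p] for p in positions(item))

alice = [str(i).encode() for i in range(100, 200)]
bob   = [str(i).encode() for i in range(150, 250)]
bf = build_filter(alice)
# Bob tests his items against Alice's filter (may include false positives).
approx_intersection = [it for it in bob if query(bf, it)]
print(len(approx_intersection))  # ~50, plus rare false positives
```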
Side Channel Analysis in Homomorphic Encryption
Homomorphic encryption provides many opportunities for privacy-aware processing, including with methods related to machine learning. Many of our existing cryptographic methods have been shown in the past to be susceptible to side-channel attacks, in which the implementation of a cryptographic method can reveal information about the private keys used, the result, or even the original plaintext. An example of this is the processing of the RSA exponent using the Montgomery method, where 0 and 1 exponent bits differ in their processing time during modular exponentiation. With FHE, we typically use lattice-based methods, which can have particular problems in their implementation in relation to side-channel leakage. This paper aims to outline a range of weaknesses within FHE implementations related to side-channel analysis. It outlines a categorization for side-channel analysis, some case studies, and mitigation strategies.
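To make the classic exponent leak concrete, here is a minimal sketch of left-to-right square-and-multiply in which the multiply step only executes for 1-bits of the exponent, so a timing or power trace can reveal the exponent. This is generic textbook material, not an analysis of any specific FHE library.

```python
# Naive left-to-right square-and-multiply: the extra multiply on 1-bits
# is the classic source of timing/SPA leakage described above.
def modexp_leaky(base: int, exp: int, mod: int) -> int:
    acc = 1
    for bit in bin(exp)[2:]:
        acc = (acc * acc) % mod        # always: square
        if bit == "1":
            acc = (acc * base) % mod   # only on 1-bits: multiply (leaks!)
    return acc

# A balanced variant performs both operations every iteration and
# selects the result (illustrative of the idea only; real constant-time
# code avoids data-dependent branches entirely, e.g. via masking).
def modexp_balanced(base: int, exp: int, mod: int) -> int:
    acc = 1
    for bit in bin(exp)[2:]:
        acc = (acc * acc) % mod
        tmp = (acc * base) % mod       # always computed
        acc = tmp if bit == "1" else acc
    return acc

assert modexp_leaky(7, 1234567, 10007) == pow(7, 1234567, 10007)
assert modexp_balanced(7, 1234567, 10007) == pow(7, 1234567, 10007)
```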
Public-key Cryptography Attacks Using Adiabatic Quantum Computer
We explore the application of the QUBO and Ising models to the integer factorization problem with implications for the security of public-key algorithms such as RSA. A key contribution is a program that applies existing algorithms to parameterize and simulate integer factorization through an Ising model in order to replicate previous works. Due to limited access to quantum hardware, we use classical heuristic methods to approximate solutions.
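For illustration, a toy version of the standard encoding: factoring N is cast as minimizing the penalty (N - pq)^2 over the bits of p and q, which a real QUBO/Ising formulation expands into a quadratic form over binary variables. The brute-force energy scan below stands in for the annealer and is not the paper's program.

```python
# Toy QUBO-style factoring: minimize E(p, q) = (N - p*q)^2 over the
# bit-vectors of p and q. A real QUBO expands this penalty into a
# quadratic form over binary variables; here we just scan the energies.
from itertools import product

N = 15
BITS = 3  # candidate factors use 3 bits each

def energy(p: int, q: int) -> int:
    return (N - p * q) ** 2

best = min(
    ((p, q) for p, q in product(range(2, 2**BITS), repeat=2)),
    key=lambda pq: energy(*pq),
)
print(best, energy(*best))  # (3, 5) or (5, 3) with energy 0
```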
Data Availability for Thousands of Nodes
Scalable data availability (DA) is essential for high-throughput, decentralized blockchains, enabling lightweight nodes to verify block availability without incurring the prohibitive costs of full data replication.
Reed-Solomon (RS) code commitment schemes underpin modern DA protocols by ensuring that dispersed data fragments can be verified as part of a valid codeword, even in the presence of malicious block producers.
However, state-of-the-art schemes such as FRIDA (Crypto'24), while computationally efficient, incur substantial per-node communication overhead at the scale of thousands of network nodes, often 5.7× the size of the actual data fragment.
In this work, we introduce CONDA, a new interleaved RS code commitment scheme that significantly reduces the communication overhead while retaining FRIDA's prover efficiency.
At its core is a novel evaluation consolidation technique for polynomial commitment scheme (PCS) that reduces the problem of proving evaluations at fixed points (one per verifier) to a single evaluation at a random point, using logarithmic communication.
This technique is lightweight, hash-based, and compatible with any multilinear PCS.
To further optimize for DA applications, we introduce LightLigero, a new multilinear PCS that improves upon DeepFold (Sec'25) with a reduction in proof size at the cost of only a modest slowdown in prover time.
Combining CONDA and LightLigero yields an efficient DA scheme for thousands of nodes.
Our implementation demonstrates a 4× reduction in communication cost compared to FRIDA, while incurring only a 25% increase in prover time.
It also achieves near-best prover time and near-best communication cost simultaneously among all code commitment schemes.
CONDA also offers smaller proofs and faster provers than state-of-the-art verifiable secret sharing constructions such as ZXH+22 (Sec'22) and PolyFRIM (Sec'24).
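To recall the coding backbone these DA schemes build on, here is a minimal Reed-Solomon erasure-coding sketch over a prime field: a block of k symbols defines a degree-below-k polynomial, fragments are its evaluations, and any k fragments recover the data. Commitments and validity proofs, the actual subject of the paper, are omitted, and the field and parameters are arbitrary toy choices.

```python
# Minimal Reed-Solomon erasure coding over GF(P): k data symbols are the
# values of a degree-<k polynomial at x = 0..k-1; fragments are its
# values at x = 0..n-1. Any k fragments recover all data symbols.
P = 65537  # small prime field

def interp_at(points, x0):
    """Lagrange-evaluate the unique degree-<k polynomial through
    `points` at x0, working modulo P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x0 - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def rs_encode(data, n):
    pts = list(enumerate(data))
    return [(x, interp_at(pts, x)) for x in range(n)]

data = [11, 22, 33, 44]                 # k = 4 symbols
frags = rs_encode(data, n=8)            # disperse 8 fragments
survivors = frags[3:7]                  # any 4 fragments suffice
recovered = [interp_at(survivors, x) for x in range(4)]
assert recovered == data
```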
Fheanor: a new, modular FHE library for designing and optimising schemes
Implementations of modern FHE schemes are available in various highly-optimized libraries. Many of these libraries are designed to allow developers who may not have deep expertise in FHE to build fast and secure privacy-preserving applications. To support such users, the API of these libraries often hides the internals of the schemes in question from the user. However, this design choice makes it hard for users of these libraries to modify existing schemes, or implement new ones; work that is often valuable when conducting research on FHE schemes.
We present our new Rust library Fheanor, which aims to facilitate such research on FHE schemes. The core target user is an FHE researcher, rather than an application developer. Most importantly, the design of Fheanor is very modular, and mirrors the mathematical structure of the available FHE schemes. By exposing the mathematical structure, but still hiding implementation details, it is easy to modify or extend the functionality of FHE schemes implemented in the library and still preserve high performance. Indeed, Fheanor demonstrates performance that is close to that of HElib or SEAL, with the potential for optimizations in the future.
Fheanor implements several features that have not, or have only rarely, been implemented in previous libraries. These include non-power-of-two cyclotomic rings, single-RNS based ring arithmetic, the CLPX/GBFV scheme, and bootstrapping for BFV and BGV.
In addition, this paper presents new theoretical contributions that are also implemented in Fheanor. The first is an extension of optimal digit extraction circuits, used in BFV/BGV bootstrapping, to the case of plaintext modulus 2^23. The second is a more efficient algorithm for computing the trace in the non-power-of-two cyclotomic setting.
Fly Away: Lifting Fault Security through Canaries and the Uniform Random Fault Model
Cryptographic implementations are vulnerable to active physical attacks where adversaries inject faults to extract sensitive information. Existing fault models, such as the threshold and random fault models, assume limitations on the amount or probability of injecting faults. Such models, however, insufficiently address the case of practical fault injection methods capable of faulting a large proportion of the wires in a circuit with high probability. Prior works have shown that this insufficiency can lead to concrete key recovery attacks against implementations proven secure in these models. We address this blind spot by introducing the uniform random fault model, which relaxes assumptions on the amount/probability of faults and instead assumes a uniform probabilistic faulting of all wires in a circuit or region. We then show that security in this new model can be reduced to security in the random fault model by inserting canaries in the circuit to ensure secret-independent fault detection. We prove that combining canaries with a more classical fault countermeasure such as redundancy can lead to exponential fault security in the uniform random fault model at a polynomial cost in circuit size in the security parameter. Finally, we discuss the interactions between our work and the practical engineering challenges of fault security, shedding light on how the combination of state-of-the-art countermeasures may protect against injections of many high probability faults, while opening a path to methodologies that formally analyze the guarantees provided by such countermeasures.
Distinguishing Full-Round AES-256 in a Ciphertext-Only Setting via Hybrid Statistical Learning
The security of block ciphers such as AES-128, AES-192, and AES-256 relies on the assumption that their ciphertext outputs are computationally indistinguishable from random permutations. While distinguishers have been proposed for reduced-round variants or under non-standard models such as known-key or chosen-key settings, no effective distinguisher has been demonstrated for the full-round AES ciphers in the standard secret-key model.
This work introduces FESLA (Feature Enhanced Statistical Learning Attack), a hybrid statistical learning framework that integrates outputs from a suite of classical statistical tests with machine learning and deep learning classifiers to construct ciphertext-only distinguishers for AES-128, AES-192, and AES-256. In contrast to existing approaches based on handcrafted or bitwise features, FESLA aggregates intermediate statistical metrics as features, enabling the capture of persistent structural biases in ciphertext distributions.
Experimental evaluation across multiple datasets demonstrates consistent 100% classification accuracy using Support Vector Machines, Random Forests, Multi-Layer Perceptron, Logistic Regression, and Naïve Bayes classifiers. Generalization and robustness are confirmed through k-fold cross-validation, including on previously unseen ciphertext samples.
These results establish the first ciphertext-only distinguishers for full-round AES-128, AES-192, and AES-256 under the secret-key model, and underscore the potential of machine learning–augmented cryptanalysis based on principled statistical feature engineering.
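The pipeline shape described above, statistical-test outputs used as features for a standard classifier, can be sketched as follows. The features, data sources, and classifier here are generic stand-ins (and assume scikit-learn is available); nothing in this toy reproduces the paper's reported results on AES.

```python
# Generic "statistical features -> classifier" pipeline sketch,
# distinguishing uniform bytes from an artificially biased source.
import os, random
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def features(blob: bytes):
    bits = "".join(f"{b:08b}" for b in blob)
    ones = bits.count("1") / len(bits)                  # monobit ratio
    runs = sum(a != b for a, b in zip(bits, bits[1:]))  # bit transitions
    return [ones, runs / len(bits)]

random.seed(0)
X, y = [], []
for _ in range(400):
    X.append(features(os.urandom(64))); y.append(0)          # "random"
    biased = bytes(random.getrandbits(7) for _ in range(64)) # top bit 0
    X.append(features(biased)); y.append(1)                  # "biased"

print(cross_val_score(SVC(), X, y, cv=5).mean())  # ~1.0 on this toy
```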
MOCHA: Mixnet Optimization Considering Honest Client Anonymity
Mix networks (mixnets) safeguard client anonymity by forwarding traffic through multiple intermediary nodes (mixnodes), which reorder and delay messages to obscure communication patterns against a global passive adversary capable of monitoring all network transmissions.
The anonymity provided by mixnets is usually assessed with a discrete-event simulator, gauging a target message's indistinguishability among output messages. While useful for comparative analysis, this approach only approximates the mixnet's anonymity potential. Hence, this paper sheds light on the necessity of considering the client (originator of messages) itself to gauge anonymity accurately. We further provide an algorithm (simulator) to simulate client anonymity for Loopix mixnets. We conduct experiments to optimize general Loopix mixnet parameters, considering both message and client anonymity. Our findings indicate that message anonymity often provides an upper bound and can yield misleading results for mixnet optimization, underscoring the importance of client anonymity. Additionally, we explore scenarios where client anonymity is significantly compromised due to an insufficient number of clients. To address these cases, we propose a multimixing strategy that enhances client anonymity by effectively merging varied traffic types with different mixing characteristics.
sPAR: (Somewhat) Practical Anonymous Router
Anonymous communication is one of the fundamental tools for achieving privacy in communication over the internet. Almost all existing design strategies for anonymous communication (e.g., onion routing/Tor, mixnets) rely on the existence of some honest server/router in the network infrastructure to provide anonymity. A recent seminal work by Shi and Wu (Eurocrypt 2021) proposes the first cryptographic design for a non-interactive anonymous router (NIAR) that can use a single untrusted server or router to permute a set of messages without revealing the permutation to the untrusted router. This work is an important step towards showing that such a protocol can be designed from standard cryptographic assumptions. However, their construction is only of a theoretical nature and still leaves many open questions towards realizing such systems in practice: (1) the cryptographic building blocks (multi-client functional encryption, correlated pseudorandom functions) used in their design are difficult to implement in practice. (2) Their setup phase takes the permutation as an input to generate the encryption/decryption keys, which means that the messages from the same sender in different rounds will be at the same position in the output vector, unless the setup phase is run before every round with a new permutation. (3) It is not known how to realize such a setup procedure, which initializes a random permutation obliviously, without any trusted entities in the system.
In this paper, we propose the first (somewhat) practical design, which we call sPAR, that solves the above problems using homomorphic encryption techniques. Our design also relies on a one-time setup phase, but the setup phase does not take any specific permutation as input. Instead, our design generates a fresh permutation for every round based on random values locally generated by the clients. Existing practical instantiations of fully homomorphic encryption (FHE) schemes make our design implementable and deployable in practice. Our design presents a new direction for designing anonymous communication systems. Unlike some existing systems like Tor, sPAR does not scale to millions of users; however, we demonstrate with a proof-of-concept implementation that sPAR can easily support around a hundred users with a few seconds of latency per message.
On the Provable Dual Attack for LWE by Modulus Switching
As a theoretical cornerstone of post-quantum cryptography, the Learning With Errors (LWE) problem serves as the security foundation for standardized algorithms such as Kyber and Dilithium. Recently, a framework for provable dual attacks on LWE was proposed by Pouly et al. at Eurocrypt 2024, addressing the limitations in effectiveness caused by existing methods' reliance on heuristic assumptions in LWE dual attacks. Their paper also poses the open problem of how to formally integrate modulus switching into this framework to reduce attack costs. The main purpose of this paper is to solve this open problem by presenting an improved provable dual attack method that incorporates modulus switching and Chinese Remainder Theorem (CRT) techniques. First, we design a modulus switching mechanism that eliminates practical errors via the Poisson summation formula. By embedding the inherent noise from modulus switching into a rational lattice framework, our approach effectively prevents the risk of attack failure caused by the merging of such errors with the LWE noise. Theoretical guarantees (Theorems 4 and 5) rigorously quantify the parameter ranges for successful attacks. Second, we introduce a CRT-based secret recovery method that aggregates partial secrets from independent sub-attacks. By leveraging the Chinese Remainder Theorem to reconstruct full secrets from congruence relations, our method adapts to arbitrary secret distributions. Furthermore, by using a tighter variant of Banaszczyk's measure inequality, we obtain a precise parameter range for the dual attack's efficacy through rigorous mathematical proof, and achieve the same complementary gap with the contradictory regime (proposed by Ducas et al.) as in Pouly et al.'s work. Experiments show a 15-29-bit improvement in attack cost estimates compared to the original framework.
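For reference, the basic modulus-switching map on an LWE sample rounds each coordinate from modulus q to a smaller modulus p, introducing a small rounding error. The sketch below shows only this standard operation (with toy parameters), not the paper's Poisson-summation error-elimination mechanism.

```python
# Standard LWE modulus switching: map (a, b) mod q to (a', b') mod p by
# scaling and rounding. The rounding error is what the paper's
# rational-lattice treatment must account for.
import random

def mod_switch(vec, q, p):
    return [round(v * p / q) % p for v in vec]

q, p, n = 3329, 257, 8
s = [random.randrange(2) for _ in range(n)]          # small secret
a = [random.randrange(q) for _ in range(n)]
e = random.randrange(-2, 3)                          # small noise
b = (sum(ai * si for ai, si in zip(a, s)) + e) % q

a2, b2 = mod_switch(a, q, p), mod_switch([b], q, p)[0]
# The switched sample satisfies b2 ~ <a2, s> + e' (mod p) for a slightly
# larger error e' that absorbs the per-coordinate rounding.
err = (b2 - sum(ai * si for ai, si in zip(a2, s))) % p
print(min(err, p - err))  # small
```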
Encrypted Matrix-Vector Products from Secret Dual Codes
Motivated by applications to efficient secure computation, we consider the following problem of encrypted matrix-vector product (EMVP). Let F be a finite field. In an offline phase, a client uploads an encryption of a matrix M over F to a server, keeping only a short secret key. The server stores the encrypted matrix.
In the online phase, the client may repeatedly send encryptions of query vectors q, which enable the client and the server to locally compute compact shares of the matrix-vector product Mq. The server learns nothing about M or q.
The shared output can either be revealed to the client or processed by another protocol.
We present efficient EMVP protocols based on variants of the learning parity with noise (LPN) assumption and the related learning subspace with noise (LSN) assumption.
Our EMVP protocols are field-agnostic in the sense that the parties only perform arithmetic operations over F, and are close to optimal with respect to both communication and computation. In fact, for sufficiently large dimensions (typically a few hundred), the online computation and communication costs of our LSN-based EMVP can be \emph{less than twice} the costs of computing Mq in the clear.
Combined with suitable secure post-processing protocols on the secret-shared output, our EMVP protocols are useful for a variety of secure computation tasks, including encrypted fuzzy search and secure ML.
Our technical approach builds on recent techniques for private information retrieval in the secret-key setting. The core idea is to encode the matrix and the queries using a pair of secret dual linear codes, while defeating algebraic attacks by adding noise.
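A toy one-time-pad version of the offline/online split conveys the interface (though none of the code-based security of the actual construction): the client masks M with a random matrix R offline, and online the two parties end up with additive shares of Mq. Unlike the real protocol, the query is sent in the clear here, and a fresh mask would be needed per use.

```python
# Toy EMVP interface with one-time-pad masking (illustration only; the
# paper's protocols use secret dual codes plus noise, not a plain OTP).
import random

P = 2**31 - 1
def rand_mat(r, c): return [[random.randrange(P) for _ in range(c)] for _ in range(r)]
def matvec(M, v):   return [sum(m * x for m, x in zip(row, v)) % P for row in M]

rows, cols = 4, 3
M = rand_mat(rows, cols)
R = rand_mat(rows, cols)                       # client's secret mask
M_enc = [[(m + r) % P for m, r in zip(mr, rr)] # offline upload: M + R
         for mr, rr in zip(M, R)]

q = [random.randrange(P) for _ in range(cols)] # online query (in clear here)
server_share = matvec(M_enc, q)                # server: (M + R) q
client_share = [(-x) % P for x in matvec(R, q)]# client: -R q
result = [(a + b) % P for a, b in zip(server_share, client_share)]
assert result == matvec(M, q)                  # shares sum to M q
```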
Classify Directly: A Dynamic Time SPA Classification Method Based on DTW
Side-channel analysis remains a critical threat to public-key cryptographic implementations. Simple Power Analysis (SPA) techniques can extract secret keys from a single power trace, often using clustering-based classification methods. However, traces captured in real-world environments often suffer from misalignment and variable trace lengths due to unstable clocks and random delays. As a result, clustering methods must rely on alignment methods that may alter the original information in the traces. To address this problem, this work proposes Dynamic Time Classification (DTC), an alternative approach for classifying cryptographic operations in SPA based on Dynamic Time Warping (DTW). Unlike clustering methods, DTC inherently compares power traces without requiring fixed-length segments, which greatly improves its adaptability to unequal traces and thus allows different operations to be classified relatively stably. Experimental results on public-key and post-quantum algorithm implementations show that DTC is no less accurate than clustering methods and is more robust to timing variations. This work also systematically categorizes the features of different operations and explores the effects of different SPA methods on each type of feature, conducts experiments with and without random delays for the different categories, compares the accuracy of different alignment methods, and discusses the feasibility of DTW as a preprocessing method.
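Since DTW is the core primitive here, a compact textbook implementation of the DTW distance is included below; the nearest-template classification step is our own illustrative stand-in for the paper's DTC classifier.

```python
# Textbook dynamic-time-warping distance between two 1-D sequences of
# possibly different lengths (O(len(a)*len(b)) dynamic program).
def dtw(a, b):
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Nearest-template classification: label a trace by its closest
# reference operation under DTW, tolerating length differences.
templates = {"square": [0, 1, 2, 1, 0], "multiply": [0, 2, 4, 4, 2, 0]}
trace = [0, 1, 2, 2, 1, 0]  # slightly stretched "square"
label = min(templates, key=lambda k: dtw(trace, templates[k]))
print(label)  # "square"
```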
Testing the Tests - Opportunities for Corrections and Improvements in NIST SP 800-22r1a and its Reference Code
During an analysis of the NIST SP 800-22r1a document, which specifies a test suite for validating random number generators, and of its reference implementation, various issues were identified, including imprecise probability constants, erroneous example calculations, and discrepancies within test descriptions.
Here, we provide a technical analysis of the Statistical Test Suite, documenting inconsistencies and deficiencies in both the theoretical specification and the official C reference implementation.
The analysis also reveals concrete implementation bugs and structural limitations in the reference codebase.
Rather than revising any of the statistical tests, this work documents these flaws to support the ongoing revision process of the standard and to encourage more reliable and maintainable implementations.
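As a reminder of what such a test computes, here is the simplest SP 800-22 test, the monobit frequency test, in a straightforward implementation; the formula follows the well-known published description of the test, not the reference C code under discussion.

```python
# SP 800-22 monobit (frequency) test: the bit string passes if the
# proportion of ones is consistent with a fair coin. The p-value is
# erfc(|S| / sqrt(2n)) where S = #ones - #zeros over n bits.
import math, os

def monobit_pvalue(bits: str) -> float:
    n = len(bits)
    s = bits.count("1") - bits.count("0")
    return math.erfc(abs(s) / math.sqrt(2 * n))

bits = "".join(f"{b:08b}" for b in os.urandom(1000))
p = monobit_pvalue(bits)
print(p, "PASS" if p >= 0.01 else "FAIL")  # 0.01 is the usual threshold
```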
Posterior Security: Anonymity and Message Hiding of Standard Signatures
We introduce posterior security of digital signatures: additional security features applied after the original signature is generated. It is motivated by the scenario in which users store their secret keys in secure hardware and can only obtain a standard signature through a standardized interface. In this paper, we consider two different posterior security features: anonymity and message hiding.
We first introduce the incognito signature, a new mechanism to anonymize a standard signature. Different from other ring or group signatures, the signer generates a standard (non-anonymous) signature first. The signature is then anonymized by a converter before being sent to the verifier, by hiding the signer's public key within a set of decoy public keys. We then introduce the concealed signature, which hides the message in a commitment; the standard signature is converted such that it can be verified against the commitment. The models of posterior anonymity and posterior message hiding capture the separation of the signer and the converter: anonymity or message hiding is provided by the converter after the creation of a standard signature by the signer.
We give generic constructions of incognito and concealed signatures, applicable to standard signatures like Schnorr. This yields the first practical anonymized ECDSA signature, with signature size logarithmic in the number of decoy public keys. The existing ring signature scheme with ECDSA keys is at least 152 times longer than ours.
The incognito signature and concealed signature can be composed to provide posterior anonymity and message hiding. It is useful in applications like two-tier central bank digital currency, where users want to hide their addresses (public keys) and transaction amounts (messages) when the payment is settled in the interbank layer.
ProbeNav - Fast, precise and repeatable positioning of electromagnetic probes for local Side-Channel Attacks
Localized side-channel analysis makes it possible to evaluate only the relevant chip area by measuring near-field electromagnetic (EM) emanations. Compared to global power measurements, this can lead to more powerful attacks, as the signal-to-noise ratio is higher and irrelevant circuit components are not included in the recorded measurements. Especially for profiled attacks and their reproduction, the probe position in a localized scenario is of utmost importance: ideally, a probe should be placed identically during the profiling and attack phases, as small variations can have a large impact on the success of the attack. In this work we present our methodology, ProbeNav, for accurately repositioning an EM probe optimized for localized, i.e. near-field, measurements. We evaluate cross-correlation, Oriented FAST and Rotated BRIEF (ORB), and particle filters to re-calibrate the coordinate system of our setup. As a result, our methodologies show that precise positioning on an STM32F303 microcontroller is possible for a profiled attack scenario with different EM probes. Furthermore, by requiring only a single trace per position, profiling is 3× faster and repositioning 28× faster in terms of the number of collected traces compared to the state of the art.
Practical Deniable Post-Quantum X3DH: A Lightweight Split-KEM for K-Waay
The Signal Protocol, underpinning secure messaging for billions, faces the challenge of migrating to a post-quantum world while preserving critical properties like asynchrony and deniability. While Signal's PQXDH key exchange protocol addresses post-quantum confidentiality, full migration of the X3DH protocol remains elusive. Relying on a split KEM (K-Waay, USENIX '24) offers a promising migration path, but it has so far suffered from size limitations compared to concurrent works leveraging ring signatures.
This work introduces Sparrow-KEM and Sym-Sparrow-KEM, novel asymmetric and symmetric split KEMs, i.e., KEMs whose keys can be used either only in one direction or interchangeably for sending and receiving, respectively. They are designed to optimize the K-Waay protocol for size efficiency. Leveraging the MLWE assumption, these constructions reduce the communication of prior split-KEM-based post-quantum X3DH by a factor of 5.1× and provide a 40× speedup. Additionally, Sym-Sparrow-KEM is the first symmetric split KEM to offer deniability, IND-1KCA, and IND-1BatchCCA security, capturing implicit authentication properties. We provide formal security proofs for both schemes, including deniability. Our results demonstrate the feasibility of a compact and deniable post-quantum X3DH protocol based on split KEMs.
Neural-Inspired Advances in Integral Cryptanalysis
The study by Gohr et al. at CRYPTO 2019 and subsequent related works have shown that neural networks can uncover previously unused features, offering novel insights into cryptanalysis. Motivated by these findings, we employ neural networks to learn features specifically related to integral properties and integrate the corresponding insights into optimized search frameworks. These findings validate the framework of using neural networks for feature exploration, providing researchers with novel insights that advance established cryptanalysis methods.
Neural networks have inspired the development of more precise integral search models. By comparing the integral distinguishers obtained via neural networks with those identified by classical methods, we observe that existing automated search models often fail to find optimal distinguishers. To address this issue, we develop a meet-in-the-middle search framework that balances model accuracy and computational efficiency. As a result, we reduce the number of active plaintext bits required for an 11-round integral distinguisher on SKINNY-64-64, and further identify a 12-round key-dependent integral distinguisher—achieving one additional round over the previous best-known result.
The integral distinguishers discovered by neural networks enable key-recovery attacks on more rounds. We identify a 7-round key-independent integral distinguisher from neural networks with only one active plaintext cell, based on linear combinations of bits. This distinguisher enables a 15-round key-recovery attack on SKINNY-n-n through a strategy with 3 rounds of forward decryption and 5 rounds of backward encryption, improving upon the previous record by one round. The same distinguisher also enhances attacks on SKINNY-n-2n and SKINNY-n-3n. Additionally, we discover an 8-round key-dependent integral distinguisher using neural networks that further reduces the time complexity of key-recovery attacks against SKINNY.
Vεrity: Verifiable Local Differential Privacy
Local differential privacy (LDP) enables individuals to report sensitive data while preserving privacy. Unfortunately, LDP mechanisms are vulnerable to poisoning attacks, where adversaries controlling a fraction of the reporting users can significantly distort the aggregate output--much more so than in a non-private solution where the inputs are reported directly. In this paper, we present two novel solutions that prevent poisoning attacks under LDP while preserving its privacy guarantees.
Our first solution addresses scenarios where users report inputs for which a ground truth is available to a third party. The second tackles the more challenging case in which users locally generate their inputs and there is no ground truth that can be used to bootstrap verifiable randomness generation.
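For context, the canonical LDP mechanism that such poisoning targets is randomized response. The sketch below shows the mechanism and the debiased aggregate, making it easy to see the lever a coordinated fraction of false reports pushes on; this is background, not the paper's verification protocol.

```python
# Randomized response (epsilon-LDP for one bit) and the unbiased
# aggregate estimate, plus the effect of a simple poisoning attack.
import math, random

def rr_report(bit: int, eps: float) -> int:
    """Report truthfully with prob e^eps/(1+e^eps), else flip."""
    p = math.exp(eps) / (1 + math.exp(eps))
    return bit if random.random() < p else 1 - bit

def estimate(reports, eps):
    """Debias the observed frequency of ones."""
    p = math.exp(eps) / (1 + math.exp(eps))
    f = sum(reports) / len(reports)
    return (f - (1 - p)) / (2 * p - 1)

random.seed(1)
eps, n = 1.0, 100_000
true_bits = [1 if random.random() < 0.30 else 0 for _ in range(n)]
reports = [rr_report(b, eps) for b in true_bits]
print(round(estimate(reports, eps), 3))   # ~0.30

# Poisoning: 5% of users always report 1, regardless of their input;
# the debiasing step amplifies the resulting shift in the estimate.
poisoned = [1] * (n // 20) + reports[n // 20:]
print(round(estimate(poisoned, eps), 3))  # noticeably above 0.30
```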
Succinct Computational Secret Sharing for Monotone Circuits
Secret sharing is a cornerstone of modern cryptography, underpinning the secure distribution and reconstruction of secrets among authorized participants. In these schemes, succinctness, measured by the size of the distributed shares, has long been an area of both great theoretical and practical interest, with large gaps between constructions and the best known lower bounds. In this paper, we present a novel computational secret sharing scheme for monotone Boolean circuits that achieves substantially shorter share sizes than previously known constructions in the standard model. Our scheme attains public and user share sizes that depend only on the number of users n and the security parameter, and not on the size of the monotone circuit realizing the access structure, thus effectively eliminating the dependence on the circuit size. This marks a significant improvement over earlier approaches, whose share sizes grew with the number of gates in the circuit. Our construction makes use of indistinguishability obfuscation and injective one-way functions.
Unmasking TRaccoon: A Lattice-Based Threshold Signature with An Efficient Identifiable Abort Protocol
TRaccoon is an efficient 3-round lattice-based T-out-of-N threshold signature, recently introduced by del Pino et al. (Eurocrypt 2024). While the design resembles the classical threshold Schnorr signature Sparkle (Crites et al., Crypto 2023), one shortfall is that it has no means to identify malicious behavior, a property highly desired in practice. This is because, to resist lattice-specific attacks, TRaccoon relies on a technique called masking, which informally blinds each partial signature with a one-time additive mask. del Pino et al. left adding a mechanism for identifying malicious behavior to TRaccoon as an open problem.
In this work, we propose TRaccoon-IA, a variant of TRaccoon with an efficient identifiable abort protocol that allows malicious signers to be identified when the signing protocol fails. The identifiable abort protocol is a simple add-on to TRaccoon, keeping the original design intact, and incurs an added communication cost of 60 + 6.4·|T| KB only when signing fails. Along the way, we provide the first formal security analysis of a variant of LaBRADOR (Beullens et al., Crypto 2023) with zero-knowledge, encountering several hurdles when formalizing it in detail. Moreover, we give a new game-based definition for interactive identifiable abort protocols, extending the popular game-based definition used to prove unforgeability of recent threshold signatures.
On Graphs of Incremental Proofs of Sequential Work
In this work, we characterize the graphs of \emph{(graph-labeling) incremental proofs of sequential work} (iPoSW). First, we define \emph{incremental} graphs and prove they are necessary for iPoSWs. Relying on the space pebbling complexity of incremental graphs, we show that the depth-robust graphs underlying the PoSW of Mahmoody et al. are not incremental, and hence their PoSW cannot be transformed into an iPoSW.
Second, and toward a generic iPoSW construction, we define graphs whose structure is compatible with the incremental sampling technique (Döttling et al.). These are \emph{dynamic} graphs. We observe that the graphs underlying all PoSWs, standalone or incremental, are dynamic. We then generalize current iPoSW schemes by giving a generic construction that transforms any PoSW whose underlying graph is incremental and dynamic into an iPoSW. As a corollary, we get a new iPoSW based on the modified Cohen-Pietrzak graph (Abusalah et al.). When used in constructing blockchain light-client bootstrapping protocols (Abusalah et al.), such an iPoSW results in the most efficient bootstrappers/provers in terms of both proof size and space complexity.
Along the way, we show that previous iPoSW definitions allow for trivial solutions. To overcome this, we provide a refined definition that captures the essence of iPoSWs and is satisfied by all known iPoSW constructions.
Deterministic algorithms for class group actions
We present an algorithm for the CSIDH protocol that is fully deterministic and strictly constant time. It does not require dummy operations and can be implemented without conditional branches. Our proof-of-concept C implementation shows that a key exchange can be performed in a constant (i.e. fixed) number of finite field operations, independent of the secret keys. The algorithm relies on a technique reminiscent of the standard Montgomery ladder, and applies to the computation of isogenies that divide an endomorphism of smooth degree represented by its kernel. We describe our method in the general context of class group actions on oriented elliptic curves, giving rise to a large family of non-interactive key exchanges different from CSIDH.
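The classic Montgomery ladder that the technique is reminiscent of performs the same two operations in every step regardless of the key bit. Here it is in its plain exponentiation form for clarity, not as the paper's isogeny-division procedure.

```python
# Classic Montgomery ladder structure: every iteration performs one
# "add" and one "double" in a fixed order, so the operation sequence
# is independent of the secret scalar. Shown for plain modular
# exponentiation; the paper applies an analogous ladder to isogenies.
def ladder_pow(base: int, scalar: int, mod: int) -> int:
    r0, r1 = 1, base % mod
    for bit in bin(scalar)[2:]:
        if bit == "0":
            r1 = (r0 * r1) % mod   # add
            r0 = (r0 * r0) % mod   # double
        else:
            r0 = (r0 * r1) % mod   # add
            r1 = (r1 * r1) % mod   # double
    return r0

assert ladder_pow(5, 0b101101, 1_000_003) == pow(5, 0b101101, 1_000_003)
```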
Full-Authority Data Sharing Systems: Ciphertext-Dependent Proxy Re-Encryption with Dynamic Key Generation
Proxy re-encryption (PRE) is a powerful primitive for secure cloud storage sharing. Suppose Alice stores encrypted datasets (ciphertexts) on a cloud server (proxy). If Bob requests data sharing, Alice shares the ciphertexts by computing and sending a re-encryption key to the proxy, which performs the re-encryption operation that generates ciphertexts decryptable by Bob; still, the proxy cannot access the plaintexts/datasets. Traditionally, the re-encryption key can convert all of Alice's ciphertexts, and the proxy is expected to re-encrypt only the ciphertexts selected by the users (Alice/Bob). There is a trust issue: Alice must either grant full decryption rights (losing control) or rely on proxy-enforced access policies (vulnerable to collusion). Existing PRE schemes fail to reconcile fine-grained control with collusion resistance. If Alice uses different keys to encrypt each dataset, the re-encryption complexity is linear in the number of requested datasets. We propose full-authority data sharing, a novel paradigm combining ciphertext-dependent PRE (cdPRE) and dynamic key generation (dKG). Unlike traditional PRE, cdPRE binds re-encryption keys to specific ciphertexts, ensuring collusion resistance (i.e., the proxy and Bob together cannot access unauthorised data). dKG dynamically connects keys via key derivation functions; for example, the chain system reduces the per-dataset delegation cost to O(1) for sequential release in publication/subscription systems (vs. O(n) in trivial solutions, where n is the number of datasets). We instantiate this paradigm with Kyber (NIST-PQC standardised) and AES, prove its security, and experimentally verify the high efficiency of the scheme.
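The chain flavour of dKG can be pictured with a hash-based key-derivation chain: each dataset key is derived from the previous one, so releasing the i-th key implicitly delegates everything from position i onward with a single constant-size token. The SHA-256-based derivation below is a generic stand-in for the scheme's actual KDF.

```python
# Hash-chain key derivation: k[i+1] = KDF(k[i]). Handing Bob k[i] lets
# him derive keys for datasets i, i+1, ... with one constant-size token,
# while datasets before position i stay out of reach.
import hashlib

def kdf(key: bytes, label: bytes = b"next") -> bytes:
    return hashlib.sha256(key + label).digest()

k0 = b"\x00" * 32                     # root key (kept by Alice)
keys = [k0]
for _ in range(9):                    # keys for 10 sequential datasets
    keys.append(kdf(keys[-1]))

# Delegating from dataset 4 onward: Bob receives keys[4] only.
bob = keys[4]
derived = [bob]
for _ in range(5):
    derived.append(kdf(derived[-1]))
assert derived == keys[4:]            # Bob re-derives keys 4..9
```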
Walnut: A Generic Framework with Enhanced Scalability for BFT Protocols
The performance of traditional BFT protocols degrades significantly as n grows (n denoting the number of replicas), and thus they support at most a few hundred replicas. Such scalability issues severely limit the application scenarios of BFT. Meanwhile, the committee sampling technique has the potential to scale the replica size significantly by selecting a small portion of replicas as the committee and then conveying the consensus results to the rest.
However, this technique is rarely used in BFT, and there is still a lack of methods for scaling an already-deployed traditional BFT protocol to support more replicas without the costly re-deployment of a new protocol.
This paper introduces Walnut, a secure and generic committee-sampling-based modular consensus. Specifically, we use the verifiable random function for committee election and integrate committee rotation with the consensus. This resulting construction ensures that each selected committee is of a fixed size and acknowledged by all replicas, even in a partially synchronous network. For Walnut, we provide a rigorous definition and outline the necessary properties of each module to achieve safety and liveness.
To demonstrate Walnut's effectiveness, we apply the framework to HotStuff to obtain the Walnut-HS protocol, together with a proof of fit-in. We also implement Walnut-HS and compare its performance with HotStuff, using up to 100 Amazon EC2 instances in a WAN. The experiments show that Walnut-HS easily scales to 1,000 replicas with only a slight performance degradation, while HotStuff performs poorly or even breaks at such scales. Besides, Walnut-HS compares well with HotStuff in small-scale experiments: for example, the peak throughput of Walnut-HS is up to 38.6% higher than that of HotStuff.
Post-Quantum PKE from Unstructured Noisy Linear Algebraic Assumptions: Beyond LWE and Alekhnovich's LPN
Noisy linear algebraic assumptions with respect to random matrices, in particular Learning with Errors (LWE) and Alekhnovich's Learning Parity with Noise (Alekhnovich LPN), are among the most investigated assumptions that imply post-quantum public-key encryption (PKE). They enjoy elegant mathematical structure. Indeed, efforts to build post-quantum PKE and advanced primitives such as homomorphic encryption and indistinguishability obfuscation have increasingly focused on these two assumptions and their variants.
Unfortunately, this increasing reliance on these two assumptions for building post-quantum cryptography leaves us vulnerable to potential quantum (and classical) attacks on Alekhnovich LPN and LWE. Quantum algorithms are a rapidly advancing area, and we must stay prepared for unexpected cryptanalytic breakthroughs. Just three decades ago, a short time frame in the development of our field, Shor's algorithm rendered most then-popular number-theoretic and algebraic assumptions quantumly broken. Furthermore, within the last several years, we have witnessed major classical and quantum breaks of several assumptions previously introduced for post-quantum cryptography. Therefore, we ask the following question:
In a world where both LWE and Alekhnovich LPN are broken, can there still exist noisy linear assumptions that remain plausibly quantum hard and imply PKE?
To answer this question positively, we introduce two natural noisy linear algebraic assumptions that are both with respect to random matrices, exactly like LWE and Alekhnovich LPN, but with different error distributions. Our error distributions combine aspects of both small-norm and sparse error distributions. We design a PKE scheme from these assumptions and give evidence that they are likely to remain secure even in a world where both the LWE and Alekhnovich LPN assumptions are simultaneously broken. We also study basic properties of these assumptions and show that, in the parameter settings we employ to build PKE, neither of them is a ``lattice'' assumption, in the sense that we do not see a way to attack them using a solver for the lattice closest vector problem, except via NP-completeness reductions.
Rerandomizable Garbling, Revisited
In key-and-message homomorphic encryption (KMHE), the key space is a subset of the message space, allowing encryption of secret keys such that the same homomorphism can be applied to both the key and the message of a given ciphertext.
KMHE with suitable security properties is the main building block for constructing rerandomizable garbling schemes (RGS, Gentry et al., CRYPTO 2010), which enable advanced cryptographic applications like multi-hop homomorphic encryption, the YOSO-like MPC protocol SCALES (Acharya et al., TCC 2022 and CRYPTO 2024), and more.
The BHHO scheme (Boneh et al., CRYPTO 2008) is currently the only known KMHE scheme suitable for constructing RGS. However, an encryption of a secret key consists of a large number of group elements, which incurs a significant bandwidth and computational overhead and makes the scheme itself, and the RGS protocols building upon it, impractical.
We present a new, more efficient KMHE scheme with linear-size ciphertexts.
Despite using heavier cryptographic tools (pairings instead of plain DDH-hard groups), the concrete ciphertext size and computational costs are very significantly reduced. Compared to BHHO, we are able to shrink the ciphertext by 97.83% (from 16.68 MB to 360 kB) and reduce the estimated cost of encryption by 99.996% (from 4 minutes to 0.01 seconds).
Additionally, we introduce gate KMHE as a new tool for building more efficient RGS. Our RGS construction shrinks the size of a garbled gate by 98.99% (from 133.43 MB to 1.35 MB) and decreases the estimated cost of garbling by 99.998% (from 17 minutes to 0.02 seconds per gate) in comparison to Acharya et al.
In summary, our work shows for the first time that RGS and the SCALES protocol (and hence YOSO-like MPC) are practically feasible for simple circuits.
Improvements on the schemes VOX and QR UOV When minus is a plus
In this article, we present an improvement of QR UOV and a repair of VOX.
Thanks to the use of the minus perturbation, we are able to fully exploit the parameters in order to reduce the public key size; it also helps us to repair the VOX scheme. With QR UOV- we significantly reduce the size of the public key at the cost of a small increase in signature size, while with VOX- we obtain a very short public key. VOX- also maintains a very short signature size.
We also show that the minus perturbation is very useful for schemes that use the quotient ring (QR) perturbation.
Verifiable E-Voting with a Trustless Bulletin Board
Voter privacy and end-to-end (E2E) verifiability are critical features of electronic voting (e-voting) systems to safeguard elections. To achieve these properties, a perfect bulletin board (BB) is commonly assumed, providing consistent, reliable, and tamper-proof storage and transmission of voting data. However, in practice, BBs operate in asynchronous and unreliable networks and are hence susceptible to vulnerabilities such as equivocation attacks and dropped votes, which can compromise both verifiability and privacy. Although prior research has weakened the perfect-BB assumption, it still depends on trusting certain BB components.
In this work, we present and initiate a formal exploration of designing e-voting systems based on fully untrusted BBs. For this purpose, we leverage the notion of accountability and in particular use accountable BBs. Accountability ensures that if a security breach occurs, then cryptographic evidence can identify malicious parties. Fully untrusted BBs running in asynchronous networks bring new challenges. Among others, we identify several types of attacks that a malicious but accountable BB might be able to perform and propose a new E2E verifiability notion for this setting. Based on this notion and as a proof of concept, we construct the first e-voting system that is provably E2E verifiable and provides vote privacy even when the underlying BB is fully malicious. This establishes an alternative to traditional e-voting architectures that rely on (threshold) trusted BB servers.
T-Spoon: Tightly Secure Two-Round Multi-Signatures with Key Aggregation
Multi-signatures over pairing-free cyclic groups have seen significant advancements in recent years, including achieving two-round protocols and supporting key aggregation.
Key aggregation enables the combination of multiple public keys into a single succinct aggregate key for verification and has essentially evolved from an optional feature to a requirement.
To enhance the concrete security of two-round schemes, Pan and Wagner (Eurocrypt 2023, 2024) introduced the first tightly secure constructions in this setting.
However, their schemes do not support key aggregation, and their approach inherently precludes a single short aggregate public key.
This leaves the open problem of achieving tight security and key aggregation simultaneously.
In this work, we solve this open problem by presenting the first tightly secure two-round multi-signature scheme in pairing-free groups supporting key aggregation.
As with Pan and Wagner's schemes, our construction is based on the DDH assumption.
In contrast to theirs, it also has truly compact signatures, with signature size asymptotically independent of the number of signers.
Correlation power analysis of LESS and CROSS
This paper presents a side-channel attack targeting the LESS and CROSS post-quantum digital signature schemes, resulting in full key recovery for both. These schemes have advanced to the second round of NIST's call for additional signatures. By leveraging correlation power analysis and horizontal attacks, we are able to recover the secret key by observing the power consumption during the multiplication of an ephemeral secret vector with a public matrix. The attack path is enabled by the presence of a direct link between the secret key elements and the ephemeral secret, given correct responses. This attack targets version 1.2 of both schemes. In both settings we can recover the secret key in a single trace for the NIST security level I parameter sets. Additionally, we propose improvements to the existing horizontal attack on CROSS, reducing the number of rounds that need to be observed by an order of magnitude for the same parameter sets.
KeyJoin: Privacy-Focused CoinJoin Protocol for Bitcoin
Bitcoin is based on the blockchain, an open ledger containing information about every transaction in the Bitcoin network. The blockchain serves many purposes, but it also allows anyone to track all transactions and activities of each Bitcoin address. The privacy of the network is threatened by organizations that track transactions, and tracking and subsequent filtering of coins lead to a loss of the exchangeability of Bitcoin.
Despite Bitcoin's transparency, it is possible to increase user privacy using a variety of existing methods. One of these methods, called CoinJoin, was proposed by Bitcoin developer Greg Maxwell in 2013. This technique combines several users' transactions into a single transaction with multiple inputs and outputs, which makes transaction analysis more complicated.
This work describes KeyJoin, a privacy-focused CoinJoin protocol based on keyed-verification anonymous credentials (KVAC).
Towards Optimal Differential Attacks on FLY and PIPO
Lightweight block ciphers such as PIPO and FLY are designed to operate efficiently and securely in constrained environments. While the differential attack on PIPO-64-128 has already been studied by the designers, no concrete differential attack had been conducted for PIPO-64-256 and FLY. Motivated by this gap, we revisit the security of PIPO against differential attacks and generalize the analysis framework to make it applicable to structurally related ciphers. Based on this generalized framework, we search for key-recovery-attack-friendly distinguishers and apply clustering techniques to enhance their effectiveness in key-recovery attacks. As a result, we improve the previously proposed differential attack on PIPO-64-128, reducing its time complexity. Furthermore, we propose a 13-round differential attack on PIPO-64-256, which covers two more rounds than the previous result. We also apply the same methodology to FLY and present the first differential attack on 12-round FLY, reaching one round beyond the best-known distinguisher. We believe this work improves the understanding of the structures of FLY and PIPO, and provides a basis for future research on advanced key-recovery attacks for related cipher designs.
Registered Functional Encryption for Attribute-Weighted Sums with Access Control
In this work, we present Functional Encryption (FE) schemes for Attribute-Weighted Sums (AWS), introduced by Abdalla, Gong and Wee (Crypto 2020), in the registration-based setting (RFE). In such a setting, users sample their own public/private key pairs; a key curator registers user public keys along with their functions; encryption takes as input attribute-value pairs, where the attribute is public and the value is private; and decryption recovers the weighted sum while leaking no additional information about the private values. Recently, Agrawal, Tomida and Yadav (Crypto 2023) studied the attribute-based case of AWS (AB-AWS), providing fine-grained access control, where the function additionally carries an access policy, the input is extended with an attribute for that policy, and decryption recovers the weighted sum only if the policy is satisfied. Our main results are the following:
- We build the first RFE for (AB-)1AWS functionality, where the number of slots is one, that achieves adaptive indistinguishability-based security under the (bilateral) k-Lin assumption in prime-order pairing groups. Prior works achieve RFE for linear and quadratic functions without access control in the standard model, or for attribute-based linear functions in the generic group model.
- We develop the first RFE for AB-AWS functionality, where the number of slots is unbounded, that achieves very selective simulation-based security under the bilateral k-Lin assumption. Here, "very selective" means that the adversary declares the challenge attribute values, all registered functions, and the corrupted users upfront. Previously, SIM-secure RFEs were only constructed for linear and quadratic functions without access control in the same security model.
We devise a novel nested encoding mechanism that facilitates attribute-based access control and unbounded inputs in the registration-based setting for AWS functionalities, proven secure in the standard model. In terms of efficiency, our constructions feature short public parameters, secret keys independent of the input length, and compact ciphertexts unaffected by the length of the public inputs. Moreover, as required by RFE properties, all object sizes and algorithm costs scale poly-logarithmically with the total number of registered users in the system.
Universally Composable Interactive and Ordered Multi-Signatures
Multi-signatures allow a given set of parties to cooperate in order to create a digital signature whose size is independent of the number of signers. At the same time, no other set of parties can create such a signature. While non-interactive multi-signatures are known (e.g. BLS from pairings), many popular multi-signature schemes such as MuSig2 (which are constructed from pairing-free discrete logarithm-style assumptions) require interaction. Such interactive multi-signatures have recently found practical applications e.g. in the cryptocurrency space.
Motivated by classical and emerging use cases of such interactive multi-signatures, we introduce the first systematic treatment of interactive multi-signatures in the universal composability (UC) framework. Along the way, we revisit existing game-based security notions and prove that constructions secure in the game-based setting can easily be made UC secure and vice versa.
In addition, we consider interactive multi-signatures where the signers must interact in a fixed pattern (so-called ordered multi-signatures). Here, we provide the first construction of ordered multi-signatures based on the one-more discrete logarithm assumption, whereas the only other previously known construction required pairings. Our scheme achieves a stronger notion of unforgeability, guaranteeing that the adversary cannot obtain a signature altering the relative order of honest signers. We also present the first formalization of ordered multi-signatures in the UC framework and again show that our stronger game-based definitions are equivalent to UC security.
A Note on ``CABC: A Cross-Domain Authentication Method Combining Blockchain with Certificateless Signature for IIoT''
We show that the authentication method [Future Gener. Comput. Syst. 158: 516-529 (2024)] cannot be practically implemented, because the signature scheme is insecure against a certificateless public-key replacement forgery attack. The explicit dependency between the certificateless public key and the secret key is not properly used to construct an intractable problem, such as the Elliptic Curve Discrete Logarithm (ECDL) problem. An adversary can find an efficient signing algorithm functionally equivalent to the valid signing algorithm. We also correct some typos in the original presentation.
A note on closed addition chains and complete numbers
We introduce a new class of addition chains and show that the numbers for which these chains are optimal satisfy the Scholz conjecture, precisely the inequality $\ell(2^n - 1) \le n - 1 + \ell(n)$, where $\ell(m)$ denotes the minimal length of an addition chain for $m$.
Constant-time Integer Arithmetic for SQIsign
SQIsign, the only isogeny-based signature scheme submitted to NIST’s additional signature standardization call, achieves the smallest public key and signature sizes among all post-quantum signature schemes. However, its existing implementation, particularly in its quaternion arithmetic operations, relies on GMP’s big integer functions, which, while efficient, are often not designed for constant-time execution.
In this work, we take a step toward side-channel-protected SQIsign by implementing constant-time techniques for SQIsign's big integer arithmetic, which forms the computational backbone of its quaternion module. For low-level fundamental functions, including Euclidean division, exponentiation, and integer square root, we either extend or tailor existing solutions to SQIsign's requirements, such as handling signed integers or scaling to integers of up to 12,000 bits. Further, we propose a novel constant-time modular reduction technique designed to handle dynamically changing moduli. Our implementation is written in C without reliance on high-level libraries such as GMP, and we evaluate its constant-time properties using Timecop with Valgrind, which confirms the absence of timing-dependent execution paths. We provide experimental benchmarks across various SQIsign parameter sizes to demonstrate the performance of our constant-time implementation.
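To illustrate the kind of primitive such constant-time big-integer code is built from, here is a minimal generic sketch in C; the function names and limb layout are illustrative and not taken from the SQIsign codebase.

#include <stdint.h>
#include <stddef.h>

/* Branch-free word selection: returns a when bit == 1 and b when bit == 0.
   The mask is all-ones or all-zeros, so no secret-dependent branch occurs. */
static uint64_t ct_select(uint64_t a, uint64_t b, uint64_t bit) {
    uint64_t mask = (uint64_t)0 - bit;   /* bit in {0,1} -> 0x00...0 / 0xff...f */
    return (a & mask) | (b & ~mask);
}

/* Constant-time conditional subtraction r = a - (bit ? m : 0) over n limbs,
   the basic step of reduction modulo a (possibly secret) modulus: the memory
   access pattern and instruction trace are independent of bit. */
static uint64_t ct_cond_sub(uint64_t *r, const uint64_t *a, const uint64_t *m,
                            size_t n, uint64_t bit) {
    uint64_t borrow = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t mi = ct_select(m[i], 0, bit);
        uint64_t t1 = a[i] - mi;
        uint64_t b1 = (uint64_t)(a[i] < mi);
        uint64_t t2 = t1 - borrow;
        uint64_t b2 = (uint64_t)(t1 < borrow);
        r[i] = t2;
        borrow = b1 | b2;
    }
    return borrow;
}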
Worst-Case Time Analysis of Key Agreement Protocols in 10BASE-T1S Automotive Networks
With the rise of in-vehicle and car-to-x communication systems, ensuring robust security in automotive networks is becoming increasingly vital. As the industry shifts toward Ethernet-based architectures, the IEEE 802.1AE MACsec standard is gaining prominence as a critical security solution for future in-vehicle networks (IVNs). MACsec utilizes the MACsec Key Agreement Protocol (MKA), defined in the IEEE 802.1X standard, to establish secure encryption keys for data transmission. However, when applied to 10BASE-T1S Ethernet networks with multidrop topologies, MKA encounters a significant challenge known as the real-time paradox: the competing demands of prioritizing key agreement messages and real-time control data conflict with each other. Infineon addresses this challenge with its innovative In-Line Key Agreement (IKA) protocol. By embedding key agreement information directly within a standard data frame, IKA effectively resolves the real-time paradox and enhances network performance. This paper establishes a theoretical worst-case delay bound for key agreement in multidrop 10BASE-T1S IVNs with more than two nodes, using Network Calculus techniques. The analysis compares the MKA and IKA protocols in terms of performance. For a startup scenario involving a 16-node network with an MPDU size of 50 bytes, the MKA protocol exhibits a worst-case delay that is 1080% higher than that of IKA. As the MPDU size increases to 1486 bytes, this performance gap narrows significantly, reducing the delay difference to just 6.6%.
Simple Power Analysis Attack on SQIsign
The isogeny-based post-quantum digital signature algorithm SQIsign offers the most compact key and signature sizes among all candidates in the ongoing NIST call for additional post-quantum signature algorithms. To the best of our knowledge, we present the first Simple Power Analysis (SPA) side-channel attack on SQIsign, demonstrating its feasibility for key recovery.
Our attack specifically targets secret-dependent computations within Cornacchia's algorithm, a fundamental component of SQIsign's quaternion module. At the core of this algorithm, a secret-derived yet ephemeral exponent is used in a modular exponentiation subroutine. By performing SPA on the modular exponentiation, we successfully recover this ephemeral exponent. We then develop a method to show how this leaked exponent can be exploited to ultimately reconstruct the secret signing key of SQIsign.
Our findings emphasize the critical need for side-channel-resistant implementations of SQIsign, highlighting previously unexplored vulnerabilities in its design.
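The leakage pattern exploited here is the textbook one sketched below, a generic C illustration of a secret-dependent branch in square-and-multiply rather than SQIsign's actual code: the conditional multiply produces a visibly different power profile for 1-bits than for 0-bits, so a single trace can spell out the exponent.

#include <stdint.h>

/* Left-to-right square-and-multiply, vulnerable to SPA: the multiply at (*)
   executes only when the current exponent bit is 1, so the sequence of
   square/multiply operations visible in the power trace reveals exp. */
static uint64_t modexp_leaky(uint64_t base, uint64_t exp, uint64_t mod) {
    uint64_t acc = 1 % mod;
    for (int i = 63; i >= 0; i--) {
        acc = (uint64_t)((unsigned __int128)acc * acc % mod);      /* square */
        if ((exp >> i) & 1)                                        /* secret */
            acc = (uint64_t)((unsigned __int128)acc * base % mod); /* (*)    */
    }
    return acc;
}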
Row Reduction Techniques for n-Party Garbling
Recent advancements in maliciously secure garbling have significantly improved the efficiency of constant-round multi-party computation. Research in the field has primarily focused on reducing communication complexity through row reduction techniques and improvements to the preprocessing phase with the use of simpler correlations.
In this work, we present two contributions to reduce the communication complexity of state-of-the-art multi-party garbling with an arbitrary number of corruptions. First, we show how to achieve full row reduction for n-party garbled circuits in HSS17-style protocols (Hazay et al., Asiacrypt '17 & JC '20) and authenticated garbling (Yang et al., CCS '20), reducing the size of the garbled circuit per AND gate by 25% in both cases. Achieving row reduction in multi-party garbling has been an open problem which was partly addressed by the work of Yang et al. for authenticated garbling. In our work, we show a full row reduction for both garbling approaches, thus addressing this open problem completely.
Second, drawing inspiration from the work of Dittmer et al. (Crypto 2022), we propose a new preprocessing protocol to obtain the required materials for the garbling phase using large-field triples that can be generated with sublinear communication. The new preprocessing significantly reduces the communication overhead of garbled circuits. Our optimizations result in substantial communication reductions compared to HSS17 and to the state-of-the-art authenticated garbling of Yang et al., measured for 3 parties in a circuit with 10 million AND gates.
Bandwidth-Efficient Robust Threshold ECDSA in Three Rounds
Threshold ECDSA schemes distribute the capability of issuing signatures to multiple parties. They have been used in practical MPC wallets holding cryptocurrencies. However, most prior protocols are not robust, wherein even one misbehaving or non-responsive party mandates an abort. Robust schemes have been proposed (Wong et al., NDSS '23, '24), but they do not match the state-of-the-art round count of three (Doerner et al., S&P '24). In this work, we propose robust threshold ECDSA schemes RompSig-Q and RompSig-L that each take three rounds (two of which are broadcasts). Building on the works of Wong et al. and further optimized towards saving bandwidth, they respectively take (1.0t + 1.6) KiB and 3.0 KiB of outbound broadcast communication per signer, and thus exhibit bandwidth efficiency that is competitive in practical scenarios where broadcasts are natively handled. RompSig-Q preprocesses multiplications and features fast online signing; RompSig-L leverages threshold CL encryption for scalability and dynamic participation.
Fast Enhanced Private Set Union in the Balanced and Unbalanced Scenarios
Private set union (PSU) allows two parties to compute the union of their sets without revealing anything else. It can be categorized into balanced and unbalanced scenarios depending on the relative sizes of the two sets. Recently, Jia et al. (USENIX Security 2024) highlighted that existing scalable PSU solutions suffer from during-execution leakage and proposed a PSU with enhanced security for the balanced setting. However, their protocol's complexity is superlinear in the set size. Thus, the problem of constructing an enhanced PSU with linear complexity remains open, and no unbalanced enhanced PSU exists. In this work, we address these two open problems:
- Balanced case: We propose the first enhanced PSU with linear complexity. Compared to the state-of-the-art enhanced PSU (Jia et al., USENIX Security 2024), our protocol reduces communication cost and running time, with concrete factors depending on set sizes and network environments.
- Unbalanced case: We present the first unbalanced enhanced PSU, which achieves communication complexity sublinear in the size of the large set. Experimental results demonstrate that the larger the difference between the two set sizes, the better our protocol performs. For markedly unbalanced set sizes, running single-threaded under constrained bandwidth, our protocol requires only a small amount of communication. Compared with the state-of-the-art enhanced PSU, it achieves a considerable shrink in communication and a corresponding speedup in running time.
Repeated Agreement is Cheap! On Weak Accountability and Multishot Byzantine Agreement
Byzantine Agreement (BA) allows n processes to propose input values and reach consensus on a common, valid L-bit value, even in the presence of up to t faulty processes that can deviate arbitrarily from the protocol. Although strategies like randomization, adaptiveness, and batching have been extensively explored to mitigate the inherent limitations of one-shot agreement tasks, there has been limited progress on achieving good amortized performance for multi-shot agreement, despite its obvious relevance to long-lived functionalities such as state machine replication.
Observing that a weak form of accountability suffices to identify and exclude malicious processes, we propose new efficient and deterministic multi-shot agreement protocols for multi-value validated Byzantine agreement (MVBA) with a strong unanimity validity property (SMVBA) and interactive consistency (IC). Specifically, let κ denote the size of the cryptographic objects needed to solve Byzantine agreement in this setting. We achieve both IC and SMVBA with low amortized latency and only a bounded number of slower instances, and both protocols have amortized communication that is asymptotically optimal for input values larger than κ. These results mark a substantial improvement, up to a linear factor depending on the parameters, over prior results. To the best of our knowledge, the present paper is the first to achieve the long-term goal of implementing a state machine replication abstraction of a distributed service that is just as fast and efficient as its centralized version, but with greater robustness and availability.
High-Performance FPGA Implementations of Lightweight ASCON-128 and ASCON-128a with Enhanced Throughput-to-Area Efficiency
The ASCON algorithm was chosen for its efficiency and suitability for resource-constrained environments such as IoT devices. In this paper, we present a high-performance FPGA implementation of ASCON-128 and ASCON-128a, optimized for the throughput-to-area ratio. By executing a 6-round permutation in one cycle for ASCON-128 and a 4-round permutation in one cycle for ASCON-128a, we effectively maximize throughput while ensuring efficient resource utilization. Our implementation shows significant improvements over existing designs, achieving 34.16% better throughput-to-area efficiency on Artix-7 and 137.58% better throughput-to-area efficiency on Kintex-7 FPGAs. When comparing our results on the Spartan-7 FPGA with Spartan-6, we observed a 98.63% improvement in throughput-to-area efficiency. However, this improvement may also be influenced by the advanced capabilities of the Spartan-7 platform compared to the older Spartan-6, in addition to the design optimizations implemented in this work.
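For reference, here is a compact C sketch of one round of the 320-bit ASCON permutation following the public ASCON specification; the unrolled 6-round and 4-round per-cycle datapaths discussed above chain copies of this round function in hardware.

#include <stdint.h>

static uint64_t ror64(uint64_t x, int n) { return (x >> n) | (x << (64 - n)); }

/* One round of the ASCON permutation on the five 64-bit state words,
   with round constant rc (0xf0 down to 0x4b across the twelve rounds). */
static void ascon_round(uint64_t x[5], uint64_t rc) {
    uint64_t t[5];
    x[2] ^= rc;                                   /* constant addition  */
    x[0] ^= x[4]; x[4] ^= x[3]; x[2] ^= x[1];     /* substitution layer */
    for (int i = 0; i < 5; i++)
        t[i] = ~x[i] & x[(i + 1) % 5];
    for (int i = 0; i < 5; i++)
        x[i] ^= t[(i + 1) % 5];
    x[1] ^= x[0]; x[0] ^= x[4]; x[3] ^= x[2]; x[2] = ~x[2];
    x[0] ^= ror64(x[0], 19) ^ ror64(x[0], 28);    /* linear diffusion   */
    x[1] ^= ror64(x[1], 61) ^ ror64(x[1], 39);
    x[2] ^= ror64(x[2],  1) ^ ror64(x[2],  6);
    x[3] ^= ror64(x[3], 10) ^ ror64(x[3], 17);
    x[4] ^= ror64(x[4],  7) ^ ror64(x[4], 41);
}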
A Specification of an Anonymous Credential System Using BBS+ Signatures with Privacy-Preserving Revocation and Device Binding
Recently, there has been a growing interest in anonymous credentials (ACs) as they can mitigate the risk of personal data being processed by untrusted actors without consent and beyond the user's control. Furthermore, due to the privacy-by-design paradigm of ACs, they can prove possession of personal attributes, such as an authenticated government document containing sensitive personal information, while preserving the privacy of the individual by not actually revealing the data. Typically, AC specifications consider the privacy of individuals during the presentation of an AC, but often neglect privacy-preserving approaches for enhanced security features such as AC non-duplication or AC revocation. To achieve more privacy-friendly enhanced security features of non-duplication and privacy-preserving revocation, an AC can be partially stored on secure, trusted hardware and linked to a status credential that reflects its revocation status.
In this paper, we specify an AC system that satisfies the requirements of minimality of information, unlinkability, non-duplication, and privacy-preserving revocation.
This is achieved by adapting the hardware binding method of the Direct Anonymous Attestation protocol with the BBS+ short group signatures of Camenisch et al. and combining it with status credentials.
Sampling Arbitrary Discrete Distributions for RV Commitment Schemes Using the Trimmed-Tree Knuth-Yao Algorithm
Sampling from non-uniform randomness with an algorithm that keeps the sampler's internal randomness hidden is increasingly important for cryptographic applications, such as timing-attack-resistant lattice-based cryptography or certified differential privacy. In this paper we present a provably efficient sampler that maintains random sample privacy, or random sample hiding, and is applicable to arbitrary discrete random variables. Namely, we present a constant-time version of the classic Knuth-Yao algorithm that we name "trimmed-tree" Knuth-Yao. We establish distribution-tailored Boolean circuit complexity bounds for this algorithm, in contrast to the previous naive distribution-agnostic bounds. For a sub-Gaussian discrete distribution, with n the number of bits for representing the domain and m the bits of precision of the PDF values, we prove upper bounds on the Boolean circuit complexity of the trimmed-tree Knuth-Yao algorithm that improve exponentially over the naive bounds, and in certain parameter regimes we establish matching lower bounds. Moreover, by proving that the subtrees in the trimmed-tree Knuth-Yao circuit are small, we show that it can be computed by running small circuits in parallel and then performing a short sequential pass over their outputs. We apply these circuits to constructing random variable commitment schemes for arbitrary discrete distributions, giving exponential improvements in the number of random bits and circuit complexity used for certified differentially private means and counting queries over large datasets and domains.
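As background, below is a C sketch of the classic (non-constant-time) Knuth-Yao walk of the discrete distribution generating (DDG) tree that the trimmed-tree variant starts from; the table P, its dimensions, and rand_bit are illustrative placeholders, not the paper's interface.

#include <stddef.h>

#define N    8    /* support size of the target distribution (illustrative) */
#define PREC 32   /* bits of precision of each probability (illustrative)   */

extern const unsigned char P[N][PREC]; /* P[i][j]: j-th bit of Pr[X = i]    */
extern int rand_bit(void);             /* fresh uniform random bit          */

/* Classic Knuth-Yao sampling: walk the DDG tree level by level, consuming
   one random bit per level; at level col, each set probability bit
   P[row][col] is a terminal node, and we return row once the walk lands on
   one. Not constant time: the exit level depends on the random walk, which
   is precisely what the trimmed-tree circuit formulation avoids. */
int knuth_yao_sample(void) {
    int d = 0;                          /* position among this level's nodes */
    for (int col = 0; col < PREC; col++) {
        d = 2 * d + rand_bit();
        for (int row = N - 1; row >= 0; row--) {
            d -= P[row][col];
            if (d < 0)
                return row;
        }
    }
    return 0;  /* mass truncated to PREC bits; restart here in practice */
}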
Generalization of semi-regular sequences: Maximal Gröbner basis degree, variants of genericness, and related conjectures
Nowadays, the notion of semi-regular sequences, originally proposed by Fröberg, has become very important not only in mathematics but also in information science, in particular cryptology. For example, it is widely expected that randomly generated polynomials form a semi-regular sequence, and based on this observation, secure cryptosystems based on polynomial systems can be devised. In this paper, we deal with semi-regular sequences and their variant, named generalized cryptographic semi-regular sequences, and give a precise analysis of the complexity of computing a Gröbner basis of the ideal generated by such a sequence, with the help of several regularities of the ideal related to Lazard's bound on the maximal Gröbner basis degree and other bounds. We also study the genericness of the property that a sequence is semi-regular, and its variants, related to Fröberg's conjecture. Moreover, we discuss the genericness of another important property, that the initial ideal is weakly reverse lexicographic, related to Moreno-Socías' conjecture, and show some criteria for examining whether both Fröberg's conjecture and Moreno-Socías' conjecture hold at the same time.
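For reference, the standard characterization underlying Fröberg's conjecture, stated here in common notation rather than necessarily the paper's: a sequence $f_1,\dots,f_m$ of degrees $d_1,\dots,d_m$ in $n$ variables is semi-regular when the Hilbert series of the quotient ring equals
\[
\mathrm{HS}(z) \;=\; \left[\frac{\prod_{i=1}^{m}\bigl(1 - z^{d_i}\bigr)}{(1-z)^{n}}\right]_{+},
\]
where $[\,\cdot\,]_{+}$ truncates the power series at its first non-positive coefficient; the index of that coefficient is the degree of regularity, which drives Gröbner basis complexity estimates of the kind discussed above.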
Multi-Client Attribute-Based and Predicate Encryption, Revisited
Multi-client Attribute-Based Encryption (MC-ABE) is a generalization of key-policy ABE where attributes can be independently encrypted across several ciphertexts w.r.t. labels, and a joint decryption of these ciphertexts is possible if and only if (1) all ciphertexts share the same label, and (2) the combination of attributes satisfies the policy of the decryption key. All encryptors have their own secret key, and security is preserved even if some of them are known to the adversary.
Very recently, Pointcheval et al. (TCC 2024) presented a semi-generic construction of MC-ABE for restricted function classes, e.g., NC0 and constant-threshold policies. We identify an abstract criterion common to all their policy classes which suffices to present the construction in a fully black-box way and allows for a slight strengthening of the supported policy classes. The construction of Pointcheval et al. is based on pairings. We additionally provide a new lattice-based instantiation from (public-coin) evasive LWE.
Furthermore, we revisit existing constructions for policies that can be viewed as a conjunction of local policies (one per encryptor). Existing constructions from MDDH (Agrawal et al., CRYPTO 2023) and LWE (Francati et al., EUROCRYPT 2023) do not support encryption w.r.t. different labels. We show how this feature can be included. Notably, the security model of Francati et al. additionally guarantees attribute-hiding but does not capture collusions. Our new construction is also attribute-hiding and provides resilience against any polynomially bounded number of collusions which must be fixed at the time of setup.
One Bit to Rule Them All – Imperfect Randomness Harms Lattice Signatures
The Fiat-Shamir transform is one of the most widely applied methods for secure signature construction. Fiat-Shamir starts with an interactive zero-knowledge identification protocol and transforms this via a hash function into a non-interactive signature. The protocol's zero-knowledge property ensures that a signature does not leak information about its secret key, which is achieved by blinding the secret key via proper randomness.
Most prominent Fiat-Shamir examples are DSA signatures and the new post-quantum standard Dilithium.
In practice, DSA signatures have experienced fatal attacks via leakage of a few bits of the randomness per signature.
Similar attacks now emerge for lattice-based signatures, such as Dilithium.
We build on, improve and generalize the pioneering leakage attack on Dilithium by Liu, Zhou, Sun, Wang, Zhang, and Ming.
In theory, their original attack can recover a 256-dimensional subkey of Dilithium-II (aka ML-DSA-44) from leakage of a single bit of the masking randomness per signature, in any bit position.
However, the memory requirement of their attack grows exponentially in the bit position of the leak.
As a consequence, if the bit leak is in a high-order position, then their attack is infeasible.
In our improved attack, we introduce a novel transformation that eliminates the exponential memory requirement.
Thereby, we make the attack feasible for all bit positions.
Furthermore, our novel transformation significantly reduces the number of required signatures in the attack.
The attack applies more generally to all Fiat-Shamir-type lattice-based signatures.
For a signature scheme based on module LWE over a rank-k module, the attack uses a 1-bit leak per signature to efficiently recover a 1/k-fraction of the secret key.
In the ring LWE setting, which can be seen as module LWE with k = 1, the attack thus recovers the whole key.
For Dilithium-II, which uses k = 4, knowledge of a 1/4-fraction of the 1024-dimensional secret key lets its security estimate drop significantly.
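Schematically, in generic Fiat-Shamir-with-aborts notation (ours, not necessarily the paper's): each signature reveals the pair $(c, z)$ with
\[
z \;=\; y + c\,s ,
\]
so a leaked bit of the ephemeral randomness $y = z - c\,s$ is a known bit of a public affine function of the secret $s$; each signature thus contributes one bit-constraint on $s$, and sufficiently many signatures pin down the attacked coefficients.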
SoK: Dlog-based Distributed Key Generation
Distributed Key Generation (DKG) protocols are fundamental components of threshold cryptography, enabling key generation in a trustless manner for a range of cryptographic operations such as threshold encryption and signing. Particularly widespread are DKG protocols for discrete-logarithm-based cryptosystems. In this Systematization of Knowledge (SoK), we present a comprehensive analysis of existing DKG protocols in the discrete-logarithm setting, with the goal of identifying cryptographic techniques and design principles that facilitate the development of secure and resilient protocols. To offer a structured overview of the literature, we adopt a modular approach and classify DKG protocols based on their underlying network assumptions and cryptographic tools. These two factors determine how DKG protocols manage secret sharing and reach consensus, their essential building blocks. We also highlight various insights and suggest future research directions that could drive further advancements in this area.
An Attack on TON’s ADNL Secure Channel Protocol
We present an attack on the Abstract Datagram Network Layer (ADNL) protocol used in The Open Network (TON), currently the tenth largest blockchain by market capitalization. In its TCP variant, ADNL secures communication between clients and specialized nodes called liteservers, which provide access to blockchain data. We identify two cryptographic design flaws in this protocol: a handshake that permits session-key replay and a non-standard integrity mechanism whose security critically depends on message confidentiality. We transform these vulnerabilities into an efficient plaintext-recovery attack by exploiting two ADNL communication patterns, allowing message reordering across replayed sessions. We then develop a plaintext model for this scenario and construct an efficient algorithm that recovers the keystream using a fraction of known plaintexts and a handful of replays. We implement our attack and show that an attacker intercepting the communication between a TON liteserver and a widely deployed ADNL client can recover the keystream used to encrypt server responses by performing eight connection replays to the server. This allows the decryption of sensitive data, such as account balances and user activity patterns. Additionally, the attacker can modify server responses to manipulate blockchain information displayed to the client, including account balances and asset prices.
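The core of the plaintext-recovery step is the classic two-time-pad observation, sketched generically in C below (illustrative code, not ADNL's): once a replayed session reuses a keystream, any known plaintext from one session decrypts the other.

#include <stddef.h>

/* If two sessions encrypt under the same stream-cipher keystream ks, then
   c1[i] ^ c2[i] = p1[i] ^ p2[i]; known plaintext p1 yields the keystream
   and hence the other session's plaintext. */
void recover_from_replay(const unsigned char *c1, const unsigned char *p1,
                         const unsigned char *c2, unsigned char *p2_out,
                         size_t len) {
    for (size_t i = 0; i < len; i++) {
        unsigned char ks = c1[i] ^ p1[i];  /* keystream from known plaintext */
        p2_out[i] = c2[i] ^ ks;            /* decrypts the replayed session  */
    }
}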
Relating Definitions of Computational Differential Privacy in Wider Parameter Regimes
The literature on computational differential privacy (CDP) has focused almost exclusively on definitions that are computational analogs of `pure' ε-DP. We initiate the formal study of computational versions of approximate DP, i.e., (ε, δ)-DP with non-negligible δ. We focus on IND-CDP and SIM-CDP and show that the hierarchy between them when δ > 0 potentially differs substantially from when δ = 0. In one direction, we show that for δ > 0, any mechanism which is (ε, δ)-SIM-CDP is also (ε, δ)-IND-CDP, but only if ε is logarithmic in the security parameter. As a special case, this proves that the existing implication from SIM-CDP to IND-CDP does not hold for arbitrary ε, as previously claimed. Furthermore, we prove that when the parameters are the same in IND-CDP and SIM-CDP and ε is superlogarithmic, there exists a natural task that can be solved whilst satisfying SIM-CDP but which no IND-CDP mechanism can solve. This is the first separation in the CDP literature which is not due to using a task contrived specifically in order to give rise to the separation.
In the other direction, we show that the techniques for establishing an implication from (ε, δ)-IND-CDP to (ε, δ)-SIM-CDP extend only to showing that an (ε, δ)-IND-CDP mechanism is also SIM-CDP with somewhat weaker parameters. Finally, we show that the Groce-Katz-Yerukhimovich barrier results against separations between CDP and statistical DP also hold in the setting of non-negligible δ.
Randomized vs. Deterministic? Practical Randomized Synchronous BFT in Expected Constant Time
Most practical synchronous Byzantine fault-tolerant (BFT) protocols, such as Sync HotStuff (S&P 2020), follow the convention of partially synchronous BFT and adopt a deterministic design. Indeed, although these protocols have O(n) time complexity, they exhibit impressive performance in failure-free scenarios.
This paper challenges this conventional wisdom, showing that a randomized paradigm terminating in expected O(1) time may well outperform prior ones even in failure-free scenarios. Our framework reduces synchronous BFT to a new primitive called multi-valued Byzantine agreement with strong external validity (MBA-SEV). Inspired by the external validity property of multi-valued validated Byzantine agreement (MVBA), the additional validity properties allow us to build a BFT protocol where replicas agree on the hashes of the blocks. Our instantiation of the paradigm, Sonic, achieves O(n) amortized message complexity per block proposal, expected O(1) time, and enables a fast path of only two communication steps.
Our evaluation results using up to 91 instances on Amazon EC2 show that the peak throughput of Sonic and P-Sonic (a pipelining variant of Sonic) is 2.24x-14.52x and 3.08x-24.25x that of Sync HotStuff, respectively.
Security Analysis of NIST Key Derivation Using Pseudorandom Functions
Key derivation functions can be used to derive variable-length random strings that serve as cryptographic keys. They are integral to many widely-used communication protocols such as TLS, IPsec and Signal. NIST SP 800-108 specifies several key derivation functions based on pseudorandom functions such as CMAC and HMAC that can be used to derive additional keys from an existing cryptographic key. The standard either explicitly or implicitly requires these KDFs to be variable-output-length pseudorandom functions, collision resistant, and preimage resistant. Yet, since the publication of this standard in 2008, there has been no formal analysis to justify these security properties of the KDFs.
In this work, we give the formal security analysis of key derivation functions in NIST SP 800-108. We show both positive and negative results regarding these key derivation functions. For KCTR-CMAC, KFB-CMAC, and KDPL-CMAC, the key derivation functions based on CMAC in counter mode, feedback mode, and double-pipeline mode respectively, we prove that all of them are secure variable-output-length pseudorandom functions and are preimage resistant. We show that KFB-CMAC and KDPL-CMAC are collision resistant, while for KCTR-CMAC we can mount a collision attack that requires only six block cipher queries and succeeds with probability 1/4. For KCTR-HMAC, KFB-HMAC, and KDPL-HMAC, the key derivation functions based on HMAC in the respective modes, we show that all of them behave like variable-output-length pseudorandom functions. When the key of these key derivation functions is of variable length, they suffer from collision attacks. For the case when the key is of fixed length less than b bits, where b is the input block size of the underlying compression function, we can prove that they are collision resistant and preimage resistant.
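For orientation, here is a sketch of the counter-mode construction analyzed above (KCTR), instantiated with HMAC-SHA-256 via OpenSSL's pre-3.0 HMAC API; the 32-bit counter and length-field widths are parameters of SP 800-108 that we fix here purely for illustration.

#include <stdint.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

/* SP 800-108 counter mode:
   K(i) = PRF(K, [i]_2 || Label || 0x00 || Context || [L]_2),
   with the derived key K(1) || K(2) || ... truncated to outlen bytes. */
static void kdf_ctr_hmac_sha256(const uint8_t *key, size_t keylen,
                                const uint8_t *label, size_t labellen,
                                const uint8_t *context, size_t ctxlen,
                                uint8_t *out, size_t outlen) {
    uint32_t Lbits = (uint32_t)(outlen * 8);
    uint8_t lbe[4] = { (uint8_t)(Lbits >> 24), (uint8_t)(Lbits >> 16),
                       (uint8_t)(Lbits >> 8),  (uint8_t)Lbits };
    size_t done = 0;
    for (uint32_t i = 1; done < outlen; i++) {
        uint8_t ibe[4] = { (uint8_t)(i >> 24), (uint8_t)(i >> 16),
                           (uint8_t)(i >> 8),  (uint8_t)i };
        uint8_t block[32];
        unsigned int blen = 0;
        HMAC_CTX *ctx = HMAC_CTX_new();
        HMAC_Init_ex(ctx, key, (int)keylen, EVP_sha256(), NULL);
        HMAC_Update(ctx, ibe, 4);                 /* [i]_2   */
        HMAC_Update(ctx, label, labellen);        /* Label   */
        HMAC_Update(ctx, (const uint8_t *)"", 1); /* 0x00    */
        HMAC_Update(ctx, context, ctxlen);        /* Context */
        HMAC_Update(ctx, lbe, 4);                 /* [L]_2   */
        HMAC_Final(ctx, block, &blen);
        HMAC_CTX_free(ctx);
        size_t take = outlen - done < 32 ? outlen - done : 32;
        memcpy(out + done, block, take);
        done += take;
    }
}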
Groebner Basis Cryptanalysis of Anemoi
Arithmetization-Oriented (AO) symmetric primitives play an important role in the efficiency and security of zero-knowledge (ZK) proof systems. The design and cryptanalysis of AO symmetric-key primitives is a new topic, focusing particularly on algebraic aspects. An efficient AO hash function aims at lowering the multiplicative complexity of the hash function's arithmetic circuit over a suitable finite field. The AO hash function Anemoi was proposed at CRYPTO 2023.
In this work we present an in-depth Groebner basis (GB) cryptanalysis of Anemoi over GF(p). The main aim of any GB cryptanalysis is to obtain a well-structured set of polynomials representing the target primitive, and finally solve this system of polynomials using an efficient algorithm.
We propose a new polynomial modelling for Anemoi that we call ACICO. We show that using ACICO one can obtain a GB defined by a well-structured set of polynomials. Moreover, by utilising ACICO we can prove the exact complexity of the Groebner basis computation (w.r.t. Buchberger's algorithm) in the cryptanalysis of Anemoi. The structured GB further allows us to prove the dimension of the quotient space, which was conjectured in a recently published work.
Afterwards, using known approaches, we provide the complexity analysis for computing the variety (i.e., the solutions) of the GB polynomial system corresponding to Anemoi, which is the final step in GB cryptanalysis. In particular, we show that the GB polynomial structure allows us to use the Wiedemann algorithm and improve the efficiency of cryptanalysis compared to previous works.
Our GB cryptanalysis is applicable to more than two branches (a parameter in Anemoi), while previously published results showed cryptanalysis only for two branches. Our complexity analysis implies that the security of Anemoi should not rest on the hardness of the GB computation.
We also positively address an important mathematical question in the GB cryptanalysis of Anemoi, namely: does the Anemoi polynomial system have a shape form? By proving this, we guarantee that upon application of a basis conversion method like FGLM, one can obtain a convenient system of polynomials that is easy to solve.
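For reference (in common notation, not necessarily the paper's), a zero-dimensional ideal is in shape position when its reduced lexicographic Gröbner basis has the form
\[
\{\, x_1 - g_1(x_n),\; \dots,\; x_{n-1} - g_{n-1}(x_n),\; g_n(x_n) \,\},
\]
so that, after an FGLM basis conversion, solving the whole system reduces to finding the roots of the single univariate polynomial $g_n$ and back-substituting.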
HydraProofs: Optimally Computing All Proofs in a Vector Commitment (with applications to efficient zkSNARKs over data from multiple users)
In this work, we introduce HydraProofs, the first vector commitment (VC) scheme that achieves the following two properties. (i) The prover can produce all the opening proofs for different elements (or consecutive sub-arrays) of a vector of size N in optimal time O(N). (ii) It is directly compatible with a family of zkSNARKs that encode their input as a multi-linear polynomial, i.e., our VC can be directly used when running the zkSNARK on its pre-image, without the need to "open" the entire vector pre-image inside the zkSNARK. To the best of our knowledge, all prior VC schemes either achieve (i) but are not efficiently "pluggable" into zkSNARKs (e.g., a Merkle tree commitment that requires re-computing the entire hash tree inside the circuit), or achieve (ii) but take O(N log N) time. We then combine HydraProofs with the seminal GKR protocol and apply the resulting zkSNARK in a setting where multiple users participate in a computation executed by an untrusted server and each user wants to ensure the correctness of the result and that her data was included. Our experimental evaluation shows our approach outperforms prior ones by 4-16x for prover times on general circuits. Finally, we consider two concrete application use cases, verifiable secret sharing and verifiable robust aggregation. For the former, our construction achieves the first scheme for Shamir's secret sharing with a linear-time prover (lower than the time needed for the dealer's computation). For the latter, we propose a scheme that works against misbehaving aggregators, and our experiments show it can be reasonably deployed in existing schemes with minimal slow-downs.
Post-Quantum Cryptography in eMRTDs: Evaluating PAKE and PKI for Travel Documents
Passports, identity cards and travel visas are examples of machine readable travel documents (MRTDs), or eMRTDs for their electronic variants. The data exchanged between these documents and a reader is secured with a standardized password-authenticated key exchange (PAKE) protocol known as PACE.
A new world-wide protocol migration is expected with the arrival of post-quantum cryptography (PQC) standards. In this paper, we focus on the impact of this migration on constrained embedded devices as used in eMRTDs. We present a feasibility study of a candidate post-quantum secure PAKE scheme as the replacement for PACE on existing widely deployed resource-constrained chips. In a wider context, we study the size, performance and security impact of adding post-quantum cryptography with a focus on chip storage and certificate chains for existing eMRTDs.
We show that if the required post-quantum certificates for the eMRTD fit in memory, the migration of existing eMRTD protocols to their post-quantum secure equivalents is already feasible, though a performance penalty has to be paid. On a resource-constrained SmartMX3 P71D600 smart card, designed with classical cryptography in mind, a post-quantum secure PAKE algorithm using the recommended parameter set of the new PQC standard ML-KEM executes in under a second. This migration will be aided by the future inclusion of dedicated hardware accelerators and increased memory, allowing storage of larger keys and improving performance.