Honest Majority MPC with Communication in Minicrypt
In this work, we consider the communication complexity of MPC protocols in the honest majority setting achieving malicious security, in both the information-theoretic and computational settings. On the one hand, we study the possibility of efficiently basing honest majority MPC protocols on the oblivious linear evaluation (OLE)-hybrid model with information-theoretic security. More precisely, we instantiate the preprocessing phase of the recent work Sharing Transformation (Goyal, Polychroniadou, and Song, CRYPTO 2022) assuming random OLE correlations. Notably, we are able to prepare packed Beaver triples with malicious security, achieving amortized communication of field elements plus a number of OLE correlations per packed Beaver triple, which is the best known result. To further prepare random OLE correlations efficiently, we resort to IKNP-style OT extension protocols (Ishai et al., CRYPTO 2003) in the random oracle model.
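For readers unfamiliar with the object being prepared here, the sketch below shows how a single (unpacked) Beaver triple is consumed to multiply two additively shared values in the semi-honest setting; this is textbook background, not the packed, maliciously secure protocol of the paper, and all parameters are illustrative.

```python
import secrets

P = 2**61 - 1  # illustrative prime field modulus

def share(x, n):
    """Additively share x among n parties."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def open_shares(shares):
    """Reconstruct the shared value."""
    return sum(shares) % P

def beaver_multiply(x_sh, y_sh, a_sh, b_sh, c_sh):
    """Multiply shared x and y using a preprocessed triple (a, b, c = a*b).

    Parties open the masked values d = x - a and e = y - b, then locally
    compute shares of x*y = d*e + d*b + e*a + c (the public term d*e is
    added by one designated party only).
    """
    d = open_shares([(x - a) % P for x, a in zip(x_sh, a_sh)])
    e = open_shares([(y - b) % P for y, b in zip(y_sh, b_sh)])
    z_sh = [(d * b + e * a + c) % P for a, b, c in zip(a_sh, b_sh, c_sh)]
    z_sh[0] = (z_sh[0] + d * e) % P
    return z_sh

# Sanity check with 3 parties.
n = 3
a, b = secrets.randbelow(P), secrets.randbelow(P)
x, y = 12345, 67890
z_sh = beaver_multiply(share(x, n), share(y, n),
                       share(a, n), share(b, n), share(a * b % P, n))
assert open_shares(z_sh) == x * y % P
```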
On the other hand, we derive a communication lower bound for preparing OLE correlations in the information-theoretic setting based on negative results due to Damgård, Larsen, and Nielsen (CRYPTO 2019).
Combining our positive result with the work of Goyal, Polychroniadou, and Song (CRYPTO 2022), we derive an MPC protocol with amortized communication of elements per gate in the random oracle model achieving malicious security, where denotes the length of a field element and is the security parameter.
Transistor: a TFHE-friendly Stream Cipher
Fully Homomorphic Encryption (FHE) allows computations on encrypted data without requiring decryption, ensuring data privacy during processing. However, FHE introduces a significant expansion of ciphertext sizes compared to plaintexts, which results in higher communication costs. A practical solution to mitigate this issue is transciphering, where only the master key is homomorphically encrypted, while the actual data is encrypted using a symmetric cipher, usually a stream cipher. The server then homomorphically evaluates the stream cipher to convert the symmetrically encrypted data into homomorphically encrypted form.
We introduce Transistor, a stream cipher specifically designed for efficient homomorphic evaluation within the TFHE scheme, a widely used FHE framework known for its fast bootstrapping and ability to handle low-precision data. Transistor operates over a small prime field chosen to optimize TFHE performance. Its components are carefully engineered to both control noise growth and provide strong security guarantees. First, a simple TFHE-friendly implementation technique for LFSRs allows us to use such components to cheaply increase the state size. At the same time, a small Finite State Machine (FSM) is the only part of the state updated non-linearly, each non-linear operation corresponding in TFHE to a rather expensive Programmable Bootstrapping. This update is done using an AES-round-like transformation. In contrast to other stream ciphers like SNOW or LEX, our construction comes with information-theoretic security arguments proving that an attacker cannot obtain any information about the secret key from three or fewer consecutive keystream outputs. These information-theoretic arguments are then combined with a thorough analysis of potential correlations to bound the minimal keystream length required for recovering the secret key.
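To make the architecture concrete, here is a toy Python model of the design pattern described above: a cheap, linear LFSR feeding a small FSM whose non-linear S-box update is what would cost one programmable bootstrapping per call under TFHE. All parameters (field, register sizes, taps, S-box) are placeholders, not the actual Transistor specification.

```python
Q = 17  # a small prime field; placeholder for the field Transistor uses

# Placeholder parameters -- NOT the real Transistor specification.
TAPS = [1, 3, 4]                        # feedback taps of the toy LFSR
SBOX = [pow(v, 3, Q) for v in range(Q)] # toy S-box; one PBS per call in TFHE

class ToyStreamCipher:
    """LFSR (cheap, linear) + small FSM (non-linear, costs bootstraps)."""

    def __init__(self, key):
        self.lfsr = list(key[:8])   # large linear state, cheap to update
        self.fsm = list(key[8:12])  # small non-linear state

    def clock(self):
        # Linear update: one LFSR step over F_Q (cheap under TFHE).
        fb = sum(self.lfsr[t] for t in TAPS) % Q
        out_l = self.lfsr.pop(0)
        self.lfsr.append(fb)
        # Non-linear FSM update: each S-box lookup corresponds to one
        # programmable bootstrapping in the homomorphic evaluation.
        self.fsm = [SBOX[(f + out_l) % Q] for f in self.fsm]
        # Keystream symbol mixes linear and non-linear parts.
        return (out_l + self.fsm[0]) % Q

key = [5, 1, 9, 13, 2, 7, 11, 3, 8, 6, 4, 10]
cipher = ToyStreamCipher(key)
keystream = [cipher.clock() for _ in range(16)]
```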
Our implementation of Transistor significantly outperforms the state of the art of TFHE transciphering, achieving a throughput of over 60 bits/s on a standard CPU, all while avoiding the need for an expensive initialization process.
Securely Instantiating 'Half Gates' Garbling in the Standard Model
Garbling is a fundamental cryptographic primitive, with numerous theoretical and practical applications. Since the first construction by Yao (FOCS’82, ’86), a line of work has concerned itself with reducing the communication and computational complexity of that construction. One of the most efficient garbling schemes to date is the ‘Half Gates’ scheme by Zahur, Rosulek, and Evans (Eurocrypt’15). Despite its widespread adoption, the provable security of this scheme has been based on assumptions whose only instantiations are in idealized models. For example, in their original paper, Zahur, Rosulek, and Evans showed that hash functions satisfying a notion called circular correlation robustness (CCR) suffice for this task, and then proved that CCR-secure hash functions can be instantiated in the random permutation model. In this work, we show how to securely instantiate the Half Gates scheme in the standard model. To this end, we first show how this scheme can be securely instantiated given a (family of) weak CCR hash functions, a notion that we introduce. Furthermore, we show how a weak CCR hash function can be used to securely instantiate other efficient garbling schemes, namely the ones by Rosulek and Roy (Crypto’21) and Heath (Eurocrypt’24). Thus we believe this notion to be of independent interest. Finally, we construct such weak CCR hash functions using indistinguishability obfuscation and one-way functions. The security proof of this construction constitutes our main technical contribution. While our construction is not practical, it serves as a proof of concept supporting the soundness of these garbling schemes, which we regard as particularly important given the recent initiative by NIST to standardize garbling, and the potential adoption of the optimizations in Half Gates.
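For reference, the circular correlation robustness notion invoked above can be stated informally as follows (our paraphrase of the standard definition, not the paper's exact formulation): for a hash function H and a secret global offset R, oracle access to

𝒪_R(x, i, b) = H(x ⊕ R, i) ⊕ b · R

should be indistinguishable from access to a random function, for distinguishers that never query the same pair (x, i) with both b = 0 and b = 1 (such a query pair would trivially reveal R in the real world).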
Cryptanalysis of rank-2 module-LIP: a single real embedding is all it takes
The rank-2 module-LIP problem was introduced in cryptography by Ducas, Postlethwaite, Pulles, and van Woerden (Asiacrypt 2022) to construct the highly performant HAWK scheme. A first cryptanalytic work by Mureau, Pellet-Mary, Pliatsok, and Wallet (Eurocrypt 2024) showed a heuristic polynomial-time attack against the rank-2 module-LIP problem over totally real number fields. While mathematically interesting, this attack focuses on number fields that are not relevant for cryptography. The main families of fields used in cryptography are the highly predominant cyclotomic fields (used, for instance, in the HAWK scheme), as well as the NTRU Prime fields, used for instance in the eponymous NTRU Prime scheme (Bernstein, Chuengsatiansup, Lange, van Vredendaal, SAC 2017).
In this work, we generalize the attack of Mureau et al. against rank-2 module-LIP to the family of all number fields with at least one real embedding, which contains the NTRU Prime fields. We present three variants of our attack: first, a heuristic one that runs in quantum polynomial time. Second, under the extra assumption that the defining polynomial of the field has a 2-transitive Galois group (which is the case for the NTRU Prime fields), we give a provable attack that runs in quantum polynomial time. Third, with the same 2-transitivity assumption, we give a heuristic attack that runs in classical polynomial time. For the latter we use a generalization of the Gentry-Szydlo algorithm to any number field, which might be of independent interest.
Context-Dependent Threshold Decryption and its Applications
We initiate the study of high-threshold public-key decryption, along with an enhanced security feature called context-dependent decryption.
Our study includes definitions, constructions, security proofs, and applications.
The notion of high-threshold decryption has received almost no attention in the literature. The enhanced security feature of context-dependent decryption is entirely new, and it plays an important role in many natural applications of threshold decryption.
New Techniques for Random Probing Security and Application to Raccoon Signature Scheme
The random probing model formalizes a leakage scenario where each wire in a circuit leaks with some probability p. This model holds practical relevance due to its reduction to the noisy leakage model, which is widely regarded as the appropriate formalization for power and electromagnetic side-channel attacks.
In this paper, we present new techniques for designing efficient masking schemes that achieve tighter random probing security with lower complexity. First, we introduce the notion of \emph{cardinal random probing composability} (Cardinal-RPC), offering a new trade-off between complexity and security for composing masking gadgets. Next, we propose a novel refresh technique based on a simple iterative process: randomly selecting and updating two shares with fresh randomness. While not perfectly secure in the standard probing model, this method achieves arbitrary cardinal-RPC security, making it a versatile tool for constructing random-probing secure circuits. Using this refresh, we develop additional basic gadgets (e.g., linear multiplication, addition, and copy) that satisfy the cardinal-RPC notion. Despite the increased complexity, the gains in security significantly outweigh the overhead, with the number of iterations offering useful flexibility.
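The iterative refresh described above is simple enough to state in a few lines. Below is a minimal sketch over an additive sharing; the modulus is illustrative, and this shows only the share arithmetic, not the gadget's cardinal-RPC analysis.

```python
import secrets

P = 2**31 - 1  # illustrative prime modulus

def iterative_refresh(shares, iterations):
    """Refresh an additive sharing by repeatedly re-randomizing two shares.

    Each iteration picks two distinct share indices (n >= 2) and moves
    fresh randomness between them, preserving the shared secret. Per the
    paper's analysis, more iterations yield stronger cardinal-RPC security.
    """
    shares = list(shares)
    n = len(shares)
    for _ in range(iterations):
        i = secrets.randbelow(n)
        j = (i + 1 + secrets.randbelow(n - 1)) % n  # any index j != i
        r = secrets.randbelow(P)
        shares[i] = (shares[i] + r) % P
        shares[j] = (shares[j] - r) % P
    return shares
```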
To showcase our techniques, we apply them to lattice-based signatures. Specifically, we introduce a new random-probing composable gadget for sampling small noise, a key component in various post-quantum algorithms. To assess security in this context, we generalize the random probing security model to address auxiliary inputs and public outputs. We apply our findings to Raccoon, a masking-friendly signature scheme originally designed for standard probing security. We prove the secure composition of our new gadgets for key generation and signature computation, and show that our masking scheme achieves a superior security-performance tradeoff compared to previous approaches based on random probing expansion. To our knowledge, this is the first fully secure instantiation of a post-quantum algorithm in the random probing model.
Tighter Control for Distributed Key Generation: Share Refreshing and Expressive Reconstruction Policies
The secure management of private keys is a fundamental challenge, particularly for the general public, as losing these keys can result in irreversible asset loss. Traditional custodial approaches pose security risks, while decentralized secret sharing schemes offer a more resilient alternative by distributing trust among multiple parties. In this work, we extend an existing decentralized, verifiable, and extensible cryptographic key recovery scheme based on Shamir's secret sharing. We introduce a refresh phase that ensures proactive security, preventing long-term exposure of secret shares. Our approach explores three distinct methods for refreshing shares, analyzing and comparing their security guarantees and computational complexity. Additionally, we extend the protocol to support more complex access structures, with a particular focus on threshold access trees, enabling fine-grained control over key reconstruction.
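One standard way to realize such a refresh phase in the Shamir setting is to add a fresh sharing of zero to every share. The sketch below shows only the share-arithmetic core, assuming the zero polynomial is generated honestly (in the actual protocol it would be produced distributively and verified); the field is illustrative.

```python
import secrets

P = 2**127 - 1  # illustrative prime modulus (a Mersenne prime)

def eval_poly(coeffs, x):
    """Evaluate a polynomial given by its coefficient list at x (Horner)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def refresh(shares, t):
    """Proactively refresh Shamir shares {i: f(i)} with threshold t.

    A random degree-t polynomial with constant term 0 is sampled and its
    evaluations are added to the old shares. The secret f(0) is unchanged,
    but old and new share sets are independent, so shares leaked in
    different epochs cannot be combined.
    """
    zero_poly = [0] + [secrets.randbelow(P) for _ in range(t)]
    return {i: (s + eval_poly(zero_poly, i)) % P for i, s in shares.items()}
```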
Finding and Protecting the Weakest Link: On Side-Channel Attacks on Masked ML-DSA
NIST has standardized ML-KEM and ML-DSA as replacements for pre-quantum key exchanges and digital signatures. Both schemes have already seen analysis with respect to side-channels, and the first fully masked implementations of ML-DSA have been published. Previous attacks have focused on unprotected implementations or assumed only hiding countermeasures to be in place. Thus, in contrast to ML-KEM, the threat of side-channel attacks for protected implementations of ML-DSA is mostly unclear.
In this work, we analyze the side-channel vulnerability of masked ML-DSA implementations. We first systematically assess the vulnerability of several potential points of attacks in different leakage models using information theory. Then, we explain how an adversary could launch first, second, and higher-order attacks using a recently presented framework for side-channel information in lattice-based schemes. In this context, we propose a filtering technique that allows the framework to solve for the secret key from a large number of hints; this had previously been prevented by numerical instabilities. We simulate the presented attacks and discuss the relation to the information-theoretic analysis.
Finally, we carry out relevant attacks on physical devices, discuss recent masked implementations, and instantiate a countermeasure against the most threatening attacks. The countermeasure mitigates the attacks with the highest noise tolerance while having very little overhead. The results on the physical devices validate our simulations.
Error-Simulatable Sanitization for TFHE and Applications
We show that the randomized TFHE bootstrapping technique of Bourse and Izabachène provides a form of sanitization which is error-simulatable. This means that the randomized bootstrap can be used not only for sanitization of ciphertexts (i.e., to hide the function that has been computed), but also in server-assisted threshold decryption. Thus we extend the server-assisted threshold decryption method of Passelègue and Stehlé (ASIACRYPT '24) to FHE schemes which have a small ciphertext modulus (such as TFHE). In addition, the error-simulatable sanitization enables us to obtain FuncCPA security for TFHE essentially for free.
Post-Quantum Blind Signatures from Matrix Code Equivalence
We construct a novel code-based blind signature scheme, using the Matrix Equivalence Digital Signature (MEDS) group action. The scheme is built using similar ideas to the Schnorr blind signature scheme and CSI-Otter, but uses additional public key and commitment information to overcome the difficulties that the MEDS group action faces: lack of module structure (present in Schnorr), lack of a quadratic twist (present in CSI-Otter), and non-commutativity of the acting group. We address security concerns related to public key validation, and prove the security of our protocol in the random oracle model, using the security framework of Kastner, Loss, and Xu, under a variant of the Inverse Matrix Code Equivalence problem and a mild heuristic assumption.
Clustering Approach for Higher-Order Deterministic Masking
We present a novel scheme for securely computing the AND operation, without requiring additional online randomness. Building on the work of Nikova et al., our construction extends security beyond the first order while ensuring a uniform output distribution and resilience against glitches up to a specified threshold. This result addresses a longstanding open problem in side-channel-resistant masking schemes.
Our approach is based on a new method of share clustering, inspired by finite affine geometry, enabling simultaneous consideration of both security and uniformity. Furthermore, we demonstrate how this clustering-based framework can be applied to higher-order protection of ciphers like Ascon under a fully deterministic masking regime.
By eliminating the need for online randomness within the protected circuit, our work expands the practical scope of efficient and higher-order masking schemes for resource-constrained applications.
X-Transfer: Enabling and Optimizing Cross-PCN Transactions
Blockchain interoperability solutions allow users to hold and transfer assets among different chains, and in so doing reap the benefits of each chain. To fully realize these benefits for multi-chain financial operations, it is paramount to support interoperability and cross-chain transactions also on Layer-2 networks, in particular payment channel networks (PCNs). Nevertheless, existing works on Layer-2 interoperability solutions still involve on-chain events, which limits their scalability and throughput. In this work, we present X-Transfer, the first secure, scalable, and fully off-chain protocol that allows payments across different PCNs. We formalize and prove the security of X-Transfer against rational adversaries with a game-theoretic analysis. In order to boost efficiency and scalability, X-Transfer also performs transaction aggregation to increase channel liquidity and transaction throughput while simultaneously minimizing payment routing fees. Our empirical evaluation shows that X-Transfer achieves at least twice the throughput of the baseline without transaction aggregation, confirming its efficiency.
Unconditional foundations for supersingular isogeny-based cryptography
In this paper, we prove that the supersingular isogeny problem (Isogeny), endomorphism ring problem (EndRing) and maximal order problem (MaxOrder) are equivalent under probabilistic polynomial time reductions, unconditionally.
Isogeny-based cryptography is founded on the presumed hardness of these problems, and their interconnection is at the heart of the design and analysis of cryptosystems like the SQIsign digital signature scheme. Previously known reductions relied on unproven assumptions such as the generalized Riemann hypothesis. In this work, we present unconditional reductions, and extend this network of equivalences to the problem of computing the lattice of all isogenies between two supersingular elliptic curves (HomModule).
For cryptographic applications, one requires computational problems to be hard on average for random instances. It is well-known that if Isogeny is hard (in the worst case), then it is hard for random instances. We extend this result by proving that if any of the above-mentioned classical problems is hard in the worst case, then all of them are hard on average. In particular, if there exist hard instances of Isogeny, then all of Isogeny, EndRing, MaxOrder and HomModule are hard on average.
A Decomposition Approach for Evaluating Security of Masking
Masking is a common countermeasure against side-channel attacks that encodes secrets into multiple shares, each of which may be subject to leakage. A key question is under what leakage conditions, and to what extent, increasing the number of shares actually improves the security of these secrets. Although this question has been studied extensively in low-SNR regimes, scenarios where the adversary obtains substantial information—such as on low-noise processors or through static power analysis—have remained underexplored.
In this paper, we address this gap by deriving necessary and sufficient noise requirements for masking security in both standalone encodings and linear gadgets. We introduce a decomposition technique that reduces the relationship between an extended-field variable and its leakage into subproblems involving linear combinations of the variable’s bits. By working within binary subfields, we derive optimal bounds and then lift these results back to the extended field.
Beyond binary fields, we also present a broader framework for analyzing masking security in other structures, including prime fields. As an application, we prove a conjecture by Dziembowski et al. (TCC 2016), which states that for an additive group with its largest subgroup , a -noisy leakage satisfying ensures that masking enhances the security of the secret.
10-Party Sublinear Secure Computation from Standard Assumptions
Secure computation enables mutually distrusting parties to jointly compute a function on their secret inputs, while revealing nothing beyond the function output. A long-running challenge is understanding the required communication complexity of such protocols – in particular, when communication can be sublinear in the circuit representation size of the desired function. While several techniques have demonstrated the viability of sublinear secure computation in the two-party setting, known methods for the corresponding multi-party setting rely either on fully homomorphic encryption, non-standard hardness assumptions, or are limited to a small number of parties. In this work, we expand the study of multi-party sublinear secure computation by demonstrating sublinear-communication 10-party computation from various combinations of standard hardness assumptions. In particular, our contributions show:
– 8-party homomorphic secret sharing under the hardness of (DDH or DCR), the superpolynomial hardness of LPN, and the existence of constant-depth pseudorandom generators;
– A general framework for achieving (N + M)-party sublinear secure computation using M-party homomorphic secret sharing for NC1 and correlated symmetric PIR.
Together, our constructions imply the existence of a 10-party MPC protocol with sublinear communication. At the core of our techniques lies a novel series of computational approaches based on homomorphic secret sharing.
𝜔(1/𝜆)-Rate Boolean Garbling Scheme from Generic Groups
Garbling schemes are a fundamental cryptographic tool for enabling private computations and ensuring that nothing leaks beyond the output. As a widely studied primitive, significant efforts have been made to reduce their size. Until recently, all such schemes followed the Lindell and Pinkas paradigm for Boolean circuits (JoC 2009), where each gate is represented as a set of ciphertexts computed using only symmetric-key primitives. However, this approach is inherently limited to 𝑂(𝜆) bits per gate, where 𝜆 is the security parameter. Recently, it has been shown that achieving smaller garbled circuit size is possible under stronger assumptions, such as variants of Learning with Errors (LWE) or Indistinguishability Obfuscation (iO). In addition to requiring high-end cryptography, none of these constructions is black-box in the underlying cryptographic primitives, a key advantage of prior work. In this paper, we present the first approach to garbling Boolean circuits that makes black-box use of a group and uses 𝑜(𝜆) bits per gate.
Building on a novel application of the Reverse Multiplication-Friendly Embeddings (RMFE) paradigm (Cascudo et al., CRYPTO 2018), we introduce a new packing mechanism for garbling schemes that packs Boolean values into integers and leverages techniques for arithmetic garbling over integer rings. Our results comprise two new succinct schemes that improve the rate by a factor of √log 𝜆 while retaining black-box usage. (1) Our first scheme is proven in the Generic Group Model (GGM) for circuits with Ω(√log 𝜆) width, obtaining a garbled circuit size of 𝜆 · |C|/√log 𝜆. (2) Our second scheme is proven in the plain model under the Power-DDH assumption, attaining a garbled circuit size of 𝜆 · (|C|/√log 𝜆 + poly(𝜆) · depth(C)), but is restricted to layered circuits. Our schemes are the first to achieve sublinear (in 𝜆) cost per gate under assumptions that do not imply fully homomorphic encryption; in addition, they are the first to achieve this while making black-box use of cryptography.
Authentication and sole control at a high level of assurance on widespread smartphones with threshold signatures
How can we be assured that a user entered their PIN on their smartphone? The question is especially relevant when deploying remotely secured services such as mobile wallets for digital identity and banking, which typically deploy a server-side component backed by a hardware security module (HSM). As long as the server can be trusted, authentication can be performed with high assurance, but it is challenging to guarantee sole control. This report defines an approach in terms of an abstract security problem and a concrete solution based on threshold signatures. It can be applied to use cases such as HSM-backed mobile identity wallets and other identification means.
Memory-Efficient BKW Algorithm for Solving the LWE Problem
The study of attack algorithms for the Learning with Errors (LWE) problem is crucial for the cryptanalysis of LWE-based cryptosystems. The BKW algorithm has gained significant attention as an important combinatorial attack for solving LWE. However, its exponential time and memory requirements severely limit its practical applications, even with medium-sized parameters. In this paper, we present a memory-efficient BKW algorithm for LWE, which extends the work of Bogos and Vaudenay [Asiacrypt'16] on the Learning Parity with Noise (LPN) problem. While their work improved efficiency, it overlooked the high memory demands of the BKW algorithm. We address this with two key improvements. First, we propose an efficient reduction technique for low-memory regimes, -sum-PCS-reduce, which combines the -sum technique with Parallel Collision Search (PCS) to achieve a better time-memory trade-off. Second, we present an improved memory-optimized finite automaton for our optimized BKW algorithm by incorporating several efficient memory-saving reduction techniques and pruning potential high-memory paths. Our algorithm, using graphs as a meta tool, can automatically identify the optimal reduction path within the graph, aiming to reduce both time and memory complexities. Compared to the state-of-the-art coded-BKW in the lattice-estimator, our algorithm achieves improvements in both time and memory complexity.
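For orientation, the memory bottleneck discussed here stems from the basic BKW reduction step, sketched below for LWE samples (a, b) over Z_q; the bucket table indexed by a block of coordinates is what drives BKW's exponential memory cost. All parameter names are illustrative.

```python
from collections import defaultdict

def bkw_reduce(samples, q, block):
    """One BKW reduction step: zero out the last `block` coordinates.

    Samples whose last `block` coordinates collide are subtracted
    pairwise, roughly halving the sample count while doubling the noise.
    The bucket table below is the memory hog this line of work targets.
    """
    buckets = defaultdict(list)
    reduced = []
    for a, b in samples:
        key = tuple(x % q for x in a[-block:])
        if buckets[key]:
            a2, b2 = buckets[key].pop()
            # Difference has zeros on the last `block` coords; drop them.
            new_a = [(x - y) % q for x, y in zip(a, a2)][:-block]
            reduced.append((new_a, (b - b2) % q))
        else:
            buckets[key].append((a, b))
    return reduced
```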
White-Box Watermarking Signatures against Quantum Adversaries and Its Applications
Software watermarking for cryptographic functionalities enables embedding an arbitrary message (a mark) into a cryptographic function. An extraction algorithm, when provided with a (potentially unauthorized) circuit, retrieves either the embedded mark or a special symbol unmarked indicating the absence of a mark. It is difficult to modify or remove the embedded mark without destroying the functionality of a marked function. Previous works have primarily employed black-box extraction techniques, where the extraction algorithm requires only input-output access to the circuit rather than its internal descriptions (white-box extraction). Zhandry (CRYPTO 2021) identified several challenges in watermarking public-key encryption (PKE) with black-box extraction and introduced the notion of privacy for white-box watermarking against classical adversaries. Kitagawa and Nishimaki (Journal of Cryptology 37(3)) extended watermarking techniques to pseudorandom functions (PRFs) and PKE in the presence of quantum adversaries, enabling extraction from pirate quantum circuits but failing to achieve privacy.
In this work, we investigate white-box watermarking for digital signatures secure against quantum adversaries. Our constructions enable the extraction of embedded marks from the description of a pirate quantum circuit that produces valid signatures, while ensuring that black-box access to a marked signing function does not reveal information about the embedded mark. We define and construct white-box watermarking signatures that are secure against quantum adversaries, leveraging the learning with errors (LWE) assumption and quantum fully homomorphic encryption. Furthermore, we highlight that privacy concerns are even more critical in the context of signatures than in PKE. We also present a compelling practical application of white-box watermarking signatures.
Additionally, we explore the concept of universal copy protection for signatures. We define universal copy protection as a mechanism that transforms any quantumly secure signature scheme into a copy-protected variant without altering the verification key or verification algorithm. This approach is preferable to developing specific copy-protected signature schemes, as it allows existing schemes to be secured without modifying their published verification keys. We demonstrate that universal copy protection for all quantum secure signatures is impossible by leveraging our white-box watermarking signatures secure against quantum adversaries.
Dazzle: Improved Adaptive Threshold Signatures from DDH
The adaptive security of threshold signatures considers an adversary that adaptively corrupts users to learn their secret key shares and states. Crites, Komlo, and Maller (Crypto 2023) proposed Sparkle, the first threshold signature scheme in the pairing-free discrete-log setting to be proved adaptively secure. However, its proof of full adaptive security requires the algebraic group model (AGM) and is based on an interactive assumption. Bacho, Loss, Tessaro, Wagner, and Zhu (Eurocrypt 2024) proposed Twinkle, whose full adaptive security can be based on the standard DDH assumption only.
We propose Dazzle and Dazzle-T, adaptively secure threshold signature schemes based on DDH without the AGM, the same assumption and model as Twinkle. Our schemes improve upon Twinkle in signature size, round complexity, and/or security tightness. In particular, Dazzle and Dazzle-T both have signatures that are shorter than Twinkle by one group element. Regarding the round complexity and tightness, Twinkle is three-round and non-tight. Our Dazzle is two-round and has the same security loss as Twinkle, while Dazzle-T is three-round and fully tight.
We achieve our improvements by optimizing the underlying single-party signature scheme and showing that the single-party scheme can be transformed to a threshold scheme by a simpler transformation than that of Twinkle.
Transparent SNARKs over Galois Rings
Recently, there has been a growing need for SNARKs to operate over a broader range of algebraic structures, and one important structure is the Galois ring. We present transparent SNARK schemes over arbitrary Galois rings. Compared with the Rinocchio scheme of Ganesh et al. (J Cryptol 2023), our SNARK schemes do not require a trusted third party to establish a structured reference string (SRS).
In this paper, we present an expander code over arbitrary Galois rings, which can be encoded in linear time. Using this expander code, we then extend the Brakedown commitment scheme of Golovnev et al. (CRYPTO 2023) to Galois rings. Combining it with the Libra framework of Xie et al. (CRYPTO 2019), we present a transparent SNARK for log-space uniform circuits over Galois rings, achieving prover time, proof size, and verifier time. By combining with HyperPlonk (Chen et al., EUROCRYPT 2023), we present a transparent SNARK for NP circuits over Galois rings, with prover time, proof size, and verifier time.
PKE and ABE with Collusion-Resistant Secure Key Leasing
Secure key leasing (SKL) is an advanced encryption functionality that allows a secret key holder to generate a quantum decryption key and securely lease it to a user. Once the user returns the quantum decryption key (or provides a classical certificate confirming its deletion), they lose their decryption capability. Previous works on public key encryption with SKL (PKE-SKL) have only considered the single-key security model, where the adversary receives at most one quantum decryption key. However, this model does not accurately reflect real-world applications of PKE-SKL. To address this limitation, we introduce collusion-resistant security for PKE-SKL (denoted as PKE-CR-SKL). In this model, the adversary can adaptively obtain multiple quantum decryption keys and access a verification oracle which validates the correctness of queried quantum decryption keys. Importantly, the size of the public key and ciphertexts must remain independent of the total number of generated quantum decryption keys. We present the following constructions:
- A PKE-CR-SKL scheme based on the learning with errors (LWE) assumption.
- An attribute-based encryption scheme with collusion-resistant SKL (ABE-CR-SKL), also based on the LWE assumption.
- An ABE-CR-SKL scheme with classical certificates, relying on multi-input ABE with polynomial arity.
HasteBoots: Proving FHE Bootstrapping in Seconds
Fully Homomorphic Encryption (FHE) enables computations on encrypted data, ensuring privacy for outsourced computation. However, verifying the integrity of FHE computations remains a significant challenge, especially for bootstrapping, the most computationally intensive operation in FHE. Prior approaches, including zkVM-based solutions and general-purpose SNARKs, suffer from inefficiencies, with proof generation times ranging from several hours to days. In this work, we propose HasteBoots, a succinct argument tailored for FHE operations. By designing customized polynomial interactive oracle proofs and optimized polynomial commitment schemes, HasteBoots achieves proof generation in a few seconds for FHE bootstrapping, significantly outperforming existing methods. Our approach demonstrates the potential for scalable and efficient verifiable FHE, paving the way for practical, privacy-preserving computations.
Quantum Security Evaluation of ASCON
Grover's algorithm, which reduces the search complexity of symmetric-key ciphers and hash functions, poses a significant security challenge in cryptography. Recent research has focused on estimating Grover's search complexity and assessing post-quantum security. This paper analyzes a quantum circuit implementation of ASCON, including ASCON-AEAD, hash functions, and ASCON-80pq, in alignment with NIST’s lightweight cryptography standardization efforts. We place particular emphasis on circuit depth, which directly impacts execution time, and analyze the quantum resource costs associated with Grover’s algorithm-based key recovery and collision attacks. Additionally, we estimate the resources required to assess the quantum-resistant security strength of ASCON, based on security levels and the latest research trends.
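For scale, the square-root speedup underlying these estimates is Grover's iteration count; the snippet below shows the standard back-of-the-envelope estimate, not the paper's detailed circuit depth and width accounting.

```python
import math

def grover_iterations(key_bits):
    """Grover iterations to find a k-bit key with high probability:
    roughly (pi/4) * sqrt(2^k) oracle calls, versus ~2^k classically."""
    return math.floor(math.pi / 4 * math.sqrt(2 ** key_bits))

for k in (128, 192, 256):
    # Approximately 2^63, 2^95, and 2^127 iterations, respectively.
    print(k, grover_iterations(k))
```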
Improved Resultant Attack against Arithmetization-Oriented Primitives
In the last decade, the introduction of advanced cryptographic protocols operating on large finite fields has raised the need for efficient cryptographic primitives in this setting, commonly referred to as Arithmetization-Oriented (AO). The cryptanalysis of AO hash functions is essentially done through the study of the CICO problem on the underlying permutation. Two recent works at Crypto 2024 and Asiacrypt 2024 managed to solve the CICO problem much more efficiently than traditional Gröbner basis methods, using respectively advanced Gröbner basis techniques and resultants.
In this paper, we propose an attack framework based on resultants that applies to a wide range of AO permutations and improves significantly upon these two recent works. Our improvements mainly come from an efficient reduction procedure that we propose and rigorously analyze, taking advantage of fast multivariate multiplication. We present the most efficient attacks on Griffin, Arion, Anemoi, and Rescue. We show that most variants of Griffin, Arion and Anemoi fail to reach the claimed security level. For the first time, we successfully break a parameter set of Rescue, namely its -bit security variant. The presented theory and complexity estimates are backed up with experimental attacks. Notably, we practically find CICO solutions for out of rounds of Griffin, out of rounds of Anemoi, out of rounds of Rescue, improving by respectively , and rounds on the previous best practical attacks.
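As a toy illustration of the resultant-based approach (not the paper's optimized reduction procedure), the following SymPy snippet eliminates one variable from a small polynomial system; this is the basic operation that, at much larger scale, solves CICO systems.

```python
from sympy import symbols, resultant, solve

x, y = symbols('x y')
# Two toy polynomial constraints in two unknowns, standing in for the much
# larger system arising from the CICO problem on an AO permutation.
f = y - x**2          # e.g., "y is the output of a squaring layer"
g = x + y - 2         # e.g., "the CICO output constraint"
r = resultant(f, g, y)    # eliminates y: a univariate polynomial in x
print(r)                  # x**2 + x - 2
print(solve(r, x))        # [-2, 1]; back-substitute to recover y
```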
MPC with Publicly Identifiable Abort from Pseudorandomness and Homomorphic Encryption
Publicly identifiable abort is a critical feature for ensuring accountability in outsourced computations using secure multiparty computation (MPC). Despite its importance, no prior work has specifically addressed identifiable abort in the context of outsourced computations. In this paper, we present the first MPC protocol that supports publicly identifiable abort with minimal overhead for external clients. Our approach minimizes client-side computation by requiring only a few pseudorandom function evaluations per input. On the server side, the verification process involves lightweight linear function evaluations using homomorphic encryption. This results in verification times of a few nanoseconds per operation for servers, with client overhead being approximately two orders of magnitude lower. Additionally, the public verifiability of our protocol reduces client input/output costs compared to the SPDZ-based protocols on which we base our protocol. For example, in secure aggregation use cases, our protocol achieves over twice the efficiency during the offline phase and up to an 18% speedup in the online phase, significantly outperforming SPDZ.
TFHE Gets Real: an Efficient and Flexible Homomorphic Floating-Point Arithmetic
Floating-point arithmetic plays a central role in computer science and is used in various domains where precision and computational scale are essential. One notable application is in machine learning, where Fully Homomorphic Encryption (FHE) can play a crucial role in safeguarding user privacy. In this paper, we focus on TFHE and develop novel homomorphic operators designed to enable the construction of precise and adaptable homomorphic floating-point operations. Integrating floating-point arithmetic within the context of FHE is particularly challenging due to constraints such as small message space and the lack of information during computation. Despite these challenges, we were able to determine parameters for common precisions (e.g., 32-bit, 64-bit) and achieve remarkable computational speeds, with 32-bit floating-point additions completing in 2.5 seconds and multiplications in approximately 1 second in a multi-threaded environment. These metrics provide empirical evidence of the efficiency and practicality of our proposed methods, which significantly outperform previous efforts. Our results demonstrate a significant advancement in the practical application of FHE, making it more viable for real-world scenarios and bridging the gap between theoretical encryption techniques and practical usability.
Inaccessible Entropy for Watermarking Generative Agents
In this work, we construct distortion-free and unforgeable watermarks for language models and generative agents. The watermarked output cannot be forged by an adversary, nor removed by the adversary without significantly degrading model output quality. That is, the watermarked output is distortion-free: the watermarking algorithm does not noticeably change the quality of the model output, and without the public detection key, no efficient adversary can distinguish watermarked output from output that is not. The core of our watermarking schemes involves embedding a message and a publicly verifiable digital signature in the generated model output. The message and signature can be extracted during the detection phase and verified by any authorized entity that has the public key. We show that, assuming the standard cryptographic assumption of one-way functions, we can construct distortion-free and unforgeable watermarking schemes. Our framework relies on analyzing the inaccessible entropy of the watermarking schemes, based on computational entropy notions derived from the existence of one-way functions.
Tighter Security Notions for a Modular Approach to Private Circuits
To counteract side-channel attacks, a masking scheme splits each intermediate variable into shares and transforms each elementary operation (e.g., field addition and multiplication) into a masked counterpart called a gadget, such that intrinsic noise in the leakages renders secret recovery infeasible in practice. A simple and efficient security notion is the probing model, which ensures that any bounded set of shares is distributed independently of the secret input. One requirement of the probing model is that the noise in the leakages should increase with the number of shares, largely restricting side-channel security in the low-noise scenario. Another security notion for masking, called the random probing model, allows each variable to leak with some probability p. While this model reflects the physical reality of side channels much better, it brings significant overhead. At Crypto 2018, Ananth et al. proposed a modular approach that can provide random probing security for any security level by recursively expanding small base gadgets, such that the tolerable leakage probability decreases with the number of shares while the security increases exponentially with the recursion depth of the expansion. Then, at Crypto 2020, Belaïd et al. provided a formal security definition called Random Probing Expandability (RPE) and an explicit framework using the modular approach to construct masking schemes.
In this paper, we investigate how to tighten the RPE definition by allowing dependent failure probabilities of multiple inputs, which results in a new definition called related RPE. It can be directly used for the expansion of multiplication gates, reducing the complexity of the base multiplication gadget proposed at Asiacrypt 2021 while maintaining the same security level. Furthermore, we describe a method to expand any gate (rather than only multiplication) with the related RPE gadgets.
Besides, we introduce another new RPE definition, called multiple-inputs RPE, used for the expansion of multiple-input gates composed of arbitrary gates. Utilizing these methods, we reduce the complexity of the 3-share circuit compiler as a function of the size of the unprotected circuit and the protection failure probability of the global circuit, improving on the state-of-the-art work proposed at Eurocrypt 2021 for the same failure probability. Additionally, we provide the construction of a 5-share circuit compiler.
Garbled Lookup Tables from Homomorphic Secret Sharing
Garbled Circuit (GC) is a fundamental tool in cryptography, especially in secure multiparty computation. Most garbling schemes follow a gate-by-gate paradigm, where the communication cost is proportional to the circuit size times the security parameter.
Recently, Heath, Kolesnikov and Ng (Eurocrypt 2024) partially transcend the circuit size barrier by considering large gates. To garble an arbitrary -input -output gate, their scheme requires bits of communication. The security relies on circular correlation robust hash functions (CCRH).
We further improve the communication cost to , removing the exponential term. The computation cost is , dominated by exponentiations. Our construction is built upon recent progress in DCR-based Homomorphic Secret Sharing (HSS), so it additionally relies on the decisional composite residuosity (DCR) assumption.
As an intermediate step, we construct programmable distributed point functions with decomposable keys, relying on the DCR assumption. Previously, such a primitive could only be constructed from multilinear maps or sub-exponential lattice assumptions.
Adaptively Secure IBE from Lattices with Asymptotically Better Efficiency
Current adaptively secure identity-based encryption (IBE) constructions from lattices are unable to achieve a good balance among the master public key size, secret key size, modulus and reduction loss. All existing lattice-based IBE schemes share a common restriction: the modulus is quadratic in the trapdoor norm.
In this work, we remove this restriction and present a new adaptively secure IBE scheme from lattices in the standard model, which improves the state-of-the-art construction proposed by Abla et al. (TCC 2021) and achieves asymptotically better efficiency. More precisely, we achieve the asymptotically minimal number of public vectors among all the existing schemes, along with a significantly smaller modulus compared to the scheme by Abla et al. (TCC 2021). Furthermore, our scheme enjoys the smallest Gaussian width of the secret key among all existing schemes and has the same tightness as Abla et al.'s scheme.
We propose a novel cross-multiplication design for our IBE scheme, along with several novel tools and techniques, including: (a) a homomorphic computation algorithm that outputs BGG+-style encoding with two distinct-norm trapdoors; (b) a sampling algorithm with hybrid Gaussian outputs; and (c) a partial rerandomization algorithm. These new tools and techniques are general and could find rich applications in lattice-based cryptography.
Chiplet-Based Techniques for Scalable and Memory-Aware Multi-Scalar Multiplication
This paper presents a high-performance architecture for accelerating Multi-Scalar Multiplication (MSM) on ASIC platforms, targeting cryptographic applications with high throughput demands. Unlike prior MSM accelerators that focus solely on efficient processing elements (PEs), our chiplet-based design optimally balances area, power, and computational throughput. We identify a mixed window configuration of 12- and 13-bit windows that enables an efficient multi-PE integration of 10 PEs per chiplet. Our single-PE design achieves a 1.37x speedup and 1.3x area reduction over prior works, while the multi-PE chiplet design improves the area-time product by 2.2x, offering scalability, lower production costs, and higher manufacturing yields.
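For context, the windowed bucket (Pippenger-style) method that such PEs typically implement can be sketched compactly; below, integer addition stands in for elliptic-curve point addition, and the window size is illustrative (the paper's design mixes 12- and 13-bit windows).

```python
import random

def msm_bucket(scalars, points, w=12, bits=256):
    """Multi-scalar multiplication via the windowed bucket method.

    For each w-bit window (most significant first), points are accumulated
    into buckets indexed by the scalar digit; a running suffix sum then
    weights each bucket by its digit. Group operations are modeled as
    integer arithmetic here, so `<<` plays the role of repeated doubling.
    """
    result = 0
    for win in reversed(range(0, bits, w)):
        result <<= w  # w doublings of the accumulator
        buckets = [0] * (1 << w)
        for s, p in zip(scalars, points):
            digit = (s >> win) & ((1 << w) - 1)
            if digit:
                buckets[digit] += p
        running, window_sum = 0, 0
        for b in reversed(buckets[1:]):
            running += b           # sum of all buckets with digit >= current
            window_sum += running  # accumulates digit-weighted bucket sums
        result += window_sum
    return result

# Sanity check against the naive double-and-add sum.
sc = [random.getrandbits(256) for _ in range(8)]
pt = [random.randrange(1, 1000) for _ in range(8)]
assert msm_bucket(sc, pt) == sum(s * p for s, p in zip(sc, pt))
```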
Verifiable Streaming Computation and Step-by-Step Zero-Knowledge
We propose a new incrementally computable proof system, called Incrementally Verifiable streaming Computation (IVsC). IVsC enables computing incremental proofs of correct execution for any RAM program on a streaming input. An input is called a streaming input if it is only available on-the-fly as part of an ongoing data generation/streaming process, and not available all at once. We also propose a new zero-knowledge feature for IVsC that guarantees the proof can be incrementally verified w.r.t. an encrypted digest, where the proof and digest hide the full data stream. We design zero-knowledge IVsC from a wide variety of standard falsifiable assumptions (such as decision-linear/sub-exponential DDH/LWE).
We also introduce a new notion of non-interactive zero-knowledge proofs that we call step-by-step zero-knowledge protocols. Such protocols have strong zero-knowledge guarantees, wherein the prover's entire internal memory is simulatable at any point during proof generation. That is, unlike standard zero-knowledge proofs, where only the final proof is simulatable, we can also simulate the prover's state at every step of the computation. Such proof systems will be useful in settings where an adversary could corrupt an honest prover even before it generates the full proof. We show that a zero-knowledge IVsC system can be used (almost) as a black box to design step-by-step zero-knowledge proof systems that are therefore secure under standard assumptions.
The Round Complexity of Black-Box Post-Quantum Secure Computation
We study the round complexity of secure multi-party computation (MPC) in the post-quantum regime, where honest parties and communication channels are classical but the adversary can be a quantum machine. Our focus is on the black-box setting, where both the construction as well as the security reduction are black-box in nature. In this context, Chia, Chung, Liu, and Yamakawa [FOCS'22] demonstrated the infeasibility of achieving standard simulation-based security within constant rounds, unless NP ⊆ BQP. This outcome leaves crucial feasibility questions unresolved. Specifically, it remains unknown whether black-box constructions are achievable within polynomially many rounds; additionally, the existence of constant-round constructions with respect to ε-simulation, a relaxed yet useful alternative to the standard simulation notion, remains unestablished.
This work provides positive answers to the aforementioned questions. We introduce the first black-box construction for post-quantum MPC in polynomial rounds, from the minimal assumption of post-quantum semi-honest oblivious transfers. In the two-party scenario, our construction requires only rounds. These results have already found application in the oracle separation for classical-communication quantum MPC in the recent work of Kretschmer, Qian, and Tal [STOC'25].
As for ε-simulation, Chia, Chung, Liang, and Yamakawa [CRYPTO'22] resolved the issue for the two-party setting, leaving the general multi-party setting as an open question. We complete the picture by presenting the first black-box and constant-round construction in the multi-party setting. Our construction can be instantiated using various standard post-quantum primitives, including lossy public-key encryption, linearly homomorphic public-key encryption, or dense cryptosystems.
En route, we obtain a black-box and constant-round post-quantum commitment that achieves a weaker version of the standard 1-many non-malleability, from the minimal assumption of post-quantum one-way functions. Besides its utility in our post-quantum MPC construction, this commitment scheme also reduces the assumption used in the lower bound of quantum parallel repetition recently established by Bostanci, Qian, Spooner, and Yuen [STOC'24]. We anticipate that it will find more applications in the future.
cuFalcon: An Adaptive Parallel GPU Implementation for High-Performance Falcon Acceleration
The rapid advancement of quantum computing has ushered in a new era of post-quantum cryptography, urgently demanding quantum-resistant digital signatures to secure modern communications and transactions. Among NIST-standardized candidates, Falcon—a compact lattice-based signature scheme—stands out for its suitability in size-sensitive applications. In this paper, we present cuFalcon, a high-throughput GPU implementation of Falcon that addresses its computational bottlenecks through adaptive parallel strategies. At the operational level, we optimize Falcon's key components for GPU architectures through memory-efficient FFT, adaptive parallel ffSampling, and a compact computation mode. For signature-level optimization, we implement three versions of cuFalcon: the raw key version, the expanded key version, and the balanced version, which achieves a trade-off between efficiency and memory usage. Additionally, we design batch processing, streaming mechanisms, and memory pooling to handle multiple signature tasks efficiently. Ultimately, performance evaluations show significant improvements, with the raw key version achieving 172k signatures per second and the expanded key version reaching 201k. Compared to the raw key version, the balanced version achieves a 7% improvement in throughput, while compared to the expanded key version, it reduces memory usage by 70%. Furthermore, our raw key version implementation outperforms the reference implementation by 36.75× and achieves a 2.94× speedup over the state-of-the-art GPU implementation.
New Exchanged Boomerang Distinguishers for 5-Round AES
In block ciphers, the attacker should not be able to distinguish a block cipher from a random permutation, which makes the existence of a distinguisher important. Cryptanalysis of reduced-round variants of block ciphers is also important in cryptographic design. AES is the most widely used block cipher, and currently the best-known distinguisher for 5-round AES has a data and time complexity of with a success probability of 55%. In this paper, we propose the fully active exchanged boomerang and multiple exchanged boomerang distinguishers for 5-round AES, based on the retracing boomerang key-recovery attack. The fully active exchanged boomerang distinguisher utilizes the probability that either each byte of the diagonal of the returned plaintext pair is fully active, or the diagonal is inactive, for all diagonals. This probability is very high, but we enhance it using the friends pair technique to distinguish a block cipher from a random permutation. The multiple exchanged boomerang distinguisher utilizes the fact that there are three trails where the probability of one diagonal of the returned plaintext pair being inactive is higher than the random probability, and one trail where it is equal to the random probability. This 5-round distinguisher has a complexity of and a success probability of 82%, which represents a new best-known result for secret-key distinguishers on 5-round AES.
LatticeFold+: Faster, Simpler, Shorter Lattice-Based Folding for Succinct Proof Systems
Folding is a technique for building efficient succinct proof systems. Many existing folding protocols rely on the discrete-log based Pedersen commitment scheme, and are therefore not post-quantum secure and require a large (256-bit) field. Recently, Boneh and Chen constructed LatticeFold, a folding protocol using lattice-based commitments which is plausibly post-quantum secure and can operate with small (64-bit) fields. For knowledge soundness, LatticeFold requires the prover to provide a range proof on all the input witnesses using bit-decomposition, and this slows down the prover. In this work we present LatticeFold+, a very different lattice-based folding protocol that improves on LatticeFold in every respect: the prover is five to ten times faster, the verification circuit is simpler, and the folding proofs are shorter. To do so we develop two novel lattice techniques. First, we develop a new purely algebraic range proof which is much more efficient than the one in LatticeFold, and may be of independent interest. We further shrink the proof using double commitments (commitments of commitments). Second, we show how to fold statements about double commitments using a new sumcheck-based transformation.
Towards Optimal Early Stopping Agreement Protocols
Early stopping agreement protocols guarantee termination based on the actual number of malicious parties encountered during execution, rather than assuming the worst-case number of corruptions. A lower bound on the round complexity of such protocols is known. In this work, we substantially improve the state of the art for cryptographic early stopping protocols in the honest majority setting. We begin by introducing an improvement to the approach of the best known protocol, due to Elsheimy, Loss, and Papamanthou (ASIACRYPT '24), which can be used to obtain the first polynomial-time early stopping agreement protocols in this corruption regime with improved round complexity. We then instantiate our generic framework with refined components to obtain a very concretely efficient protocol with low round complexity and polynomial communication complexity. Finally, we show that it is possible to come within one round of the optimal round complexity by presenting a broadcast protocol based on the exponential information gathering approach, albeit with exponential communication complexity.
Silent Circuit Relinearisation: Sublinear-Size (Boolean and Arithmetic) Garbled Circuits from DCR
We introduce a general template for building garbled circuits with low communication, under the decisional composite residuosity (DCR) assumption. For the case of layered Boolean circuits, we can garble a circuit with communication sublinear in its size, plus an additive factor that is polynomial in the security parameter. For layered arithmetic circuits with bounded integer computation, we obtain a similar result. These are the first constructions of general-purpose garbled circuits with sublinear size that do not rely on heavy tools like indistinguishability obfuscation or attribute-based and fully homomorphic encryption.
To achieve these results, our main technical tool is a new construction of a form of homomorphic secret sharing where some of the inputs are semi-private, that is, known to one of the evaluating parties. Through a new relinearisation technique that allows performing arbitrary additions and multiplications on semi-private shares, we build an HSS scheme that supports evaluating any function consisting of an arbitrary polynomial-sized circuit applied to the semi-private input, composed with a restricted-multiplication (i.e., NC1) circuit applied to the private input. This significantly broadens the expressiveness of prior known HSS constructions.
Provable Speedups for SVP Approximation Under Random Local Blocks
We point out that if every local block appearing in the slide reduction algorithms [ALNS20] is assumed to be 'random' (as is usual in the cryptographic setting), then the combination of the slide reduction algorithms [ALNS20] and the Pouly-Shen algorithm [PoSh24] yields exponentially faster, provably correct algorithms for approximate SVP for all approximation factors in the regime most relevant for cryptography.
K-Linkable Ring Signatures and Applications in Generalized Voting
Rational Secret Sharing with Competition
The rational secret sharing (RSS) problem considers incentivizing rational parties to share their received information in order to reconstruct a correctly shared secret. Halpern and Teague (STOC'04) demonstrated that solving the RSS problem deterministically with explicitly bounded runtime is impossible if parties prefer learning the secret to not learning it, and prefer that fewer other parties learn it.
To overcome this impossibility result, we propose RSS with competition. We consider a slightly different yet sensible preference profile: each party prefers to learn the secret early and prefers that fewer parties learn it before them. This preference profile changes the information-hiding dynamics among parties found in prior works: first, those who have learned the secret are indifferent towards, or even prefer, informing others later; second, the competition to learn the secret earlier among different access groups in the access structure facilitates information sharing inside an access group. As a result, we are able to construct the first deterministic RSS algorithm that terminates in at most two rounds. Additionally, our construction does not employ any cryptographic machinery (being fully game-theoretic and using the underlying secret-sharing scheme as a black box), nor does it require knowledge of the parties' exact utility functions. Furthermore, we consider general access structures.
IBE-IBE: Intent-Based Execution through Identity-Based Encryption and Auctions
This paper introduces a decentralized and leaderless sealed-bid auction model for dynamic pricing of intents across blockchain networks. We leverage Multi-Party Computation (MPC) and Identity-Based Encryption (IBE) to improve pricing while ensuring fairness and decentralization. By addressing the vulnerabilities of current centralized or static pricing mechanisms, our approach fosters transparent, secure, and competitive price discovery. We further enhance the confidentiality of intents through MPC, Fully Homomorphic Encryption (FHE), and Trusted Execution Environments (TEEs). Our novel methodology mitigates the risks of frontrunning and centralization while preserving the rapid settlement times essential to decentralized finance (DeFi).
Robust Non-Interactive Zero-Knowledge Combiners
A $k$-out-of-$n$ robust non-interactive zero-knowledge (NIZK) combiner is a construction that, given access to $n$ candidate instantiations of a NIZK for some language, itself implements a NIZK for the same language. Moreover, the combiner is secure assuming at least $k$ of the given candidates are secure.
In this work, we provide the first definition of combiners for NIZK, and prove that, for certain ranges of $k$ and $n$, no robust NIZK combiner exists (unless the polynomial hierarchy collapses).
On the positive side, we provide different constructions of robust NIZK combiners for various regimes of $k$ and $n$. In particular, we show how to obtain:
1) A black-box combiner working for a special class of homomorphic languages, where $k$ and $n$ are polynomial.
2) A non-black-box combiner working for any language, where $k$ and $n$ are constant.
3) A non-black-box combiner working for any language, where $k$ and $n$ are polynomial.
DART: Decentralized, Anonymous, and Regulation-friendly Tokenization
We introduce DART, a fully anonymous, account-based payment system designed to address a comprehensive set of real-world considerations, including regulatory compliance, while achieving constant transaction size. DART supports multiple asset types, enabling users to issue on-chain assets such as tokenized real-world assets. It ensures confidentiality and anonymity by concealing asset types, transaction amounts, balances, and the identities of both senders and receivers, while guaranteeing unlinkability between transactions. Our design provides a mechanism for asset-specific auditing. Issuers can designate asset-specific auditors for the assets they issue, with the system preserving the privacy of the auditor’s identity to achieve asset type privacy. Only the designated auditor is authorized to decrypt transactions related to their associated asset, and users efficiently prove the association between the (hidden) asset type and the (hidden) designated auditor in their transactions.
DART supports non-interactive payments, allowing an online sender to submit a transaction even when the receiver is offline, while still incorporating a receiver affirmation mechanism that captures the real-world compliance consideration where the receiver must confirm (or deny) an incoming transaction. To the best of our knowledge, this is the first scheme of this kind in the permissionless setting. To accommodate all eventualities, DART also incorporates a reversibility mechanism, enabling senders to reclaim funds from pending transactions if the receiver’s affirmation is not yet provided. Finally, it offers a privacy-preserving proof of balance (per asset type) mechanism.
Our system achieves full anonymity while supporting concurrent incoming and outgoing transactions, resolving a common issue that plagues many account-based anonymous systems. We further demonstrate how our system supports multi-party transactions, allowing payment to multiple receivers in one transaction efficiently. We provide a full formal model in the Universal Composability (UC) setting, as well as a UC protocol realization.
On the Power of Polynomial Preprocessing: Proving Computations in Sublinear Time, and More
Cryptographic proof systems enable a verifier to be convinced of a computation's correctness without re-executing it; common efficiency requirements include both succinct proofs and fast verification.
In this work we put forth the general study of cryptographic proof systems with sublinear proving time (after a preprocessing).
Prior work has achieved sublinear proving only for limited computational settings (e.g., vector commitments and lookup arguments), relying on specific assumptions or through non-black-box use of cryptographic primitives. In this work we lift many of these limitations through the systematic study of a specific object: polynomial commitments (PC) with sublinear proving time, a choice motivated by the crucial role that PC play in the design of efficient cryptographic schemes.
Our main result is a simple construction of a PC with sublinear prover based on any vector commitment scheme (VC) and any preprocessing technique for fast polynomial evaluation. We prove that this PC satisfies evaluation binding, which is the standard security notion for PC, and show how to expand our construction to achieve the stronger notion of knowledge soundness (extractability).
The first application of our result is a construction of "index-efficient" SNARKs meaning that the prover is sublinear, after preprocessing, in the size of the index (i.e., the NP-relation describing the proven statement). Our main technical contribution is a method to transform a class of standard Polynomial Interactive Oracle Proofs (PIOPs) into index-efficient PIOPs. Our construction of index-efficient SNARKs makes black-box use of such index-efficient PIOPs and a PC with sublinear prover.
As a corollary, this yields the first lookup argument for unstructured tables in which the prover is sublinear in the size of the table, while making only black-box use of a VC and thus allowing instantiations from generic assumptions such as collision-resistant hash functions. Prior lookup arguments with sublinear provers were only known with non-black-box use of cryptographic primitives, or from pairings. Finally, our last application is a transformation that builds UC-secure SNARKs from simulation-extractable ones, with an approximately linear overhead in proving time (as opposed to quadratic in prior work).
UC-Security of Encrypted Key Exchange: A Tutorial
Password-Authenticated Key Exchange (PAKE) is a type of key exchange protocol secure against man-in-the-middle adversaries, in the setting where the two parties only agree upon a low-entropy "password" in advance. The first and arguably most well-studied PAKE protocol is Encrypted Key Exchange (EKE) (Bellovin and Merritt, 1992), and the standard security notion for PAKE is in the Universal Composability (UC) framework (Canetti et al., 2005). While the UC-security of EKE has been "folklore" knowledge for many years, a satisfactory formal proof has long been elusive.
In this work, we present a UC-security proof for the most common instantiation of EKE, which is based on hashed Diffie–Hellman. Our proof is in the random oracle + ideal cipher models, and under the computational Diffie–Hellman assumption. We thoroughly discuss the UC-security definition for PAKE, subtleties and pitfalls in the security proof, how to write a UC proof, and flaws in existing works; along the way we also present some philosophical discussions on security definitions and security proofs in general. In this way, we hope to draw attention to several understudied, underexplained or underappreciated aspects of the UC-security of EKE.
This tutorial can be viewed as a simplified version of the recent work by Januzelli, Roy and Xu (2025); however, we completely rewrite most of the materials there to make them much more approachable to beginners who have just learned the UC framework.
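For readers new to EKE, the following toy flow illustrates the hashed Diffie-Hellman instantiation discussed above. It is a minimal sketch, assuming a toy DH group and faking the ideal cipher with an XOR pad; both stand-ins are insecure and purely illustrative.

```python
# Toy EKE over hashed Diffie-Hellman (illustrative only).
import hashlib, secrets

P = 2**127 - 1   # Mersenne prime; far too small for real use
G = 3            # toy generator choice

def h(*parts: bytes) -> bytes:
    x = hashlib.sha256()
    for p in parts:
        x.update(len(p).to_bytes(2, "big") + p)
    return x.digest()

def ic(pw: bytes, m: int) -> int:
    # Stand-in for the ideal cipher keyed by the password: an XOR pad.
    # (A real proof needs a true ideal cipher; XOR is an insecure mock.)
    return m ^ (int.from_bytes(h(b"ic", pw), "big") % 2**127)

pw = b"correct horse"
x = secrets.randbelow(P - 2) + 1
y = secrets.randbelow(P - 2) + 1
msg_a = ic(pw, pow(G, x, P))        # Alice -> Bob: Enc_pw(g^x)
msg_b = ic(pw, pow(G, y, P))        # Bob -> Alice: Enc_pw(g^y)

# Each side decrypts the other's flow and hashes transcript + DH value.
ta, tb = msg_a.to_bytes(16, "big"), msg_b.to_bytes(16, "big")
k_alice = h(b"sk", ta, tb, pow(ic(pw, msg_b), x, P).to_bytes(16, "big"))
k_bob   = h(b"sk", ta, tb, pow(ic(pw, msg_a), y, P).to_bytes(16, "big"))
assert k_alice == k_bob             # both derive the same session key
```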
Diamond iO: A Straightforward Construction of Indistinguishability Obfuscation from Lattices
Indistinguishability obfuscation (iO) has seen remarkable theoretical progress, yet it remains impractical due to its high complexity and inefficiency. A common bottleneck in recent iO schemes is the reliance on bootstrapping techniques from functional encryption (FE) into iO, which requires recursively invoking the FE encryption algorithm for each input bit—creating a significant barrier to practical iO schemes.
In this work, we propose diamond iO, a new lattice-based iO construction that replaces the costly recursive encryption process with lightweight matrix operations. Our construction is proven secure under the learning with errors (LWE) and evasive LWE assumptions, as well as our new assumption—all-product LWE—in the pseudorandom oracle model. By leveraging the FE scheme for pseudorandom functionalities introduced by Agrawal et al. (ePrint’24) in a non-black-box manner, we remove the reliance on prior FE-to-iO bootstrapping techniques and thereby significantly reduce complexity. A remaining challenge is to reduce our new assumption to standard assumptions such as LWE, further advancing the goal of a practical and sound iO construction.
Doubly Efficient Cryptography: Commitments, Arguments and RAM MPC
Can a sender commit to a long input without even reading all of it? Can a prover convince a verifier that an NP statement holds without even reading the entire witness? Can a set of parties run a multiparty computation (MPC) protocol in the RAM model, without necessarily even reading their entire inputs? We show how to construct such "doubly efficient" schemes in a setting where parties can preprocess their input offline, but subsequently they can engage in many different protocol executions over this input in sublinear online time. We do so in the plain model, without any common setup. Our constructions rely on doubly efficient private information retrieval (DEPIR) as a building block and can be instantiated based on Ring LWE.
In more detail, we begin by constructing doubly efficient (interactive) commitments, where the sender preprocesses the input offline, and can later commit to this input to arbitrary receivers in sublinear online time. Moreover, the sender can open individual bits of the committed input in sublinear time. We then use these commitments to implement doubly succinct (interactive) arguments, where the prover preprocesses the statement/witness offline, and can subsequently run many proof protocols to convince arbitrary verifiers of the statement's validity in sublinear online time. Furthermore, we augment these to get a doubly efficient "commit, prove and locally open" protocol, where the prover can commit to a long preprocessed input, prove that it satisfies some global property, and locally open individual bits, all in sublinear time. Finally, we leverage these tools to construct a RAM-MPC with malicious security in the plain model. Each party individually preprocesses its input offline, and can then run arbitrary MPC executions over this input with arbitrary other parties. The online run-time of each MPC execution is only proportional to the RAM run-time of the underlying program, that can be sublinear in the input size.
Merkle Mountain Ranges are Optimal: On witness update frequency for cryptographic accumulators
We study append-only set commitments with efficient updates and inclusion proofs, also known as cryptographic accumulators. In particular, we examine how often the inclusion proofs (or witnesses) for individual items must change as new items are added to the accumulated set. Using a compression argument, we show unconditionally that to accumulate a set of items, any construction with a succinct commitment ( storage) must induce at least total witness updates as items are sequentially added. In a certain regime, we strengthen this bound to total witness updates. These lower bounds hold not just in the worst case, but with overwhelming probability over a random choice of the accumulated set. Our results show that a close variant of the Merkle Mountain Range, an elegant construction that has become popular in practice, is essentially optimal.
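To make the counted quantity concrete, here is a toy simulation (ours, not the paper's construction or argument): in a naive padded Merkle tree, each append rewrites one node on essentially every existing leaf's authentication path, so total witness updates grow roughly quadratically, whereas MMR-style designs update witnesses far less often.

```python
# Count witness (authentication path) changes in a naive padded Merkle tree.
import hashlib

H = lambda b: hashlib.sha256(b).digest()
EMPTY = H(b"empty")  # placeholder hash for unfilled leaves (our convention)

def layers(leaves):
    n = 1
    while n < max(len(leaves), 1):
        n *= 2
    layer = [H(b"leaf" + x) for x in leaves] + [EMPTY] * (n - len(leaves))
    out = [layer]
    while len(layer) > 1:
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        out.append(layer)
    return out

def witness(lv, i):
    path = []
    for layer in lv[:-1]:           # all levels below the root
        path.append(layer[i ^ 1])   # sibling at this level
        i //= 2
    return path

items = [bytes([k]) for k in range(64)]
total, old = 0, []
for n in range(1, len(items) + 1):
    lv = layers(items[:n])
    for i in range(n - 1):          # existing items only; zip skips new levels
        total += sum(a != b for a, b in zip(witness(lv, i), old[i]))
    old = [witness(lv, i) for i in range(n)]
print("total witness updates over", len(items), "appends:", total)
```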
Anamorphic Resistant Encryption: the Good, the Bad and the Ugly
Anamorphic encryption (AE), introduced by Persiano, Phan and Yung at Eurocrypt '22, allows users to establish secure communication in scenarios where they might be forced to hand over their decryption keys to some hostile authority. Over the last few years, several works have improved our understanding of the primitive by proposing novel realizations, new security notions, and studies of its inherent limitations.
This work makes progress mainly on this last line of research.
We show concrete realizations of public-key encryption schemes that provably cannot be turned anamorphic. These were called Anamorphic Resistant Encryption (ARE, for short) in a recent work of Dodis and Goldin.
We also show that, under certain conditions, anamorphic encryption is equivalent to algorithm substitution attacks. This allows us to positively reinterpret our AREs as PKE schemes provably resistant to subversion attacks. To the best of our knowledge, these seem to be the first IND-CPA secure schemes achieving subversion resistance without trust assumptions or non-black-box decomposition techniques.
Our two AREs rely heavily, among other things, on a direct usage of extremely lossy functions: here the lossiness property is used in the constructions themselves, rather than just in the proofs. The first construction is in the public-parameters model and also requires iO. The second construction eliminates the need for both public parameters and iO, but is in the random oracle model and relies on the novel concept of robust extremely lossy functions with group structure, a primitive that we define and show how to realize in this paper.
Authenticated BitGC for Actively Secure Rate-One 2PC
In this paper, we present a constant-round actively secure two-party computation protocol with small communication based on the ring learning with errors (RLWE) assumption with key-dependent message security. Our result builds on the recent BitGC protocol by Liu, Wang, Yang, and Yu (Eurocrypt 2025), which communicates one bit per gate for semi-honest security. First, we achieve a different manner of distributed garbling, where the global correlation is secret-shared between the two parties. The garbler holds only the garbled labels corresponding to the wire values when all inputs are zero, while the evaluator holds the labels corresponding to the real evaluation. In the second phase, we run an authentication protocol, requiring some extra communication, that allows the two parties to check the correct computation of each gate by treating the ciphertexts as commitments, now that the global key is distributed. For layered circuits, the extra communication for authentication is small, keeping the total communication close to one bit per gate. For generic circuits, the extra communication cost can be 1 bit per gate in the worst case, and thus the total communication cost would be 2 bits per gate.
NoIC: PAKE from KEM without Ideal Ciphers
We show a generic compiler from KEM to (Universally Composable) PAKE in the Random Oracle Model (ROM) and without requiring an Ideal Cipher. The compiler is akin to Encrypted Key Exchange (EKE) by Bellovin-Merritt, but following the work of McQuoid et al. it uses only a 2-round Feistel to password-encrypt a KEM public key. The resulting PAKE incurs only insignificant cost overhead over the underlying KEM, and it is a secure UC PAKE if KEM is secure and key-anonymous under the Plaintext-Checking Attack (PCA).
Several KEM-to-PAKE compilers were shown recently, secure under the OW-PCA and ANO-PCA assumptions on KEM, but all used an Ideal Cipher in addition to ROM. While there are techniques for emulating ROM against quantum attackers, it is currently unknown how to extend many such techniques to the Ideal Cipher Model. Consequently, avoiding the Ideal Cipher in protocol design makes the resulting construction a more plausible candidate for post-quantum secure PAKE when instantiated with a post-quantum PCA-secure and anonymous KEM, such as the ML-KEM standard itself.
Our construction and proofs build on many of the ideas underlying the KEM-to-PAKE compiler using a 2-round Feistel given by McQuoid et al., but our protocol is more efficient and our proofs address limitations in the analysis therein.
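For intuition, here is a schematic of the 2-round Feistel password-encryption of a fixed-length public key, with the round functions modeled as random oracles. This is our sketch in the spirit of the construction; the real protocol's domains, sizes, and hashing details differ.

```python
# 2-round Feistel "password encryption" of a KEM public key (schematic).
import hashlib, secrets

PK_LEN, R_LEN = 32, 32   # toy sizes; a real KEM public key is longer

def ro(tag: bytes, data: bytes, outlen: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < outlen:   # expand SHA-256 to the needed length
        out += hashlib.sha256(tag + ctr.to_bytes(4, "big") + data).digest()
        ctr += 1
    return out[:outlen]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def pw_encrypt(pw: bytes, pk: bytes) -> bytes:
    r = secrets.token_bytes(R_LEN)          # fresh per-session randomness
    s = xor(pk, ro(b"H1", pw + r, PK_LEN))  # round 1 masks the public key
    t = xor(r, ro(b"H2", pw + s, R_LEN))    # round 2 masks the randomness
    return s + t

def pw_decrypt(pw: bytes, ct: bytes) -> bytes:
    s, t = ct[:PK_LEN], ct[PK_LEN:]
    r = xor(t, ro(b"H2", pw + s, R_LEN))
    return xor(s, ro(b"H1", pw + r, PK_LEN))

pk = secrets.token_bytes(PK_LEN)
assert pw_decrypt(b"hunter2", pw_encrypt(b"hunter2", pk)) == pk
```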
Privately Constrained PRFs from DCR: Puncturing and Bounded Waring Rank
A privately constrained pseudorandom function (pCPRF) is a PRF with the additional property that one can derive a constrained key that allows evaluating the PRF only on inputs satisfying a constraint predicate $C$, without revealing $C$ itself or leaking information about the PRF's output on inputs that do not satisfy the constraint.
Existing privately constrained PRFs face significant limitations: either (1) they rely on assumptions known to imply fully-homomorphic encryption or indistinguishability obfuscation, (2) they support only highly restricted classes of constraints—for instance, no known group-based pCPRF even supports the simple class of puncturing constraints (where the constrained key permits evaluation on all but one point while hiding the punctured point), or (3) they are limited to polynomial-size input domains. A long-standing open question has been whether one can construct a privately constrained PRF from group-based assumptions for more expressive classes of constraints. In this work, we present a pCPRF based on the decisional composite residuosity (DCR) assumption that supports a highly expressive class of predicates, namely constraints with polynomially bounded Waring rank, which notably includes puncturing.
From a technical perspective, our work follows the general template of Couteau, Meyer, Passelègue, and Riahinia (Eurocrypt'23), who constructed a pCPRF from group-based homomorphic secret-sharing but were limited to inner-product constraints in the constraint-hiding setting. Leveraging novel techniques for computing with distributed discrete logarithms (DDLog), we enable the non-interactive authentication of powers of linear combinations of shares of some value. This, in turn, allows us to express constraints with polynomially bounded Waring rank.
Our construction is single-key, selectively secure, and supports an exponential-size domain.
ETK: External-Operations TreeKEM and the Security of MLS in RFC 9420
The Messaging Layer Security protocol MLS is standardized in IETF's RFC 9420 and allows a group of parties to securely establish and evolve group keys even if the servers are malicious. Its core mechanism is based on the TreeKEM protocol, but it has gained many additional features and modifications during the development of the MLS standard. Over the last few years, several partial security analyses of incomplete drafts of the protocol have appeared. One of the major additions to the TreeKEM design in MLS RFC 9420 (the final version of the standard) are the external operations, i.e., external commits and proposals, which interact deeply with the core TreeKEM protocol. These operations have not been considered in any previous security analysis, leaving their impact on the protocol's overall security unclear.
In this work, we formalize ETK: External-Operations TreeKEM, which includes external commits and proposals. We develop a corresponding ideal functionality and prove that ETK indeed realizes it.
Our work is the first cryptographic analysis that considers both the final changes to the standard's version of TreeKEM and the external proposals and external commits. Compared to previous works, which considered MLS draft versions, our ETK protocol is by far the closest to the final MLS RFC 9420 standard. Our analysis implies that the core of MLS's TreeKEM variant as defined in RFC 9420 is an ETK protocol that realizes our ideal functionality when used with an SUF-CMA secure signature scheme, such as the IETF variant of Ed25519. We show that, contrary to previous claims, MLS does not realize the ideal functionality of [Crypto 2022] when used with signature schemes that only guarantee EUF-CMA, such as ECDSA.
Moreover, we show that the security of the protocol can be further strengthened by adding a functionality to insert PSKs, allowing another form of healing, and we give a corresponding construction, ETK-PSK, together with its ideal functionality.
Network agnostic consensus in constant time
Network agnostic protocols (Blum, Katz, Loss, TCC '19) are consensus or MPC protocols that strike a balance between purely synchronous and purely asynchronous protocols. Given thresholds $t_s$ and $t_a$ satisfying $t_a \le t_s$ and $2t_s + t_a < n$, they have the unique property of remaining secure against an adversary that either (1) corrupts up to $t_s$ parties in a synchronous execution, where all messages are delivered within a known bound $\Delta$, or (2) corrupts up to $t_a$ parties in an asynchronous execution, where messages can be delayed arbitrarily. All existing network agnostic protocols follow a design pattern which first attempts to run a synchronous path, and then switches to an asynchronous path as a fallback option if the synchronous path times out after some time $T$ due to the network being asynchronous. Unfortunately, $T$ has to be set conservatively to account for the possibility that the synchronous path might take an unusually long time even when the network is synchronous. As a result, for various basic tasks including Byzantine Agreement and MPC, no existing network agnostic protocol is able to terminate for all honest parties within constant expected time in all possible executions.
In this work, we introduce a new paradigm for constructing network agnostic consensus that, for the first time, overcomes this barrier. Using this new design pattern, we first present simple protocols for reliable broadcast (RB) and binary agreement (BA) that are responsive when no more than $t_a$ parties are corrupted and run in expected constant time regardless of the network conditions. (The latter was already possible for RB, which by design requires only constantly many rounds, but is not guaranteed to terminate for all parties.) We then extend our results to asynchronous common subset (ACS) and MPC. Notably, our approach reverses the order of the synchronous and asynchronous paths by designing protocols that are first and foremost asynchronous and only fall back to the synchronous execution path when more than $t_a$ parties are corrupted.
Two Is All It Takes: Asymptotic and Concrete Improvements for Solving Code Equivalence
The Linear Code Equivalence (LCE) problem asks, for two given linear codes $C_1$ and $C_2$, to find a monomial transformation $\varphi$ mapping $C_1$ into $C_2$. Algorithms solving LCE crucially rely on a (heuristic) subroutine which recovers the secret monomial from pairs of codewords satisfying $\varphi(v) = w$. We greatly improve on the known bound by giving a constructive (heuristic) algorithm that recovers the secret monomial from any two pairs of such codewords, for any field size $q$. We then show that this reduction in the number of required pairs enables the design of a more efficient algorithm for solving LCE. Our asymptotic analysis shows that this algorithm outperforms previous approaches for a wide range of parameters, including all parameters proposed across the literature. Furthermore, our concrete analysis reveals significant bit-security reductions for suggested parameters. Most notably, in the context of the LESS signature scheme, a second-round contender in the ongoing NIST standardization effort for post-quantum secure digital signatures, we obtain bit-security reductions of up to 24 bits.
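For reference, the standard formulation of the problem (notation ours): a monomial transformation permutes the $n$ coordinates and scales each by a nonzero field element,

$$\varphi(x_1,\ldots,x_n) \;=\; (\alpha_1 x_{\pi(1)},\ldots,\alpha_n x_{\pi(n)}), \qquad \pi \in S_n,\ \alpha_i \in \mathbb{F}_q^{\times},$$

and LCE asks for such a $\varphi$ with $\varphi(C_1) = C_2$; the subroutine above recovers $\varphi$ from codeword pairs $(v, w)$ with $w = \varphi(v)$.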
Improved Subfield Curve Search For Specific Field Characteristics
Isogeny-based cryptography bases its security on the hardness of the supersingular isogeny problem: finding an isogeny between two supersingular curves defined over a quadratic extension field.
The Delfs-Galbraith algorithm is the most efficient known procedure for solving the supersingular isogeny problem, with a time complexity of $\tilde{O}(\sqrt{p})$ operations. Its bottleneck is the so-called subfield curve search (i.e., finding an isogenous supersingular elliptic curve defined over the base field), which determines the overall time complexity.
For efficiency, most recent isogeny-based constructions propose using finite fields with field characteristic equal to for some positive integers and .
This work focuses on primes of that particular form and presents two new algorithms for finding subfield curves with a time complexity of operations and a memory complexity polynomial in . These algorithms exploit the existence of large torsion- points and extend the subfield root detection algorithm of Santos, Costello, and Shi (Crypto 2022) to our case study. In addition, it is worth highlighting that these algorithms easily extend to primes of the form and with being a small integer.
This study also examines the usage of radical -isogenies with the proposed extended subfield root detection algorithm. In this context, the results indicate that the radical -isogeny approach is competitive compared with the state-of-the-art algorithms.
“Check-Before-you-Solve”: Verifiable Time-lock Puzzles
Time-lock puzzles are cryptographic primitives that guarantee to the generator that the puzzle cannot be solved in fewer than $T$ sequential computation steps. They have recently found numerous applications, e.g., in fair contract signing and sealed-bid auctions. However, solvers have no a priori guarantee about the solution they will reveal, e.g., about its "usefulness" within a certain application scenario. In this work, we propose verifiable time-lock puzzles (VTLPs) that address this by having the generator publish a succinct proof that the solution satisfies certain properties (without revealing anything else about it). Solvers are thus motivated to "commit" resources to solving the puzzle. We propose VTLPs that support proving arbitrary NP relations about the puzzle solution.
At a technical level, to overcome the performance hurdles of the naive approach of simply solving the puzzle within a SNARK that also checks the relation, our scheme combines the classic RSA time-lock puzzle of Rivest, Shamir, and Wagner with novel building blocks for "offloading" expensive modular group exponentiations and multiplications out of the SNARK circuit. We then propose a second VTLP specifically for checking RSA-based signatures and verifiable random functions (VRFs). Our second scheme does not rely on a SNARK and has several applications, e.g., in the context of distributed randomness generation. Along the way, we propose new constant-size proofs for modular exponent relations over hidden-order groups that may be of independent interest. Finally, we experimentally evaluate the performance of our schemes and report our findings and comparisons with prior approaches.
Lightweight Single-Server PIR with Communication
It is well known that any single-server PIR scheme with sublinear communication necessitates public-key cryptography. Several recent studies, which we collectively refer to as lightweight PIR, demonstrate that this limitation can be circumvented to some extent. However, all such schemes require at least $\sqrt{n}$ communication per query, where $n$ is the size of the database. Indeed, the celebrated result of Ishai et al. (Crypto '24) implies that, with solely symmetric-key cryptography, achieving per-query communication below $\sqrt{n}$ necessitates more than $\sqrt{n}$ client storage. Whether this barrier can be overcome with limited use of public-key cryptography remains an open question. In this paper, we tackle this question by presenting the first lightweight single-server PIR scheme with communication below this barrier while allowing arbitrary (non-zero) client storage.
Building Hard Problems by Combining Easy Ones: Revisited
We establish the following theorem:
Let be random functions from to . For all polynomial-query-bounded distinguishers making at most queries to each oracle, there exists a poly-time oracle simulator and a constant such that the distinguishing probability is negligible.
A Robust Variant of ChaCha20-Poly1305
The ChaCha20-Poly1305 AEAD scheme is widely used as an alternative to AES-GCM on platforms without AES hardware instructions. Although recent analysis by Degabriele et al. shows that ChaCha20-Poly1305 provides adequate security in the conventional multi-user model, the construction is totally broken when a single nonce is repeated – a real-world scenario that can occur due to faulty implementations or the desire to use random nonces.
We present a new nonce-misuse resistant and key-committing authenticated encryption scheme, called ChaCha20-Poly1305-PSIV, that is based on carefully combining the ChaCha20-Poly1305 building blocks into the NSIV paradigm proposed by Peyrin and Seurin (CRYPTO 2016) without performance loss. We analyze the security of the underlying mode PSIV in the multi-user faulty-nonce model assuming that the underlying permutation is ideal, and prove its key-committing security in the cmt-1 model. Rust and C implementations are provided, and benchmarks confirm that performance is comparable to the ChaCha20-Poly1305 implementation in libsodium.
In terms of security and efficiency (without hardware support), our proposal compares favorably to AES-GCM-SIV. Since we reuse the ChaCha20-Poly1305 building blocks, we expect ChaCha20-Poly1305-PSIV to benefit from existing analysis and to be easy to deploy in practice.
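To illustrate the SIV-style paradigm that PSIV builds on (a generic toy after Rogaway and Shrimpton's SIV, not the paper's PSIV, and using hash-based stand-ins rather than the actual ChaCha20/Poly1305 primitives): the tag is a PRF over nonce, associated data, and message, and doubles as the IV, so a repeated nonce with distinct messages still yields distinct IVs.

```python
# Generic SIV-style AEAD sketch (toy; fixed-length fields assumed so the
# naive concatenation below is unambiguous).
import hashlib, hmac

def keystream(key: bytes, iv: bytes, n: int) -> bytes:
    out = b""
    for ctr in range((n + 31) // 32):   # SHA-256 in counter mode as a toy PRG
        out += hashlib.sha256(key + iv + ctr.to_bytes(4, "big")).digest()
    return out[:n]

def siv_encrypt(k_mac, k_enc, nonce, ad, msg):
    tag = hmac.new(k_mac, nonce + ad + msg, hashlib.sha256).digest()[:16]
    ct = bytes(a ^ b for a, b in zip(msg, keystream(k_enc, tag, len(msg))))
    return tag, ct   # the tag serves as the synthetic IV

def siv_decrypt(k_mac, k_enc, nonce, ad, tag, ct):
    msg = bytes(a ^ b for a, b in zip(ct, keystream(k_enc, tag, len(ct))))
    good = hmac.new(k_mac, nonce + ad + msg, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(tag, good):
        raise ValueError("authentication failure")
    return msg

tag, ct = siv_encrypt(b"k1" * 8, b"k2" * 8, b"nonce", b"ad", b"attack at dawn")
assert siv_decrypt(b"k1" * 8, b"k2" * 8, b"nonce", b"ad", tag, ct) == b"attack at dawn"
```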
Uniformly Most Powerful Tests for Ad Hoc Transactions in Monero
We introduce a general, low-cost, low-power statistical test for transactions in transaction protocols with small anonymity set authentication (TPSASAs), such as Monero. The test classifies transactions as ad hoc (spontaneously constructed to spend a deterministically selected key) or self-churned (constructed from a probability distribution very close to that of the default wallet software, with the same sender and receiver). The test is a uniformly most powerful (UMP) likelihood ratio test (LRT) derived from the Neyman-Pearson Lemma, and makes no assumptions about user behavior. We extend these tests to exploit prior information about user behavior. We discuss test parameterization, as well as how anonymity set cardinality and user behavior impact test performance. We also describe a maximum-likelihood de-anonymization attack on Monero based on our test.
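As a reminder of the mechanics (our toy numbers, not the paper's distributions): a Neyman-Pearson test thresholds the likelihood ratio of the observation under the two hypotheses, and is the most powerful test at its significance level.

```python
# Toy likelihood ratio test: "self-churned" (H0) vs "ad hoc" (H1).
P0 = [0.05, 0.10, 0.15, 0.30, 0.40]   # H0: wallet-like selection (made up)
P1 = [0.40, 0.30, 0.15, 0.10, 0.05]   # H1: ad hoc pattern (made up)

def likelihood_ratio(obs):
    lr = 1.0
    for o in obs:                      # obs: observed ring positions
        lr *= P1[o] / P0[o]
    return lr

def classify(obs, threshold=1.0):
    # Neyman-Pearson: rejecting H0 iff LR > threshold maximizes power
    # at the significance level the threshold induces.
    return "ad hoc" if likelihood_ratio(obs) > threshold else "self-churned"

print(classify([0, 0, 1]), classify([4, 3, 4]))   # -> ad hoc, self-churned
```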
The Quantum Decoherence Model: Everlasting Composable Secure Computation and More
Quantum cryptography allows us to achieve security goals that are unobtainable using classical cryptography alone: it offers the promise of everlasting privacy. That is, an adversary trying to attack a protocol must succeed during the run of the protocol; after the protocol has terminated, security holds unconditionally.
In this work, we initiate the study of a new model which we call the quantum decoherence model (QDM). In a nutshell, this model captures adversaries that are computationally bounded during the run of a protocol (and some time after), but become computationally unbounded long after the protocol terminates. Importantly, once the adversary becomes computationally unbounded, he can only remember a bounded number of qubits from before the computational bound was lifted.
We provide a variant of the Universal Composability framework which captures the new notion of quantum decoherence, and augment it with quantum random oracles. As our main contribution, we construct a non-interactive commitment scheme achieving unconditional (statistical) security against malicious senders and everlasting security against malicious receivers under our new security notion. Such commitments imply general secure multiparty computation with everlasting security.
Finally, we show that our core technique can be applied to a broader spectrum of problems: it gives rise to everlasting public-key encryption and OT in the QDM. We also consider the weaker notion of incompressible encryption in the setting of quantum decoherence, and show that post-quantum IND-CPA secure public-key encryption is sufficient to realize this notion without resorting to random oracles.
At the technical core of our constructions is a new, conceptually simple yet powerful reverse entropic uncertainty relation.
Slot à la carte: Centralization Issues in Ethereum's Proof-of-Stake Protocol
In this paper, we demonstrate that Ethereum's current proof-of-stake (PoS) consensus mechanism poses a significant threat to decentralisation. Our research focuses on the manipulability of distributed randomness beacons (DRBs) in leader selection. Specifically, we show that RANDAO - Ethereum's DRB - is seriously vulnerable to manipulation in its current form. For example, if a lucrative slot is foreseen, there is a risk that staking entities may temporarily collude to control a sufficient fraction of the validators, enabling them to execute a series of RANDAO manipulation attacks that secure the target slot with high success rate. The effectiveness of our method stems from the fact that we work with a significantly richer model of the possible attacks than previous works. Our manipulative strategies work by having blocks missing from the canonical chain - either by withholding blocks in the adversary's own slots or by forking out blocks proposed by others. We argue that while PoS can pave the way forward for blockchains, Ethereum's current DRB implementation has to be replaced with a more secure mechanism.
LSM Trees in Adversarial Environments
The Log Structured Merge (LSM) Tree is a popular choice for key-value stores that focus on optimized write throughput while maintaining performant, production-ready read latencies. To optimize read performance, LSM stores rely on a probabilistic data structure called the Bloom Filter (BF). In this paper, we focus on adversarial workloads that lead to a sharp degradation in read performance by impacting the accuracy of the BFs used within the LSM store. Our evaluation shows a substantial increase in the read latency of lookups for popular LSM stores. We define adversarial models and security definitions for LSM stores. We implement adversary resilience into two popular LSM stores, LevelDB and RocksDB, and use our implementations to demonstrate how performance degradation under adversarial workloads can be mitigated.
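The attack surface is easy to reproduce in miniature. A toy sketch (ours, not the paper's implementation): brute-force keys that are false positives of the store's Bloom filter, so every lookup for them passes the in-memory filter and falls through to disk.

```python
# Find absent keys whose Bloom filter probes all hit set bits.
import hashlib

M, K = 1 << 12, 4                       # filter bits and hash count (toy sizes)

def probes(key: bytes):
    d = hashlib.sha256(key).digest()
    return [int.from_bytes(d[4 * i:4 * i + 4], "big") % M for i in range(K)]

bits = [False] * M
stored = [f"user{i}".encode() for i in range(500)]
for k in stored:
    for p in probes(k):
        bits[p] = True

adversarial, i = [], 0
while len(adversarial) < 100:
    cand = f"attack{i}".encode()
    if cand not in stored and all(bits[p] for p in probes(cand)):
        adversarial.append(cand)        # lookup passes the filter in memory...
    i += 1                              # ...then always misses, hitting disk
print(f"found {len(adversarial)} always-false-positive keys after {i} tries")
```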
Assumption-Free Fuzzy PSI via Predicate Encryption
We present the first protocol for efficient Fuzzy Private Set Intersection (PSI) that achieves linear communication complexity, does not depend on restrictive assumptions on the distribution of party inputs, and abstains from inefficient fully homomorphic encryption. Specifically, our protocol enables two parties to compute all pairs of elements from their respective sets that are within a given Hamming distance, without constraints on how these sets are structured.
Our key insight is that securely computing the (threshold) Hamming distance between two inputs can be reduced to securely computing their inner product. Leveraging this reduction, we construct a Fuzzy PSI protocol using recent techniques for inner-product predicate encryption. To enable the use of predicate encryption in our setting, we establish that these predicate encryption schemes satisfy a weak notion of simulation security and demonstrate how their internal key derivation can be efficiently distributed without a trusted third party.
As a result, our Fuzzy PSI on top of predicate encryption features not only asymptotically optimal linear communication complexity but is also concretely practical.
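The key insight admits a one-line sanity check. For binary vectors $x$ and $y$, the Hamming distance satisfies $d_H(x,y) = \mathrm{wt}(x) + \mathrm{wt}(y) - 2\langle x, y\rangle$, so a threshold on $d_H$ becomes a threshold on the inner product:

```python
# Hamming distance from an inner product, for binary vectors.
def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def hamming_via_inner_product(x, y):
    return sum(x) + sum(y) - 2 * inner(x, y)

x = [1, 0, 1, 1, 0, 0, 1, 0]
y = [1, 1, 1, 0, 0, 1, 1, 0]
assert hamming_via_inner_product(x, y) == sum(a != b for a, b in zip(x, y))

# Threshold t: since the weights wt(x), wt(y) can be shared or published,
# "d_H(x, y) <= t" is equivalent to "<x, y> >= (wt(x) + wt(y) - t) / 2".
t = 3
print(hamming_via_inner_product(x, y) <= t,
      inner(x, y) >= (sum(x) + sum(y) - t) / 2)   # same boolean twice
```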
Practical Circuit Privacy/Sanitization for TFHE
Fully homomorphic encryption (FHE) enables the computation of arbitrary circuits over encrypted data. A widespread application of FHE is a simple two-party computation (2PC) protocol, where the server evaluates a circuit over the client's encrypted data and its private inputs. However, while the security of FHE guarantees that the client's data is protected from the server, there is no inherent support for the privacy of the server's input and the circuit.
One effective solution to this problem is an additional algorithm for FHE called sanitization, introduced by Ducas and Stehlé (Eurocrypt 2016). Roughly speaking, a sanitization algorithm removes any meaningful information contained in the ciphertext, including previous evaluations of circuits. Following their definition, several constructions for sanitization have been proposed, particularly for TFHE. However, all of these methods were impractical, requiring several bootstrappings or an excessive amount of randomized evaluations.
In this work, we construct a novel sanitization algorithm for TFHE that overcomes these issues. Our method only adds two lightweight randomization steps to the original TFHE bootstrapping, without any modifications to the core algorithms. As a result, our algorithm achieves sanitization with a single bootstrapping and minimal randomization, bringing sanitization closer to practicality.
To empirically evaluate the efficiency of our method, we provide concrete benchmark results based on our proof-of-concept implementation. Our algorithm sanitizes a single TFHE ciphertext in 35.88 ms, which is only 3.4% (1.18 ms) slower than the original TFHE bootstrapping with the same parameters. When directly compared to previous works, our method achieves a speedup by a factor of 4.82 to 209.03.
A note on the genus of the HAWK lattice
The cryptographic scheme and NIST candidate HAWK makes use of a particular module lattice and relies for its security on the assumption that finding module lattice isomorphisms (module LIP) is hard. To support this assumption, we compute the mass of the HAWK lattice, which gives a lower bound on the number of isometry classes of module lattices which cannot be distinguished from the HAWK lattice by an easily computed invariant called the genus. This number turns out to be so large that an attack based on the genus alone seems infeasible.
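For context, the mass here is the standard Smith-Minkowski-Siegel genus invariant: writing $\mathrm{gen}(L)$ for the set of isometry classes in the genus of $L$,

$$\mathrm{mass}(L) \;=\; \sum_{[L'] \in \mathrm{gen}(L)} \frac{1}{|\mathrm{Aut}(L')|},$$

and since every automorphism group contains $\pm\mathrm{id}$ (so $|\mathrm{Aut}(L')| \ge 2$), the number of isometry classes in the genus is at least $2 \cdot \mathrm{mass}(L)$.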
Rejected Challenges Pose New Challenges: Key Recovery of CRYSTALS-Dilithium via Side-Channel Attacks
Rejection sampling is a crucial security mechanism in lattice-based signature schemes that follow the Fiat-Shamir with aborts paradigm, such as ML-DSA/CRYSTALS-Dilithium. This technique transforms secret-dependent signature samples into ones that are statistically close to a secret-independent distribution (in the random oracle model). While many side-channel attacks have directly targeted sensitive data such as nonces, secret keys, and decomposed commitments, fewer studies have explored the potential leakage associated with rejection sampling. Notably, Karabulut et al. showed that leakage from rejected challenges can undermine, but not entirely break, the security of the Dilithium scheme.
Motivated by the above, we convert the problem of key recovery from the leakage of rejection sampling into an integer linear programming (ILP) problem, where rejected responses with unique Hamming weights set upper/lower constraints on the product between the challenge and the private key. We formally study the worst-case complexity of the problem and empirically confirm the practicality of the rejected-challenge attack. For all three security levels of Dilithium-2/3/5, our attack recovers the private key in seconds or minutes with a 100% success rate (SR).
Our attack leverages knowledge of the rejected challenge and response, and thus we propose methods to extract this information by exploiting side-channel leakage from Number Theoretic Transform (NTT) operations. We demonstrate the practicality of this rejected challenge attack by using real side-channel leakage on a Dilithium-2 implementation running on an ARM Cortex-M4 microcontroller. To the best of our knowledge, it is the first efficient side-channel key recovery attack on ML-DSA/Dilithium that targets the rejection sampling procedure. Furthermore, we discuss some countermeasures to mitigate this security issue.
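Schematically (our notation; the paper's exact constraints differ), the feasibility problem has the form

$$\text{find } s \in \{-\eta,\ldots,\eta\}^{n} \quad\text{s.t.}\quad \ell_{j,i} \;\le\; (c_j \cdot s)_i \;\le\; u_{j,i} \quad \text{for all leaked pairs } j \text{ and coefficients } i,$$

where each coefficient $(c_j \cdot s)_i$ of the challenge-times-key product is a public integer linear combination of the coefficients of $s$, so the system is a standard ILP over the small key range $[-\eta, \eta]$.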
An Innovative Lightweight Symmetric Encryption Algorithm Integrating NeoAlzette ARX S-box and XCR CSPRNG
This paper introduces "Little OaldresPuzzle_Cryptic," a novel lightweight symmetric encryption algorithm.
At the core of this algorithm are two main cryptographic components: the NeoAlzette permutation S-box, based on ARX (Addition-Rotation-XOR) primitives, and the innovative pseudo-random number generator XorConstantRotation (XCR), used exclusively in the key expansion process. The NeoAlzette S-box, a non-linear function on pairs of 32-bit words, is meticulously designed for both cryptographic strength and operational efficiency, ensuring robust security in resource-constrained environments. During encryption and decryption, a pseudo-randomly selected mixed linear diffusion function, distinct from XCR, is applied, enhancing the complexity and unpredictability of the encryption.
We comprehensively explore the various technical aspects of the Little OaldresPuzzle_Cryptic algorithm.
Its design aims to balance speed and security in the encryption process, particularly for high-speed data transmission scenarios. Recognizing that resource efficiency and execution speed are crucial for lightweight encryption algorithms, and that neither may compromise security, we conducted a series of statistical tests to validate the cryptographic security of our algorithm. These tests included assessments of resistance to linear and differential cryptanalysis, among other measures.
By combining the NeoAlzette S-box with sophisticated key expansion using XCR, and integrating the pseudo-randomly selected mixed linear diffusion function in its encryption and decryption processes, our algorithm significantly enhances its capability to withstand advanced cryptographic analysis techniques while maintaining lightweight and efficient operation. Our test results demonstrate that the Little OaldresPuzzle_Cryptic algorithm effectively supports the encryption and decryption needs of high-speed data, ensuring robust security and making it an ideal choice for various modern cryptographic application scenarios.
Keywords: Symmetric Encryption Algorithm, Lightweight Cryptography, ARX Primitives, PRNG, NeoAlzette S-boxes, XorConstantRotation, Diffusion and Confusion Layers, Cryptographic Security, Statistical Tests, Resource-Constrained Environments.
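For background on the ARX-box family the design builds on, below is the original Alzette ARX-box (Beierle et al., CRYPTO 2020) on a pair of 32-bit words; the paper's NeoAlzette variant itself is not reproduced here.

```python
# The original Alzette ARX-box: four add-rotate-xor rounds on (x, y).
MASK = 0xFFFFFFFF

def rotr(x: int, r: int) -> int:
    return ((x >> r) | (x << (32 - r))) & MASK if r else x

def alzette(x: int, y: int, c: int):
    for a, b in [(31, 24), (17, 17), (0, 31), (24, 16)]:
        x = (x + rotr(y, a)) & MASK   # Addition (mod 2^32)
        y ^= rotr(x, b)               # Rotation + XOR
        x ^= c                        # round constant injection
    return x, y

# One of the SPARKLE round constants as c.
print([hex(v) for v in alzette(0x01234567, 0x89ABCDEF, 0xB7E15162)])
```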
Constructing Quantum Implementations with the Minimal T-depth or Minimal Width and Their Applications
With the rapid development of quantum computers, optimizing the quantum implementations of symmetric-key ciphers, which constitute the primary components of the quantum oracles used in quantum attacks based on Grover's and Simon's algorithms, has become an active topic in the cryptography community. In this field, a central challenge is to construct quantum circuits that require the least amount of quantum resources. In this work, we address the problem of constructing quantum circuits with the minimal T-depth or width (number of qubits) for nonlinear components, thereby enabling implementations of symmetric-key ciphers with the minimal T-depth or width. Specifically, we propose several general methods for obtaining quantum implementations of generic vectorial Boolean functions and multiplicative inversion in GF(2^n), achieving the minimal T-depth and low costs across other metrics. As an application, we present a highly compact T-depth-3 Clifford+T circuit for the AES S-box. Compared to the T-depth-3 circuits presented in previous works (ASIACRYPT 2022, IEEE TC 2024), our circuit achieves significant reductions in T-count, full depth, and Clifford gate count. Compared to the state-of-the-art T-depth-4 circuits, our circuit not only achieves the minimal T-depth but also exhibits reduced full depth and closely comparable width. This leads to lower DW-cost and T-DW-cost. Additionally, we propose two methods for constructing minimal-width implementations of vectorial Boolean functions. As applications, we present, for the first time, a 9-qubit Clifford+T circuit for the AES S-box, a 16-qubit Clifford+T circuit for a pair of AES S-boxes, and a 5-qubit Clifford+T circuit for the chi function of SHA3. These circuits can be used to derive quantum circuits that implement AES or SHA3 without ancilla qubits.
Prior-Based Label Differential Privacy via Secure Two-Party Computation
Differential privacy (DP) is a fundamental technique in machine learning (ML) training for protecting the privacy of sensitive individual user data. In the past few years, a new approach combining prior-based Local Differential Privacy (LDP) mechanisms with a relaxed DP criterion, known as Label DP, has shown great promise in increasing the utility of the final trained model without compromising the DP privacy budget. In this work, we identify a crucial privacy gap in current implementations of these prior-based LDP mechanisms, namely the leakage of the sensitive priors. We address the challenge of implementing such LDP mechanisms without leaking any information about the priors, while preserving the efficiency and accuracy of the current insecure implementations. To that end, we design simple and efficient secure two-party computation (2PC) protocols for addressing this challenge, implement them, and perform end-to-end testing on standard datasets such as MNIST and CIFAR-10. Our empirical results indicate that the added security benefit comes essentially for free, in the sense that the gap between the current insecure implementations and our proposed secure version, in terms of run-time overhead and accuracy degradation, is minimal. E.g., for CIFAR-10 with a strong DP privacy parameter, the additional runtime due to 2PC is modest over WAN, with only a small decrease in accuracy over an insecure (non-2PC) approach.
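For concreteness, here is a sketch of the kind of prior-based LDP mechanism at stake, in the spirit of RRWithPrior (Ghazi et al., 2021). The code is illustrative, not the paper's protocol; it is exactly the sort of computation whose prior the paper's 2PC protocols keep hidden.

```python
# Prior-based randomized response over K labels (illustrative sketch).
import math, random

def rr_with_prior(y, prior, eps):
    K = len(prior)
    order = sorted(range(K), key=lambda i: -prior[i])
    # Choose k maximizing the expected probability of reporting truthfully.
    best_k = max(range(1, K + 1),
                 key=lambda k: (math.exp(eps) / (math.exp(eps) + k - 1))
                               * sum(prior[i] for i in order[:k]))
    top = order[:best_k]                 # restrict responses to top-k labels
    p_true = math.exp(eps) / (math.exp(eps) + best_k - 1)
    if y in top and random.random() < p_true:
        return y                         # report the true label
    others = [i for i in top if i != y]
    return random.choice(others) if others else y

random.seed(0)
prior = [0.6, 0.25, 0.1, 0.05]           # the sensitive prior to be hidden
print([rr_with_prior(1, prior, eps=1.0) for _ in range(10)])
```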
Practical Keyword Private Information Retrieval from Key-to-Index Mappings
This paper introduces practical schemes for keyword Private Information Retrieval (keyword PIR), enabling private queries on public databases using keywords. Unlike standard index-based PIR, keyword PIR presents greater challenges, since the query's position within the database is unknown and the domain of keywords is vast. Our key insight is to construct an efficient and compact key-to-index mapping, thereby reducing the keyword PIR problem to standard PIR. To achieve this, we propose three constructions incorporating several new techniques. The high-level approach involves (1) encoding the server's key-value database into an indexable database with a key-to-index mapping and (2) invoking standard PIR on the encoded database to retrieve specific positions based on the mapping. We conduct comprehensive experiments, with results showing substantial improvements over the state-of-the-art keyword PIR scheme ChalametPIR (CCS '24), with reductions in communication and runtime that depend on database size and entry length. Our constructions are practical, executing keyword PIR in just 47 ms for a database containing 1 million 32-byte entries.
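A toy rendition of the two-step recipe (ours; the paper's mappings are far more compact): encode the key-value store into an indexable array via a public salted hash that happens to be collision-free, then answer keyword queries by index, where a real deployment would replace the direct array read with an index-PIR query.

```python
# Keyword queries reduced to index queries via a key-to-index mapping.
import hashlib

def idx(key: bytes, size: int, salt: int) -> int:
    h = hashlib.sha256(salt.to_bytes(4, "big") + key).digest()
    return int.from_bytes(h, "big") % size

def encode(db: dict, size: int):
    # Retry public salts until the mapping is collision-free (a crude
    # stand-in for the compact mappings the paper constructs).
    for salt in range(10_000):
        slots = {}
        for k in db:
            i = idx(k, size, salt)
            if i in slots:
                break
            slots[i] = db[k]
        else:
            return salt, [slots.get(i, b"") for i in range(size)]
    raise RuntimeError("increase size")

db = {f"key{i}".encode(): f"val{i}".encode() for i in range(100)}
salt, table = encode(db, size=1 << 14)

def keyword_query(key: bytes) -> bytes:
    i = idx(key, len(table), salt)   # client computes the index locally
    return table[i]                  # <- replace with an index-PIR query

assert keyword_query(b"key7") == b"val7"
```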
NovaTEE: Private Clearing and Settlement on Trusted Execution Hardware
NovaTEE is a novel private multilateral settlement network designed to address critical inefficiencies in both traditional financial markets and cryptocurrency trading. The current clearing landscape suffers from fragmented capital allocation, restrictive prime brokerage relationships, and prolonged settlement timeframes in traditional finance, while cryptocurrency markets face challenges with over-collateralization, siloed lending pools, and security risks from centralized exchanges.
We introduce a settlement system that leverages Trusted Execution Environments (TEEs) and threshold cryptography to enable secure, private, and efficient settlement of obligations between multiple parties. The system utilizes a distributed key generation model and novel clearing mechanisms to optimize capital efficiency through multilateral netting, while maintaining strong privacy guarantees and regulatory compliance capabilities. By combining TEE-based security with advanced cryptographic protocols, including zero-knowledge proofs and sparse Merkle trees for data verification, our solution enables efficient cross-venue and cross-chain settlement while protecting sensitive trading information. This approach significantly reduces capital requirements for market participants, optimizes transaction costs, and provides institutional-grade clearing infrastructure without compromising on security or privacy. The system's architecture ensures that no single party has complete access to transaction details while maintaining auditability through a distributed backup network, offering a practical solution for institutional adoption of on-chain settlement.
Reductions Between Code Equivalence Problems
In this paper we present two reductions between variants of the Code Equivalence problem. We give polynomial-time Karp reductions from Permutation Code Equivalence (PCE) to both Linear Code Equivalence (LCE) and Signed Permutation Code Equivalence (SPCE). Along with a Karp reduction from SPCE to the Lattice Isomorphism Problem (LIP) proved in a paper by Bennett and Win (2024), our second result implies a reduction from PCE to LIP.
Efficient Mixed Garbling from Homomorphic Secret Sharing and GGM-Tree
We present new techniques for garbling mixed arithmetic and Boolean circuits, utilizing the homomorphic secret sharing scheme introduced by Roy & Singh (Crypto 2021), along with the half-tree protocol developed by Guo et al. (Eurocrypt 2023). Compared to some two-party interactive protocols, our mixed garbling requires only several times more communication.
We construct bit decomposition/composition gadgets with communication cost for integers in the range , requiring computations for the GGM-tree. Our approach is compatible with constant-rate multiplication protocols, and the cost decreases as increases. Even for a small , the concrete efficiency ranges from ( bits) to ( bits) per decomposition/composition. In addition, we develop efficient gadgets for modular reduction and unsigned truncation based on bit decomposition and composition.
We construct efficient arithmetic gadgets over various domains. For bounded integers, we improve the multiplication rate in the work of Meyer et al. (TCC 2024) from to . We propose new garbling schemes over other domains through bounded integers with our modular and truncation gadgets, which are more efficient than previous constructions. For , additions and multiplications can be garbled with a communication cost comparable to our bit decomposition. For a general finite field , particularly for large values of and , we garble addition and multiplication at a cost of , where . For applications to real numbers, we introduce an "error-based" truncation that makes the cost of multiplication dependent solely on the desired precision.
Revisiting the Differential-Linear Attacks on ChaCha from IEEE TIT and INDOCRYPT 2024 (Extended Abstract)
The ChaCha stream cipher has become one of the best-known ARX-based ciphers because of its wide use in several systems, such as TLS and SSH. In this paper, we find errors in the attacks on ChaCha256 from IEEE TIT and INDOCRYPT 2024, and we give corrected cryptanalytic attacks on ChaCha256. However, the corrected attacks have extremely large time and data complexities. The corrected results show that the technique proposed in IEEE TIT may not be able to yield improved differential-linear attacks on ChaCha.
Addressing Scalability Issues of Blockchains with Hypergraph Payment Networks
Payment channels are promising candidates among layer-2 solutions to reduce the number of on-chain transactions on traditional blockchains and increase transaction throughput. To construct payment channels, peers lock funds on 2-of-2 multisig addresses and open channels between one another to transact via instant peer-to-peer transactions. Transactions between peers without a direct channel are made possible by routing the payment over a series of adjacent channels. In certain cases, this can lead to relatively low transaction success rates and high transaction fees. In this work, we introduce pliability to payment channel construction and graft edges with more than two endpoints into the payment graph; we refer to these constructions as hyperedges. We present hyperedge-based topologies to form hypergraphs and compare them to Bitcoin's Lightning Network and other state-of-the-art solutions. The results demonstrate that hyperedge-based implementations can both increase the transaction success rate and decrease the network cost by more than 50% compared to that of the Lightning Network.
Simpler and Stronger Models for Deniable Authentication
Deniable Authentication is a highly desirable guarantee for secure messaging: it allows Alice to authentically send a message $m$ to a designated receiver Bob in a plausibly deniable manner. Concretely, while Bob is guaranteed that Alice sent $m$, he cannot convince a judge Judy that Alice really sent this message, even if he gives Judy his secret keys. This is because Judy knows that Bob can make things up. This paper models the security of Multi-Designated Verifier Signatures (MDVS) and Multi-Designated Receiver Signed Public Key Encryption (MDRS-PKE), two related types of schemes that provide such guarantees, in the Constructive Cryptography (CC) framework (Maurer and Renner, ICS '11).
The only prior work modeling dishonest parties' ability to "make things up" is by Maurer et al. (ASIACRYPT '21), who modeled the security of MDVS, also in CC. Their security model has two fundamental limitations:
1. deniability is not guaranteed when honest receivers read;
2. it relies on the CC-specific concept of specifications.
We solve both problems. Regarding the latter, our model is a standard simulator-based one. Furthermore, our composable treatment allowed us to identify a new property, Forgery Invalidity, without which we do not know how to prove the deniability of either MDVS or MDRS-PKE when honest receivers read. Finally, we prove that Chakraborty et al.'s MDVS (EUROCRYPT '23) has this property, and that Maurer et al.'s MDRS-PKE (EUROCRYPT '22) inherits it from the underlying MDVS.
Ciphertext-Simulatable HE from BFV with Randomized Evaluation
Homomorphic Encryption (HE) is a privacy-enhancing technology that enables computation over encrypted data without the need for decryption. A primary application of HE is in the construction of communication-efficient Two-Party Computation (2PC) protocols between a client and a server, serving as the key owner and the evaluator, respectively. However, the 2PC protocol built on an HE scheme is not necessarily secure, as the standard IND-CPA security of HE does not guarantee the privacy of the evaluation circuit.
Several enhanced security notions for HE, such as circuit privacy and sanitization, have been proposed to address this issue, but they require significant overhead in terms of parameter size or time complexity.
In this work, we introduce a novel security notion for HE, called ciphertext simulatability, which precisely captures the security requirements of HE in the construction of 2PC. We then provide a concrete construction of ciphertext-simulatable HE from the BFV scheme by modifying its evaluation algorithm. We provide theoretical analysis and experimental results demonstrating that our solution incurs insignificant overhead in terms of parameter size and error growth.
As a matter of independent interest, we demonstrate how our approach to designing ciphertext-simulatable BFV can be further extended to satisfy stronger security notions such as sanitization.
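One natural way to state such a notion (our sketch; the paper's formal definition may differ in details) is to require a simulator that produces an evaluated ciphertext from the output alone,

$$\mathsf{Sim}\big(\mathsf{pk},\, C(m)\big) \;\approx\; \mathsf{Eval}\big(\mathsf{evk},\, C,\, \mathsf{Enc}_{\mathsf{pk}}(m)\big),$$

with indistinguishability holding even against the secret key holder, so that an evaluated ciphertext reveals nothing about the circuit $C$ beyond its output.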
Distributed Non-Interactive Zero-Knowledge Proofs
Distributed certification is a set of mechanisms that allows an all-knowing prover to convince the units of a communication network that the network's state has some desired property, such as being 3-colorable or triangle-free. Classical mechanisms, such as proof labeling schemes (PLS), consist of a message from the prover to each unit, followed by one round of communication between each unit and its neighbors.
Later works consider extensions, called distributed interactive proofs, where the prover and the units can have multiple rounds of communication before the communication among the units. Recently, Bick, Kol, and Oshman (SODA '22) defined a zero-knowledge version of distributed interactive proofs, where the prover convinces the units of the network’s state without revealing any other information about the network’s state or structure. In their work, they propose different variants of this model and show that many graph properties of interest can be certified with them.
In this work, we define and study distributed non-interactive zero-knowledge proofs (dNIZK); these can be seen as a non-interactive version of the aforementioned model, and also as a zero-knowledge version of PLS. We prove the following:
- There exists a dNIZK protocol for -coloring with -bit messages from the prover and -size messages among neighbors. This disproves a conjecture from previous work asserting that the total number of bits from the prover should grow linearly with the number of edges.
- There exists a family of dNIZK protocols for triangle-freeness that presents a trade-off between the size of the messages from the prover and the size of the messages among neighbors. Interestingly, we also introduce a variant of this protocol where the message size depends only on the maximum degree of a node and not on the total number of nodes, improving upon the previous non-zero-knowledge protocol for this problem.
- There exists a dNIZK protocol for any graph property in NP in the random oracle model, which is secure against an arbitrary number of malicious parties. Previous work considered compilers from PLS to distributed zero-knowledge protocols, which result in protocols with parameters that are incomparable to ours.
Search and Verify Isogeny-Based Quantum Money with Rational Points
Quantum money is the cryptographic application of the quantum no-cloning theorem. It has recently been instantiated by Montgomery and Sharif (Asiacrypt'24) from class group actions on elliptic curves. In this work, we propose a novel method to forge a quantum banknote by leveraging the efficiency of evaluating division polynomials at the coordinates of rational points, offering a more efficient alternative to a brute-force attack. Since our attack still requires exponential time, forging a quantum banknote remains impractical. Interestingly, due to the inherent properties of quantum money, our attack method also yields a more efficient verification procedure. Our algorithm leverages the properties of quadratic twists to utilize rational points in verifying the cardinality of the superposition of elliptic curves. We expect this approach to contribute to future research on elliptic-curve-based quantum cryptography.
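For background, the verification idea rests on standard facts about curve orders, division polynomials, and twists; the following summary (over a prime field for concreteness) is our paraphrase, not the paper's algorithm.

```latex
% Background facts (our summary, not the paper's algorithm).
% By Lagrange's theorem, if N = \#E(\mathbb{F}_p) then [N]P = \mathcal{O}
% for every rational point P, so sampling random rational points quickly
% falsifies wrong candidate orders.  For odd n, the division polynomial
% \psi_n vanishes exactly at the x-coordinates of the nontrivial n-torsion
% points, so evaluating \psi_n at rational points detects n-torsion.
% Quadratic twists help because the two orders are tied together:
\[
  \#E(\mathbb{F}_p) + \#E^{t}(\mathbb{F}_p) = 2p + 2,
\]
% hence rational points on the twist E^t also constrain \#E(\mathbb{F}_p).
```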
Improved Secure Two-party Computation from a Geometric Perspective
Multiplication and other non-linear operations are widely recognized as the most costly components of secure two-party computation (2PC) based on linear secret sharing. Moreover, the comparison protocol is essential for various operations such as truncation, signed extension, and signed non-uniform multiplication. This paper aims to optimize these protocols by avoiding invocations of the costly comparison protocol, thereby improving their efficiency.
We propose a novel approach to study 2PC from a geometric perspective. Specifically, we interpret the two shares of a secret as the horizontal and vertical coordinates of a point in a Cartesian coordinate system, with the secret itself represented as the corresponding point. This reformulation allows us to address the comparison problem by determining the region where the point lies. Furthermore, we identify scenarios where the costly comparison protocol can be replaced by more efficient AND-gate evaluation protocols within a constrained range. Using this method, we improve protocols for truncation, signed extension, and signed non-uniform multiplication, all of which are fundamental to 2PC. In particular, for the one-bit error truncation protocol and signed extension protocols, we reduce the state-of-the-art communication complexities of Cheetah (USENIX'22) and SirNN (S\&P'21) from to in two rounds, where is the input length and is the security parameter. For signed multiplication with non-uniform bit-widths, we reduce SirNN's communication cost by 40\% to 60\%.
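As a minimal sketch of the geometric reading (our toy code, not the paper's protocol): additive shares over Z_{2^L} are the coordinates of a point, the wrap of the reconstruction corresponds to the point lying above the anti-diagonal, and local share truncation plus that wrap bit yields the classic one-bit-error truncation.

```python
# Toy illustration (ours) of the geometric view of additive secret sharing
# over Z_{2**L}: the share pair (x0, x1) is a point in the plane, and
# questions about the secret become questions about the region it lies in.
import random

L = 8                     # bit length of the ring Z_{2**L}
M = 1 << L
t = 3                     # number of bits to truncate

def wraps(x0, x1):
    # The "wrap" bit: reconstruction overflows the ring, i.e. the point
    # (x0, x1) lies above the anti-diagonal x0 + x1 = M.  In an actual
    # protocol the parties cannot compute this in the clear; deciding this
    # region securely is the comparison-type subproblem the paper optimizes.
    return x0 + x1 >= M

for _ in range(1000):
    x0, x1 = random.randrange(M), random.randrange(M)
    x = (x0 + x1) % M                      # the shared secret
    # Each party truncates its own share locally; the wrap bit corrects the
    # overflow.  The result is off by at most one (the "one-bit error").
    approx = (x0 >> t) + (x1 >> t) - (wraps(x0, x1) << (L - t))
    assert (x >> t) - approx in (0, 1)
```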
Sublinear Proofs over Polynomial Rings
We propose a sublinear-sized proof system for rank-one constraint satisfaction over polynomial rings (Ring-R1CS), particularly for rings of the form . These rings are widely used in lattice-based constructions,
which underlie many modern post-quantum cryptographic schemes.
Constructing efficient proof systems for arithmetic over these rings faces two key obstacles: (1) under popular practical choices of and , the ring is not field-like, so tools like the Schwartz–Zippel lemma do not apply; (2) when is large, which is common in implementations of lattice-based cryptosystems, the ring is large, making the proof size suboptimal.
In this paper, we address these two obstacles, enabling more efficient proofs for arithmetic over when is a `lattice-friendly' modulus,
including moduli that support fast NTT computation or power-of-two moduli.
Our primary tool is a novel \emph{ring switching} technique.
The core idea of ring switching is to convert the R1CS instance over into another R1CS instance over Galois rings, which are field-like and small (with size independent of ).
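For intuition about why such a conversion is plausible, recall the standard CRT splitting of these polynomial rings (our rendering of a textbook fact; the paper's ring switching is more involved than this):

```latex
% A standard CRT decomposition (background intuition, not the paper's map):
% if f(X) \equiv f_1(X)\cdots f_r(X) \pmod{q} with the f_i pairwise coprime,
% then
\[
  \mathbb{Z}_q[X]/(f(X)) \;\cong\; \prod_{i=1}^{r} \mathbb{Z}_q[X]/(f_i(X)),
\]
% and for a prime power q = p^k with each f_i basic irreducible, every
% factor \mathbb{Z}_q[X]/(f_i(X)) is a Galois ring GR(p^k, \deg f_i).
```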
As (zero-knowledge) proofs have many applications in cryptography, we expect that efficient proof systems for polynomial ring arithmetic could lead to more efficient constructions of advanced primitives from lattice assumptions, such as aggregate signatures, group signatures, verifiable random functions, or verifiable fully homomorphic encryption.
Engorgio: An Arbitrary-Precision Unbounded-Size Hybrid Encrypted Database via Quantized Fully Homomorphic Encryption
This work proposes an encrypted hybrid database framework that combines vectorized data search and relational data queries over quantized fully homomorphic encryption (FHE). We observe that, due to the lack of efficient encrypted data ordering capabilities, most existing encrypted database (EDB) frameworks do not support hybrid queries involving both vectorized and relational data. To further enrich query expressiveness while retaining evaluation efficiency, we propose Engorgio, a hybrid EDB framework based on quantized data ordering techniques over FHE. Specifically, we design a new quantized data encoding scheme along with a set of novel comparison and permutation algorithms to accurately generate and apply orders between large-precision data items. Furthermore, we optimize specific query types, including full table scan, batched query, and Top-k query, to enhance the practical performance of the proposed framework. In our experiments, we show that, compared to the state-of-the-art EDB frameworks, Engorgio is up to 28x--854x faster in homomorphic comparison, 65x--687x faster in homomorphic sorting and 15x--1,640x faster over a variety of end-to-end relational, vectorized, and hybrid SQL benchmarks. Using Engorgio, the amortized runtime for executing a relational and hybrid query on a 48-core processor is under 3 and 75 seconds, respectively, over a 10K-row hybrid database.
Cryptanalysis of a nonlinear filter-based stream cipher
It is shown that the stream cipher proposed by Carlet and Sarkar in ePrint report 2025/160 is insecure. More precisely, one bit of the key can be deduced from a few keystream bytes. This property extends to an efficient key-recovery attack. For example, for the proposal with 80-bit keys, a few kilobytes of keystream material are sufficient to recover half of the key.
Endomorphisms for Faster Cryptography on Elliptic Curves of Moderate CM Discriminants, II
The present article is a natural extension of the previous one about the GLV method of accelerating (multi-)scalar multiplication on elliptic curves of moderate CM discriminants . In comparison with the first article, much greater magnitudes of (in absolute value) are achieved, although the base finite fields of the curves have to be fairly large. This becomes feasible by resorting to quite powerful algorithmic tools developed primarily in the context of lattice-based and isogeny-based cryptography. Curiously, pre-quantum cryptography thus borrows research outcomes obtained in the converse pursuit of quantum-resistant solutions or of attacks on them.
For instance, a certain -cycle of pairing-friendly MNT curves (with , i.e., ) is relevant to the result of the current article. That -cycle was generated at one time by Guillevic to provide bits of security, so it came close to being applied in real-world zk-SNARKs. Another, more performant MNT -cycle (with a slightly smaller security level, but much larger ) was actually employed in the protocol Coda (now Mina) until zero-knowledge proof systems on significantly faster pairing-free (or half-pairing) -cycles were invented. The present work also shows that more of the lollipop curves, recently proposed by Costello and Korpal to replace MNT ones, are now covered by the GLV technique.
Finding a polytope: A practical fault attack against Dilithium
In Dilithium, the rejection sampling step is crucial for the proof of security and correctness of the scheme. However, to our knowledge, no attack in the literature takes advantage of an attacker knowing rejected signatures. The aim of this paper is to create a practical black-box attack against Dilithium with a weakened rejection sampling. We succeed in showing that an adversary with enough rejected signatures can recover Dilithium's secret key in less than half an hour on a desktop computer. This result has one immediate application: by physically preventing one of the rejection sampling tests from happening, we obtain two fault attacks against Dilithium.
AUCIL: An Inclusion List Design for Rational Parties
The decentralized nature of blockchains is touted to provide censorship resistance. However, in reality, the ability of proposers to completely control the contents of a block makes censorship relatively fragile. To combat this, a notion of inclusion lists has been proposed in the blockchain community. This paper presents the first formal study of inclusion lists. Our inclusion list design leverages multiple proposers to propose transactions and improve censorship resistance. The design has two key components. The first component is a utility-maximizing input list creation mechanism that allows rational proposers to achieve a correlated equilibrium while prioritizing high-value transactions. The second component, AUCIL (auction-based inclusion list), is a mechanism for aggregating the input lists from the proposers to output an inclusion list.
On the Average Random Probing Model
We exhibit a gap between the average random probing model, as defined by Dziembowski et al. at Eurocrypt 2015, and the same model, as defined in the recent paper of Brian et al. at Eurocrypt 2024. Whereas any noisy leakage can be tightly reduced to the former, we show in this paper that it cannot be tightly reduced to the latter, unless extra assumptions are made, e.g., that the noisy leakage is deterministic. As a consequence, the reduction from noisy leakages to random probings — without field size loss — remains unproven.
Practical Electromagnetic Fault Injection on Intel Neural Compute Stick 2
Machine learning (ML) has been widely deployed in various applications, with many applications being in critical infrastructures. One recent paradigm is edge ML, an implementation of ML on embedded devices for Internet-of-Things (IoT) applications. In this work, we have conducted a practical experiment on Intel Neural Compute Stick (NCS) 2, an edge ML device, with regard to fault injection (FI) attacks. More precisely, we have employed electromagnetic fault injection (EMFI) on NCS 2 to evaluate the practicality of the attack on a real target device. We have investigated multiple fault parameters with a low-cost pulse generator, aiming to achieve misclassification at the output of the inference. Our experimental results demonstrated the possibility of achieving practical and repeatable misclassifications.
Adaptive Distributional Security: A Framework for Input-Adaptive Cryptography
It is often desirable to break cryptographic primitives into two components: an input-independent offline component, and a cheap online component used when inputs arrive. Security of such online/offline primitives must be proved in the input-adaptive setting: the adversary chooses its input adaptively, based on what it sees in the offline-phase. Proving security in the input-adaptive setting can be difficult, particularly when one wishes to achieve simulation security and avoid idealized objects like a random oracle (RO).
This work proposes a framework for reasoning about input-adaptive primitives: adaptive distributional security (ADS). Roughly, an ADS primitive provides security when it is used with inputs drawn from one of two distributions that are themselves hard to distinguish. ADS is useful as a framework for the following reasons:
- An ADS definition can often circumvent impossibility results imposed on the corresponding simulation-based definition. This allows us to decrease the online-cost of primitives, albeit by using a weaker notion of security.
- With care, one can typically upgrade an ADS-secure object into a simulation-secure object (by increasing cost in the online-phase).
- ADS is robust, in the sense that (1) it enables a form of composition and (2) interesting ADS primitives are highly interconnected in terms of which objects imply which other objects.
- Many useful ADS-secure objects are plausibly secure from straightforward symmetric-key cryptography.
We start by defining the notion of an ADS encryption (ADE) scheme.
A notion of input-adaptive encryption can easily be achieved from an RO, and the ADE definition can be understood as capturing the concrete property provided by the RO that suffices to achieve input-adaptivity. From there, we use ADE to achieve ADS variants of garbled circuits and oblivious transfer; building on these, we achieve simulation-secure garbled circuits, oblivious transfer, and two-party computation, and prove the interconnectedness of these primitives. In sum, this results in a family of objects with extremely cheap online-cost.
Binary Codes for Error Detection and Correction in a Computationally Bounded World
We study error detection and correction in a computationally bounded world, where errors are introduced by an arbitrary adversarial channel. Our focus is on seeded codes, where the encoding and decoding procedures can share a public random seed, but are otherwise deterministic. We can ask for either selective or adaptive security, depending on whether the adversary can choose the message being encoded before or after seeing the seed. For large alphabets, a recent construction achieves essentially optimal rate versus error tolerance trade-offs under minimal assumptions, surpassing information-theoretic limits. However, for the binary alphabet, the only prior improvement over information-theoretic codes relies on non-standard assumptions justified via the random oracle model. We show the following:
Under the learning with errors (LWE) assumption, we construct selectively secure codes over the binary alphabet. For error detection, our codes achieve essentially optimal rate and relative error tolerance . For error correction, they can uniquely correct relative errors with a rate that essentially matches that of the best list-decodable codes with error tolerance . Both cases provide significant improvements over information-theoretic counterparts. The construction relies on a novel form of 2-input correlation intractable hash functions that we construct from LWE.
Assuming the exponential security of a natural collision-resistant hash function candidate based on the ``crypto dark matter'' approach of mixing linear functions over different moduli, we construct adaptively secure codes over the binary alphabet, for both error detection and correction. They achieve essentially the same trade-offs between error tolerance and rate as above, with the caveat that for error-correction they only do so for sufficiently small values of .
Experimentally studying the path-finding problem between conjugates in supersingular isogeny graphs: Optimizing primes and powers to speed up cycle finding
We study the problem of finding a path between conjugate supersingular elliptic curves over for a prime , which is important for cycle finding in supersingular isogeny graphs. We observe that for any given , there is some corresponding to which accelerates conjugate path-finding. Also, time-wise, the most efficient way of surveying the graph is to look steps ahead at once. We outline methods for choosing the next vertex of a pseudo-random walk so as to reach a conjugate vertex faster. We experimentally investigate the paths between Frobenius conjugates for a wide range of small primes . We introduce sets to experimentally learn about the structure of isogeny graphs when short cycles are present.
BulletCT: Towards More Scalable Ring Confidential Transactions With Transparent Setup
RingCT signatures are essential components of Ring Confidential Transaction (RingCT) schemes on blockchain platforms, enabling anonymous transaction spending and significantly impacting the scalability of these schemes. This paper makes two primary contributions:
We provide the first thorough analysis of a recently developed Any-out-of-N proof in the discrete logarithm (DLOG) setting and the associated RingCT scheme, introduced by ZGSX23 (S&P '23). The proof conceals the number of secrets to offer greater anonymity than K-out-of-N proofs and uses an efficient "K-Weight" technique for its construction. However, we identify for the first time several limitations of using Any-out-of-N proofs, such as increased transaction sizes, heightened cryptographic complexity and potential security risks. These limitations prevent them from effectively mitigating the longstanding scalability bottleneck.
We then continue to explore the potential of using K-out-of-N proofs to enhance scalability of RingCT schemes. Our primary innovation is a new DLOG-based RingCT signature that integrates a refined "K-Weight"-based K-out-of-N proof and an entirely new tag proof. The latter is the first to efficiently enable the linkability of RingCT signatures derived from the former, effectively resisting double-spending attacks.
Finally, we identify and patch a linkability flaw in ZGSX23's signature. We benchmark our scheme against this patched one to show that our scheme achieves a boost in scalability, marking a promising step forward.
Asymptotic improvements to provable algorithms for the code equivalence problem
We present several new provable algorithms for two variants of the code equivalence problem on linear error-correcting codes, the Linear Code Equivalence Problem (LCE) and the Permutation Code Equivalence Problem (PCE). Specifically, for arbitrary codes of block length and dimension over any finite field , we show:
1) A deterministic algorithm running in time for LCE.
2) A randomized algorithm running in time for LCE and PCE.
3) A quantum algorithm running in time for LCE and PCE.
The first algorithm complements the deterministic roughly -time algorithm of Babai (SODA 2011) for PCE.
The latter two algorithms improve on recent work of Nowakowski (PQCrypto 2025), which gave algorithms with similar running times, but only for code equivalence on \emph{random} codes and only over fields of order .
Computing Quaternion Embeddings and Endomorphism Rings of Supersingular Oriented Elliptic Curves
In this paper, we investigate several computational problems motivated by post-quantum cryptosystems based on isogenies and ideal class group actions on oriented elliptic curves. Our main technical contribution is an efficient algorithm for embedding the ring of integers of an imaginary quadratic field into some maximal order of the quaternion algebra ramified at a prime and infinity. Assuming the Generalized Riemann Hypothesis (GRH), our algorithm runs in probabilistic polynomial time, improving upon previous results that relied on heuristics or required the factorization of . Notably, this algorithm may be of independent interest.
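To fix the standard setting in notation (our paraphrase of well-known facts, not a new claim of the paper):

```latex
% The standard setting (our notation, for orientation only).
% B_{p,\infty} denotes the quaternion algebra over \mathbb{Q} ramified
% exactly at p and \infty; the embedding algorithm seeks a ring
% homomorphism
\[
  \iota \colon \mathcal{O}_K \hookrightarrow \mathcal{O} \subset B_{p,\infty},
\]
% where \mathcal{O}_K is the ring of integers of an imaginary quadratic
% field K and \mathcal{O} is some maximal order.  Under the Deuring
% correspondence, such embeddings mirror orientations of supersingular
% elliptic curves by \mathcal{O}_K.
```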
Our approach enhances the work of Love and Boneh on computing isogenies between -small elliptic curves by eliminating heuristics and improving computational efficiency. Furthermore, given a quadratic order in , we show that our algorithm reduces the computational endomorphism ring problem of -oriented elliptic curves to the Vectorization problem in probabilistic polynomial time, assuming the conductor of can be efficiently factorized. Previously, the best known result required the full factorization of , which may be exponentially large.
Additionally, when the conductor of can be efficiently factorized, we establish a polynomial-time equivalence between the Quaternion Order Embedding Problem, which asks to embed a quadratic order into a maximal order in , and computing horizontal isogenies between -oriented elliptic curves. Leveraging this reduction, we propose a rigorous algorithm, under GRH, that solves the quaternion order embedding problem in time , improving upon previous methods that required higher asymptotic time and relied on several heuristics.
AutoDiVer: Automatically Verifying Differential Characteristics and Learning Key Conditions
Differential cryptanalysis is one of the main methods of cryptanalysis and has been applied to a wide range of ciphers. While it is very successful, it also relies on certain assumptions that do not necessarily hold in practice. One of these is the hypothesis of stochastic equivalence, which states that the probability of a differential characteristic behaves similarly for all keys. Several works have demonstrated examples where this hypothesis is violated, impacting the attack complexity and sometimes even invalidating the investigated prior attacks. Nevertheless, the hypothesis is still typically taken for granted. In this work, we propose AutoDiVer, an automatic tool for thoroughly verifying differential characteristics. First, the tool supports calculating the expected probability of differential characteristics while considering the key schedule of the cipher. Second, the tool supports estimating the size of the space of keys for which the characteristic permits valid pairs, and deducing conditions for these keys. AutoDiVer implements a custom SAT modeling approach and takes advantage of a combination of features of advanced SAT solvers, including approximate model counting and clause learning. To show applicability to many different kinds of block ciphers like strongly aligned, weakly aligned, and ARX ciphers, we apply AutoDiVer to GIFT, PRESENT, RECTANGLE, SKINNY, WARP, SPECK, and SPEEDY.
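As a toy illustration of key-dependent characteristic verification (our example on a two-round mini-cipher built from the well-known 4-bit PRESENT S-box; AutoDiVer itself targets real ciphers via SAT solving), the following Python snippet counts, per key, the ordered pairs that follow a fixed two-round characteristic and prints the distribution of that count over all keys. Under the hypothesis of stochastic equivalence the counts should cluster around their average.

```python
# Toy experiment (ours, far simpler than AutoDiVer's SAT-based approach):
# count, per key, the pairs following a fixed differential characteristic
# through a two-round mini-cipher x -> S(S(x ^ k0) ^ k1).
from collections import Counter

SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]  # PRESENT

def pairs_following(a, b, c, k0, k1):
    """Ordered pairs (x, x^a) with difference b after round 1 and c after round 2."""
    count = 0
    for x in range(16):
        u, v = SBOX[x ^ k0], SBOX[(x ^ a) ^ k0]
        if u ^ v != b:
            continue
        if SBOX[u ^ k1] ^ SBOX[v ^ k1] == c:
            count += 1
    return count

# Characteristic 7 -> 1 -> 3: both single-round DDT entries equal 4/16, so
# under stochastic equivalence we expect 16 * (4/16) * (4/16) = 1 valid
# ordered pair per key on average.
a, b, c = 0x7, 0x1, 0x3
hist = Counter(pairs_following(a, b, c, k0, k1)
               for k0 in range(16) for k1 in range(16))
print(sorted(hist.items()))   # (number of valid pairs, number of keys)
```

The spread of this histogram around the average is precisely the kind of key-dependence that AutoDiVer quantifies, alongside deducing the key conditions that cause it.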
NodeChain: Cheap Data Integrity Without Consensus
Blockchains enable decentralised applications that withstand Byzantine failures and do not need a central authority. Unfortunately, their massive replication requirements preclude their use on constrained devices.
We propose a novel blockchain-based data structure which forgoes replication without affecting the append-only nature of blockchains, making it suitable for maintaining data integrity over networks of storage-constrained devices. Our solution does not provide consensus, which is not required by our motivating application, namely securely storing sensor data of containers on cargo ships.
We elucidate the practical promise of our technique by following a multi-faceted approach: we (i) formally prove the security of our protocol in the Universal Composability (UC) setting, (ii) provide a small-scale proof-of-concept implementation, (iii) run a performance simulation for large-scale deployments, which showcases a reduction in storage of more than x compared to traditional blockchains, and (iv) run a resilience simulation that predicts the practical effects of network jamming attacks.