Papers updated in the last 31 days (Page 5 of 462 results)
Improved Subfield Curve Search For Specific Field Characteristics
Isogeny-based cryptography bases its security on the hardness of the supersingular isogeny problem: finding an isogeny between two supersingular curves defined over a quadratic extension field.
The Delfs-Galbraith algorithm is the most efficient procedure for solving the supersingular isogeny problem, with a time complexity of $\tilde{O}(\sqrt{p})$ operations. Its bottleneck is the so-called subfield curve search (i.e., finding an isogenous supersingular elliptic curve defined over the prime field), which determines the overall time complexity.
For efficiency, most recent isogeny-based constructions propose using finite fields whose characteristic has a special form determined by small positive integer parameters.
This work focuses on primes of that particular form and presents two new algorithms for finding subfield curves with improved time complexity and polynomial memory complexity. These algorithms exploit the existence of torsion points of large smooth order and extend the subfield root detection algorithm of Corte-Real Santos, Costello, and Shi (CRYPTO 2022) to a projective setting in which one can work with the curve coefficients instead of the explicit j-invariants of the curves.
These algorithms also extend easily to closely related families of primes.
This study also examines the use of radical isogenies in combination with the proposed extended subfield root detection algorithm. In this context, the results indicate that the radical-isogeny approach is competitive with the state-of-the-art algorithms.
Public-key Cryptography Attacks Using Adiabatic Quantum Computer
We explore the application of the QUBO and Ising models to the integer factorization problem with implications for the security of public-key algorithms such as RSA. A key contribution is a program that applies existing algorithms to parameterize and simulate integer factorization through an Ising model in order to replicate previous works. Due to limited access to quantum hardware, we use classical heuristic methods to approximate solutions.
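As a concrete illustration of the encoding (a minimal sketch, not the paper's program): factoring N = pq is phrased as minimizing the cost (N - pq)^2 over the bits of p and q; quadratizing that polynomial yields the QUBO/Ising instance, and here a brute-force loop stands in for the annealer. The bit-widths and variable layout are illustrative assumptions.

```python
# Toy QUBO-style encoding of factorization: minimize energy = (N - p*q)^2 over
# the unknown bits of p and q. Exhaustive search stands in for a quantum
# annealer / classical heuristic solver.
import itertools

def factor_by_energy_minimization(N, bits=4):
    best = None
    for xs in itertools.product([0, 1], repeat=bits):
        for ys in itertools.product([0, 1], repeat=bits):
            # odd candidate factors: least-significant bit fixed to 1
            p = 1 + sum(b << (i + 1) for i, b in enumerate(xs))
            q = 1 + sum(b << (j + 1) for j, b in enumerate(ys))
            energy = (N - p * q) ** 2   # objective the annealer would minimize
            if best is None or energy < best[0]:
                best = (energy, p, q)
    return best

print(factor_by_energy_minimization(143))  # zero energy reveals 11 * 13
```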
How Fast Does the Inverse Walk Approximate a Random Permutation?
For a finite field $\mathbb{F}$ of size $q$, the (patched) inverse permutation maps $x$ to its inverse $x^{-1}$ over $\mathbb{F}$ when $x \neq 0$ and outputs $0$ when $x = 0$, and the (AddRoundKey) permutation adds a fixed constant $c$ to its input, i.e., maps $x$ to $x + c$.
We study the process of alternately applying the patched inverse permutation followed by a random linear permutation, which is a random walk over the alternating (or symmetric) group that we call the {\em inverse walk}.
We show both lower and upper bounds on the number of rounds it takes for this process to approximate a random permutation over $\mathbb{F}$. We show that a suitable number of rounds of the inverse walk over the field of size $q$ generates a permutation that is $\varepsilon$-close (in variation distance) to a uniformly random even permutation (i.e., a permutation from the alternating group $A_q$). Our result comes with explicit constants and is tight, up to logarithmic factors.
Our result answers an open question from the work of Liu, Pelecanos, Tessaro, and Vaikuntanathan (CRYPTO 2023) by proving the $t$-wise independence of (a variant of) AES for $t$ up to the square root of the field size, compared to the original result, which held only for much smaller $t$. It also constitutes a significant improvement on a result of Carlitz (Proc. American Mathematical Society, 1953), who showed a {\em reachability result}: namely, that every even permutation can be generated {\em eventually} by composing the patched inverse and linear permutations. We show a {\em tight convergence result}, namely a tight quantitative bound on the number of rounds needed to reach a random (even) permutation.
Our work brings to the forefront the view of block ciphers as random walks and uses an entirely new set of (functional analytic) tools to study their pseudorandomness, both of which we hope will prove useful in the study of block ciphers.
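To make the random-walk view concrete, here is a toy Python sketch (our illustration, not the paper's setting: the paper works over binary fields $\mathbb{F}_{2^n}$ with $\mathbb{F}_2$-linear maps, where convergence is to a random even permutation). Over a tiny prime field we can track the walk's exact distribution and watch its total-variation distance to uniform shrink round by round.

```python
# Exact simulation of the "inverse walk" over F_5: each round composes the
# patched inverse with a uniformly random affine map, and we track the full
# probability distribution over all permutations of the field.
from itertools import permutations

p = 5
S = tuple(pow(x, p - 2, p) for x in range(p))           # patched inverse: 0 -> 0
round_maps = [tuple((a * S[x] + b) % p for x in range(p))
              for a in range(1, p) for b in range(p)]   # affine o inverse

perms = {pi: i for i, pi in enumerate(permutations(range(p)))}
dist = [0.0] * len(perms)
dist[perms[tuple(range(p))]] = 1.0                      # start at the identity

for r in range(1, 8):
    new = [0.0] * len(perms)
    for pi, i in perms.items():
        if dist[i] == 0.0:
            continue
        for R in round_maps:                            # pi' = R o pi, R uniform
            new[perms[tuple(R[x] for x in pi)]] += dist[i] / len(round_maps)
    dist = new
    tv = 0.5 * sum(abs(q - 1.0 / len(perms)) for q in dist)
    print(f"round {r}: TV distance to uniform on S_{p} = {tv:.4f}")
```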
Quantum Periodic Distinguisher Construction: Symbolization Method and Automated Tool
As one of the most famous quantum algorithms, Simon's algorithm enables the efficient derivation of the period of periodic functions in polynomial time. However, the complexity of constructing periodic functions has hindered the widespread application of Simon's algorithm in symmetric-key cryptanalysis. Currently, aside from the exhaustive search-based testing method introduced by Canale et al. at CRYPTO 2022, there is no unified model for effectively searching for periodic distinguishers. Although Xiang et al. established a link between periodic functions and truncated differential theory at ToSC 2024, their approach lacks the ability to construct periods using unknown differentials and does not provide automated tools. This limitation underscores the inadequacy of existing methods in identifying periodic distinguishers for complex structures. In this paper, we address the challenge of advancing periodic distinguishers for symmetric-key ciphers. First, we propose a more generalized method for constructing periodic distinguishers, addressing the limitations of Xiang et al.'s theory in handling unknown differences. We further extend our method to probabilistic periodic distinguishers, thereby generalizing the separability property proposed by Hodžić et al. in 2020. As a result, our method can cover a wider range of periodic distinguishers. Second, we introduce a novel symbolic representation to simplify the search for periodic distinguishers. Based on this representation, we propose the first fully automated SMT-based search model, which efficiently addresses the challenges of manual searching in complex structures. Finally, we extend the model to SPN structures based on our new method. Our model has broad applicability through significant advancements in analyzing generalized Feistel structures (GFSs) and SPN-based ciphers. We have achieved new quantum distinguishers with the following round configurations: 10 rounds for GFS-4F, 10 rounds for LBlock, 10 rounds for TWINE, and 16 rounds for Skipjack-B, improving the previous best results by 1, 2, 2, and 3 rounds, respectively. In the domain of SPN-based ciphers, our model has enabled the identification of novel periodic distinguishers, including the first 9-round distinguisher for SKINNY and the first 12-round distinguisher for CRAFT. These achievements lay the foundation for quantum cryptanalysis of SPN-based ciphers using Simon's algorithm.
Data Availability for Thousands of Nodes
Scalable data availability (DA) is essential for high-throughput, decentralized blockchains, enabling lightweight nodes to verify block availability without incurring the prohibitive costs of full data replication.
Reed-Solomon (RS) code commitment schemes underpin modern DA protocols by ensuring that dispersed data fragments can be verified as part of a valid codeword, even in the presence of malicious block producers.
However, state-of-the-art schemes such as FRIDA (Crypto'24), while computationally efficient, incur substantial per-node communication overhead at the scale of thousands of network nodes, often 5.7× the size of the actual data fragment.
In this work, we introduce CONDA, a new interleaved RS code commitment scheme that significantly reduces the communication overhead while retaining FRIDA's prover efficiency.
At its core is a novel evaluation consolidation technique for polynomial commitment schemes (PCS) that reduces the problem of proving evaluations at fixed points (one per verifier) to a single evaluation at a random point, using logarithmic communication.
This technique is lightweight, hash-based, and compatible with any multilinear PCS.
To further optimize for DA applications, we introduce LightLigero, a new multilinear PCS that improves upon DeepFold (Sec'25), with a reduction in proof size and only a modest slowdown in prover time.
Combining CONDA and LightLigero yields an efficient DA scheme for thousands of nodes.
Our implementation demonstrates a 4× reduction in communication cost compared to FRIDA, while incurring only a 25% increase in prover time.
It also achieves near-best prover time and near-best communication cost simultaneously among all code commitment schemes.
CONDA also offers smaller proofs and faster provers than state-of-the-art verifiable secret sharing constructions such as ZXH+22 (Sec'22) and PolyFRIM (Sec'24).
DeepFold: Efficient Multilinear Polynomial Commitment from Reed-Solomon Code and Its Application to Zero-knowledge Proofs
This work presents DeepFold, a novel multilinear polynomial commitment scheme (PCS) based on Reed-Solomon codes that offers optimal prover time and a more concise proof size. For the first time, DeepFold adapts the FRI-based multilinear PCS to the list decoding radius setting, requiring significantly fewer query repetitions and thereby achieving a 3× reduction in proof size compared to Basefold (Crypto'24), while preserving its advantages in prover time. Compared with PolyFRIM (USENIX Security'24), DeepFold achieves a 2× improvement in prover time, verifier time, and proof size. Another contribution of this work is a batch evaluation scheme, which enables the FRI-based multilinear PCS to handle polynomials whose size is not a power of two more efficiently.
Our scheme has broad applications in zk-SNARKs, since a PCS is a key component of modern zk-SNARK constructions. For example, when replacing the PCS component of Virgo (S&P'20) with DeepFold, our scheme achieves a 2.5× faster prover time when proving knowledge of a Merkle tree with 256 leaves, while maintaining a similar proof size. When replacing the PCS component of HyperPlonk (Eurocrypt'23) with DeepFold, our scheme has about 3.6× faster prover time. Additionally, when applying our arbitrary-length input commitment to verifiable matrix multiplications for matrices of size 1200×768 and 768×2304, which are actual use cases in the GPT-2 model, the performance showcases a 2.4× reduction in prover time compared to previous approaches.
Fheanor: a new, modular FHE library for designing and optimising schemes
Implementations of modern FHE schemes are available in various highly-optimized libraries. Many of these libraries are designed to allow developers who may not have deep expertise in FHE to build fast and secure privacy-preserving applications. To support such users, the API of these libraries often hides the internals of the schemes in question from the user. However, this design choice makes it hard for users of these libraries to modify existing schemes, or implement new ones; work that is often valuable when conducting research on FHE schemes.
We present our new Rust library Fheanor, which aims to facilitate such research on FHE schemes. The core target user is an FHE researcher, rather than an application developer. Most importantly, the design of Fheanor is very modular, and mirrors the mathematical structure of the available FHE schemes. By exposing the mathematical structure, but still hiding implementation details, it is easy to modify or extend the functionality of FHE schemes implemented in the library and still preserve high performance. Indeed, Fheanor demonstrates performance that is close to that of HElib or SEAL, with the potential for optimizations in the future.
Fheanor implements several features that have not, or have only rarely, been implemented in previous libraries. These include non-power-of-two cyclotomic rings, single-RNS based ring arithmetic, the CLPX/GBFV scheme, and bootstrapping for BFV and BGV.
In addition, this paper presents new theoretical contributions that are also implemented in Fheanor. The first is an extension of optimal digit extraction circuits, used in BFV/BGV bootstrapping, to the case 2^23. The second is a more efficient algorithm for computing the trace in the non-power-of-two cyclotomic setting.
Fly Away: Lifting Fault Security through Canaries and the Uniform Random Fault Model
Cryptographic implementations are vulnerable to active physical attacks where adversaries inject faults to extract sensitive information. Existing fault models, such as the threshold and random fault models, assume limitations on the amount or probability of injecting faults. Such models, however, insufficiently address the case of practical fault injection methods capable of faulting a large proportion of the wires in a circuit with high probability. Prior works have shown that this insufficiency can lead to concrete key recovery attacks against implementations proven secure in these models. We address this blind spot by introducing the uniform random fault model, which relaxes assumptions on the amount/probability of faults and instead assumes a uniform probabilistic faulting of all wires in a circuit or region. We then show that security in this new model can be reduced to security in the random fault model by inserting canaries in the circuit to ensure secret-independent fault detection. We prove that combining canaries with a more classical fault countermeasure such as redundancy can lead to exponential fault security in the uniform random fault model at a polynomial cost in circuit size in the security parameter. Finally, we discuss the interactions between our work and the practical engineering challenges of fault security, shedding light on how the combination of state-of-the-art countermeasures may protect against injections of many high probability faults, while opening a path to methodologies that formally analyze the guarantees provided by such countermeasures.
Critical Rounds in Multi-Round Proofs: Proof of Partial Knowledge, Trapdoor Commitments, and Advanced Signatures
Zero-knowledge simulators and witness extractors, initially developed for proving the security of proof systems, have turned out to be useful also in constructing advanced protocols from simple three-move interactive proofs. However, in the context of multi-round public-coin protocols, the interfaces of these auxiliary algorithms become more complex, introducing a range of technical challenges that hinder the generalization of these constructions.
We introduce a framework to enhance the usability of zero-knowledge simulators and witness extractors in multi-round argument systems for protocol designs. Critical-round zero-knowledge relies on the ability to perform complete zero-knowledge simulations by knowing the challenge of just one specific round in advance. Critical-round special soundness aims to address a stringent condition for witness extraction by formalizing it to operate with a smaller tree of transcripts than the one needed for extended extraction, which either outputs the intended witness or solves the underlying hard problem in an argument system. We show that these notions are satisfied by diverse protocols based on MPC-in-the-Head, interactive oracle proofs, and split-and-fold arguments.
We demonstrate the usefulness of the critical round framework by constructing proofs of partial knowledge (Cramer, Damgård, and Schoenmakers, CRYPTO'94) and trapdoor commitments (Damgård, CRYPTO'89) from critical-round multi-round proofs. Furthermore, our results imply advancements in post-quantum secure adaptor signatures and threshold ring signatures based on MPC-in-the-Head, eliminating the need for (costly) generic NP reductions.
EndGame: Field-Agnostic Succinct Blockchain with Arc
We present EndGame, a novel blockchain architecture that achieves succinctness through Reed-Solomon accumulation schemes. Our construction enables constant-time verification of blockchain state while maintaining strong security properties. We demonstrate how to efficiently encode blockchain state transitions using Reed-Solomon codes and accumulate proofs of state validity using the ARC framework. Our protocol achieves optimal light client verification costs and supports efficient state management without trusted setup.
MOCHA: Mixnet Optimization Considering Honest Client Anonymity
Mix networks (mixnets) safeguard client anonymity by forwarding traffic through multiple intermediary nodes (mixnodes), which reorder and delay messages to obscure communication patterns against a global passive adversary capable of monitoring all network transmissions.
The anonymity provided by mixnets is usually assessed with a discrete-event simulator, gauging a target message's indistinguishability among output messages. While useful for comparative analysis, this approach only approximates the mixnet's anonymity potential. Hence, this paper sheds light on the necessity of considering the client (originator of messages) itself to gauge anonymity accurately. We further provide an algorithm (simulator) to simulate client anonymity for Loopix mixnets. We conduct experiments to optimize general Loopix mixnet parameters, considering both message and client anonymity. Our findings indicate that message anonymity often provides an upper bound and can yield misleading results for mixnet optimization, underscoring the importance of client anonymity. Additionally, we explore scenarios where client anonymity is significantly compromised due to an insufficient number of clients. To address these cases, we propose a multimixing strategy that enhances client anonymity by effectively merging varied traffic types with different mixing characteristics.
Secure Rate-Distortion-Perception Trade-off Over Channels: A Randomized Distributed Function Computation (RDFC) Application
Secure rate-distortion-perception (RDP) trade-offs arise in critical applications, such as semantic compression and privacy-preserving generative coding, where preserving perceptual quality while minimizing distortion is vital. This paper studies a framework for secure RDP over noiseless and noisy broadcast channels under strong secrecy constraints. We first characterize the exact secure RDP region for noiseless transmission channels. We then develop an inner bound on the secure RDP region for a memoryless broadcast channel with correlated noise components at the receivers' observations and prove its tightness under a more capable broadcast channel assumption. Our results demonstrate how optimized binning schemes simultaneously achieve high perceptual quality, low distortion, and strong secrecy, illuminating fundamental information-theoretic limits for next-generation trustworthy computation systems.
Zeroed Out: Cryptanalysis of Weak PRFs in Alternating Moduli
The growing adoption of secure multi-party computation (MPC) has driven the development of efficient symmetric key primitives tailored for MPC. Recent advances, such as the alternating moduli paradigm, have shown promise but leave room for cryptographic and practical improvements. In this paper, we analyze a family of weak pseudorandom functions (wPRF) proposed at Crypto 2024, focusing on their One-to-One parameter sets. We demonstrate that these configurations fail to achieve their intended one-to-one mappings and exploit this observation to develop an efficient key recovery attack.
Our analysis reveals critical vulnerabilities, reducing the complexity of key recovery substantially below the claimed security level for both the Standard One-to-One wPRF and the Reversed Moduli variant. We validate our findings through experimental evaluation, confirming alignment between predicted and observed attack complexities.
sPAR: (Somewhat) Practical Anonymous Router
Anonymous communication is one of the fundamental tools to achieve privacy for communication over the internet. Almost all existing design strategies (e.g., onion routing/Tor, mixnets) for anonymous communication rely on the existence of some honest server/router in the network infrastructure to provide anonymity. A recent seminal work by Shi and Wu (Eurocrypt 2021) proposes the first cryptographic design for a non-interactive anonymous router (NIAR) that can use a single untrusted server or router to permute a set of messages without revealing the permutation to the untrusted router. This work is an important step towards showing the possibility of designing such a protocol from standard cryptographic assumptions. However, their construction is only of theoretical nature and leaves many open questions towards realizing such systems in practice: (1) the cryptographic building blocks (multi-client functional encryption, correlated pseudorandom functions) used in their design are difficult to implement in practice. (2) Their setup phase takes the permutation as an input to generate the encryption/decryption keys, which means that the messages from the same sender in different rounds will be at the same position in the output vector, unless the setup phase is run before every round with a new permutation. (3) It is not known how to realize such a setup procedure, which initializes a random permutation obliviously, without any trusted entities in the system.
In this paper, we propose the first (somewhat) practical design, which we call sPAR, that solves the above problems using homomorphic encryption techniques. Our design also relies on a one-time setup phase; however, the setup phase does not take any specific permutation as input. Instead, our design generates a fresh permutation for every round based on random values generated locally by the clients. Already existing practical instantiations of fully homomorphic encryption (FHE) schemes make our design implementable and deployable in practice. Our design presents a new direction for designing anonymous communication systems. Unlike some existing systems like Tor, sPAR does not scale to millions of users; however, we demonstrate with a proof-of-concept implementation that sPAR can support around a hundred users with a few seconds of latency per message.
On the Provable Dual Attack for LWE by Modulus Switching
As a theoretical cornerstone of post-quantum cryptography, the Learning With Errors (LWE) problem serves as the security foundation for standardized algorithms such as Kyber and Dilithium. Recently, a framework for provable dual attacks on LWE has been proposed by Pouly et al. in Eurocrypt 2024, addressing the limitations in effectiveness caused by existing methods' reliance on heuristic assumptions in LWE dual attacks. Their paper also poses an open problem on how to formally integrate modulus switching into this framework to reduce attack costs. The main purpose of this paper is to give a solution to this open problem by presenting an improved provable dual attack method that incorporates modulus switching and Chinese Remainder Theorem (CRT) techniques. First, we design a modulus switching mechanism that eliminates practical errors via the Poisson summation formula. By embedding the inherent noise from modulus switching into a rational lattice framework, our approach effectively prevents the risk of attack failure caused by the merging of such errors with the LWE noise. Theoretical guarantees (Theorems 4 and 5) rigorously quantify the parameter ranges for successful attacks. Second, we introduce a CRT-based secret recovery method that aggregates partial secrets from independent sub-attacks. By leveraging the Chinese Remainder Theorem to reconstruct full secrets from congruence relations, our method adapts to arbitrary secret distributions. Furthermore, by using a tighter variant of Banaszczyk's measure inequality, we obtain a precise parameter range for the dual attack's efficacy through rigorous mathematical proof, and achieve the same complementary gap with the contradictory regime (proposed by Ducas et al.) as in Pouly et al.'s work. Experiments show a 15-29 bit improvement in attack estimation compared to the original framework.
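As an illustration of the CRT aggregation step (a minimal sketch with hypothetical values, not the paper's code): each independent sub-attack recovers the secret modulo a small modulus, and the full value is rebuilt from the congruences, assuming pairwise-coprime moduli whose product exceeds the secret's range.

```python
# Chinese Remainder Theorem reconstruction of a secret from per-modulus residues.
from math import prod

def crt(residues, moduli):
    # Rebuild x mod (m_1 * ... * m_k) from x mod m_i, for pairwise-coprime m_i.
    M = prod(moduli)
    return sum(r * (M // m) * pow(M // m, -1, m)
               for r, m in zip(residues, moduli)) % M

secret = 1234                             # toy stand-in for a secret value
moduli = [7, 11, 17]                      # pairwise coprime, product 1309 > 1234
residues = [secret % m for m in moduli]   # what the independent sub-attacks yield
assert crt(residues, moduli) == secret
```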
A Framework for Group Action-Based Multi-Signatures and Applications to LESS, MEDS, and ALTEQ
A multi-signature scheme allows a list of signers to sign a common message. Multi-signatures are widely used in scenarios where the same message must be signed and transmitted by multiple users; instead of concatenating individual signatures, employing a multi-signature reduces the amount of data to be sent.
In recent years there have been numerous practical proposals in the discrete logarithm setting, such as MuSig2 (CRYPTO'21) for the Schnorr signature. Recently, these attempts have been extended to post-quantum assumptions, with lattice-based proposals such as MuSig-L (CRYPTO'22).
Given the growth of group action-based signatures, a natural question is whether a multi-signature can be built on the same models. In this work, we present the first construction of such a primitive relying on group action assumptions. We obtain a 3-round scheme achieving concurrent security in the ROM.
Moreover, we instantiate it using the three candidates in NIST's additional post-quantum call, namely LESS, MEDS, and ALTEQ, obtaining a good compression rate for different parameter sets.
Encrypted Matrix-Vector Products from Secret Dual Codes
Motivated by applications to efficient secure computation, we consider the following problem of encrypted matrix-vector product (EMVP). Let $\mathbb{F}$ be a finite field. In an offline phase, a client uploads an encryption of a matrix $A$ to a server, keeping only a short secret key. The server stores the encrypted matrix.
In the online phase, the client may repeatedly send encryptions of query vectors $x$, which enable the client and the server to locally compute compact shares of the matrix-vector product $Ax$. The server learns nothing about $A$ or $x$.
The shared output can either be revealed to the client or processed by another protocol.
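For concreteness, the functionality EMVP realizes (not the LPN/LSN protocol itself) looks as follows in Python; a trusted dealer stands in for the cryptography, purely to fix the interfaces and the sharing invariant.

```python
# Target functionality of EMVP: client and server end up with additive shares
# of y = A x over a finite field F_q. A dealer replaces the actual protocol.
import random

q = 257  # small prime field, illustrative only

def share_matvec(A, x):
    y = [sum(a * b for a, b in zip(row, x)) % q for row in A]
    server_share = [random.randrange(q) for _ in y]            # uniform mask
    client_share = [(yi - si) % q for yi, si in zip(y, server_share)]
    return client_share, server_share  # sum of shares == A x (mod q)

A = [[3, 1], [4, 1], [5, 9]]
x = [2, 6]
c, s = share_matvec(A, x)
print([(ci + si) % q for ci, si in zip(c, s)])  # A x mod q = [12, 14, 64]
```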
We present efficient EMVP protocols based on variants of the learning parity with noise (LPN) assumption and the related learning subspace with noise (LSN) assumption.
Our EMVP protocols are field-agnostic in the sense that the parties only perform arithmetic operations over $\mathbb{F}$, and are close to optimal with respect to both communication and computation. In fact, for sufficiently large matrix dimensions (typically a few hundred), the online computation and communication costs of our LSN-based EMVP can be \emph{less than twice} the costs of computing $Ax$ in the clear.
Combined with suitable secure post-processing protocols on the secret-shared output, our EMVP protocols are useful for a variety of secure computation tasks, including encrypted fuzzy search and secure ML.
Our technical approach builds on recent techniques for private information retrieval in the secret-key setting. The core idea is to encode the matrix and the queries using a pair of secret dual linear codes, while defeating algebraic attacks by adding noise.
Classify Directly: A Dynamic Time SPA Classification Method Based on DTW
Side-channel analysis remains a critical threat to public-key cryptographic implementations. Simple Power Analysis (SPA) techniques can extract secret keys from a single power trace, often using clustering-based classification methods. However, traces captured in real-world environments often suffer from misalignment and variable trace lengths due to unstable clocks and random delays. As a result, clustering methods require alignment methods that may alter the original information of the traces. To address this problem, this work proposes Dynamic Time Classification (DTC) as an alternative approach for classifying cryptographic operations in SPA, based on Dynamic Time Warping (DTW). Unlike clustering methods, DTC inherently compares power traces without requiring fixed-length segments, which greatly improves its adaptability to unequal traces and thus allows different operations to be classified relatively stably. Experimental results on public-key cryptographic algorithms and post-quantum algorithm implementations show that DTC is no less accurate than clustering methods and is more robust to timing variations. This work also systematically categorizes the features of different operations and explores the effects of different SPA methods on each type of feature. It further conducts experiments with and without random delays for different categories, compares the experimental accuracy of different alignment methods, and discusses the feasibility of DTW as a preprocessing method.
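A minimal sketch in the spirit of DTC (our illustration, not the paper's implementation): DTW compares traces of unequal length directly, so a nearest-template classifier needs no prior re-alignment or fixed-length segmentation. The template traces and labels below are hypothetical.

```python
# DTW distance via the standard dynamic program, plus 1-nearest-template
# classification of a variable-length power trace.
def dtw(a, b):
    INF = float("inf")
    D = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    D[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of insertion, deletion, match
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[len(a)][len(b)]

def classify(trace, templates):
    # templates: {label: reference trace}, e.g. point doubling vs. addition
    return min(templates, key=lambda label: dtw(trace, templates[label]))

templates = {"double": [0, 1, 3, 1, 0], "add": [0, 2, 5, 5, 2, 0]}
print(classify([0, 1, 2, 1, 0, 0], templates))   # -> "double"
```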
Testing the Tests - Opportunities for Corrections and Improvements in NIST SP 800-22r1a and its Reference Code
During an analysis of the NIST SP 800-22r1a document, which provides a test suite to validate random number generators and their reference implementation, various issues were identified, including imprecise probability constants, erroneous example calculations, and discrepancies within test descriptions.
Here, we provide a technical analysis of the Statistical Test Suite, documenting inconsistencies and deficiencies in both the theoretical specification and the official C reference implementation.
The analysis also reveals concrete implementation bugs and structural limitations in the reference codebase.
Rather than revising any of the statistical tests, this work documents these flaws to support the ongoing revision process of the standard and to encourage more reliable and maintainable implementations.
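For concreteness, the suite's first test, the Frequency (Monobit) Test of SP 800-22 §2.1, is the kind of routine whose constants and reference code the analysis scrutinizes:

```python
# Monobit frequency test from NIST SP 800-22 section 2.1:
# p_value = erfc(|S_n| / sqrt(2n)), where S_n is the +1/-1 partial sum.
from math import erfc, sqrt

def monobit_p_value(bits):
    s = sum(1 if b else -1 for b in bits)       # ones minus zeros
    return erfc(abs(s) / sqrt(2 * len(bits)))   # p >= 0.01 => pass at the 1% level

print(monobit_p_value([1, 0, 1, 1, 0, 1, 0, 1, 0, 1]))
```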
Scrutinizing the Security of AES-based Hashing and One-way Functions
AES has cemented its position as the primary symmetric-key primitive for a wide range of cryptographic applications, which motivates the analysis on the concrete security of AES in practical instantiations, for instance, the collision resistance of AES-based hashing, the key commitment security of AES-based authenticated encryption schemes, and the one-wayness of AES-based one-way functions in ZK and MPC protocols. In this work, we introduce single-color initial structures (SCIS) into meet-in-the-middle (MITM) attacks to address neutral word generation, a critical bottleneck in MITM collision attacks. The SCIS technique leverages new structural insights to enable efficient neutral word generation and leads to multiple improved results on AES compared to the state-of-the-art. In particular, we present the first classical one-block collision attack on 7-round AES-MMO/MP, marking the first advancement in the number of attacked rounds in over a decade and matching the best-known results in the quantum setting, as well as the first one-block collision attack on 4-round AES-128-DM, bridging the gap highlighted by Taiyama \textit{et al.} at Asiacrypt 2024 from a non-differential-based approach. Additionally, we provide a comprehensive list of new results on the security margins of AES-192, AES-256, Rijndael-192, and Rijndael-256 in multiple attack settings.
Posterior Security: Anonymity and Message Hiding of Standard Signatures
We introduce posterior security of digital signatures: additional security features applied after the original signature is generated. It is motivated by the scenario in which signers store their secret keys in secure hardware and can only obtain a standard signature through a standardized interface. In this paper, we consider two posterior security features: anonymity and message hiding.
We first introduce the incognito signature, a new mechanism to anonymize a standard signature. Unlike ring or group signatures, the signer generates a standard (non-anonymous) signature first. The signature is then anonymized by a converter before being sent to the verifier, by hiding the signer's public key within a set of decoy public keys. We then introduce the concealed signature, which hides the message in a commitment. The standard signature is converted such that it can be verified against the commitment. The models of posterior anonymity and posterior message hiding capture the separation of the signer and the converter: anonymity or message hiding is provided by the converter after the creation of a standard signature by the signer.
We give generic constructions of incognito and concealed signatures, applicable to standard signatures such as Schnorr. This yields the first practical anonymized ECDSA signature, with signature size logarithmic in the number of decoy public keys. The existing ring signature scheme with ECDSA keys is at least 152 times longer than our scheme for the same number of decoys.
The incognito signature and concealed signature can be composed to provide posterior anonymity and message hiding. It is useful in applications like two-tier central bank digital currency, where users want to hide their addresses (public keys) and transaction amounts (messages) when the payment is settled in the interbank layer.
Towards ML-KEM & ML-DSA on OpenTitan
This paper presents extensions to the OpenTitan hardware root of trust that aim at enabling high-performance lattice-based cryptography.
We start by carefully optimizing ML-KEM and ML-DSA --- the two algorithms primarily recommended and standardized by NIST --- in software targeting the OTBN accelerator.
Based on profiling results of these implementations, we propose tightly integrated extensions to OTBN, specifically an interface from OTBN to OpenTitan's Keccak accelerator (KMAC core) and extensions to the OTBN ISA to support operations on 256-bit vectors.
We implement these extensions in hardware and show that we achieve a speedup by a factor between 6 and 9 for different operations and parameter sets of ML-KEM and ML-DSA compared to our baseline implementation on unmodified OTBN.
This speedup is achieved with an increase in cell count of less than 17% in OTBN, which corresponds to an increase of less than 3% for the full Earl Grey OpenTitan core.
ProbeNav - Fast, precise and repeatable positioning of electromagnetic probes for local Side-Channel Attacks
Localized side-channel analysis makes it possible to evaluate only the relevant chip area by measuring near-field electromagnetic (EM) emanations. Compared to global power measurements, this can lead to more powerful attacks, as the signal-to-noise ratio is higher and irrelevant circuit components are not included in the recorded measurements. Especially for profiled attacks and their reproduction, the probe position in a localized scenario is of utmost importance. Ideally, a probe should be placed identically during the profiling and attack phases, as small variations can have a large impact on the success of the attack. In this work we present our methodology, ProbeNav, to accurately reposition an EM probe, optimized for localized (i.e., near-field) measurements. We evaluate cross-correlation, Oriented FAST and Rotated BRIEF (ORB), and particle filters to re-calibrate the coordinate system of our setup. Our results show that precise positioning on an STM32F303 microcontroller is possible for a profiled attack scenario with different EM probes. Furthermore, by requiring only a single trace per position, profiling is 3× faster and repositioning 28× faster in terms of the number of collected traces compared to the state of the art.
Practical Deniable Post-Quantum X3DH: A Lightweight Split-KEM for K-Waay
The Signal Protocol, underpinning secure messaging for billions, faces the challenge of migrating to a post-quantum world while preserving critical properties like asynchrony and deniability. While Signal's PQXDH key exchange protocol addresses post-quantum confidentiality, full migration of the X3DH protocol remains elusive. Relying on a split KEM (K-Waay, USENIX '24) offers a promising migration path, but it has so far suffered from size limitations compared to concurrent works leveraging ring signatures.
This work introduces Sparrow-KEM and Sym-Sparrow-KEM, novel asymmetric and symmetric split KEMs, i.e., split KEMs whose keys can be used only in one direction or interchangeably for sending and receiving, respectively. They are designed to optimize the K-Waay protocol for size efficiency. Leveraging the MLWE assumption, these constructions reduce the communication of prior post-quantum X3DH based on split KEMs by a factor of 5.1×, and provide a 40× speedup. Additionally, Sym-Sparrow-KEM is the first symmetric split KEM to offer deniability, IND-1KCA, and IND-1BatchCCA security, capturing implicit authentication properties. We provide formal security proofs for both schemes, including deniability. Our results demonstrate the feasibility of a compact and deniable post-quantum X3DH protocol based on split KEMs.
An Optimized Instantiation of Post-Quantum MQTT protocol on 8-bit AVR Sensor Nodes
Since the selection of the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography (PQC) standardization algorithms, research on integrating PQC into security protocols such as TLS/SSL, IPSec, and DNSSEC has been actively pursued. However, PQC migration for Internet of Things (IoT) communication protocols remains largely unexplored. Embedded devices in IoT environments have limited computational power and memory, making it crucial to optimize PQC algorithms for efficient computation and minimal memory usage when deploying them on low-spec IoT devices. In this paper, we introduce KEM-MQTT, a lightweight and efficient Key Encapsulation Mechanism (KEM) for the Message Queuing Telemetry Transport (MQTT) protocol, widely used in IoT environments. Our approach applies the NIST KEM algorithm CRYSTALS-Kyber (Kyber) while leveraging MQTT's characteristics and sensor node constraints. To enhance efficiency, we address certificate verification issues and adopt KEMTLS to eliminate the need for post-quantum digital signature algorithms (PQC-DSA) in mutual authentication. As a result, KEM-MQTT retains its lightweight properties while maintaining the security guarantees of TLS 1.3. We identify inefficiencies in existing Kyber implementations on 8-bit AVR microcontrollers (MCUs), which are highly resource-constrained. To address this, we propose novel implementation techniques that optimize Kyber for AVR, focusing on high-speed execution, reduced memory consumption, and secure implementation, including Signed LookUp-Table (LUT) Reduction. Our optimized Kyber achieves performance gains of 81%, 75%, and 85% in the KeyGen, Encaps, and Decaps processes, respectively, compared to the reference implementation. With approximately 3 KB of stack usage, our Kyber implementation surpasses all state-of-the-art Elliptic Curve Diffie-Hellman (ECDH) implementations. Finally, in KEM-MQTT using Kyber-512, an 8-bit AVR device completes the handshake preparation process in 4.32 seconds, excluding the physical transmission and reception times.
Neural-Inspired Advances in Integral Cryptanalysis
The study by Gohr et al. at CRYPTO 2019 and subsequent related works have shown that neural networks can uncover previously unused features, offering novel insights into cryptanalysis. Motivated by these findings, we employ neural networks to learn features specifically related to integral properties and integrate the corresponding insights into optimized search frameworks. These findings validate the framework of using neural networks for feature exploration, providing researchers with novel insights that advance established cryptanalysis methods.
Neural networks have inspired the development of more precise integral search models. By comparing the integral distinguishers obtained via neural networks with those identified by classical methods, we observe that existing automated search models often fail to find optimal distinguishers. To address this issue, we develop a meet-in-the-middle search framework that balances model accuracy and computational efficiency. As a result, we reduce the number of active plaintext bits required for an 11-round integral distinguisher on SKINNY-64-64, and further identify a 12-round key-dependent integral distinguisher—achieving one additional round over the previous best-known result.
The integral distinguishers discovered by neural networks enable key-recovery attacks on more rounds. We identify a 7-round key-independent integral distinguisher from neural networks with only one active plaintext cell, based on linear combinations of bits. This distinguisher enables a 15-round key-recovery attack on SKINNY-n-n through a strategy with 3 rounds of forward decryption and 5 rounds of backward encryption, improving upon the previous record by one round. The same distinguisher also enhances attacks on SKINNY-n-2n and SKINNY-n-3n. Additionally, we discover an 8-round key-dependent integral distinguisher using neural networks that further reduces the time complexity of key-recovery attacks against SKINNY.
A Round-Optimal Near-Linear Third-Party Private Set Intersection Protocol
Third-party private set intersection (PSI) enables two parties, each holding a private set, to compute their intersection and reveal the result only to an inputless third party. In this paper, we present an efficient round-optimal third-party PSI protocol. Our work is motivated by real-world applications such as contact tracing, where speed is essential while concurrently preserving privacy. Our construction requires an optimal number of communication rounds and attains near-linear computational complexity in the dataset size. Our improvements stem from algorithmic changes and the incorporation of new techniques along with precise parameter selections to achieve a tight asymptotic bound. Furthermore, we also present a third-party PSI cardinality protocol, which has not been explored in prior third-party PSI work. In the third-party PSI cardinality setting, only the third party obtains the size of the intersection and nothing else. Our construction achieving the cardinality functionality attains quasilinear computational complexity for the third party.
RAMenPaSTA: Parallelizable Scalable Transparent Arguments of Knowledge for RAM Programs
Recent advances in Zero-Knowledge Proof and Argument (ZKP/ZKA) systems allow efficiently verifiable computation in the circuit-based model (where programs are represented as Boolean or arithmetic circuits). However, the circuit-based model is not widely popular due to its unsuitability for big programs (as the circuit size is linear in the program's size). Hence, the research community began looking into the random-access machine model of programs, namely RAM programs, which is better suited for general-purpose programming. Consequently, a flurry of work has sought to construct ZKP protocols for the correct execution of RAM programs. In this paper, we also develop ZKP/ZKA systems for RAM programs, with a particular focus on two properties: (i) parallelizability, which significantly reduces prover (and verifier) computation, and (ii) genericity, which means the construction requires minimal assumptions and can be instantiated with any existing ZKP protocol. To our knowledge, previous ZKP constructions for RAM programs either (i) are not known to support proving or verifying in parallel, (ii) require making non-black-box use of specific SNARK constructions, or (iii) even require proving the computation of a random oracle.
To this end, we propose the so-called Conditional Folding Scheme (CFS) as a building block to construct ZKP/ZKA systems for RAM programs. We provide a non-trivial practical construction for our CFS that is natively parallelizable and significantly reduces the runtime of proof generation compared to previous works, as a function of the witness size of the heaviest operation and the number of execution steps. We rigorously prove the security of our CFS (also for the case of folding in parallel). Our scheme can be made non-interactive using the Fiat-Shamir transform. Our CFS is generic and does not require a trusted setup (yielding a ``transparent'' argument). It can be adapted to construct Parallelizable Scalable Transparent Arguments of Knowledge for RAM programs, which we dub RAMenPaSTA.
USpt: Updatable Signature with Public Tokens
An updatable signature (US) scheme allows valid signatures to be updated via an update token, without access to the newly generated signing key. Cini et al. (PKC'21) formally defined such signatures and gave several constructions. However, their security model requires the secrecy of the update token, which is not applicable in many common application scenarios where existing signatures have been distributed to many parties. In addition, one can use the same token to update both the signing key and signatures, and all signatures can be updated by a single token, so an adversarially generated signature might also be updated.
This work explores the (im)possibility of constructing an updatable signature with public tokens (USpt). Specifically, we first define the updatable signature with public tokens and present its security model. Then, by examining existing US schemes, we find that a secure USpt must properly handle a transform function from the signature-update token to the key-update token. We further formally prove the impossibility of constructing a secure USpt if (1) there is no transform function between key pairs and signatures, or (2) the signature-update token can be derived from the public keys of adjacent epochs. Finally, we present a concrete USpt scheme based on the BLS signature.
Verity: Verifiable Local Differential Privacy
Local differential privacy (LDP) enables individuals to report sensitive data while preserving privacy. Unfortunately, LDP mechanisms are vulnerable to poisoning attacks, where adversaries controlling a fraction of the reporting users can significantly distort the aggregate output, much more so than in a non-private solution where the inputs are reported directly. In this paper, we present two novel solutions that prevent poisoning attacks under LDP while preserving its privacy guarantees.
Our first solution addresses scenarios where the users report inputs with a ground truth available to a third party. The second tackles the more challenging case in which the users locally generate their input and there is no ground truth that can be used to bootstrap verifiable randomness generation.
Succinct Computational Secret Sharing for Monotone Circuits
Secret sharing is a cornerstone of modern cryptography, underpinning the secure distribution and reconstruction of secrets among authorized participants. In these schemes, succinctness, measured by the size of the distributed shares, has long been an area of great theoretical and practical interest, with large gaps between constructions and the best known lower bounds. In this paper, we present a novel computational secret sharing scheme for monotone Boolean circuits that achieves substantially shorter share sizes than previously known constructions in the standard model. Our scheme attains public and per-user share sizes that depend on the number of users n and the security parameter but not on the size of the monotone circuit, thus effectively eliminating the dependence on the circuit size. This marks a significant improvement over earlier approaches, which exhibited share sizes that grew with the number of gates in the circuit. Our construction makes use of indistinguishability obfuscation and injective one-way functions.
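For contrast, the classic non-succinct baseline for monotone formulas (Benaloh-Leichter) is sketched below: an OR gate duplicates the secret and an AND gate splits it additively, so share sizes grow with the formula, which is exactly the blow-up the scheme above avoids for general monotone circuits.

```python
# Benaloh-Leichter secret sharing for a monotone formula over named users.
import random

P = 2**61 - 1  # prime field

def share(secret, f, out):
    # f: ("leaf", user) | ("and", left, right) | ("or", left, right)
    if f[0] == "leaf":
        out.setdefault(f[1], []).append(secret)
    elif f[0] == "or":                       # either side alone suffices
        share(secret, f[1], out)
        share(secret, f[2], out)
    else:                                    # "and": additive 2-out-of-2 split
        r = random.randrange(P)
        share(r, f[1], out)
        share((secret - r) % P, f[2], out)

formula = ("or", ("and", ("leaf", "alice"), ("leaf", "bob")), ("leaf", "carol"))
shares = {}
share(42, formula, shares)
# reconstruction: alice+bob recover 42, and carol alone recovers 42
print((shares["alice"][0] + shares["bob"][0]) % P, shares["carol"][0])
```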
Unmasking TRaccoon: A Lattice-Based Threshold Signature with An Efficient Identifiable Abort Protocol
TRaccoon is an efficient 3-round lattice-based T-out-of-N threshold signature, recently introduced by del Pino et al. (Eurocrypt 2024). While the design resembles the classical threshold Schnorr signature, Sparkle (Crites et al., Crypto 2023), one shortfall is that it has no means to identify malicious behavior, a property highly desired in practice. This is because to resist lattice-specific attacks, TRaccoon relies on a technique called masking, informally blinding each partial signature with a one-time additive mask. del Pino et al. left it as an open problem to add a mechanism to identify malicious behaviors to TRaccoon.
In this work, we propose TRaccoon-IA, an extension of TRaccoon with an efficient identifiable abort protocol that identifies malicious signers when the signing protocol fails. The identifiable abort protocol is a simple add-on to TRaccoon, keeping the original design intact, and incurs an added communication cost of 60 + 6.4 |T| KB only when signing fails. Along the way, we provide the first formal security analysis of a zero-knowledge variant of LaBRADOR (Beullens et al., Crypto 2023), encountering several hurdles in formalizing it in detail. Moreover, we give a new game-based definition for interactive identifiable abort protocols, extending the popular game-based definition used to prove unforgeability of recent threshold signatures.
Evasive LWE Assumptions: Definitions, Classes, and Counterexamples
The evasive LWE assumption, proposed by Wee [Eurocrypt'22 Wee] for constructing lattice-based optimal broadcast encryption, has been shown to be a powerful assumption, adopted by subsequent works to construct advanced primitives ranging from ABE variants to obfuscation for null circuits. However, a closer look reveals significant differences among the precise assumption statements involved in different works, leading to the fundamental question of how these assumptions compare to each other. In this work, we initiate a more systematic study of evasive LWE assumptions:
(i) Based on the standard LWE assumption, we construct simple counterexamples against three private-coin evasive LWE variants, used in [Crypto'22 Tsabary, Asiacrypt'22 VWW, Crypto'23 ARYY] respectively, showing that these assumptions are unlikely to hold.
(ii) Based on existing evasive LWE variants and our counterexamples, we propose and define three classes of plausible evasive LWE assumptions, suitably capturing all existing variants for which we are not aware of non-obfuscation-based counterexamples.
(iii) We show that under our assumption formulations, the security proofs of [Asiacrypt'22 VWW] and [Crypto'23 ARYY] can be recovered, and we reason why the security proof of [Crypto'22 Tsabary] is also plausibly repairable using an appropriate evasive LWE assumption.
Cauchyproofs: Batch-Updatable Vector Commitment with Easy Aggregation and Application to Stateless Blockchains
Stateless blockchain designs have emerged to address the challenge of growing blockchain size by using succinct global states. Previous works have developed vector commitments that support proof updates and aggregation to be used as such states. However, maintaining proofs for multiple users still demands significant computational resources, particularly to update proofs with every transaction. This paper introduces Cauchyproofs, a batch-updatable vector commitment that enables proof-serving nodes to efficiently update proofs in quasi-linear time in the number of users and transactions, utilizing an optimized KZG scheme, in contrast to previous, more costly approaches. This advancement reduces the computational burden on proof-serving nodes, allowing efficient proof maintenance across large user groups. We demonstrate that our approach is approximately eight times faster than the naive approach at Ethereum-level transaction throughput when performing batch updates every hour. Additionally, we present a novel matrix representation of KZG proofs based on Cauchy matrices, enabling faster all-proof computations with fewer elliptic curve operations. Finally, we propose an algorithm for history proof queries, supporting retrospective proof generation with high efficiency. Our contributions substantially enhance the scalability and practicality of proof-serving nodes in stateless blockchain frameworks.
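The Cauchy-matrix structure mentioned above, in its textbook form (a sketch over a small prime field; the paper's optimized all-proof pipeline is not reproduced here): C[i][j] = 1/(x_i - y_j) for disjoint point sets, a structured matrix that admits fast matrix-vector products.

```python
# Cauchy matrix over F_P: entries are modular inverses of pairwise differences.
P = 97  # small prime field for illustration

def cauchy_matrix(xs, ys):
    assert not set(xs) & set(ys), "point sets must be disjoint"
    return [[pow(x - y, -1, P) for y in ys] for x in xs]

C = cauchy_matrix([1, 2, 3], [10, 20, 30])
print(C[0][0], (C[0][0] * (1 - 10)) % P)  # second value is 1: inverse check
```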
The Sponge is Quantum Indifferentiable
The sponge is a cryptographic construction that turns a public permutation into a hash function. When instantiated with the Keccak permutation, the sponge forms the NIST SHA-3 standard. SHA-3 is a core component of most post-quantum public-key cryptography schemes slated for worldwide adoption.
While one can consider many security properties for the sponge, the ultimate one is \emph{indifferentiability from a random oracle}, or simply \emph{indifferentiability}. The sponge was proved indifferentiable against classical adversaries by Bertoni et al. in 2008. Despite significant efforts in the years since, little is known about sponge security against quantum adversaries, even for simple properties like preimage or collision resistance beyond a single round. This is primarily due to the lack of a satisfactory quantum analog of the lazy sampling technique for permutations.
In this work, we develop a specialized technique that overcomes this barrier in the case of the sponge. We prove that the sponge is in fact indifferentiable from a random oracle against quantum adversaries. Our result establishes that the domain extension technique behind SHA-3 is secure in the post-quantum setting. Our indifferentiability bound for the sponge is loose, but we also give bounds on preimage and collision resistance that are tighter.
Secret and Shared Keys Recovery on Hamming Quasi-Cyclic with SASCA
Soft Analytical Side Channel Attacks (SASCA) are a powerful family of Side Channel Attacks (SCA) that allow the recovery of secret values with only a small number of traces. Their effectiveness lies in the Belief Propagation (BP) algorithm, which enables efficient computation of the marginal distributions of intermediate values. Post-quantum schemes such as Kyber and, more recently, Hamming Quasi-Cyclic (HQC) have been targets of SASCA. Previous SASCA on HQC focused on Reed-Solomon (RS) codes and successfully retrieved the shared key with a high success rate at high noise levels using a single trace. In this work, we present new SASCA on HQC in which both the shared key and the secret key are targeted. Our attacks are evaluated in simulation. Unlike the previous SASCA, we take a closer look at the Reed-Muller (RM) code. The advantage of this choice is that the RM decoder is applied before the RS decoder, enabling attacks that target both the secret key and the shared key. We build a factor graph of the Fast Hadamard Transform (FHT) function from the HQC reference implementation of April 2023. The information recovered by BP allows us to retrieve the shared key with a single trace. Beyond the previous SASCA targeting HQC, we also manage to recover the secret key with two different chosen-ciphertext attacks, one of which requires only a single trace and succeeds even at high noise levels.
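For reference, the Fast Hadamard Transform that the factor graph models is the standard butterfly network below (a generic implementation, not the HQC reference code; first-order Reed-Muller codes are decoded via this transform).

```python
# In-place Fast (Walsh-)Hadamard Transform: O(n log n) butterfly stages.
def fht(v):
    v = list(v)
    h = 1
    while h < len(v):                     # len(v) must be a power of two
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v

print(fht([1, 0, 1, 0, 0, 1, 1, 0]))  # Walsh-Hadamard spectrum of the vector
```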
On Graphs of Incremental Proofs of Sequential Work
In this work, we characterize graphs of \emph{(graph-labeling) incremental proofs of sequential work} (iPoSW). First, we define \emph{incremental} graphs and prove they are necessary for iPoSWs. Relying on the space-pebbling complexity of incremental graphs, we show that the depth-robust graphs underlying the PoSW of Mahmoody et al. are not incremental; hence, their PoSW cannot be transformed into an iPoSW.
Second, and toward a generic iPoSW construction, we define graphs whose structure is compatible with the incremental sampling technique (Döttling et al.). These are \emph{dynamic} graphs. We observe that the graphs underlying all PoSWs, standalone or incremental, are dynamic. We then generalize current iPoSW schemes by giving a generic construction that transforms any PoSW whose underlying graph is incremental and dynamic into an iPoSW. As a corollary, we get a new iPoSW based on the modified Cohen-Pietrzak graph (Abusalah et al.). When used in constructing blockchain light-client bootstrapping protocols (Abusalah et al.), such an iPoSW results in the most efficient bootstrappers/provers, in terms of both proof size and space complexity.
Along the way, we show that previous iPoSW definitions allow for trivial solutions. To overcome this, we provide a refined definition that captures the essence of iPoSWs and is satisfied by all known iPoSW constructions.
Deterministic algorithms for class group actions
We present an algorithm for the CSIDH protocol that is fully deterministic and strictly constant time. It does not require dummy operations and can be implemented without conditional branches. Our proof-of-concept C implementation shows that a key exchange can be performed in a constant (i.e. fixed) number of finite field operations, independent of the secret keys. The algorithm relies on a technique reminiscent of the standard Montgomery ladder, and applies to the computation of isogenies that divide an endomorphism of smooth degree represented by its kernel. We describe our method in the general context of class group actions on oriented elliptic curves, giving rise to a large family of non-interactive key exchanges different from CSIDH.
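For intuition, here is the classical Montgomery ladder the technique is reminiscent of, shown for modular exponentiation rather than isogeny computation (an illustrative sketch; real constant-time code replaces the arithmetic select with hardware-safe conditional swaps).

```python
# Montgomery ladder: a fixed number of iterations, each performing the same
# square-and-multiply pair; the secret bit only selects operands, so there are
# no secret-dependent branches.
def ladder_pow(x, k, n, bits):
    r0, r1 = 1, x % n
    for i in reversed(range(bits)):
        b = (k >> i) & 1
        # branchless conditional swap driven by the secret bit b
        r0, r1 = r0 * (1 - b) + r1 * b, r1 * (1 - b) + r0 * b
        r0, r1 = (r0 * r0) % n, (r0 * r1) % n
        r0, r1 = r0 * (1 - b) + r1 * b, r1 * (1 - b) + r0 * b
    return r0

print(ladder_pow(7, 13, 101, 4), pow(7, 13, 101))  # both print 75
```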
Faster Hash-based Multi-valued Validated Asynchronous Byzantine Agreement
Multi-valued Validated Byzantine Agreement (MVBA) is vital for asynchronous distributed protocols like asynchronous BFT consensus and distributed key generation, making performance improvements a long-standing goal. Existing communication-optimal MVBA protocols rely on computationally intensive public-key cryptographic tools, such as non-interactive threshold signatures, which are also vulnerable to quantum attacks. While hash-based MVBA protocols have been proposed to address these challenges, their higher communication overhead has raised concerns about practical performance.
We present a novel MVBA protocol with adaptive security, relying exclusively on hash functions to achieve post-quantum security. Our protocol delivers near-optimal communication, constant round complexity, and significantly reduced latency compared to existing schemes, though it has sub-optimal resilience, tolerating up to 20% Byzantine corruptions instead of the typical 33%. For example, with an input size of 1.75 MB, it reduces latency by 81% over previous hash-based approaches.
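As background, hash-based protocols of this kind replace threshold signatures with hash commitments such as Merkle trees. A minimal sketch of that generic building block follows; it is our illustration, not the paper's construction.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a Merkle tree: the kind of hash-only commitment that
    hash-based MVBA protocols use in place of threshold signatures."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"proposal-%d" % i for i in range(5)])
print(root.hex())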
Full-Authority Data Sharing Systems: Ciphertext-Dependent Proxy Re-Encryption with Dynamic Key Generation
Proxy re-encryption (PRE) is a powerful primitive for secure cloud storage sharing. Suppose Alice stores encrypted datasets (ciphertexts) on a cloud server (proxy). If Bob requests data sharing, Alice shares the ciphertexts by computing and sending a re-encryption key to the proxy, which performs the re-encryption operation that generates ciphertexts decryptable by Bob. Still, the proxy cannot access the plaintexts/datasets. Traditionally, a single re-encryption key can convert all of Alice's ciphertexts, and the proxy is expected to perform re-encryption only on the ciphertexts selected by the users (Alice/Bob). This creates a trust issue: Alice must grant full decryption rights (losing control) and rely on proxy-enforced access policies (vulnerable to collusion). Existing PRE schemes fail to reconcile fine-grained control with collusion resistance. If Alice instead uses a different key to encrypt each dataset, the re-encryption complexity is linear in the number of requested datasets. We propose full-authority data sharing, a novel paradigm combining ciphertext-dependent PRE (cdPRE) and dynamic key generation (dKG). Unlike traditional PRE, cdPRE binds re-encryption keys to specific ciphertexts, ensuring collusion resistance (i.e., proxy + Bob cannot access unauthorised data). dKG dynamically connects keys via key derivation functions; for example, the chain system reduces per-dataset delegation cost for sequential release in publication/subscription systems compared to trivial solutions. We instantiate this paradigm with Kyber (NIST-PQC standardised) and AES, prove its security, and experimentally verify the high efficiency of the scheme.
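One plausible reading of the chain variant of dKG is a hash-chain key schedule: revealing one key lets the recipient derive all later keys in a sequential release with a single delegation. A toy sketch under that assumption; the function names and labels are ours.

import hashlib

def kdf(key: bytes, label: bytes) -> bytes:
    # Simple hash-based KDF for illustration; a deployed scheme would
    # use HKDF or similar.
    return hashlib.sha256(key + b"|" + label).digest()

def chain_keys(master: bytes, n: int):
    """Hash-chain key schedule: key[i] = KDF(key[i-1]). Handing Bob
    key[i] lets him re-derive keys for datasets i, i+1, ..., n-1 on his
    own, so one delegation covers a whole sequential release."""
    keys = [kdf(master, b"0")]
    for i in range(1, n):
        keys.append(kdf(keys[-1], str(i).encode()))
    return keys

keys = chain_keys(b"master-secret", 5)
# Bob, given keys[2], re-derives keys[3] himself:
assert kdf(keys[2], b"3") == keys[3]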
Walnut: A Generic Framework with Enhanced Scalability for BFT Protocols
The performance of traditional BFT protocols significantly decreases as the number of replicas grows, and thus they support only up to a few hundred replicas. Such scalability issues severely limit the application scenarios of BFT. Meanwhile, the committee sampling technique has the potential to scale the replica size significantly by selecting a small portion of replicas as the committee and then conveying the consensus results to the rest.
However, this technique is rarely used in BFT, and there is still a lack of methods for scaling an already-deployed traditional BFT protocol to support more replicas, as opposed to the costly re-deployment of a new protocol.
This paper introduces Walnut, a secure and generic committee-sampling-based modular consensus framework. Specifically, we use a verifiable random function for committee election and integrate committee rotation with the consensus. The resulting construction ensures that each selected committee is of a fixed size and acknowledged by all replicas, even in a partially synchronous network. For Walnut, we provide a rigorous definition and outline the necessary properties of each module to achieve safety and liveness.
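A schematic of VRF-based committee election as described above: rank replicas by their (proved) VRF outputs on an epoch seed and take a fixed-size prefix, so every replica that sees the outputs agrees on the same committee. We stand in a keyed hash for the VRF, which is deterministic but not publicly verifiable; this sketches the general idea, not Walnut's exact rule.

import hashlib, hmac

def vrf_eval(sk: bytes, seed: bytes) -> bytes:
    # Stand-in for a VRF: HMAC is keyed and deterministic, but a real
    # VRF also outputs a proof checkable against the public key.
    return hmac.new(sk, seed, hashlib.sha256).digest()

def elect_committee(replica_keys, seed: bytes, size: int):
    """Rank replicas by VRF output and take the lowest `size`, giving a
    fixed-size committee that all verifiers can recompute."""
    ranked = sorted(replica_keys, key=lambda rid: vrf_eval(replica_keys[rid], seed))
    return ranked[:size]

keys = {f"replica-{i}": bytes([i]) * 16 for i in range(10)}
print(elect_committee(keys, b"epoch-42", size=4))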
To demonstrate Walnut's effectiveness, we apply the framework to HotStuff to obtain the Walnut-HS protocol, together with a proof of fit-in. We also implement Walnut-HS and compare its performance with HotStuff, using up to 100 Amazon EC2 instances in a WAN. The experiments show that Walnut-HS easily scales to 1,000 replicas with only slight performance degradation, while HotStuff performs poorly or even breaks at such scales. Walnut-HS also compares well with HotStuff in small-scale experiments, where its peak throughput is up to 38.6% higher than HotStuff's.
DUPLEX: Scalable Zero-Knowledge Lookup Arguments over RSA Group
Lookup arguments enable a prover to convince a verifier that a committed vector of lookup elements is contained within a predefined table. These arguments are particularly beneficial for enhancing the performance of SNARKs in handling non-arithmetic operations, such as batched range checks or bitwise operations. While existing works have achieved efficient and succinct lookup arguments, challenges remain, particularly when dealing with large vectors of lookup elements in privacy-sensitive applications.
In this paper, we introduce DUPLEX, a scalable zero-knowledge lookup argument scheme that offers significant improvements over previous approaches. Notably, we present the first lookup argument designed to operate over the RSA group. Our core technique transforms lookup elements into prime numbers to ensure compatibility with the RSA group, without imposing substantial computational costs on the prover. For a batch of lookup elements, DUPLEX achieves efficient asymptotic proving time, constant-sized proofs, constant-time verification, and a public parameter size independent of the table size. Additionally, DUPLEX ensures the privacy of lookup elements and is robust against dynamic table updates, making it highly suitable for scalable verifiable computation in real-world applications.
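The element-to-prime step admits a standard hash-to-prime pattern, sketched below under that assumption; this is the generic technique, not necessarily DUPLEX's exact map.

import hashlib

def is_prime(n: int) -> bool:
    """Miller-Rabin with the first 12 prime bases; probabilistic but
    extremely reliable at this size."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def hash_to_prime(element: bytes, bits: int = 128) -> int:
    """Hash the element with a counter until the truncated digest is
    prime: the generic way to make arbitrary elements usable in an
    RSA-group accumulator or argument."""
    ctr = 0
    while True:
        digest = hashlib.sha256(element + ctr.to_bytes(8, "big")).digest()
        cand = (int.from_bytes(digest, "big") >> (256 - bits)) | 1  # force odd
        if is_prime(cand):
            return cand
        ctr += 1

print(hash_to_prime(b"lookup-element-7"))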
We implemented and empirically evaluated DUPLEX, comparing it with the state-of-the-art zero-knowledge lookup argument Caulk [CCS'22]. Our experimental results demonstrate that DUPLEX significantly outperforms Caulk in proving time for both single and batched lookup arguments, while maintaining practical proof size and verification time.
BPDTE: Batch Private Decision Tree Evaluation via Amortized Efficient Private Comparison
Machine learning as a service requires clients to entrust their information to the server, raising privacy concerns. Private Decision Tree Evaluation (PDTE) has been proposed to address these concerns for decision trees, which are fundamental models in machine learning. However, existing solutions perform poorly on the massive datasets found in real-world applications. We therefore focus on the batching variant, BPDTE, which supports a single evaluation over multiple inputs and significantly improves performance.
Firstly, we propose three private comparison (PrivCMP) algorithms that privately compare two numbers to determine which one is larger, by utilizing thermometer encoding and a novel folklore-inspired dichotomy method. These algorithms are non-interactive, batchable, and high-precision, achieving an amortized cost of less than 1 ms at 32-bit precision.
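A plaintext sketch of comparison via thermometer (unary) encoding, the encoding named above; in the actual scheme such vectors live inside homomorphic ciphertexts, which this toy omits.

def thermometer(x: int, width: int):
    """Thermometer/unary encoding: a 1 in every slot below x."""
    return [1 if i < x else 0 for i in range(width)]

def greater_than(x_enc, y: int) -> bool:
    """x > y iff slot y of the thermometer encoding of x is set. In an
    FHE scheme this becomes a slot selection on an encrypted vector;
    here everything is in the clear."""
    return x_enc[y] == 1

W = 16
assert greater_than(thermometer(9, W), 5)        # 9 > 5
assert not greater_than(thermometer(4, W), 7)    # 4 <= 7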
Secondly, we propose six BPDTE schemes based on our PrivCMP and the Clear Rows Relation (CRR) algorithm, introduced to ensure batching security. Experimental results show that our schemes improve amortized efficiency, offer more flexible batch sizes, and achieve higher precision.
Finally, we provide a formal security analysis of these schemes.
Post-Quantum PKE from Unstructured Noisy Linear Algebraic Assumptions: Beyond LWE and Alekhnovich's LPN
Noisy linear algebraic assumptions with respect to random matrices, in particular Learning with Errors (LWE) and Alekhnovich Learning Parity with Noise (Alekhnovich LPN), are among the most investigated assumptions that imply post-quantum public-key encryption (PKE). They enjoy elegant mathematical structure. Indeed, efforts to build post-quantum PKE and advanced primitives such as homomorphic encryption and indistinguishability obfuscation have increasingly focused their attention on these two assumptions and their variants.
Unfortunately, this increasing reliance on these two assumptions for building post-quantum cryptography leaves us vulnerable to potential quantum (and classical) attacks on Alekhnovich LPN and LWE. Quantum algorithms are a rapidly advancing area, and we must stay prepared for unexpected cryptanalytic breakthroughs. Just three decades ago, a short time frame in the development of our field, Shor's algorithm rendered most then-popular number-theoretic and algebraic assumptions quantumly broken. Furthermore, within the last several years, we have witnessed major classical and quantum breaks on several assumptions previously introduced for post-quantum cryptography. Therefore, we ask the following question:
In a world where both LWE and Alekhnovich LPN are broken, can there still exist noisy linear assumptions that remain plausibly quantum hard and imply PKE?
To answer this question positively, we introduce two natural noisy linear algebraic assumptions that are both with respect to random matrices, exactly like LWE and Alekhnovich LPN, but with different error distributions. Our error distribution combines aspects of both small-norm and sparse error distributions. We design a PKE scheme from these assumptions and give evidence that they are likely to remain secure even in a world where both the LWE and Alekhnovich LPN assumptions are simultaneously broken. We also study basic properties of these assumptions and show that, in the parameter settings we employ to build PKE, neither of them is a "lattice" assumption, in the sense that we do not see a way to attack them using a lattice closest-vector-problem solver, except via NP-completeness reductions.
Rerandomizable Garbling, Revisited
In key-and-message homomorphic encryption (KMHE), the key space is a subset of the message space, allowing encryption of secret keys such that the same homomorphism can be applied to both the key and the message of a given ciphertext.
KMHE with suitable security properties is the main building block for constructing rerandomizable garbling schemes (RGS, Gentry et al., CRYPTO 2010), which enable advanced cryptographic applications like multi-hop homomorphic encryption, the YOSO-like MPC protocol SCALES (Acharya et al., TCC 2022 and CRYPTO 2024), and more.
The BHHO scheme (Boneh et al., CRYPTO 2008) is currently the only known KMHE scheme suitable for constructing RGS. A BHHO encryption of a secret key consists of a super-linear (in the security parameter) number of group elements, which incurs a significant bandwidth and computational overhead and makes the scheme itself, and the RGS protocols building upon it, impractical.
We present a new, more efficient KMHE scheme with linear-size ciphertexts.
Despite using heavier cryptographic tools (pairings instead of plain DDH-hard groups), the concrete ciphertext size and computational costs are significantly reduced: compared to BHHO, we shrink the ciphertext by 97.83 % (from 16.68 MB to 360 kB) and reduce the estimated cost of encryption by 99.996 % (from 4 minutes to 0.01 seconds).
Additionally, we introduce gate KMHE as a new tool to build more efficient RGS. Our RGS construction shrinks the size of a garbled gate by 98.99 % (from 133.43 MB to 1.35 MB) and decreases the estimated cost of garbling by 99.998 % (from 17 minutes to 0.02 seconds per gate) in comparison to Acharya et al.
In summary, our work shows for the first time that RGS and the SCALES protocol (and hence YOSO-like MPC) are practically feasible for simple circuits.
Fully Asymmetric Anamorphic Homomorphic Encryption from LWE
As introduced by Persiano {\it et al.} (Eurocrypt'22), anamorphic encryption (AE) is a primitive enabling private communications against a dictator that forces users to surrender their decryption keys. In its fully asymmetric flavor (defined by Catalano {\it et al.}, Eurocrypt'24), anamorphic channels can work as hidden public-key mechanisms in the sense that anamorphic encryptors are not necessarily able to decrypt anamorphic ciphertexts. Unfortunately, fully asymmetric AE is hard to come by and even impossible to obtain from ordinary public-key encryption via black-box constructions. So far, only three schemes are known to rely on well-established assumptions. In this paper, we exhibit constructions from the standard LWE assumption based on Regev's cryptosystem and its dual version. In both cases, we retain the additive homomorphism of the schemes. We additionally show that dual Regev is public-key anamorphic in the sense of Persiano {\it et al.} (Crypto'24). In the FHE setting, we show that the dual GSW system provides fully asymmetric AE (while preserving its leveled homomorphism) when instantiated with binary/ternary secret keys. Along the way, we discuss the extent to which our schemes satisfy a generalization of Banfi {\it et al.}'s notion of robustness (Eurocrypt'24) to the case of homomorphically evaluated ciphertexts.
Improvements on the schemes VOX and QR-UOV: When minus is a plus
In this article, we present an improvement of QR-UOV and a repair of VOX.
Thanks to the use of the minus perturbation, we are able to fully exploit the parameters in order to reduce the public key size; it also allows us to repair the scheme VOX. With QR-UOV- we significantly reduce the size of the public key at the cost of a small increase in signature size, while with VOX- we obtain a very short public key. VOX- also maintains a very short signature size.
We also show that the minus perturbation is very useful for schemes that use the QR (quotient ring) perturbation.
MPC with Publicly Identifiable Abort from Pseudorandomness and Homomorphic Encryption
Publicly identifiable abort is a critical feature for ensuring accountability in outsourced computations using secure multiparty computation (MPC). Despite its importance, no prior work has specifically addressed identifiable abort in the context of outsourced computations. In this paper, we present the first MPC protocol that supports publicly identifiable abort with minimal overhead for external clients. Our approach minimizes client-side computation by requiring only a few pseudorandom function evaluations per input. On the server side, the verification process involves lightweight linear function evaluations using homomorphic encryption. This results in verification times of a few nanoseconds per operation for servers, with client overhead being approximately two orders of magnitude lower. Additionally, the public verifiability of our protocol reduces client input/output costs compared to SPDZ-based protocols, on which we base our protocol. For example, in secure aggregation use cases, our protocol achieves over twice the efficiency during the offline phase and up to an 18 % speedup in the online phase, significantly outperforming SPDZ.
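The client-side cost described, a few PRF evaluations per input, suggests a pattern of tagging each input with a PRF value that the servers later check inside the homomorphic linear verification. A toy sketch of the client side only, with HMAC standing in for the PRF and the homomorphic part elided; all names and the tag layout are our assumptions, not the paper's protocol.

import hmac, hashlib

P = 2**61 - 1  # public prime modulus for the linear checks (illustrative)

def prf(key: bytes, msg: bytes) -> int:
    """PRF instantiated with HMAC-SHA256, reduced modulo a public prime."""
    return int.from_bytes(hmac.new(key, msg, hashlib.sha256).digest(), "big") % P

def tag_inputs(key: bytes, inputs):
    """Client-side work: one PRF evaluation per input. The servers would
    verify a linear relation over these tags under homomorphic
    encryption; that server-side step is elided here."""
    return [(x, prf(key, str(i).encode() + b"|" + str(x).encode()))
            for i, x in enumerate(inputs)]

print(tag_inputs(b"client-key", [13, 37, 42]))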
Verifiable E-Voting with a Trustless Bulletin Board
Voter privacy and end-to-end (E2E) verifiability are critical features of electronic voting (e-voting) systems to safeguard elections. To achieve these properties, a perfect bulletin board (BB) is commonly assumed, providing consistent, reliable, and tamper-proof storage and transmission of voting data. However, in practice, BBs operate in asynchronous and unreliable networks, and hence are susceptible to vulnerabilities such as equivocation attacks and dropped votes, which can compromise both verifiability and privacy. Although prior research has weakened the perfect-BB assumption, it still depends on trusting certain BB components.
In this work, we present and initiate a formal exploration of designing e-voting systems based on fully untrusted BBs. For this purpose, we leverage the notion of accountability and in particular use accountable BBs. Accountability ensures that if a security breach occurs, then cryptographic evidence can identify malicious parties. Fully untrusted BBs running in asynchronous networks bring new challenges. Among others, we identify several types of attacks that a malicious but accountable BB might be able to perform and propose a new E2E verifiability notion for this setting. Based on this notion and as a proof of concept, we construct the first e-voting system that is provably E2E verifiable and provides vote privacy even when the underlying BB is fully malicious. This establishes an alternative to traditional e-voting architectures that rely on (threshold) trusted BB servers.
Convolution-Friendly Image Compression with FHE
During the past few decades, the field of image processing has grown to cradle hundreds of applications, many of which are outsourced to be computed on trusted remote servers. More recently, Fully Homomorphic Encryption (FHE) has grown in parallel as a powerful tool enabling computation on encrypted data, and transitively on untrusted servers. As a result, new FHE-supported applications have emerged, but not all have reached practicality due to hardware, bandwidth, or mathematical constraints inherent to FHE. One example is the processing of encrypted images, where practicality is closely tied to bandwidth availability.
In this paper, we propose and implement a novel technique for FHE-based image compression and decompression. Our technique is a stepping stone towards practical encrypted image processing and applications such as private inference, object recognition, satellite-image searching, or video editing. Inspired by the JPEG standard, and with new FHE-friendly compression/decompression algorithms, our technique allows a client to compress and encrypt images before sending them to a server, greatly reducing the required bandwidth. The server homomorphically decompresses a ciphertext to obtain an encrypted image to which generic pixel-wise processing or convolutional filters can be applied. To reduce the round-trip bandwidth requirement, we also propose a method for server-side post-processing compression. Using our pipeline, we demonstrate that a high-definition grayscale image can be homomorphically decompressed, processed, and re-compressed in seconds with a compression ratio of 100/34.4 on a standard personal computer, without compromising fidelity.
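For reference, a plaintext sketch of the JPEG-style pipeline the work takes inspiration from: blockwise DCT and coarse quantization on the client, dequantization and inverse DCT on the server. In the paper the server-side step runs under FHE; here everything is in the clear, and the uniform quantization step Q is our simplification of JPEG's full table.

import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

C = dct_matrix()
Q = 20.0  # uniform quantization step (JPEG uses a full table)

def compress_block(block):
    """Client side: DCT then coarse quantization (most coefficients -> 0)."""
    return np.round(C @ (block - 128.0) @ C.T / Q)

def decompress_block(coeffs):
    """Server side: dequantize and invert; the step the paper performs
    homomorphically on an encrypted block."""
    return C.T @ (coeffs * Q) @ C + 128.0

block = np.tile(np.linspace(0, 255, 8), (8, 1))    # smooth toy 8x8 block
recovered = decompress_block(compress_block(block))
print(np.max(np.abs(recovered - block)))           # modest reconstruction error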
T-Spoon: Tightly Secure Two-Round Multi-Signatures with Key Aggregation
Multi-signatures over pairing-free cyclic groups have seen significant advancements in recent years, including achieving two-round protocols and supporting key aggregation.
Key aggregation enables the combination of multiple public keys into a single succinct aggregate key for verification and has essentially evolved from an optional feature to a requirement.
To enhance the concrete security of two-round schemes, Pan and Wagner (Eurocrypt 2023, 2024) introduced the first tightly secure constructions in this setting.
However, their schemes do not support key aggregation, and their approach inherently precludes a single short aggregate public key.
This leaves the open problem of achieving tight security and key aggregation simultaneously.
In this work, we solve this open problem by presenting the first tightly secure two-round multi-signature scheme in pairing-free groups supporting key aggregation.
As with Pan and Wagner's schemes, our construction is based on the DDH assumption.
In contrast to theirs, it also has truly compact signatures, with signature size asymptotically independent of the number of signers.
Correlation power analysis of LESS and CROSS
This paper presents a side-channel attack targeting the LESS and CROSS post-quantum digital signature schemes, resulting in full key recovery for both. These schemes have advanced to the second round of NIST's call for additional signatures. By leveraging correlation power analysis and horizontal attacks, we are able to recover the secret key by observing the power consumption during the multiplication of an ephemeral secret vector with a public matrix. The attack path is enabled by a direct link between the secret key elements and the ephemeral secret, given correct responses. The attack targets version 1.2 of both schemes. In both settings, we recover the secret key from a single trace for NIST's security level I parameter sets. Additionally, we propose improvements to the existing horizontal attack on CROSS, reducing the number of rounds that need to be observed by an order of magnitude for the same parameter sets.
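The core of any correlation power analysis is to correlate measured traces against hypothetical leakage, for example the Hamming weight of an intermediate, for every key guess. A synthetic-data toy of that core, with a small prime modulus standing in for the schemes' arithmetic; this is not the LESS/CROSS attack itself.

import numpy as np

rng = np.random.default_rng(1)
HW = np.array([bin(v).count("1") for v in range(256)])
P = 251                                   # toy field modulus

secret = 173                              # one coefficient of the ephemeral secret
inputs = rng.integers(1, P, size=2000)    # known public values
# Simulated traces: Hamming weight of the product, plus Gaussian noise.
traces = HW[(secret * inputs) % P] + rng.normal(0, 2.0, size=inputs.shape)

def cpa_recover(traces, inputs):
    """Return the key guess whose hypothetical leakage correlates best
    with the traces (guess 0 gives a constant hypothesis, so skip it)."""
    best, best_corr = None, -1.0
    for guess in range(1, P):
        hyp = HW[(guess * inputs) % P]
        corr = abs(np.corrcoef(hyp, traces)[0, 1])
        if corr > best_corr:
            best, best_corr = guess, corr
    return best

assert cpa_recover(traces, inputs) == secret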
Post-Quantum Cryptography in eMRTDs: Evaluating PAKE and PKI for Travel Documents
Passports, identity cards, and travel visas are examples of machine-readable travel documents (MRTDs), or eMRTDs for their electronic variants. The data exchanged between these documents and a reader is secured with a standardized password-authenticated key exchange (PAKE) protocol known as PACE.
A new world-wide protocol migration is expected with the arrival of post-quantum cryptography (PQC) standards. In this paper, we focus on the impact of this migration on constrained embedded devices as used in eMRTDs. We present a feasibility study of a candidate post-quantum secure PAKE scheme as the replacement for PACE on existing widely deployed resource-constrained chips. In a wider context, we study the size, performance and security impact of adding post-quantum cryptography with a focus on chip storage and certificate chains for existing eMRTDs.
We show that if the required post-quantum certificates for the eMRTD fit in memory, the migration of existing eMRTD protocols to their post-quantum secure equivalents is already feasible, though a performance penalty has to be paid. On a resource-constrained SmartMX3 P71D600 smart card, designed with classical cryptography in mind, a post-quantum secure PAKE algorithm using the recommended post-quantum parameters of the new PQC standard ML-KEM can be executed in under a second. This migration will be aided by the future inclusion of dedicated hardware accelerators and increased memory, allowing storage of larger keys and improved performance.
Chosen Ciphertext Security via BARGs
In this paper, we show a new set of cryptographic primitives that generically leads to chosen-ciphertext secure (CCA secure) public-key encryption (PKE). Specifically, we show how a (non-interactive, publicly verifiable) batch argument (BARG) for NP can be combined with a chosen-plaintext secure (CPA secure) PKE scheme to achieve a CCA secure one. The succinctness required of the proof size of the BARG used as a building block is arguably very mild: it only needs to beat the trivial proof size by a non-trivial margin in the batch size (i.e., the number of statements whose correctness is simultaneously proved). A BARG with such succinctness is so weak that it cannot be used in the recent constructions of non-interactive zero-knowledge proof systems for NP based on a BARG (and a one-way function) by Bitansky et al. (STOC 2024) and Bradley, Waters, and Wu (TCC 2024). Therefore, our result gives a new building block that can upgrade CPA security into CCA security.
KeyJoin: Privacy-Focused CoinJoin Protocol for Bitcoin
Bitcoin is based on the blockchain, an open ledger containing information about every transaction in the Bitcoin network. The blockchain serves many purposes, but it also allows anyone to track all transactions and activities of each Bitcoin address. The privacy of the network is threatened by organizations that track transactions, and the tracking and subsequent filtering of coins lead to the loss of Bitcoin's exchangeability.
Despite Bitcoin's transparency, it is possible to increase user privacy using a variety of existing methods. One of these methods, called CoinJoin, was proposed by Bitcoin developer Greg Maxwell in 2013. It combines several users' transactions into a single transaction with multiple inputs and outputs, which makes transaction analysis more complicated.
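A schematic of the CoinJoin idea just described: merge several users' inputs and uniform-denomination outputs into one shuffled transaction. The data structures are hypothetical; no real Bitcoin serialization, scripts, fees, or signatures are modeled.

import random

def coinjoin(users, denomination):
    """Merge each user's inputs and a uniform-denomination output into a
    single transaction, then shuffle so inputs cannot be trivially
    linked to outputs."""
    tx = {"inputs": [], "outputs": []}
    for u in users:
        tx["inputs"].extend(u["utxos"])
        tx["outputs"].append({"address": u["fresh_address"],
                              "amount": denomination})
    random.shuffle(tx["inputs"])
    random.shuffle(tx["outputs"])
    return tx

users = [
    {"utxos": [("txid-a", 0)], "fresh_address": "addr-1"},
    {"utxos": [("txid-b", 1)], "fresh_address": "addr-2"},
    {"utxos": [("txid-c", 0)], "fresh_address": "addr-3"},
]
print(coinjoin(users, denomination=100_000))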
This work describes KeyJoin, a privacy-focused CoinJoin protocol based on keyed-verification anonymous credentials (KVAC).
Towards Optimal Differential Attacks on FLY and PIPO
Lightweight block ciphers such as PIPO and FLY are designed to operate efficiently and securely in constrained environments. While the designers have already studied differential attacks on PIPO-64-128, no concrete differential attack had been conducted on PIPO-64-256 or FLY. Motivated by this gap, we revisit the security of PIPO against differential attacks and generalize the analysis framework to make it applicable to structurally related ciphers. Based on this generalized framework, we search for key-recovery-attack-friendly distinguishers and apply clustering techniques to enhance their effectiveness in key-recovery attacks. As a result, we improve the previously proposed differential attack on PIPO-64-128, reducing its time complexity. Furthermore, we propose a 13-round differential attack on PIPO-64-256, which covers two more rounds than the previous result. We also apply the same methodology to FLY and present the first differential attack on 12-round FLY, reaching one round beyond the best-known distinguisher. We believe this work improves the understanding of the structures of FLY and PIPO and provides a basis for future research on advanced key-recovery attacks against related cipher designs.
An Attack on TON’s ADNL Secure Channel Protocol
We present an attack on the Abstract Datagram Network Layer (ADNL) protocol used in The Open Network (TON), currently the tenth largest blockchain by market capitalization. In its TCP variant, ADNL secures communication between clients and specialized nodes called liteservers, which provide access to blockchain data. We identify two cryptographic design flaws in this protocol: a handshake that permits session-key replay and a non-standard integrity mechanism whose security critically depends on message confidentiality. We transform these vulnerabilities into an efficient plaintext-recovery attack by exploiting two ADNL communication patterns, allowing message reordering across replayed sessions. We then develop a plaintext model for this scenario and construct an efficient algorithm that recovers the keystream using a fraction of known plaintexts and a handful of replays. We implement our attack and show that an attacker intercepting the communication between a TON liteserver and a widely deployed ADNL client can recover the keystream used to encrypt server responses by performing eight connection replays to the server. This allows the decryption of sensitive data, such as account balances and user activity patterns. Additionally, the attacker can modify server responses to manipulate blockchain information displayed to the client, including account balances and asset prices.
Efficient CPA Attack on Hardware Implementation of ML-DSA in Post-Quantum Root of Trust
Side-channel attacks (SCA) pose a significant threat to cryptographic implementations, including those designed to withstand the computational power of quantum computers.
This paper introduces the first side-channel attack on an industry-grade post-quantum cryptography implementation.
Specifically, we present a Correlation Power Analysis (CPA) attack targeting the open-source hardware implementation of ML-DSA within a Silicon Root of Trust framework developed through a multi-party collaboration involving leading technology companies.
Our attack focuses on the modular reduction process that follows the Number Theoretic Transform-based polynomial pointwise multiplication.
By exploiting side-channel leakage from a distinctive reduction algorithm, and by leveraging the zeroization mechanism used to securely erase sensitive information by clearing internal registers, we significantly enhance the attack's efficacy.
Our findings reveal that an adversary can extract the secret keys using only 10,000 power traces.
With access to these keys, an attacker could forge signatures for certificate generation, thereby compromising the integrity of the root of trust.
This work highlights the vulnerabilities of industry-standard root-of-trust systems to side-channel attacks. It underscores the urgent need for robust countermeasures to secure commercially deployed systems against such threats.
Registered Functional Encryption for Attribute-Weighted Sums with Access Control
In this work, we present Functional Encryption (FE) schemes for Attribute-Weighted Sums (AWS), introduced by Abdalla, Gong and Wee (Crypto 2020), in the registration-based setting (RFE). In such a setting, users sample their own public/private key pairs; a key curator registers user public keys along with their functions; encryption takes as input attribute-value pairs, where the attribute is public and the value is private; and decryption recovers the weighted sum while leaking no additional information about the private values. Recently, Agrawal, Tomida and Yadav (Crypto 2023) studied the attribute-based case of AWS (AB-AWS), providing fine-grained access control, where the function is extended with an access-control policy, the input is extended with a policy attribute, and decryption recovers the weighted sum only if the attribute satisfies the policy. Our main results are the following:
- We build the first RFE for the (AB-)1AWS functionality (a single attribute-value slot) that achieves adaptive indistinguishability-based security under the (bilateral) k-Lin assumption in prime-order pairing groups. Prior works achieve RFE for linear and quadratic functions without access control in the standard model, or for attribute-based linear functions in the generic group model.
- We develop the first RFE for the AB-AWS functionality with an unbounded number of slots that achieves very selective simulation-based security under the bilateral k-Lin assumption. Here, "very selective" means that the adversary declares the challenge attribute values, all registered functions, and corrupted users upfront. Previously, SIM-secure RFEs were only constructed for linear and quadratic functions without access control in the same security model.
We devise a novel nested encoding mechanism that facilitates attribute-based access control and unbounded inputs in the registration-based setting for AWS functionalities, proven secure in the standard model. In terms of efficiency, our constructions feature short public parameters, compact secret keys, and compact ciphertexts unaffected by the length of the public inputs. Moreover, as required by RFE properties, all object sizes and algorithm costs scale poly-logarithmically with the total number of registered users in the system.
The Meta-Complexity of Secret Sharing
A secret-sharing scheme allows the distribution of a secret among a set of parties, such that only certain predefined "authorized" sets of parties can reconstruct the secret, while all other "unauthorized" sets learn nothing about it. The collection of authorized/unauthorized sets is defined by a monotone function. It is known that any monotone function can be realized by a secret-sharing scheme; thus, the smallest achievable \emph{total share size} serves as a natural complexity measure.
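For concreteness, the textbook DNF-based construction (Ito, Saito, and Nishizeki) behind the realizability fact: for each minimal authorized set, hand its members a fresh additive sharing of the secret. Total share size grows with the number of minimal sets, which is exactly the kind of complexity measure at issue.

import secrets

P = 2**61 - 1  # prime modulus for additive sharing

def share(secret_value, min_authorized_sets, parties):
    """For each minimal authorized set, deal a fresh additive sharing of
    the secret to its members. Any authorized set contains some minimal
    set and can reconstruct; any unauthorized set misses at least one
    summand of every sharing and learns nothing."""
    shares = {p: [] for p in parties}
    for idx, A in enumerate(min_authorized_sets):
        members = sorted(A)
        parts = [secrets.randbelow(P) for _ in members[:-1]]
        parts.append((secret_value - sum(parts)) % P)
        for p, s in zip(members, parts):
            shares[p].append((idx, s))
    return shares

def reconstruct(shares, authorized_set, set_index):
    return sum(dict(shares[p])[set_index] for p in authorized_set) % P

parties = ["p1", "p2", "p3"]
min_sets = [{"p1", "p2"}, {"p2", "p3"}]   # f = (p1 AND p2) OR (p2 AND p3)
sh = share(1234, min_sets, parties)
assert reconstruct(sh, ["p1", "p2"], 0) == 1234
assert reconstruct(sh, ["p2", "p3"], 1) == 1234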
In this paper, we initiate the study of the following meta-complexity question: Given a monotone function , is it possible to efficiently distinguish between cases where the secret-sharing complexity of is small versus large? We examine this question across several computational models, yielding the following main results.
(Hardness for formulas and circuits): Given a monotone formula of size , it is coNP-hard to distinguish between "cheap" functions, where the maximum share size is 1 bit and the total share size is , and "expensive" functions, where the maximum share size is and the total share size is .
This latter bound nearly matches known secret-sharing constructions yielding a total share size of bits. For monotone circuits, we strengthen the bound on the expensive case to a maximum share size of and a total share size of . These results rule out the existence of instance-optimal compilers that map a formula to a secret-sharing scheme with complexity polynomially related to .
(Hardness for truth tables): Under cryptographic assumptions, either (1) every -bit slice function can be realized by a -size secret-sharing scheme, or (2) given a truth-table representation of of size , it is computationally infeasible to distinguish in time between cases where and . Option (1) would be considered a breakthrough result, as the best-known construction for slices has a sub-exponential complexity of (Liu, Vaikuntanathan, and Wee; Eurocrypt 2018). Our proof introduces a new worst-case-to-average-case reduction for slices, which may be of independent interest.
(Hardness for graphs): We examine the simple case where is given as a 2-DNF, represented by a graph whose edges correspond to 2-terms, and ask whether it is possible to distinguish between cases where the share size is constant and those where the share size is large, say . We establish several connections between this question and questions in communication complexity. For instance, we show that graphs admitting constant-cost secret sharing form a subclass of graphs with constant randomized communication complexity and constant-size adjacency sketches (Harms, Wild, and Zamaraev; STOC 2022). We leverage these connections to establish new lower bounds for specific graph families, derive a combinatorial characterization of graphs with constant-size linear secret-sharing schemes, and show that a natural class of myopic algorithms fails to distinguish cheap graphs from expensive ones.
Engorgio: An Arbitrary-Precision Unbounded-Size Hybrid Encrypted Database via Quantized Fully Homomorphic Encryption
This work proposes an encrypted hybrid database framework that combines vectorized data search and relational data query over quantized fully homomorphic encryption (FHE). We observe that, due to the lack of efficient encrypted data ordering capabilities, most existing encrypted database (EDB) frameworks do not support hybrid queries involving both vectorized and relational data. To further enrich query expressiveness while retaining evaluation efficiency, we propose Engorgio, a hybrid EDB framework based on quantized data ordering techniques over FHE. Specifically, we design a new quantized data encoding scheme along with a set of novel comparison and permutation algorithms to accurately generate and apply orders between large-precision data items. Furthermore, we optimize specific query types, including full table scan, batched query, and Top-k query, to enhance the practical performance of the proposed framework. In the experiment, we show that, compared to the state-of-the-art EDB frameworks, Engorgio is up to 28x–854x faster in homomorphic comparison, 65x–687x faster in homomorphic sorting, and 15x–1,640x faster over a variety of end-to-end relational, vectorized, and hybrid SQL benchmarks. Using Engorgio, the amortized runtime for executing a relational and hybrid query on a 48-core processor is under 3 and 75 seconds, respectively, over a 10K-row hybrid database.
Generic Construction of Secure Sketches from Groups
Secure sketches are designed to facilitate the recovery of originally enrolled data from inputs that may vary slightly over time. This capability is important in applications where data consistency cannot be guaranteed due to natural variations, such as in biometric systems and hardware security. Traditionally, secure sketches are constructed using error-correcting codes to handle these variations effectively, and principles of information theory ensure their security by managing the trade-off between data recoverability and confidentiality. In this paper, we show how to construct a new family of secure sketches generically from groups. We first introduce the notion of groups with the unique factorization property, which is of independent interest and serves as a building block for our secure sketch construction. Next, we provide an in-depth study of the underlying mathematical structures and define some computational and decisional hardness assumptions. As a result, we argue that our secure sketches are efficient, can handle a linear fraction of errors with respect to the ℓ1 distance, and are reusable and irreversible. To our knowledge, this generic group-based secure sketch construction is the first of its kind, and it offers a viable alternative to currently known secure sketches.
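For contrast with the group-based approach, here is the traditional code-offset construction the abstract refers to: the published sketch is the XOR of the reading with a random codeword, and recovery decodes the noisy offset. A toy with a 3x repetition code as the error-correcting component; all parameters are illustrative.

import secrets

R = 3            # repetition factor
K = 8            # data bits (so readings are K*R bits long)

def encode(bits):                        # repetition-code encoder
    return [b for b in bits for _ in range(R)]

def decode(bits):                        # majority-vote decoder
    return [int(sum(bits[i * R:(i + 1) * R]) > R // 2) for i in range(K)]

def sketch(w):
    """SS(w) = w XOR c for a random codeword c; publishing SS(w) hides w
    up to the code's redundancy."""
    c = encode([secrets.randbelow(2) for _ in range(K)])
    return [wi ^ ci for wi, ci in zip(w, c)]

def recover(w_noisy, ss):
    """Rec(w', SS(w)): decode w' XOR SS(w) back to a codeword, then XOR
    again to correct w' toward the originally enrolled w."""
    c_noisy = [wi ^ si for wi, si in zip(w_noisy, ss)]
    c = encode(decode(c_noisy))
    return [ci ^ si for ci, si in zip(c, ss)]

w = [secrets.randbelow(2) for _ in range(K * R)]
ss = sketch(w)
w_noisy = list(w)
w_noisy[5] ^= 1                          # flip one bit of the reading
assert recover(w_noisy, ss) == w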