Papers updated in last 7 days (79 results)
Succinct Computational Secret Sharing
A secret-sharing scheme enables a dealer to share a secret $s$ among $n$ parties such that only authorized subsets of parties, specified by a monotone access structure $f$, can reconstruct $s$ from their shares. Other subsets of parties learn nothing about $s$.
The question of minimizing the (largest) share size for a given access structure $f$ has been the subject of a large body of work. However, in most existing constructions for general access structures $f$, the share size is not much smaller than the size of some natural computational representation of $f$, a fact that has often been referred to as the ``representation size barrier'' in secret sharing.
In this work, we initiate a systematic study of succinct computational secret sharing (SCSS), where the secrecy requirement is computational and the goal is to substantially beat the representation size barrier. We obtain the following main results.
(1) SCSS via Projective PRGs. We introduce the notion of a *projective PRG*, a pseudorandom generator for which any subset of the output bits can be revealed while keeping the other output bits hidden, using a *short* projective seed. We construct projective PRGs with different levels of succinctness under a variety of computational assumptions, and apply them towards constructing SCSS for graph access structures, monotone CNF formulas, and (less succinctly) useful subclasses of monotone circuits and branching programs. Most notably, under the sub-exponential RSA assumption, we obtain a SCSS scheme that, given an arbitrary access structure $f$, represented by a truth table of size $N = 2^n$, produces shares of size $\mathrm{polylog}(N) = \mathrm{poly}(n)$ in time $\tilde{O}(N)$. For comparison, the share size of the best known information-theoretic schemes is $N^{0.58}$.
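The following toy sketch shows one natural RSA-style mechanism matching the projective-PRG interface: a single short projected seed reveals exactly the outputs in a chosen subset. The parameters are tiny and insecure, and the helper names and exponent bookkeeping are our own illustration, not necessarily the paper's construction.

```python
# Toy projective-PRG-style mechanism from RSA-type assumptions (DEMO ONLY).
from math import prod

N = 3233                      # 61 * 53, phi(N) = 3120 -- insecure demo modulus
EXPS = [7, 11, 17, 19]        # one prime e_i per output, all coprime to phi(N)
x = 42                        # master seed element in Z_N^*

def output_block(i):
    """i-th PRG output: x^(product of all e_j except e_i) mod N."""
    e = prod(EXPS) // EXPS[i]
    return pow(x, e, N)

def project(S):
    """Short projected seed revealing exactly the outputs in S."""
    e = prod(EXPS[j] for j in range(len(EXPS)) if j not in S)
    return pow(x, e, N)

def recover(S, sigma, i):
    """Recompute output i (for i in S) from the projected seed alone."""
    assert i in S
    e = prod(EXPS[j] for j in S if j != i)
    return pow(sigma, e, N)

S = {0, 2}
sigma = project(S)
assert all(recover(S, sigma, i) == output_block(i) for i in S)
print("subset", S, "recovered from one short projected seed:", sigma)
```

Intuitively, computing an output outside $S$ from the projected seed would require extracting a fresh prime root, which is where an RSA-style hardness assumption enters.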
(2) SCSS via One-way Functions. Under the (minimal) assumption that one-way functions exist, we obtain a near-quadratic separation between the total share size of computational and information-theoretic secret sharing. This is the strongest separation one can hope for, given the state of the art in secret sharing lower bounds. We also construct SCSS schemes from one-way functions for useful classes of access structures, including forbidden graphs and monotone DNF formulas. This leads to constructions of fully-decomposable conditional disclosure of secrets (also known as privacy-free garbled circuits) for general functions, represented by a truth table of size $N$, with share size $\mathrm{polylog}(N)$ and computation time $\tilde{O}(N)$, assuming sub-exponentially secure one-way functions.
Privacy-Preserving in Cloud Networks: An Efficient, Revocable and Authenticated Encrypted Search Scheme
With the widespread development of cloud networks, performing searches on encrypted data (without decryption) has become a critical issue. Public key authenticated encryption with keyword search (PAEKS) allows for the retrieval of encrypted data while resisting insider keyword guessing attacks (IKGAs). Most PAEKS schemes do not support access control in multi-receiver models. To address this limitation, attribute-based encryption has been introduced. However, the access privileges for the ciphertext may change over time, and such schemes are vulnerable to quantum computing attacks, which limits their applicability and compromises security in cloud networks. In this paper, we propose RABAEKS, the first revocable and authenticated attribute-based encrypted search scheme over lattices in cloud networks. Our design allows the cloud server to enforce access control for data receivers during the search process. For practical implementation, we introduce a revocation mechanism for receivers, enabling dynamic access control. We then formalize and analyze the security of our scheme rigorously. Through performance evaluations and comparisons, we demonstrate that the search time, ciphertext size, and trapdoor size of our RABAEKS scheme are independent of the number of keywords. Additionally, the computational overhead for ciphertext generation, trapdoor generation, and the search phase in RABAEKS is, at most, 0.91%, 39.21%, and 80.95% of that of previous approaches, respectively, indicating it is efficient and practical in cloud networks.
Transistor: a TFHE-friendly Stream Cipher
Fully Homomorphic Encryption (FHE) allows computations on encrypted data without requiring decryption, ensuring data privacy during processing. However, FHE introduces a significant expansion of ciphertext sizes compared to plaintexts, which results in higher communication. A practical solution to mitigate this issue is transciphering, where only the master key is homomorphically encrypted, while the actual data is encrypted using a symmetric cipher, usually a stream cipher. The server then homomorphically evaluates the stream cipher to convert the encrypted data into a homomorphically encrypted form.
We introduce Transistor, a stream cipher specifically designed for efficient homomorphic evaluation within the TFHE scheme, a widely-used FHE framework known for its fast bootstrapping and ability to handle low-precision data. Transistor operates on $\mathbb{F}_{17}$, which is chosen to optimize TFHE performance. Its components are carefully engineered to both control noise growth and provide strong security guarantees. First, a simple TFHE-friendly implementation technique for LFSRs allows us to use such components to cheaply increase the state size. At the same time, a small Finite State Machine is the only part of the state updated non-linearly, each non-linear operation corresponding in TFHE to a rather expensive Programmable Bootstrapping. This update is done using an AES-round-like transformation. But, in contrast to other stream ciphers like SNOW or LEX, our construction comes with information-theoretic security arguments proving that an attacker cannot obtain any information about the secret key from three or fewer consecutive keystream outputs. These information-theoretic arguments are then combined with a thorough analysis of potential correlations to bound the minimal keystream length required for recovering the secret key.
Our implementation of Transistor significantly outperforms the state of the art of TFHE transciphering, achieving a throughput of over 60 bits/s on a standard CPU, all while avoiding the need for an expensive initialization process.
Information Set Decoding for Ring-Linear Codes
Information Set Decoding (ISD) algorithms currently offer the most powerful tool to solve the two archetypal problems of coding theory, namely the codeword finding problem and the syndrome decoding problem. Traditionally, ISD algorithms have primarily been studied for linear codes over finite fields, equipped with the Hamming metric.
However, recently, other possibilities have also been explored. These algorithms have been adapted to different ambient spaces and metrics, such as the rank metric or the Lee metric over $\mathbb{Z}_m$.
In this paper, we propose a general framework for decoding ring-linear codes that exploits the underlying ring structure to improve traditional approaches. The core idea is to project the decoding instance onto a smaller alphabet, which may enable more efficient decoding algorithms. Our framework is compatible with any coordinate-additive metric, including Hamming and Lee, and can also be adapted to the rank metric. However, we show that the effectiveness of this approach strongly depends on the chosen metric. We illustrate how this framework can be leveraged to design decoding algorithms for the two aforementioned problems in Hamming, rank, and Lee metrics, along with their range of effectiveness. For each case, we provide the average computational complexity of the resulting algorithms.
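As a baseline for the algorithms being improved here, the sketch below implements plain Prange-style ISD for syndrome decoding over GF(2) in the Hamming metric; all names and parameters are illustrative, not the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_gf2(A):
    """Row-reduce A over GF(2); return (reduced matrix, pivot columns)."""
    A = A.copy() % 2
    pivots, r = [], 0
    for c in range(A.shape[1]):
        rows = np.nonzero(A[r:, c])[0]
        if rows.size == 0:
            continue
        A[[r, r + rows[0]]] = A[[r + rows[0], r]]   # swap a pivot row up
        for i in range(A.shape[0]):
            if i != r and A[i, c]:
                A[i] ^= A[r]
        pivots.append(c)
        r += 1
        if r == A.shape[0]:
            break
    return A, pivots

def prange(H, s, w, iters=2000):
    """Plain Prange ISD: find e with H e = s (mod 2) and weight(e) <= w."""
    n = H.shape[1]
    for _ in range(iters):
        perm = rng.permutation(n)                    # random information set
        M = np.concatenate([H[:, perm], s.reshape(-1, 1)], axis=1)
        R, piv = gauss_gf2(M)
        if len(piv) < H.shape[0] or piv[-1] == n:    # singular / inconsistent
            continue
        e = np.zeros(n, dtype=np.uint8)
        e[perm[piv]] = R[: len(piv), n]              # error on the pivot set
        if e.sum() <= w:
            return e
    return None

n, r, w = 24, 12, 2
H = rng.integers(0, 2, size=(r, n), dtype=np.uint8)
e0 = np.zeros(n, dtype=np.uint8); e0[rng.choice(n, w, replace=False)] = 1
s = H @ e0 % 2
e = prange(H, s, w)
assert e is not None and (H @ e % 2 == s).all() and e.sum() <= w
```

The paper's projection idea would sit on top of such a routine, mapping the instance to a smaller alphabet before decoding.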
Leap: A Fast, Lattice-based OPRF With Application to Private Set Intersection
Oblivious pseudorandom functions (OPRFs) are an important primitive in privacy-preserving cryptographic protocols. The growing interest in OPRFs, both in theory and practice, has led to the development of numerous constructions and variations. However, most of these constructions rely on classical assumptions. Potential future quantum attacks may limit the practicality of those OPRFs for real-world applications.
To close this gap, we introduce Leap, a novel OPRF based on heuristic lattice assumptions. Fundamentally, Leap builds upon the Spring [BBL+15] pseudorandom function (PRF), which relies on the learning with rounding assumption, and integrates techniques from multi-party computation, specifically Oblivious Transfer (OT) and Oblivious Linear Evaluation (OLE). With this combination of oblivious protocols, we construct an OPRF that evaluates in less than a millisecond on a modern computer.
Efficiency-wise, our prototype implementation achieves computation times of just 11 microseconds for the client and 750 microseconds for the server, excluding some base OT preprocessing overhead. Moreover, Leap requires an online communication cost of 23 kB per evaluation, where the client only has to send around 380 bytes online. To demonstrate the practical applicability of Leap, we present an efficient private set intersection (PSI) protocol built on top of Leap. This application highlights the potential for the integration of Leap into various privacy-preserving applications: We can compute an unbalanced set intersection with set sizes of 2^24 and 2^15 in under a minute of online time and just over two minutes overall.
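To sketch the standard OPRF-to-PSI recipe the application builds on: the client obtains PRF values of its elements under the server's key (obliviously, in the real protocol), then intersects them with the server's published PRF values. Below, the oblivious evaluation is mocked with a direct HMAC call; Leap's actual protocol, primitives, and message formats differ.

```python
# Minimal sketch of the OPRF -> PSI recipe (semi-honest, hashing-based).
import hmac, hashlib

def prf(k: bytes, x: bytes) -> bytes:
    return hmac.new(k, x, hashlib.sha256).digest()

server_key = b"\x01" * 32
server_set = {b"alice@example.com", b"bob@example.com", b"carol@example.com"}
client_set = {b"bob@example.com", b"dave@example.com"}

# Server publishes PRF digests of its set -- unlinkable to items without k.
server_digests = {prf(server_key, x) for x in server_set}

# Client learns prf(server_key, x) for each of its x via the OPRF protocol
# (mocked here by calling prf directly), then intersects locally.
client_digests = {x: prf(server_key, x) for x in client_set}
intersection = {x for x, d in client_digests.items() if d in server_digests}
print(intersection)  # {b'bob@example.com'}
```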
Sassafras: Efficient Batch Single Leader Election
In a single secret leader election (SSLE), a set of participants elect exactly one leader, who remains anonymous until they announce themselves by providing a proof. SSLE protocols are used in proof-of-stake blockchains to elect the leader who publishes the next block. Anonymity of the leader is an important security property, as the leader makes for an attractive target and may be subject to denial-of-service (DoS) attacks.
In this work, we propose a novel single leader election protocol, called Sassafras. We depart from the common approach of shuffling for constructing SSLE and instead employ a ring verifiable random function, which hides the identity of the leader within a ring of participants. Moreover, Sassafras is designed for batch leader elections, in which a single leader is selected for several elections at once. This allows the rate of leader election to match the rate of block production, an often-sought property not met by most SSLE protocols in the literature. We characterize single leader election with batching in the form of an ideal functionality in the Universal Composability (UC) framework and prove that Sassafras realizes this functionality. Sassafras is secure against an adaptive adversary, while achieving a slightly relaxed notion of anonymity for leaders. Sassafras features exceptionally low communication and computational complexity, outperforming other SSLE protocols by an order of magnitude or more.
Adversarially Robust Bloom Filters: Privacy, Reductions, and Open Problems
A Bloom filter is a space-efficient probabilistic data structure that represents a set $S$ of elements from a larger universe $U$. This efficiency comes with a trade-off, namely, it allows for a small chance of false positives. When you query the Bloom filter about an element $x$, the filter will respond 'Yes' if $x \in S$. If $x \notin S$, it may still respond 'Yes' with probability at most $\varepsilon$. We investigate the adversarial robustness and privacy of Bloom filters, addressing open problems across three prominent frameworks: the game-based model of Naor-Oved-Yogev (NOY), the simulator-based model of Filic et al., and learning-augmented variants. We prove the first formal connection between the Filic and NOY models, showing that Filic correctness implies AB-test resilience. We resolve a longstanding open question by proving that PRF-backed Bloom filters fail the NOY model's stronger BP-test. Finally, we introduce the first private Bloom filters with differential privacy guarantees, including constructions applicable to learned Bloom filters. Our taxonomy organizes the space of robustness and privacy guarantees, clarifying relationships between models and constructions.
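For reference, a textbook Bloom filter, the object whose robustness is analyzed above. Deriving the $k$ bit positions from a public hash is exactly what adversarial queries can exploit; the PRF-backed variants discussed above replace that public hash with a keyed function. Parameters here are illustrative.

```python
import hashlib

class BloomFilter:
    """Textbook Bloom filter: no false negatives; false-positive rate
    roughly (1 - exp(-k*n/m))^k after n insertions."""
    def __init__(self, m_bits: int, k_hashes: int):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray((m_bits + 7) // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):   # derive k indices from one public hash
            h = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(h, "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def query(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter(m_bits=1024, k_hashes=5)
bf.add(b"alice"); bf.add(b"bob")
assert bf.query(b"alice") and bf.query(b"bob")   # always 'Yes' for members
print("eve in set?", bf.query(b"eve"))           # 'Yes' only with small prob.
```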
A Theoretical Perspective on the Formal Verification of IoT Protocols Using LTL and Rewriting Logic in Maude
As Internet of Things (IoT) systems become increasingly complex, verifying communication protocols is essential to guarantee their security and correctness. This paper gives a brief introduction to the theoretical concepts of how to use formal techniques, LTL and rewriting logic within the Maude system, to verify the security and proper behavior of protocols. While rewriting logic expresses how various events occur over time, LTL expresses how a property can be formulated. This theoretical perspective is intended to inform future applications and research.
Experimentally studying path-finding problem between conjugates in supersingular isogeny graphs: Optimizing primes and powers to speed-up cycle finding
We study the problem of finding a path between conjugate supersingular elliptic curves, a sub-routine of the cycle-finding algorithm in the graph of all supersingular elliptic curves over $\mathbb{F}_{p^2}$, connected by $\ell$-isogenies for small primes $\ell$. We prove that asymptotically the most time-efficient way of traversing the graph to find conjugate paths is to consider several steps at a time, a question posed in Remark 3.5 by Eisenträger et al. (2020). We see both theoretically and experimentally that selecting the primes $\ell$ and their powers optimally speeds up the cycle finding.
Parallel Repetition for Post-Quantum Arguments
In this work, we show that parallel repetition of public-coin interactive arguments reduces the soundness error at an exponential rate even in the post-quantum setting. Moreover, we generalize this result to hold for threshold verifiers, where the $k$-fold parallel repeated verifier accepts if and only if at least $t$ of the $k$ executions are accepted (for some threshold $t$). Prior to this work, these results were known only when the cheating prover was assumed to be classical.
We also prove a similar result for three-message private-coin arguments. Previously, Bostanci, Qian, Spooner, and Yuen (STOC 2024) proved such a parallel repetition result in the more general setting of quantum protocols, where the verifier and communication may be quantum. We consider only protocols where the verifier is classical, but we obtain a simplified analysis, and one that covers the more general setting of threshold verifiers.
Improving Gaudry-Schost algorithm for multi-dimensional discrete logarithm calculations: Implementations relevant to electronic voting and cash schemes
We focus on improving the Gaudry-Schost algorithm, which solves the multi-dimensional discrete logarithm problem. We propose a modified algorithm that reduces the cost of each iteration of the Gaudry-Schost algorithm, with the savings factor depending on the dimension of the problem and on the cost of evaluating the underlying iteration functions. For subgroups modulo a prime $p$, which arise in electronic voting and cash schemes, the cost of our algorithm diminishes further with the number of iterations needed to reach distinguished points.
Our implementation of the algorithm confirms the theoretical analysis. The reduction in cost per iteration adds up to a substantial advantage over the complete run of iterations needed to find the logarithms. Both theory and experiments confirm that the new algorithm reduces the dominant multiplication cost along with other additional costs, and the gain grows as the group size increases. We obtained a speed-up of several times for groups of cryptographic size.
Our algorithm leads to a reduction in the security of schemes based on the multi-dimensional discrete logarithm problem or on multi-dimensional pseudo-random walks, such as electronic voting and cash schemes, point counting, speeding up elliptic-curve arithmetic, group actions, CSIDH, etc.
One-shot Private Aggregation
Our work aims to minimize interaction in secure computation due to the high cost and challenges associated with communication rounds, particularly in scenarios with many clients. In this work, we revisit the problem of secure aggregation in the single-server setting, where a single evaluation server can securely aggregate client-held individual inputs. Our key contribution is the introduction of One-shot Private Aggregation (OPA), where clients speak only once (or even choose not to speak) per aggregation evaluation. Since each client communicates only once per aggregation, this simplifies managing dropouts and dynamic participation, contrasting with multi-round protocols and aligning with plaintext secure aggregation, where clients interact only once.
We construct OPA based on LWR, LWE, class groups, and DCR, and we demonstrate applications to privacy-preserving Federated Learning (FL) where clients speak once. This is a sharp departure from prior multi-round FL protocols whose study was initiated by Bonawitz et al. (CCS, 2017). Moreover, unlike the YOSO (You Only Speak Once) model for general secure computation, OPA eliminates complex committee selection protocols to achieve adaptive security. Beyond asymptotic improvements, OPA is practical, outperforming state-of-the-art solutions. We benchmark logistic regression classifiers for two datasets, while also building an MLP classifier to train on MNIST, CIFAR-10, and CIFAR-100 datasets.
We build two flavors of OPA: (1) from a (threshold) key-homomorphic PRF and (2) from a seed-homomorphic PRG and secret sharing.
Our threshold key-homomorphic PRF addresses shortcomings observed in the DDH- and LWR-based constructions of Boneh et al. (CRYPTO, 2013), making it a contribution of independent interest. Moreover, we also present new threshold key-homomorphic PRFs based on class groups, DCR, or the LWR assumption.
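A toy illustration of why key-homomorphism enables one-shot masking, using the DDH-style PRF $F_k(t) = H(t)^k$ in a tiny demo group (not the paper's class-group, DCR, or LWR instantiations): since $F_a(t) \cdot F_b(t) = F_{a+b}(t)$, the masks cancel whenever the keys sum to zero, and the product of the masked inputs reveals only the aggregate.

```python
import hashlib, random

p, q = 2039, 1019                      # tiny Schnorr-style group, DEMO ONLY
g = 4                                  # generator of the order-q subgroup QR_p

def H(t: bytes) -> int:
    """Hash a round tag into the order-q subgroup (square to land in QR_p)."""
    return pow(int.from_bytes(hashlib.sha256(t).digest(), "big") % p, 2, p)

def F(k: int, t: bytes) -> int:
    """Key-homomorphic PRF: F_k(t) = H(t)^k, so F_a(t)*F_b(t) = F_{a+b}(t)."""
    return pow(H(t), k, p)

random.seed(0)
inputs = [3, 5, 9]                     # the clients' private values
keys = [random.randrange(q) for _ in inputs]
agg_key = (-sum(keys)) % q             # balancing key share for the aggregator
t = b"round-42"

masked = [pow(g, x, p) * F(k, t) % p for x, k in zip(inputs, keys)]
total = F(agg_key, t)
for c in masked:
    total = total * c % p              # masks cancel: all keys sum to 0 mod q
assert total == pow(g, sum(inputs), p) # aggregator learns only g^(sum)
```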
Schnorr Signatures are Tightly Secure in the ROM under a Non-interactive Assumption
We show that the widely-used Schnorr signature scheme meets existential unforgeability under chosen-message attack (EUF-CMA) in the random oracle model (ROM) if the circular discrete-logarithm (CDL) assumption holds in the underlying group. CDL is a new, non-interactive and falsifiable variant of the discrete-logarithm (DL) assumption that we introduce. Our reduction is completely tight, meaning the constructed adversary against CDL has essentially the same running time and success probability as the assumed forger. This serves to justify the size of the underlying group for Schnorr signatures used in practice.
To our knowledge, we are the first to exhibit such a reduction. Indeed,
prior work required interactive and non-falsifiable assumptions (Bellare and Dai, INDOCRYPT 2020) or additional idealized models beyond the ROM like the algebraic group model (Fuchsbauer, Plouviez and Seurin, EUROCRYPT 2020). To further demonstrate the applicability of CDL, we show that Sparkle+ (Crites, Komlo and Maller, CRYPTO 2023), a threshold signing scheme for Schnorr, is tightly secure (under static corruptions) assuming CDL. Finally, we justify CDL by showing it holds in two carefully chosen idealized models that idealize different aspects of the assumption.
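For reference, textbook Schnorr signing and verification over a toy group (demo parameters, not secure), to fix notation for the scheme whose tight EUF-CMA security is established above.

```python
import hashlib, random

p, q, g = 2039, 1019, 4           # p = 2q + 1; g generates the order-q subgroup

def Hc(R: int, m: bytes) -> int:  # challenge hash, modeled as a random oracle
    return int.from_bytes(hashlib.sha256(R.to_bytes(2, "big") + m).digest(),
                          "big") % q

sk = random.randrange(1, q)
pk = pow(g, sk, p)

def sign(m: bytes):
    r = random.randrange(1, q)
    R = pow(g, r, p)              # commitment
    c = Hc(R, m)                  # challenge
    return R, (r + c * sk) % q    # response s = r + c*sk mod q

def verify(m: bytes, sig) -> bool:
    R, s = sig
    return pow(g, s, p) == R * pow(pk, Hc(R, m), p) % p

assert verify(b"hello", sign(b"hello"))
```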
Revisiting Discrete Logarithm Reductions
A reduction showing that the hardness of the discrete logarithm (DL) assumption implies the hardness of the computational Diffie-Hellman (CDH) assumption in groups of order $p$, where $p-1$ is smooth, was first presented by den Boer [Crypto '88].
We also consider groups of prime order $p$, where $p-1$ is somewhat smooth (say, every prime that divides $p-1$ is less than some bound $B$). Several practically relevant groups satisfy this condition.
1. We present a concretely efficient version of the reduction for such groups.
In particular, among practically relevant groups, we obtain the most efficient and tightest reduction in the literature for BLS12-381, showing that DL and CDH are equivalent with essentially no loss in concrete security.
2. By generalizing the reduction, we show that in these groups the $q$-Power Discrete Logarithm ($q$-PDL) assumption implies the $q$-Diffie-Hellman Exponent ($q$-DHE) assumption, where $q$ is polynomial in the security parameter.
On the negative side, we show there is no generic reduction demonstrating that $q$-PDL implies the $q$-Generalized Diffie-Hellman Exponent ($q$-GDHE) assumption.
This is in stark contrast with the algebraic group model, where this implication holds.
PSYLOCKE: Provably Secure Logic Locking with Practical Efficiency
Logic locking is an obfuscation technique designed to protect the intellectual property of hardware designs and has attracted considerable attention for over a decade. However, most logic locking schemes have been developed heuristically, leading the field into a cat-and-mouse game between attackers and defenders. Indeed, several proposed schemes have already been broken. While recent works have introduced provably secure logic locking, they often incur impractical overhead or fail to support the ASIC design paradigm while offering strong theoretical security guarantees.
In this work, we propose PSYLOCKE, a provably secure and practically efficient logic locking scheme that balances formal security guarantees with implementation feasibility. We introduce a new security paradigm that formalizes logic locking under predetermined allowable leakage, such as circuit topology, and we provide refined definitions of resilience against specific attacks. Our analysis bridges general security definitions and attack resilience, quantifying how leakage impacts the success of real-world attacks. PSYLOCKE is provably secure under topology leakage and achieves significant efficiency improvement compared to existing provably secure logic locking schemes. Alongside our theoretical analysis, we demonstrate through quantitative evaluations using widely-used benchmark circuits that PSYLOCKE strikes a favorable balance between practical performance and security. Concretely, PSYLOCKE reduced the Area-Power-Delay overhead by an order of magnitude while achieving the same security level, compared to the existing provably secure logic locking scheme.
Ideally HAWKward: How Not to Break Module-LIP
The module-Lattice Isomorphism Problem (module-LIP) was introduced by Ducas et al. (ASIACRYPT 2022) and used within the signature scheme and NIST candidate HAWK. Mureau et al. (EUROCRYPT 2024) pointed out that over certain number fields $K$, the problem can be reduced to enumerating the solutions of an equation of the form $x^2 + y^2 = q$ (where $q$ is given and $x, y$ are the unknowns). Moreover, one can always reduce to a similar equation which has only *few* solutions. This key insight led to a heuristic polynomial-time algorithm for solving module-LIP on those specific instances. Yet this result does not threaten HAWK, for which the problem can be reduced to enumerating the solutions of $x\bar{x} + y\bar{y} = q$ (where $q$ is given and $x, y$ are the unknowns). We show that, in all likelihood, solving this equation requires the enumeration of a *too large* set to be feasible, thereby making a straightforward adaptation of the approach of Mureau et al. irrelevant.
Delegated Multi-party Private Set Intersection from Secret Sharing
In this work, we address the problem of Delegated PSI (D-PSI), where a cloud server is introduced to handle most computational and communication tasks. D-PSI enables users to securely delegate their private sets to the cloud, ensuring the privacy of their data while allowing efficient computation of the intersection. The cloud operates under strict security requirements, learning nothing about the individual sets or the intersection result. Moreover, D-PSI minimizes user-to-user communication and supports "silent" processing, where the cloud can perform computations independently without user interaction, apart from set delegation and result retrieval.
We formally define the D-PSI problem and propose a novel construction that extends beyond two-party scenarios to support multi-party settings. Our construction adheres to the D-PSI requirements, including security against semi-honest adversaries, and achieves computational and communication complexities close to the ideal "perfect" D-PSI protocol. Additionally, we demonstrate the practicality of our approach through a baseline implementation and an optimized version that further reduces computational overhead. Our results establish a strong foundation for secure and efficient PSI in real-world cloud computing scenarios.
An Innovative Lightweight Symmetric Encryption Algorithm Integrating NeoAlzette ARX S-box and XCR CSPRNG
This paper introduces "Little OaldresPuzzle_Cryptic," a novel lightweight symmetric encryption algorithm.
At the core of this algorithm are two main cryptographic components: the NeoAlzette permutation S-box based on ARX (Addition-Rotation-XOR) primitives and the innovative pseudo-random number generator XorConstantRotation (XCR), used exclusively in the key expansion process. The NeoAlzette S-box, a non-linear function for 32-bit pairs, is meticulously designed for both encryption strength and operational efficiency, ensuring robust security in resource-constrained environments. During the encryption and decryption processes, a pseudo-randomly selected mixed linear diffusion function, distinct from XCR, is applied, enhancing the complexity and unpredictability of the encryption.
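To make the ARX design space concrete, here is a generic 32-bit ARX mixing step; the rotation amounts and round constants are illustrative only and are not the NeoAlzette specification from the paper.

```python
# A generic 32-bit ARX mixing step (in the spirit of Alzette-style boxes).
MASK = 0xFFFFFFFF

def rotl(x, r):
    return ((x << r) | (x >> (32 - r))) & MASK

def arx_round(x, y, c):
    x = (x + rotl(y, 31)) & MASK   # Addition: non-linear over GF(2)
    y ^= rotl(x, 24)               # Rotation + XOR: linear diffusion
    x ^= c                         # round constant breaks symmetry
    return x, y

x, y = 0x01234567, 0x89ABCDEF
for c in (0xB7E15162, 0xBF715880, 0x38B4DA56, 0x324E7738):
    x, y = arx_round(x, y, c)
print(hex(x), hex(y))
```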
We comprehensively explore the various technical aspects of the Little OaldresPuzzle_Cryptic algorithm.
Its design aims to balance speed and security in the encryption process, particularly for high-speed data transmission scenarios. Recognizing that resource efficiency and execution speed are crucial for lightweight encryption algorithms, without compromising security, we conducted a series of statistical tests to validate the cryptographic security of our algorithm. These tests included assessments of resistance to linear and differential cryptanalysis, among other measures.
By combining the NeoAlzette S-box with sophisticated key expansion using XCR, and integrating the pseudo-randomly selected mixed linear diffusion function in its encryption and decryption processes, our algorithm significantly enhances its capability to withstand advanced cryptographic analysis techniques while maintaining lightweight and efficient operation. Our test results demonstrate that the Little OaldresPuzzle_Cryptic algorithm effectively supports the encryption and decryption needs of high-speed data, ensuring robust security and making it an ideal choice for various modern cryptographic application scenarios.
Keywords: Symmetric Encryption Algorithm, Lightweight Cryptography, ARX Primitives, PRNG, NeoAlzette S-boxes, XorConstantRotation, Diffusion and Confusion Layers, Cryptographic Security, Statistical Tests, Resource-Constrained Environments.
Computational Attestations of Polynomial Integrity Towards Verifiable Back-Propagation
Recent advancements in machine learning accuracy and utility have been driven by the effective combination of sophisticated models with high-performance computational scaling. As the development of large-scale models shifts away from commodity hardware to outsourced computation, it becomes paramount to ensure that the training process is executed with integrity and transparency. This encompasses verifying that adequate computational resources were expended and that the resulting model is accurate, rather than the product of skipped steps or resource-saving shortcuts by the external provider. Building on our previous efforts, which demonstrated the computational feasibility of using this system to argue correctness for differentially-private linear regression, we extend those results to achieve fully provable back-propagation—a cornerstone operation in modern machine learning training. Our system achieves complete zero-knowledge, revealing nothing about the input data during training, and ensures quantum security by relying on no weak cryptographic primitives. Efficiency is substantially increased through the use of a fixed-point decimal representation, reducing the computational overhead typically associated with floating-point arithmetic. Notably, our solution is doubly efficient, achieving a logarithmic-time verifier and a linear-time prover. Implemented entirely in Rust without reliance on external machine learning libraries, and executed within a cryptographically secure virtual machine, this work represents a significant advancement toward verifiable, secure, and efficient outsourced machine learning computations.
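A minimal sketch of the scaled-integer idea mentioned above: fixed-point values become ring elements whose additions and multiplications are exact, avoiding the heavy gadgets needed to arithmetize IEEE-754 floats inside a proof system. The scale and encoding below are illustrative, not the paper's exact representation.

```python
SCALE = 10**6   # 6 decimal digits of fraction (illustrative)

def enc(x: float) -> int:          # real -> fixed-point integer
    return round(x * SCALE)

def mul(a: int, b: int) -> int:    # rescale after each multiplication
    return (a * b) // SCALE

w, x, b = enc(0.75), enc(-1.25), enc(0.1)
y = mul(w, x) + b                  # one neuron's pre-activation, exactly
print(y / SCALE)                   # -0.8375
```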
Hydrangea: Optimistic Two-Round Partial Synchrony with One-Third Fault Resilience
We present Hydrangea, a partially synchronous Byzantine fault-tolerant state machine replication protocol that achieves a latency of two rounds optimistically while maintaining high adversarial resilience. In particular, for a system of parties, if up to parties are faulty, then the protocol can obtain a latency of two rounds. Otherwise, the protocol can obtain a latency of three rounds while tolerating Byzantine faults and crash faults {\em simultaneously}.
SEAF: Secure Evaluation on Activation Functions with Dynamic Precision for Secure Two-Party Inference
Secure evaluation of non-linear functions is one of the most expensive operations in secure two-party computation, particularly for activation functions in privacy-preserving machine learning (PPML). This work introduces SEAF, a novel framework for efficient Secure Evaluation on Activation Functions. SEAF is based on the linear approximation approach, but enhances it by introducing two key innovations: Trun-Eq based interval test protocols and linear approximation with dynamic precision, which have the potential for broader applicability. Furthermore, we classify common activation functions into several categories, and present specialized methods to evaluate them using our enhanced techniques. Our implementation of SEAF demonstrates substantial speedups on activation functions compared to SirNN (S&P'21), and it outperforms Iron (NeurIPS'22) and Bolt (S&P'24) on their non-linear layers by large factors. For end-to-end secure inference on BERT, the activation functions account for a significant fraction of the total runtime in Iron and Bolt; our optimized evaluation reduces these proportions substantially, eliminating the activations as a bottleneck in secure inference.
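A cleartext illustration of interval-test-plus-linear-approximation evaluation: the secure protocol performs the same arithmetic on secret-shared fixed-point values. The knot spacing and the GELU target here are illustrative, not SEAF's actual parameters.

```python
import math

KNOTS = [-4 + i * 0.5 for i in range(17)]   # piecewise-linear grid on [-4, 4]

def gelu(x):  # reference activation
    return 0.5 * x * (1 + math.erf(x / math.sqrt(2)))

def pw_linear(x):
    if x <= KNOTS[0]:  return 0.0           # saturated region
    if x >= KNOTS[-1]: return x             # identity region
    i = int((x - KNOTS[0]) // 0.5)          # interval test -> segment index
    x0, x1 = KNOTS[i], KNOTS[i + 1]
    y0, y1 = gelu(x0), gelu(x1)
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

err = max(abs(pw_linear(x / 100) - gelu(x / 100)) for x in range(-400, 401))
print(f"max abs error on [-4,4]: {err:.4f}")
```

Dynamic precision, in this picture, corresponds to choosing the fixed-point precision per segment instead of globally.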
Quantum Computing without the Linear Algebra
Quantum computing is often introduced through the lens of linear algebra with notation that is inherited from quantum mechanics. In this paper, we take an operational view of quantum computing that is easy to demonstrate programmatically. The hope is that this viewpoint will (1) demystify quantum computing and make it more accessible to a wider audience, particularly computer science students and software engineers, and (2) possibly serve as the basis of a formal foundation for automatically reasoning about quantum programs.
We treat the state of a quantum computer as a set and present the operations of a quantum computer—quantum gates and measurements—using familiar functional set transformations (think map, filter, fold, etc.). By the end of the paper, we will have implemented a simple quantum circuit simulator that can be used to simulate small quantum circuits. The code is available at https://github.com/qqq-wisc/qwla.
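In that spirit, here is a minimal sketch of such an operational simulator, representing the state as a mapping from basis bitstrings to amplitudes and gates as map/merge transformations. The paper's actual representation and API may differ; see the linked repository.

```python
from math import sqrt

def h(state, q):
    """Hadamard on qubit q: map each basis string to two, then merge."""
    out = {}
    for bits, amp in state.items():
        for b in "01":
            sign = -1 if bits[q] == "1" and b == "1" else 1
            new = bits[:q] + b + bits[q+1:]
            out[new] = out.get(new, 0) + sign * amp / sqrt(2)
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def cx(state, c, t):
    """CNOT: a pure relabeling (a bijection on basis strings)."""
    flip = lambda bits: bits[:t] + str(int(bits[t]) ^ int(bits[c])) + bits[t+1:]
    return {(flip(b) if b[c] == "1" else b): a for b, a in state.items()}

state = {"00": 1.0}            # |00>
state = h(state, 0)            # superposition via 'map'
state = cx(state, 0, 1)        # Bell pair
print(state)                   # {'00': 0.707..., '11': 0.707...}
```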
CAPSS: A Framework for SNARK-Friendly Post-Quantum Signatures
In this paper, we present a general framework for constructing SNARK-friendly post-quantum signature schemes based on minimal assumptions, specifically the security of an arithmetization-oriented family of permutations. The term "SNARK-friendly" here refers to the efficiency of the signature verification process in terms of SNARK constraints, such as R1CS constraints. Within the CAPSS framework, signature schemes are designed as proofs of knowledge of a secret preimage of a one-way function, where the one-way function is derived from the chosen permutation family. To obtain compact signatures with SNARK-friendly verification, we rely on SmallWood, a recently proposed hash-based zero-knowledge argument scheme well suited for statements arising in this context. From this proof system which we tweak towards SNARK-friendliness, the CAPSS framework offers a generic transformation of any arithmetization-oriented permutation family into a SNARK-friendly post-quantum signature scheme. We provide concrete instances built on permutations such as Rescue-Prime, Poseidon, Griffin, and Anemoi. For the Anemoi family, achieving 128-bit security, our approach produces signatures of sizes ranging from 9 to 13.4 KB, with R1CS constraints between 19K and 29K. This represents a 4-6x reduction in signature size and a 5-8x reduction in R1CS constraints compared to Loquat (CRYPTO 2024), a SNARK-friendly post-quantum signature scheme based on the Legendre PRF.
A Framework for Compiling Custom Languages as Efficiently Verifiable Virtual Machines
In this work, we develop a framework for compiling languages into efficient Interactive Oracle Proofs (IOPs, or ‘circuits’), motivated by applications in verifiable Virtual Machine (zkVM) design. We provide a set of sufficient conditions on a language under which it can be compiled into an efficient IOP, alongside corresponding performance costs. We identify a subclass of languages, which we denote as traversable, and demonstrate how traversable languages can be efficiently compiled as circuits using established techniques.
To demonstrate the efficacy of our compilation framework, we develop a zkVM for the Nock programming language by (1) formalizing the existing Nock specification, and (2) applying our techniques to design an efficient IOP representation for the Nock VM. The resulting circuit is small, on par with existing state-of-the-art zkVM designs and can be generated for any traversable language in a generic way.
Kahrobaei–Koupparis DSS: universal forgery
Regardless of the choice of parameters, knowledge of a single signed message, i.e., a pair message/signature, produced by Kahrobaei-Koupparis digital signature scheme is sufficient to forge a valid signature for any other message.
Laconic PSI on Authenticated Inputs and Applications
A common issue with using secure computation in practice is that its security does not place any restrictions on what an adversary can use as input in the protocol. In this work, we focus on the practically-motivated setting of (two-message, labeled) private set intersection (PSI), and advocate for a clean and versatile solution to this problem: PSI on authenticated inputs.
Our central contributions are summarized as follows.
- We formulate a novel definition of PSI on authenticated inputs that has the potential for use in several applications, from content moderation in end-to-end encrypted systems to watchlists in anonymous e-cash systems.
- We design a concretely-efficient and laconic (i.e., the size of the receiver's message is independent of its set size) protocol for PSI on authenticated inputs.
- We build on our PSI protocol to obtain the first laconic set pre-constrained group signature scheme, improving on that of Bartusek et al. (Eurocrypt 23).
We also explore various optimizations to our basic protocol, including reducing the receiver's concrete running time, and a tradeoff between CRS size and message size.
Early Stopping is Cheap
Minimizing both the round and communication complexity of Byzantine agreement (BA) is a fundamental question in distributed computing. A long line of works has focused on early-stopping deterministic protocols that can terminate within a number of synchronous rounds proportional to $f$, the *actual number* of corruptions in an execution, as opposed to an upper bound $t$. This can lead to major benefits when $f \ll t$. A very different style of randomized protocol has focused on *player replaceable* BA protocols with communication complexity linear in the number of parties and adaptive security, which rely on only a small and rotating subcommittee of parties to ever speak in the protocol. One downside of existing player-replaceable protocols is that they require a number of rounds proportional to the security parameter to terminate with overwhelming probability.
For applications demanding high security guarantees, this can easily become the bottleneck of a player-replaceable protocol. Motivated by this gap in the literature, we give the first protocol that is simultaneously player-replaceable *and* early stopping (with overwhelming probability). Our protocol is secure against a constant fraction of adaptive Byzantine corruptions and, given $f$ actual corruptions, terminates within a number of rounds proportional to $f$ with overwhelming probability. Moreover, our protocol has constant expected round complexity and communication complexity linear in the number of parties, up to polylogarithmic factors.
b4M: Holistic Benchmarking for MPC
Secure Multi-Party Computation (MPC) is becoming more and more usable in practice. This practicality stems primarily from well-established general-purpose MPC frameworks, such as MP-SPDZ. However, to evaluate the practicality of an MPC program in the envisioned environments, many benchmarks still need to be run. We identified three challenges in the context of performance evaluations within the MPC domain: first, the cumbersome process of holistically benchmarking MPC programs; second, the difficulty of finding the best-possible MPC setting for a given task and envisioned environment; and third, achieving consistent evaluations of the same task or problem area across projects and papers. In this work, we address the gap of tedious and complex benchmarking of MPC. Related works so far mostly provide comparisons of certain programs with different engines.
To the best of our knowledge, this is the first time the whole benchmarking pipeline has been automated; it is provided by our open-sourced framework Holistic Benchmarking for MPC (b4M). b4M is easy to configure using TOML files, outputs ready-to-use graphs, and provides even the MPC engine itself as its own benchmark dimension. Furthermore, it takes three relatively easy steps to add further engines: first, integrate engine-specific commands into b4M's runner class; second, output performance metrics in b4M's format; third, provide a Docker container for the engine's parties.
To showcase b4M, we provide an exemplary evaluation for the computation of the dot product and logistic regression using a real-world dataset. With this work, we move towards fully-automated evaluations of MPC programs, protocols, and engines, which smooths the setup process and the exploration of various trade-offs. Hence, b4M advances MPC development by improving its benchmarking usability.
Low-cost anonymous reputation update for IoT applications
This paper presents a novel approach to zero-trust anonymous reputation update in crowd sensing IoT applications. We use a suite of cryptographic functions to achieve anonymity, including unlinkability of sensing reports to the principals that submit them and to one another, while enabling the infrastructure to reliably quantify the degree of trust expressed as a reputation level. The protocol is low-cost for the anonymous participant due to the use of cheap standard algorithms: low-exponent modular exponentiation and cryptographic hashing, which makes it quite suitable for IoT.
Improvements on the schemes VOX and QR UOV: When minus is a plus
In this article, we present an improvement of QR UOV and a repair of VOX.
Thanks to the use of the minus perturbation, we are able to fully exploit the parameters in order to reduce the public key. It also helps us to repair the scheme VOX. With QR UOV$^-$ we are able to significantly reduce the size of the public key at the cost of a small increase in the signature size. We show that VOX does not bring a significant advantage in terms of sizes compared to QR UOV but provides a second layer of security.
We also show that the minus perturbation is very useful for schemes that use the QR (quotient ring) perturbation.
Better GBFV Bootstrapping and Faster Encrypted Edit Distance Computation
We propose a new iterative method to convert a ciphertext from the Generalized BFV (GBFV) to the regular BFV scheme. In particular, our conversion starts from an encrypted plaintext that lives in a large cyclotomic ring modulo a small-norm polynomial $t(x)$, and gradually changes the encoding to a smaller cyclotomic ring modulo a larger integer $t$. Previously, only a trivial conversion method was known, which did not change the underlying cyclotomic ring.
Using our improved conversion algorithm, we can bootstrap the GBFV scheme almost natively, in the sense that only a very small fraction of the operations is computed inside regular BFV. Specifically, we evaluate (an adapted version of) the slot-to-coefficient transformation entirely in the GBFV scheme, whereas the previous best method used the BFV scheme for that transformation. This insight allows us to bootstrap either with less noise growth, or much faster than the state-of-the-art.
We implement our new bootstrapping in Microsoft SEAL. Our experiments show that, for the same remaining noise budget, our bootstrapping runs in only 800 ms when working with ciphertexts containing 1024 slots. This is faster than the state-of-the-art.
Finally, we use our improved GBFV bootstrapping in an application that computes an encrypted edit distance. Compared to the recent TFHE-based Leuvenshtein algorithm, our GBFV version is almost two orders of magnitude faster in the amortized sense.
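For reference, the function being computed under encryption is the standard Levenshtein recurrence, shown here in the clear:

```python
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

assert levenshtein("kitten", "sitting") == 3
```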
Universally Composable Succinct Vector Commitments and Applications
We develop a toolbox for modular construction and analysis of succinct, non-interactive commitments and vector commitments in the random oracle model, while guaranteeing universally composable security. To demonstrate its power, we use the toolbox to construct and analyze a modular variant of the Kilian-Micali ZK-SNARK. Along the way we also propose a new UC formulation of a global random oracle, that avoids a weakness in existing formulations and also enables expressing more nuanced, session-specific abstractions. We hope that this toolbox will be useful for building secure applications in settings where both succinctness and non-interactivity are key.
TEEMS: A Trusted Execution Environment based Metadata-protected Messaging System
Ensuring privacy of online messaging remains a challenge. While the contents or data of online communications are often protected by end-to-end encryption, the metadata of communications are not. Metadata such as who is communicating with whom, how much, and how often, are leaked by popular messaging systems today.
In the last four decades we have witnessed a rich literature of designs towards metadata-protecting communications systems (MPCS). While recent MPCS works often target metadata-protected messaging systems, no existing construction simultaneously attains four desirable properties for messaging systems, namely (i) low latency, (ii) high throughput, (iii) horizontal scalability, and (iv) asynchronicity. Existing designs often capture disjoint subsets of these properties. For example, PIR-based approaches achieve low latency and asynchronicity but have low throughput and lack horizontal scalability, mixnet-based approaches achieve high throughput and horizontal scalability but lack asynchronicity, and approaches based on trusted execution environments (TEEs) achieve high throughput and asynchronicity but lack horizontal scalability.
In this work, we present TEEMS, the first MPCS designed for metadata-protected messaging that simultaneously achieves all four desirable properties. Our distributed TEE-based system uses an oblivious mailbox design to provide metadata-protected messaging. TEEMS presents novel oblivious routing protocols that adapt prior work on oblivious distributed sorting. Moreover, we introduce the notion of ID and token channels to circumvent shortcomings of prior designs. We empirically demonstrate TEEMS' ability to support clients engaged in metadata-protected conversations in under 1 s, with 205 cores, achieving an 18× improvement over prior work for latency and throughput, while supporting significantly better scalability and asynchronicity properties.
Adding Feeding Forward Back to the Sponge Construction
Avoiding feeding forward seems to be a major goal of the sponge construction. We take a step back and investigate adding feeding forward back to the sponge. The obtained sponge-with-feeding-forward construction has a number of benefits: (1) In the random permutation model, its preimage and second preimage security bounds are much better than the standard sponge with the same capacity, while collision and indifferentiability security bounds are comparable; (2) Its collision and (second) preimage security can be reduced to well-defined properties of the underlying permutation, i.e., correlation intractability w.r.t. a certain family of evasive relations.
We further incorporate several somewhat new ideas to form detailed hash and XOF constructions SpongeFwd: (1) Feeding-forward is only applied to the capacity part, and the final output(s) is the capacity part (with the rate part discarded). In this way, when the capacity $c$ equals the primary hash output size $n$, a single permutation call suffices for squeezing. This also simplifies the underlying evasive relations for the reduction security proof. (2) We replace the hash IV with the first message block to obtain the best possible efficiency. (3) We employ a parallel structure to define an XOF variant. (4) We use HAIFA-style counter inputs to achieve both length-independent second-preimage security and domain separation for XOF.
The better (second) preimage security enables constructing 512-bit output hash function from Keccak-p[800]: with 512-bit capacity, its collision and (second) preimage security bounds are the same as the standard SHA-3-512, while its hardware area is reduced by roughly 38% (according to our preliminary estimation).
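A toy model of the data flow described above (the 4-word ARX permutation is a stand-in and padding/counter details are omitted): the first message block replaces the IV, the feed-forward XORs each call's input capacity back into its output capacity, and only the capacity is output.

```python
M32 = 0xFFFFFFFF
def rotl(x, r): return ((x << r) | (x >> (32 - r))) & M32

def perm(s):                           # bijective toy permutation on 4 words
    a, b, c, d = s
    for rc in (0x9E3779B9, 0x7F4A7C15, 0xF39CC060):
        a = (a + b) & M32; d = rotl(d ^ a, 16)
        c = (c + d) & M32; b = rotl(b ^ c, 12)
        a ^= rc
    return [a, b, c, d]

def hash_ff(blocks):                   # rate = word 0, capacity = words 1..3
    s = [blocks[0], 0, 0, 0]           # IV replaced by the first message block
    for m in blocks[1:] + [0x80000000]:        # toy padding block
        cap = s[1:]                    # capacity entering the call
        t = perm(s)
        s = [t[0] ^ m] + [x ^ y for x, y in zip(t[1:], cap)]  # feed-forward
    return s[1:]                       # output only the capacity part

print([hex(w) for w in hash_ff([0xDEADBEEF, 0x01234567])])
```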
Breaking the IEEE Encryption Standard – XCB-AES in Two Queries
Tweakable enciphering modes (TEMs) provide security in various storage and space-critical applications, including disk and file-based encryption and packet-based communication protocols. XCB-AES (originally introduced as XCBv2) is specified in the IEEE 1619.2 standard for encryption of sector-oriented storage media and comes with a formal security proof for block-aligned messages.
In this work, we present the first plaintext recovery attack on XCB-AES, the shared difference attack, demonstrating that the security of XCB-AES is fundamentally flawed. Our plaintext recovery attack is highly efficient and requires only two queries (one enciphering and one deciphering), breaking the claimed strong tweakable pseudorandom permutation (STPRP) security, as well as basic security. Our shared difference attack exploits an inherent property of polynomial hash functions called separability.
We pinpoint the exact flaw in the security proof of XCB-AES, which arises from the separability of polynomial hash functions. We show that this vulnerability in the XCB design strategy has gone unnoticed for over 20 years and has been inadvertently replicated in many XCB-style TEM designs, including the IEEE 1619.2 standard XCB-AES. We also apply the shared difference attack to other TEMs based on XCB: XCBv1, HCI, and MXCB, invalidating all of their security claims, and discuss some immediate countermeasures.
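The separability property is easy to see in code: a polynomial hash is linear in its message blocks, so hash differences depend only on the blockwise differences, not on the blocks themselves. The demo below uses a prime field for readability (XCB's hash works over GF(2^128)); the values are arbitrary.

```python
p = 2**61 - 1                  # toy prime field
K = 123456789                  # the secret hash key

def polyhash(blocks):          # Horner evaluation: sum of b_i * K^(n-i+1)
    acc = 0
    for b in blocks:
        acc = (acc + b) * K % p
    return acc

M1, M2 = [11, 22, 33], [11, 99, 33]        # difference pattern (0, 77, 0)
M3, M4 = [500, 600, 700], [500, 677, 700]  # same difference pattern
assert (polyhash(M2) - polyhash(M1)) % p == (polyhash(M4) - polyhash(M3)) % p
print("two unrelated message pairs share the same hash difference")
```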
Our findings are the first to highlight the need to reassess the present IEEE 1619.2 standard as well as the security and potential deployments of XCB-style TEMs.
How to (not) combine Oblivious Pseudorandom Functions
An oblivious pseudorandom function (OPRF) is a cryptographic tool that enables fast and secure authentication and key derivation from passwords. In the past few years, the adoption of OPRFs has flourished and today they are at the core of the PIN-protected backup methods of WhatsApp and Signal, and of privacy-enhancing browser technologies. All vendors deploy the so-called 2Hash-Diffie-Hellman (2HashDH) OPRF, which relies on discrete-logarithm-type assumptions that are standard yet known to be prone to quantum attacks.
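For concreteness, here is 2HashDH in a toy Schnorr group (demo parameters; deployments use elliptic-curve groups): $F_k(x) = H_2(x, H_1(x)^k)$, evaluated obliviously by blinding $H_1(x)$ with a random exponent, so the server sees neither $x$ nor the output.

```python
import hashlib, random

p, q = 2039, 1019                       # p = 2q + 1, DEMO ONLY

def H1(x: bytes) -> int:                # hash into the order-q subgroup
    return pow(int.from_bytes(hashlib.sha256(x).digest(), "big") % p, 2, p)

def H2(x: bytes, y: int) -> bytes:
    return hashlib.sha256(x + y.to_bytes(2, "big")).digest()

k = random.randrange(1, q)              # server's OPRF key

def client_query(x: bytes):
    r = random.randrange(1, q)
    return r, pow(H1(x), r, p)          # blinded element a = H1(x)^r

def server_eval(a: int) -> int:
    return pow(a, k, p)                 # b = a^k

def client_finalize(x: bytes, r: int, b: int) -> bytes:
    r_inv = pow(r, -1, q)               # unblind: b^(1/r) = H1(x)^k
    return H2(x, pow(b, r_inv, p))

r, a = client_query(b"hunter2")
out = client_finalize(b"hunter2", r, server_eval(a))
assert out == H2(b"hunter2", pow(H1(b"hunter2"), k, p))  # matches direct eval
```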
Recent advancements in cryptographic research (e.g., Beullens et al., Eurocrypt 2025) have brought up post-quantum OPRFs that are fast enough to deploy them in the setting of, e.g., WhatsApp or Signal. Yet none of these constructions are based on standard assumptions.
In this work, we investigate combiners for OPRFs, namely a "best-of-both'' combination of a classical and a post-quantum OPRF that is secure as long as one of them is. First, we give formal evidence that so-called black-box combiners do not exist, indicating that combining OPRFs is subtle and bears similarities with other powerful yet hard-to-combine cryptographic primitives like oblivious transfer (OT).
We then give a (non-black-box) combiner for OPRFs and show that it can be instantiated with 2HashDH and the currently most efficient post-quantum OPRFs based on Legendre symbols. In particular, the reliance on the less standard Legendre-based hardness assumption does not harm the security of 2HashDH. This gives vendors a viable path to lift the security of their OPRF deployments to a post-quantum level.
A Deniability Analysis of Signal's Initial Handshake PQXDH
Many people use messaging apps such as Signal to exercise their right to private communication. To cope with the advent of quantum computing, Signal employs a new initial handshake protocol called PQXDH for post-quantum confidentiality, yet keeps the guarantees of authenticity and deniability classical. Compared to its predecessor X3DH, PQXDH includes a KEM encapsulation and a signature on the ephemeral key. In this work we show that PQXDH does not meet the same deniability guarantees as X3DH, due to the signature on the ephemeral key. Our analysis relies on plaintext awareness of the KEM, which Signal's implementation of PQXDH does not provide. As for X3DH, both parties (initiator and responder) obtain different deniability guarantees due to the asymmetry of the protocol.
For our analysis of PQXDH, we introduce a new model for deniability of key exchange that allows a more fine-grained analysis. Our deniability model picks up on the ideas of prior work and facilitates new combinations of deniability notions, such as deniability against malicious adversaries in the big brother model, i.e. where the distinguisher knows all secret keys. Our model may be of independent interest.
A Note on Zero-Knowledge for NP and One-Way Functions
We present a simple alternative exposition of the recent result of Hirahara and Nanashima (STOC'24) showing that one-way functions exist if (1) every language in NP has a zero-knowledge proof/argument and (2) ZKA contains non-trivial languages. Our presentation does not rely on meta-complexity and we hope it may be useful for didactic purposes. We also remark that the same result holds for (imperfect) iO for 3CNF, or Witness Encryption for NP.
A Note on One Authentication and Key Agreement Scheme for UAV-Assisted VANETs for Emergency Rescue
We show that the key agreement scheme [IEEE TNSE, 1454--1468, 2004] is insecure against ephemeral secret leakage attacks and smart card loss attacks, contrary to what is claimed. This failure results from its simplistic encryption, in which only bitwise XOR operations are used to encrypt the messages.
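The failure mode generalizes: any XOR-only masking collapses once a single (message, ciphertext) pair leaks. A toy reconstruction of the idea (not the attacked scheme's exact notation):

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pad = b"\x13\x37\xc0\xde\xba\xbe\xfa\xce"   # the scheme's secret masking value
c1 = xor(b"IDclient", pad)                  # protocol message 1
c2 = xor(b"IDserver", pad)                  # protocol message 2

# Attacker who learns the plaintext of message 1 (e.g., ephemeral leakage):
recovered_pad = xor(c1, b"IDclient")
assert recovered_pad == pad
assert xor(c2, recovered_pad) == b"IDserver"   # everything else falls
forged = xor(b"IDattack", recovered_pad)       # and forgeries are free
```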
Tanuki: New Frameworks for (Concurrently Secure) Blind Signatures from Post-Quantum Groups Actions
Blind signatures are fundamental cryptographic primitives enabling privacy-preserving authentication and have seen renewed interest in the post-quantum literature.
Existing efficient constructions predominantly rely on Fischlin’s generic paradigm instantiated over lattice assumptions, while blinding techniques for sigma-protocol-based blind signatures remain sparse beyond lattices. Moreover, achieving provable concurrent security under polynomially many sessions has been a longstanding open challenge for this approach in the post-quantum literature as evidenced by the recent attacks in EC’24 and PKC’24.
This work broadens the landscape of post-quantum blind signatures by introducing novel techniques and proposing four frameworks based on general cryptographic group actions, without requiring commutativity. Our constructions admit instantiations under diverse post-quantum assumptions, including CSIDH (isogeny-based), LESS (code-based, NIST round-two), and more. These frameworks offer flexible trade-offs in assumptions (from interactive one-more to the standard inversion problem) and key/signature sizes, and culminate in a construction that achieves security under polynomially many concurrent sessions. This enables the first efficient blind signatures from isogenies and codes with provable concurrent security, with signatures of 3.9 and 56 KB, respectively. We also outline several directions for optimization and further instantiations for future work.
Lattice-Based Accumulator and Application to Anonymous Credential Revocation
An accumulator is a cryptographic system for compactly representing a set of elements such that every element in the set has a short membership witness. A dynamic accumulator, furthermore, allows elements to be added to and deleted from the accumulator. Camenisch and Lysyanskaya (CRYPTO'02) constructed the first dynamic accumulator under the strong-RSA assumption and showed how it can be used to enable revocation of anonymous credentials. In this paper, we give a lattice-based dynamic accumulator tailor-made for enabling revocation of post-quantum anonymous credential systems. As a concrete example, we instantiate our dynamic accumulator on top of the anonymous credential system implemented in the LaZer library (ACM CCS 2024).
Server-Aided Anonymous Credentials
This paper formalizes the notion of server-aided anonymous credentials (SAACs), a new model for anonymous credentials (ACs) where, in the process of showing a credential, the holder is helped by additional auxiliary information generated in an earlier (anonymous) interaction with the issuer. This model enables lightweight instantiations of 'publicly verifiable and multi-use' ACs from pairing-free elliptic curves, which is important for compliance with existing national standards. A recent candidate for the EU Digital Identity Wallet, BBS#, roughly adheres to the SAAC model we have developed; however, it lacks formal security definitions and proofs.
In this paper, we provide rigorous definitions of security for SAACs, and show how to realize SAACs from the weaker notion of keyed-verification ACs (KVACs) and special types of oblivious issuance protocols for zero-knowledge proofs. We instantiate this paradigm to obtain two constructions: one achieves statistical anonymity with unforgeability under the Gap $q$-SDH assumption, and the other achieves computational anonymity and unforgeability under the DDH assumption.
On the Independence Heuristic in the Dual Attack
Post-quantum cryptography deals with the development and analysis of cryptographic schemes that are assumed to be secure even against attackers with access to a powerful quantum computer. Among the main candidates for quantum-safe solutions are cryptographic schemes whose security is based on classic lattice problems, such as the bounded-distance decoding (BDD) problem or the learning with errors (LWE) problem. In this work we contribute to the analysis of an attack category against these problems called the dual attack.
Our first contribution is to provide theoretical counterarguments against a so-called independence assumption, which was used in earlier works on this attack, and which was shown to contradict practical experiments. Then, we provide estimates on the success probability and the cost of the dual attack against the decisional version of the BDD problem. These estimates are derived both rigorously and heuristically. Finally, we also provide experimental evidence that confirms these results.
Isogeny-based key exchange from orientations of large discriminant
We describe an algorithm to efficiently evaluate class group actions on supersingular elliptic curves that are oriented by an imaginary quadratic order of arbitrarily large discriminant. Contrary to CSIDH, this allows one to increase the post-quantum security of the group action without increasing the size of the base field. In particular, we describe instances where Kuperberg's algorithm loses to generic supersingular isogeny path finding. Our algorithm is fully deterministic, strictly constant time, dummy free, and can be implemented without conditional branches. We show that the (restricted effective) group action can be employed in a non-interactive key exchange protocol, which we argue is asymptotically more efficient than CSIDH.
Circular Insecure Encryption: from Long Cycles to Short Cycles
A length-$n$ encryption cycle consists of a sequence of $n$ keys, each encrypting the next, forming a cycle, and an encryption scheme is $n$-circular secure if a length-$n$ encryption cycle is computationally indistinguishable from encryptions of zeros. An interesting problem is whether CPA security implies circular security. This has been shown to be false. Using standard cryptographic assumptions and LWE, it was shown that within the class of CPA-secure encryption schemes, for any $n$, there exists an $n$-circular insecure encryption scheme. Furthermore, there exists a particular encryption scheme that is $n$-circular insecure for all $n$. Following these results, it is natural to ask whether circular insecurity of a particular length implies circular insecurity of different lengths and of multiple lengths. We answer this question affirmatively in this paper. We constructively prove that a CPA-secure encryption scheme that is insecure in the presence of encryption cycles of length $n$ implies the existence of such a scheme for encryption cycles of any length less than $n$. The constructed circular insecure scheme may have the same message space as the original $n$-circular insecure encryption scheme, and our results apply to both public-key and symmetric-key settings.
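For concreteness, here is what a length-3 encryption cycle looks like with a toy PRF-based stream cipher; $n$-circular security asks that such a cycle be indistinguishable from encryptions of zeros, which CPA security alone does not guarantee because the plaintexts depend on the secret keys themselves.

```python
import hashlib, os

def enc(k: bytes, n: bytes, m: bytes) -> bytes:
    """Toy stream cipher: SHA-256(key || nonce) as a one-block PRF pad."""
    pad = hashlib.sha256(k + n).digest()[: len(m)]
    return bytes(a ^ b for a, b in zip(pad, m))

keys = [os.urandom(32) for _ in range(3)]
nonces = [os.urandom(16) for _ in range(3)]
# c_i = Enc_{k_i}(k_{i+1 mod 3}): each key encrypts the next, closing a cycle.
cycle = [enc(keys[i], nonces[i], keys[(i + 1) % 3]) for i in range(3)]
zeros = [enc(keys[i], nonces[i], b"\x00" * 32) for i in range(3)]
# 3-circular security: (cycle, nonces) should be indistinguishable from
# (zeros, nonces) for efficient distinguishers.
```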
Oracle-Based Multistep Strategy for Solving Polynomial Systems Over Finite Fields and Algebraic Cryptanalysis of the Aradi Cipher
The multistep solving strategy is a divide-and-conquer approach: when a multivariate polynomial system is computationally infeasible to solve directly, one variable is assigned each value of the base finite field in turn, and the procedure is applied recursively to the resulting simplified systems. In previous work by (some of) the same authors, this approach proved effective in the algebraic cryptanalysis of the Trivium cipher.
In this paper, we present a new implementation of the corresponding algorithm based on a Depth-First Search strategy, along with a novel complexity analysis leveraging tree structures. We further introduce the notion of an ``oracle function'' as a general predictive tool for deciding whether the evaluation of a new variable is necessary to simplify the current polynomial system. This notion allows us to unify all previously proposed variants of the multistep strategy, including the classical hybrid approach, by appropriately selecting the oracle function.
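As a concrete illustration of the strategy (ours, not the authors' implementation), the following sketch runs the depth-first multistep recursion over GF(2), with the oracle function deciding when to stop guessing and hand the simplified system to a direct solver:

    from itertools import product

    # Systems over GF(2) are lists of polynomial callables on assignments.
    def multistep_solve(eqs, variables, oracle, direct_solver, assignment=None):
        assignment = assignment or {}
        if oracle(eqs, variables, assignment):
            return direct_solver(eqs, variables, assignment)
        var = next(v for v in variables if v not in assignment)
        for guess in (0, 1):                      # enumerate the base field
            sol = multistep_solve(eqs, variables, oracle, direct_solver,
                                  {**assignment, var: guess})
            if sol is not None:
                return sol                        # depth-first: first hit wins
        return None

    # toy oracle: solve directly once at most 2 variables remain unassigned
    oracle = lambda eqs, vs, asg: len(vs) - len(asg) <= 2

    def direct_solver(eqs, variables, assignment):
        free = [v for v in variables if v not in assignment]
        for vals in product((0, 1), repeat=len(free)):
            full = {**assignment, **dict(zip(free, vals))}
            if all(eq(full) % 2 == 0 for eq in eqs):
                return full
        return None

    # x*y + z = 0 and x + y + 1 = 0 over GF(2)
    eqs = [lambda a: a["x"] * a["y"] + a["z"], lambda a: a["x"] + a["y"] + 1]
    print(multistep_solve(eqs, ["x", "y", "z"], oracle, direct_solver))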
Finally, we apply the multistep solving strategy to the cryptanalysis of the low-latency block cipher Aradi, recently introduced by the NSA. We present the first full-round algebraic attack, raising concerns about the cipher’s actual security with respect to its key length.
CuFDFB: Fast and Private Computation on Non-Linear Functions Using FHE
Privacy-preserving neural network inference using Fully Homomorphic Encryption (FHE) faces significant challenges in efficiently evaluating non-polynomial functions, such as activation functions, which are critical for introducing non-linearity in neural networks. Full-Domain Functional Bootstrap (FDFB) algorithms provide a promising solution by enabling the evaluation of arbitrary functions while simultaneously refreshing ciphertexts to manage noise accumulation. Despite their theoretical advantages, the practicality of FDFB algorithms has been limited by excessive computational overhead, often exceeding 1000 ms per ciphertext, which restricts their scalability for large neural networks.
To overcome the computational bottlenecks of FDFB, we re-engineered the algorithms for massively parallel execution on GPUs. Our primary contribution is a hierarchical parallelization strategy that exploits concurrency at the thread, stream, and device levels. A key optimization is the use of CUDA streams to create a data pipeline that effectively mitigates the overhead of memory transfers between host and device. This optimized architecture achieves a speedup of up to 524x over CPU-based implementations. Our implementation maintains full precision when evaluating various activation functions, confirming its viability for large-scale, privacy-preserving machine learning tasks and paving the way for practical FHE-based deep learning.
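The stream-level concurrency can be pictured with the following sketch, which pipelines host-to-device transfers and kernel execution using CuPy streams. This is a generic illustration assuming a CUDA GPU and the cupy package, not the paper's FDFB kernels; true copy/compute overlap additionally requires pinned host memory.

    import cupy as cp
    import numpy as np

    def pipeline(batches, kernel, n_streams=4):
        """Round-robin batches over several CUDA streams so the transfer of
        batch i+1 can overlap with the computation on batch i."""
        streams = [cp.cuda.Stream(non_blocking=True) for _ in range(n_streams)]
        outs = [None] * len(batches)
        for i, batch in enumerate(batches):
            with streams[i % n_streams]:     # enqueue the work on this stream
                dev = cp.asarray(batch)      # host-to-device transfer
                outs[i] = kernel(dev)        # device computation
        for s in streams:
            s.synchronize()                  # drain all pipelines
        return [cp.asnumpy(o) for o in outs] # device-to-host

    # usage: square 8 batches of numbers on the GPU
    results = pipeline([np.arange(10**6) for _ in range(8)], cp.square)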
Tweakable Permutation-based Luby-Rackoff Constructions
Liskov, Rivest, and Wagner, in their seminal work, formulated tweakable blockciphers and proposed two blockcipher-based design paradigms, LRW1 and LRW2, where the basic design strategy is to xor the masked tweak into the input and output of a blockcipher. The 2-round cascaded LRW2 and the 4-round cascaded LRW1 have been proven secure well beyond the birthday bound, but optimal n-bit security remains elusive for these designs. Liskov et al. also posed the open challenge of embedding the tweak directly in the blockcipher; to address this, Goldenberg et al. introduced the tweakable Luby-Rackoff (LR) constructions. They showed that if the internal primitives are random functions, then for multi-block tweaks the construction needs a number of rounds that grows with the tweak length to be optimally n-bit CPA secure, and still more rounds to be optimally n-bit CCA secure, with a tweak-length-dependent number of rounds devoted to processing the tweaks. Since blockciphers can be designed much more efficiently than pseudorandom functions, in many practical applications the internal primitives of LR ciphers are instantiated as blockciphers; this, however, introduces a birthday-bound factor, which is not ideal for, say, lightweight cryptography.
This paper addresses the following two key questions affirmatively: (1) Can Goldenberg et al.'s results be extended to LR constructions with random permutations as internal primitives without compromising optimal n-bit security? (2) Can the number of rounds required for handling long tweaks be reduced?
We formally define TLR-compatible functions for processing the tweak which, when composed with 4 rounds and 5 rounds of the LR construction with random permutations as internal primitives, give n-bit CPA and CCA secure tweakable permutations, respectively. For the security analysis, we prove a general Mirror Theory result for three permutations. We also propose instantiating TLR-compatible functions with a one-round LR, where a permutation (resp., two AXU hash functions) is used to mask single-block tweaks (resp., variable-length tweaks), yielding n-bit CPA (resp., CCA) secure tweakable permutation candidates whose number of LR rounds is independent of the tweak length, a significant reduction from the tweak-length-dependent results of Goldenberg et al.
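For readers less familiar with the object under study, the sketch below implements a 4-round Luby-Rackoff (Feistel) network whose round primitives are permutations rather than functions, on toy 8-bit halves; the tweak-processing (TLR-compatible) layer is omitted.

    import random

    # Four independent random permutations of {0,...,255} as round primitives.
    def random_perm(seed, size=256):
        rng = random.Random(seed)
        p = list(range(size))
        rng.shuffle(p)
        return p

    P = [random_perm(i) for i in range(4)]

    def lr4_encrypt(l, r):
        for i in range(4):
            l, r = r, l ^ P[i][r]       # Feistel round; round function = permutation
        return l, r

    def lr4_decrypt(l, r):
        for i in reversed(range(4)):
            l, r = r ^ P[i][l], l       # invert one round
        return l, r

    l, r = 0x3c, 0xa5
    assert lr4_decrypt(*lr4_encrypt(l, r)) == (l, r)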
We further extend the implications of our analysis of permutation-based LR as follows:
(1) We show n-bit CPA (resp., CCA) security of the permutation-based LR construction with sufficiently many rounds, a considerable improvement over the security bounds previously proved by Guo et al.
(2) We propose a new design paradigm for n-bit secure MACs that hashes the message, passes the double-block tag through four rounds of permutation-based LR, and truncates the left block of the output.
Merkle Mountain Ranges are Optimal: On Witness Update Frequency for Cryptographic Accumulators
We study append-only set commitments with efficient updates and inclusion proofs, also known as cryptographic accumulators. In particular, we examine how often the inclusion proofs (or witnesses) of individual items must change as new items are added to the accumulated set. Using a compression argument, we show unconditionally that to accumulate a set of n items, any construction with a succinct commitment must induce close to n log n total witness updates as items are sequentially added; in a certain regime, we strengthen this bound to a tight Omega(n log n). These lower bounds hold not just in the worst case, but with overwhelming probability over a random choice of the accumulated set. Our results show that a close variant of the Merkle Mountain Range, an elegant construction that has become popular in practice, is essentially optimal.
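For intuition, here is a toy Merkle Mountain Range append: the commitment is the list of perfect-subtree peaks, and each append merges equal-height peaks like a binary counter. Every merge forces the witnesses of the leaves below the merged peaks to grow by one hash, which is the source of the quasi-linear total update count.

    import hashlib

    def H(a, b):
        return hashlib.sha256(a + b).digest()

    class MMR:
        def __init__(self):
            self.peaks = []          # list of (height, root), heights decreasing

        def append(self, leaf):
            height, node = 0, hashlib.sha256(leaf).digest()
            while self.peaks and self.peaks[-1][0] == height:
                _, left = self.peaks.pop()
                node, height = H(left, node), height + 1  # merge equal heights
            self.peaks.append((height, node))

    mmr = MMR()
    for i in range(13):
        mmr.append(i.to_bytes(4, "big"))
    print([h for h, _ in mmr.peaks])   # 13 = 0b1101 -> peak heights [3, 2, 0]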
Assessing the Impact of a Variant of MATZOV's Dual Attack on Kyber
The dual attacks on the Learning With Errors (LWE) problem are currently a subject of controversy. In particular, the results of [Matzov,2022], which claim to significantly lower the security level of CRYSTALS-Kyber, a lattice-based cryptosystem currently being standardized by NIST, are not widely accepted. The analysis behind their attack depends on a series of assumptions that, in certain scenarios, have been shown to contradict established theorems or well-tested heuristics [Ducas,Pulles,CRYPTO2023].
In this paper, we introduce a new dual lattice attack on LWE, drawing from ideas in coding theory. Our approach revisits the dual attack proposed by [Matzov,2022], replacing modulus switching with an efficient decoding algorithm. This decoding is achieved by generalizing polar codes over Zq, and we confirm their strong distortion properties through benchmarks. This modification enables a reduction from small-LWE to plain-LWE, with a notable decrease in the secret dimension. Additionally, we replace the enumeration step in the attack by assuming the secret is zero for the portion being enumerated, iterating this assumption over various choices for the enumeration part.
We analyze our attack without relying on the flawed independence assumptions used in [Matzov,2022], and we fully back up our analysis with experimental evidence.
Lastly, we assess the complexity of our attack on CRYSTALS-Kyber, showing that the security levels of Kyber-512/768/1024 are 3.5/11.9/12.3 bits below the NIST requirements (143/207/272 bits) in the same nearest-neighbor cost model as the Kyber submission.
All in all, the cost of our attack matches, and in some cases even slightly beats, the complexities originally claimed by the attack of [Matzov,2022].
The Large Block Cipher Family Vistrutah
Vistrutah is a large block cipher with block sizes of 256 and 512 bits. It iterates a step function that applies two AES rounds to each 128-bit block of the state, followed by a state-wide cell permutation. Like Simpira, Haraka, Pholkos, and ASURA, Vistrutah leverages AES instructions to achieve high performance.
For each component of Vistrutah, we conduct a systematic evaluation of functions that can be efficiently implemented on both Intel and Arm architectures. We therefore expect them to perform efficiently on any recent vector instruction set architecture (ISA) with AES support. Our evaluation methodology combines latency estimation on an abstracted vector ISA with security analysis. The goal is to maximize the ratio of "bits of security per unit of time", i.e., to achieve the highest security for a given performance target, or equivalently, the best performance for a given security level within this class of designs. Implementations confirm the accuracy of our latency model. Vistrutah even performs significantly better than Rijndael-256-256.
We support our security claims with a comprehensive ad-hoc cryptanalysis. An isomorphism between Vistrutah-512, the 512-bit wide variant, and the AES allows us to also leverage the extensive cryptanalysis of AES and apply it to Vistrutah-512.
A core design principle is the use of an inline key schedule: all round keys are computed during each encryption or decryption operation, without requiring memory storage, so rekeying incurs no overhead. Key schedules like the AES's must precompute and store round keys in memory for acceptable performance; however, Kamal and Youssef showed in 2010 that this makes cold boot attacks more effective. Vistrutah's approach minimizes leakage to at most one value during context switches. Furthermore, expensive key schedules reduce key agility, limiting the design of modes of operation.
Vistrutah is particularly well-suited for Birthday-Bound modes of operation, including Synthetic IV modes and Accordion modes for 256-bit block ciphers. It can serve as a building block for compression functions (such as Matyas-Meyer-Oseas) in wide Merkle-Damgard hash functions. Additionally, it can implement "ZIP" wide pseudo-random functions as recently proposed by Florez-Gutierrez et al. in 2024.
Finally, we present short, i.e., reduced-round, versions of Vistrutah, which we analyze taking into account the restrictions that specific modes of operation impose on attackers. In particular, we model the use of the block ciphers in Hash-Encrypt-Hash (HEH) constructions such as HCTR2, as well as in ForkCiphers. These short versions of Vistrutah can be used to accelerate modes of operation without sacrificing security.
Another Look at the Quantum Security of the Vectorization Problem with Shifted Inputs
Cryptographic group actions provide a basis for simple post-quantum generalizations of many cryptographic protocols based on the discrete logarithm problem (DLP). However, many advanced group action-based protocols do not solely rely on the core group action problem (the so-called vectorization problem), but also on variants of this problem, to either improve efficiency or enable new functionalities. In particular, the security of the CSI-SharK threshold signature protocol relies on the hardness of the Vectorization Problem with Shifted Inputs, where (in DLP formalism) the adversary receives not only the usual instance but also shifted instances for multiple known values of the shift. A natural open question is whether the extra data provided to the adversary in this variant allows them to solve the underlying problem more efficiently.
In this paper, we revisit the concrete quantum security of this problem. We start from a quantum multiple hidden shift algorithm of Childs and van Dam, which to the best of our knowledge was never applied in cryptography before. We specify algorithms for its subroutines and we provide concrete complexity estimates for both these subroutines and the overall algorithm.
We apply our analysis to the CSI-SharK protocol. In prior analyses based on Kuperberg's algorithms, group action evaluations contributed a significant part of the overall T-gate cost. For CSI-SharK's suggested parameters, our new approach requires significantly fewer calls to the group action evaluation subroutine, leading to significant overall complexity improvements. We describe two instances of our approach, one which lowers the T-gate complexity and the other the QRAM requirements. We obtain significant speedups -- even in both metrics simultaneously -- and we quantify the degradation of the protocol's quantum security as the number of curves in the public key increases.
More Efficient Isogeny Proofs of Knowledge via Canonical Modular Polynomials
Proving knowledge of a secret isogeny has recently been proposed as a means to generate supersingular elliptic curves of unknown endomorphism ring, but is equally important for cryptographic protocol design as well as for real-world deployments. Recently, Cong, Lai and Levin (ACNS '23) investigated the use of general-purpose (non-interactive) zero-knowledge proof systems for proving knowledge of an isogeny of a given degree between supersingular elliptic curves. In particular, their approach models this relation via a sequence of successive steps of a walk in the supersingular isogeny graph, showing that the respective j-invariants are roots of the second modular polynomial. They then arithmetize this relation and show that this approach, when compared to the state-of-the-art tailor-made proofs of knowledge by Basso et al. (EUROCRYPT '23), gives a 3-10x improvement in proof and verification times, with comparable proof sizes.
In this paper we ask whether the modular polynomial-based approach can be further improved and generalized to larger primes, as used in some recent isogeny-based constructions. We answer these questions affirmatively by designing efficient arithmetizations for each such prime that achieve an improvement over Cong, Lai and Levin of up to 48%.
Our main technical tool and source of efficiency gains is to switch from classical modular polynomials to canonical modular polynomials. Adapting the well-known results on the former to the latter, however, is not straightforward and requires some technical effort. We prove various interesting connections via a novel use of resultant theory, and advance the understanding of canonical modular polynomials, which might be of independent interest.
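The relation being arithmetized can be made concrete with the classical level-2 modular polynomial: a sequence of j-invariants is a walk in the 2-isogeny graph precisely when consecutive pairs are roots of Phi_2. The toy check below works over GF(431) and only finds GF(p)-rational neighbors (so the demo walk simply backtracks); a proof system would instead prove these polynomial identities in zero knowledge.

    # Classical modular polynomial Phi_2(X, Y): j-invariants of 2-isogenous
    # curves are exactly its roots.
    def phi2(x, y, p):
        return (x**3 + y**3 - x**2 * y**2
                + 1488 * (x**2 * y + x * y**2)
                - 162000 * (x**2 + y**2)
                + 40773375 * x * y
                + 8748000000 * (x + y)
                - 157464000000000) % p

    def neighbors(j, p):
        """GF(p)-rational 2-isogeny neighbors of j (brute force; toy p only)."""
        return [y for y in range(p) if phi2(j, y, p) == 0]

    def verify_walk(js, p):
        """The statement a proof of isogeny knowledge arithmetizes."""
        return all(phi2(a, b, p) == 0 for a, b in zip(js, js[1:]))

    p = 431                       # toy prime; j = 0 is supersingular here
    walk = [0]
    for _ in range(3):
        walk.append(neighbors(walk[-1], p)[0])
    assert verify_walk(walk, p)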
Key Updatable Identity-Based-Signature Schemes
Identity-based signature (IBS) schemes eliminate the need for certificate management, reducing both signature size and verification time. However, the challenge of updating stolen identity-based signature keys (or revoking and re-issuing them) has received limited attention. Existing generic solutions, such as managing revocation lists or periodically renewing user keys, are inefficient and introduce significant networking overhead in dynamic environments.
In this work, we address this gap by introducing a symmetric element that enables key updates in IBS schemes via a single multicast message. The network overhead of our solutions scales logarithmically with the number of system users, while computation and memory overheads are constant. Furthermore, we generalize our method by proposing a framework that transforms any IBS scheme into a key-updatable signature scheme, and we define the token security (unforgeability), forward security, and post-compromise security requirements for such transformations. We demonstrate the versatility of our framework by providing five instantiations based on Schnorr-type, pairing-based, and isogeny-based IBS schemes. Finally, we analyze the security of these instantiations.
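The abstract does not spell out the symmetric mechanism, so the sketch below illustrates the classical logical-key-hierarchy profile that matches the reported overheads (a hypothetical illustration, not the paper's construction): a complete binary tree of key-encryption keys, where refreshing one user's keys multicasts one ciphertext per child of every node on the leaf-to-root path.

    def path_to_root(leaf: int, n_leaves: int):
        """Indices of internal nodes on the path from a leaf to the root,
        in a heap-ordered complete binary tree with n_leaves leaves."""
        node = leaf + n_leaves        # heap indexing: leaves start at n_leaves
        path = []
        while node > 1:
            node //= 2
            path.append(node)
        return path

    n = 1 << 20                        # about one million users
    path = path_to_root(42, n)
    # 20 refreshed keys on the path, ~40 ciphertexts multicast: O(log n)
    print(len(path), 2 * len(path))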
On the Power of Polynomial Preprocessing: Proving Computations in Sublinear Time, and More
Cryptographic proof systems enable a verifier to be convinced of a computation's correctness without re-executing it; common efficiency requirements include both succinct proofs and fast verification. In this work we put forth the general study of cryptographic proof systems with \textit{sublinear} proving time (after a preprocessing phase).
Prior work has achieved sublinear proving only for limited computational settings (e.g., vector commitments and lookup arguments), relying on specific assumptions or through non-black-box use of cryptographic primitives. In this work we lift many of these limitations through the systematic study of a specific object: polynomial commitments (PC) with sublinear proving time, a choice motivated by the crucial role that PC play in the design of efficient cryptographic schemes.
We show a simple construction of a PC with sublinear prover based on any vector commitment scheme (VC) and any preprocessing technique for fast polynomial evaluation. We prove that this PC satisfies \emph{evaluation binding}, which is the standard security notion for PC, and show how to expand our construction to achieve the stronger notion of knowledge soundness (extractability).
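A drastically simplified instance of this template (our sketch, not the paper's scheme, and restricted to evaluation queries on a fixed domain rather than arbitrary points): preprocess the polynomial into a table of evaluations, commit to the table with a Merkle-tree vector commitment, and answer a query with a logarithmic-size opening.

    import hashlib

    def h(b): return hashlib.sha256(b).digest()

    def merkle_tree(leaves):            # leaf count must be a power of two
        layers = [[h(x) for x in leaves]]
        while len(layers[-1]) > 1:
            prev = layers[-1]
            layers.append([h(prev[i] + prev[i + 1])
                           for i in range(0, len(prev), 2)])
        return layers

    def open_path(layers, idx):
        path = []
        for layer in layers[:-1]:
            path.append(layer[idx ^ 1])  # sibling at each level
            idx //= 2
        return path

    def verify(root, idx, leaf, path):
        node = h(leaf)
        for sib in path:
            node = h(node + sib) if idx % 2 == 0 else h(sib + node)
            idx //= 2
        return node == root

    # preprocessing: evaluation table of f(X) = 3X^2 + 1 over Z_97 on {0..7}
    p, f = 97, lambda x: (3 * x * x + 1) % p
    table = [f(x).to_bytes(2, "big") for x in range(8)]
    layers = merkle_tree(table)
    root = layers[-1][0]                 # the polynomial commitment
    assert verify(root, 5, table[5], open_path(layers, 5))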
Next, we study applications of PCs with a sublinear prover. The first is to construct \emph{index-efficient} SNARKs, i.e., SNARKs in which the prover runs in time sublinear, after preprocessing, in the size of the index (the NP-relation describing the proven statement). We propose a method to transform a class of standard Polynomial Interactive Oracle Proofs (PIOPs) into index-efficient PIOPs, obtaining two results that advance the state of the art: the first lookup argument for unstructured tables in which the prover is sublinear in the size of the table while making only black-box use of a VC (thus allowing instantiations from generic assumptions such as collision-resistant hash functions), and the first transparent SNARK for circuits in which the prover is sublinear in the number of addition gates.
Finally, our last application is a transformation that builds \emph{UC-secure} SNARKs from simulation-extractable ones, with an approximately linear overhead in proving time (as opposed to quadratic in prior work).
Cryptography meets worst-case complexity: Optimal security and more from iO and worst-case assumptions
We study several problems in the intersection of cryptography and complexity theory based on the following high-level thesis.
1) Obfuscation can serve as a general-purpose worst-case to average-case reduction, reducing the existence of various forms of cryptography to corresponding worst-case assumptions.
2) We can therefore hope to overcome barriers in cryptography and average-case complexity by (i) making worst-case hardness assumptions beyond P ≠ NP, and (ii) leveraging worst-case hardness reductions, either proved by traditional complexity-theoretic methods or facilitated further by cryptography.
Concretely, our results include:
- Optimal hardness. Assuming sub-exponential indistinguishability obfuscation, we give fine-grained worst-case to average-case reductions for circuit-SAT. In particular, if finding an NP-witness requires nearly brute-force time in the worst case, then the same is true for some efficiently sampleable distribution. In fact, we show that under these assumptions, there exist families of one-way functions with optimal time-probability security tradeoffs.
Under an additional, stronger assumption -- the optimal non-deterministic hardness of refuting circuit-SAT -- we construct additional cryptographic primitives such as PRGs and public-key encryption that have such optimal time-advantage security tradeoffs.
- Direct Product Hardness. Again assuming iO and the optimal non-deterministic hardness of SAT refutation, we show that the ``(search) k-fold SAT problem'' -- the computational task of finding satisfying assignments to k circuit-SAT instances simultaneously -- has optimal hardness, roughly the k-th power of the single-instance hardness, for time-bounded algorithms. In fact, we build ``optimally secure one-way product functions'' (Holmgren-Lombardi, FOCS '18), demonstrating that optimal direct product theorems hold for some choice of one-way function family.
- Single-Input Correlation Intractability. Under similar worst-case assumptions, we show a worst-case to average-case reduction for strong forms of single-input correlation intractability. That is, powerful forms of correlation-intractable hash functions exist provided that a collection of \emph{worst-case} ``correlation-finding'' problems are hard.
- Non-interactive Proof of Quantumness. Assuming sub-exponential iO and OWFs, we give a non-interactive proof of quantumness based on the worst-case hardness of the white-box Simon problem. In particular, this proof of quantumness result does not explicitly assume quantum advantage for an average-case task.
To help prove our first two results, we show along the way how to improve the Goldwasser-Sipser ``set lower bound'' protocol to have communication complexity quadratically smaller in the multiplicative approximation error.
Novel approximations of elementary functions in zero-knowledge proofs
In this paper, we study the computation of complex mathematical functions in statements executed on top of zero-knowledge proofs (ZKPs); these functions may include roots, exponentials and logarithms, trigonometry, etc. While existing approaches to these functions in privacy-preserving computations (and sometimes also in general-purpose processors) have relied on polynomial approximation, more powerful methods are available for ZKPs. We first note that in ZKPs, all algebraic functions are exactly computable. Building on this, we proceed to approximate transcendental functions with algebraic functions. We develop methods of approximation, instantiate them on a number of common transcendental functions, and benchmark their precision and efficiency against the best polynomial approximations.
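The observation that algebraic functions are exact in ZK can be seen in one line: the prover supplies the function value as a witness, and the circuit checks a polynomial relation, so no approximation enters. A toy version over a prime field (our illustration, not the paper's system):

    p = 2**31 - 1                    # toy prime field, p % 4 == 3

    def sqrt_constraint(x, y):
        """What a ZKP circuit would enforce to prove y = sqrt(x) exactly."""
        return (y * y - x) % p == 0

    # prover side: compute the square root as an out-of-circuit hint
    x = 1234**2 % p
    y = pow(x, (p + 1) // 4, p)      # valid since p % 4 == 3 and x is a square
    assert sqrt_constraint(x, y)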
On the Concrete Security of BBS/BBS+ Signatures
BBS/BBS+ signatures are the most promising solution for instantiating practical and lightweight anonymous credentials. They underlie standardization efforts by the W3C and the IRTF. Due to their potential for large-scale deployment, it is paramount to understand their concrete security, but a number of questions have been left open by prior works. To this end, the security proofs by Au et al. (SCN '06), Camenisch et al. (TRUST '16), and Tessaro and Zhu (EUROCRYPT '23) show reductions from q-SDH in groups of prime order p, where q is the number of issued signatures.
However, these prior works left open the possibility that BBS/BBS+ is "even more secure" than what such proofs can guarantee. Indeed, while the q-SDH assumption is subject to an attack using roughly sqrt(p/q) group exponentiations (Cheon, EUROCRYPT '06) for several choices of q, no attack with a similar complexity appears to affect either BBS+ or "deterministic" BBS, for which the best known attacks amount to recovering the secret key by solving the discrete logarithm problem. The assumption that this attack is the best possible also seemingly justifies the choice of parameters in practice.
Our result shows that this expectation is false. We present new attacks against BBS+ and deterministic BBS which, after seeing q signatures, allow us to recover the secret key with the same complexity as solving the q-Discrete Logarithm problem, which in turn is roughly sqrt(p/q) for many choices of q. Further, we extend the attack to a reduction showing that the security of BBS+ and deterministic BBS implies the q-SDH assumption.
OwlC: Compiling Security Protocols to Verified, Secure, High-Performance Libraries
Cryptographic security protocols, such as TLS or WireGuard, form the foundation of a secure Internet; hence, a long line of research has shown how to formally verify their high-level designs. Unfortunately, these formal guarantees have not yet reached real-world implementations of these protocols, which still rely on testing and ad-hoc manual audits for security and correctness. This gap may be explained, in part, by the substantial performance and/or development overhead imposed by prior efforts to verify implementations.
To make it more practical to deploy verified implementations of security protocols, we present OwlC, the first fully automated, security-preserving compiler for verified, high-performance implementations of security protocols. From a high-level protocol specification proven computationally secure in the Owl language, OwlC emits an efficient, interoperable, side-channel resistant Rust library that is automatically formally verified to be correct.
We produce verified libraries for all previously written Owl protocols, and we also evaluate OwlC on two new verified case studies: WireGuard and Hybrid Public-Key Encryption (HPKE). Our verified implementations interoperate with existing implementations, and their performance matches unverified industrial baselines on end-to-end benchmarks.
Shorter VOLE-in-the-Head-based Signatures from Vector Semi-Commitment
The VOLE-in-the-Head (VOLEitH) paradigm transforms VOLE-based zero-knowledge proofs into post-quantum signature schemes by allowing public verification. We introduce reduced VOLE-in-the-Head (rVOLEitH), which incorporates the Vector Semi-Commitment (VSC) technique. VSC, originally developed for MPC-in-the-Head (MPCitH) schemes, reduces commitment size while maintaining security by relaxing the binding property. We adapt the ideal-cipher version of VSC (IC-VSC) to the VOLEitH framework, leading to a reduction in signature size. Our security analysis proves that rVOLEitH achieves existential unforgeability under chosen-message attacks (EUF-CMA) in the ideal cipher model. Compared to existing VOLEitH-based signatures, our approach reduces signature size by up to 6.0% while improving computational efficiency.
Furthermore, we analyze the impact of eliminating individual seed commitments and demonstrate a practical attack against a recently proposed VOLEitH variant that lacks such commitments. Our results establish rVOLEitH as an optimized and secure alternative for post-quantum signatures, improving both efficiency and security in the VOLEitH paradigm.
SCIF: Privacy-Preserving Statistics Collection with Input Validation and Full Security
Secure aggregation is the distributed task of securely computing a sum of values (or a vector of values) held by a set of parties, revealing only the output (i.e., the sum) of the computation. Existing protocols, such as Prio (NSDI '17), Prio+ (SCN '22), Elsa (S&P '23), and Whisper (S&P '24), support secure aggregation with input validation to ensure inputs belong to a specified domain. However, when malicious servers are present, these protocols primarily guarantee privacy but not input validity, and malicious servers can cause the protocol to abort. We introduce SCIF, a novel multi-server secure aggregation protocol with input validation that remains secure in the presence of malicious actors, provided fewer than one-third of the servers are malicious. Our protocol overcomes previous limitations by providing two key properties: (1) guaranteed output delivery, ensuring malicious parties cannot prevent the protocol from completing, and (2) guaranteed input inclusion, ensuring no malicious party can prevent an honest party's input from being included in the computation. Together, these guarantees provide strong resilience against denial-of-service attacks. Moreover, SCIF offers these guarantees without increasing client costs over Prio and keeps server costs moderate. We present a robust end-to-end implementation of SCIF and demonstrate the ease with which it can be instrumented by integrating it into a simulated Tor network for privacy-preserving measurement.
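The core aggregation mechanism in the multi-server setting can be sketched in a few lines of additive secret sharing; input validation and the robustness guarantees that are SCIF's actual contribution are omitted here.

    import random

    P = 2**61 - 1                     # prime modulus for the shares

    def share(x, k):
        """Split x into k additive shares that sum to x mod P."""
        shs = [random.randrange(P) for _ in range(k - 1)]
        shs.append((x - sum(shs)) % P)
        return shs

    clients, k = [5, 17, 42], 3
    server_accs = [0] * k
    for x in clients:                 # each client sends one share per server
        for i, s in enumerate(share(x, k)):
            server_accs[i] = (server_accs[i] + s) % P

    # servers publish only their accumulators; their sum reveals just the total
    assert sum(server_accs) % P == sum(clients)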
Double Auction Meets Blockchain: Consensus from Scored Bid-Assignment
A double auction system, where buyers and sellers trade through bids, requires a transparent and immutable mechanism to record allocation results. This demand can be met with robust ledgers that ensure persistence and liveness, as exemplified by the Bitcoin blockchain (EuroCrypt '15). While existing blockchain-aided auction systems often rely on secure smart contracts or layer-2 techniques, this work proposes a more fundamental approach by constructing a provably secure blockchain protocol directly from the computation of bid allocations. The core component is an alternative proof-of-work (PoW) scheme based on a scored generalized multiple assignment problem (SGMAP), integrated into a tailored blockchain protocol. Unlike conventional PoW-based protocols, our leader selection is driven by block scores derived from the SGMAP scoring function, which is flexible enough to set the difficulty level and to accommodate real-life requirements of the underlying double auction system. We prove persistence and a modified liveness property for our design, and present implementation results that validate its robustness and practicality.
FABLE: Batched Evaluation on Confidential Lookup Tables in 2PC
Secure two-party computation (2PC) is a cryptographic technique that enables two mutually distrusting parties to jointly evaluate a function over their private inputs. We consider a 2PC primitive called confidential lookup table (LUT) evaluation, which is useful in privacy-preserving ML inference and data analytics. In this setting, a server holds a confidential LUT and evaluates it over an input secret-shared between a client and the server, producing a secret-shared output. Existing approaches for 2PC LUT evaluation suffer from high asymptotic complexity and practical inefficiency, with some designs lacking confidentiality guarantees for the LUT. Recognizing that many applications involving confidential LUT evaluation require processing multiple inputs with the same LUT, we propose FABLE, a system designed to efficiently evaluate a LUT on a large batch of queries simultaneously. Compared to the state-of-the-art confidential LUT evaluation methods, FABLE achieves up to 28.46-101.47x speedup in LAN environments and up to 50.10-392.93x speedup in WAN environments.
Multi-Key Fully Homomorphic Encryption: removing noise flooding in distributed decryption via the smudging lemma on discrete Gaussian distribution
Current Multi-key Fully Homomorphic Encryption (MKFHE) schemes need to add exponential noise in the distributed decryption phase to ensure the simulatability of partial decryptions. Such large noise causes the ciphertext modulus to grow exponentially compared to single-key Fully Homomorphic Encryption (FHE), reducing the efficiency of the scheme and giving the underlying lattice problem a sub-exponential approximation factor (which means that the security of the scheme is reduced). To address this problem, this paper analyzes in detail the noise in the partial decryptions of LWE-based MKFHE and observes that this noise is composed of the private key and the noise in the initial ciphertext. Therefore, as long as the encryption scheme is leakage-resilient and the noise in the partial decryption is independent of the noise in the initial ciphertext, the semantic security of the ciphertext can be guaranteed. To make these two noise terms independent, this paper proves a smudging lemma for the discrete Gaussian distribution and multiplies the initial ciphertext by a ``dummy'' ciphertext with plaintext 1. Based on this method, the paper removes the exponential noise in the distributed decryption phase for the first time and reduces the ciphertext modulus of MKFHE to the same level as single-key FHE.
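The flavor of a smudging argument can be shown in the simpler uniform-noise case (the paper proves an analogue for the discrete Gaussian): adding noise much larger than a small offset makes the shifted and unshifted distributions statistically close.

    from fractions import Fraction

    def stat_dist_uniform(B, delta):
        """Statistical distance between U[-B, B] and U[-B, B] + delta: the
        supports overlap except on |delta| points at each end, giving
        exactly |delta| / (2B + 1) for |delta| <= 2B + 1."""
        return Fraction(abs(delta), 2 * B + 1)

    # a 3-unit offset hidden by 2^20-sized noise is statistically negligible
    print(stat_dist_uniform(2**20, 3))   # Fraction(1, 699051), about 2^-19.4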
Fast, private and regulated payments in asynchronous networks
We propose a decentralized asset-transfer system that enjoys full privacy: no party can learn the details of a transaction, except for its issuer and its recipient; moreover, the recipient learns only the amount of the transaction. Our system does not rely on consensus or synchrony assumptions and is therefore responsive, running at actual network speed. Under the hood, every transaction creates a consumable coin equipped with a non-interactive zero-knowledge proof (NIZK) that confirms that the issuer has sufficient funds without revealing any information about her identity, the recipient's identity, or the payment amount. Moreover, we equip our system with a regulatory enforcement mechanism that can be used to regulate transfer limits or restrict specific addresses from sending or receiving funds, while preserving the system's privacy guarantees.
Finally, we report on Paxpay, our implementation of Fully Private Asset Transfer (FPAT) that uses the Gnark library for the NIZKs. In our benchmarks, Paxpay outperforms earlier proposals that either ensure only partial privacy, require some form of network synchrony, or do not implement regulation features. Our system thus reconciles privacy, responsiveness, regulation enforcement and performance.
Hierarchical Identity-Based Matchmaking Encryption
Identity-based matchmaking encryption (IB-ME) is an advanced encryption scheme that allows both the sender and the receiver to specify their respective identities. We study the notion of \emph{hierarchical IB-ME (HIB-ME)}, which augments IB-ME with delegation capabilities.
Specifically, we first formalize HIB-ME and construct it from hierarchical identity-based encryption and a hierarchical identity-based signature scheme. Moreover, as applications of HIB-ME, we show two chosen-ciphertext secure (H)IB-ME constructions for different security levels.
Concrete Treatment of Signal Handshake’s Deniability: Efficient Post-Quantum Deniable Ring Signature
The Signal protocol relies on a handshake protocol, formerly X3DH and now PQXDH, to set up secure conversations. One of its privacy properties, of value to Signal, is deniability, allowing users to deny participation in communications. Prior analyses of deniability for these protocols, including post-quantum variants, use models highly tailored to the individual protocols and generally make ad-hoc adaptations of ``standard'' AKE definitions, obscuring the concrete deniability guarantees and complicating comparisons across protocols. Building on Hashimoto et al.'s abstraction for Signal handshake protocols (USENIX '25), we address this gap by presenting a unified framework for analyzing their deniability. We analyze Signal's classically secure X3DH and harvest-now-decrypt-later-secure PQXDH, and show that PQXDH is deniable against harvest-now-judge-later attacks, where a quantum judge retrospectively assesses the participation of classical users. We further analyze post-quantum alternatives like RingXKEM, whose deniability relies on ring signatures (RS).
By introducing a novel metric inspired by differential privacy, we provide relaxed, pragmatic guarantees for deniability. We also use this metric to define deniability for RS, a relaxation of anonymity, allowing us to build an efficient RS from the NIST-standardized Falcon (and from MAYO) that is not anonymous but is provably deniable.
ZHE: Efficient Zero-Knowledge Proofs for HE Evaluations
Homomorphic Encryption (HE) allows computations on encrypted data without decryption. It can be used where users' information is to be processed by an untrusted server, and it has been a popular choice in privacy-preserving applications. However, in order to obtain meaningful results, one has to assume an honest-but-curious server, i.e., one that faithfully follows what it was asked to do. If the server is malicious, there is no guarantee that the computed result is correct. The notion of verifiable HE (vHE) was introduced to detect a malicious server's behavior, but current vHE schemes are either more than four orders of magnitude slower than the underlying HE operations (Atapoor et al., CIC 2024) or fast but incompatible with server-side private inputs (Chatel et al., CCS 2024).
In this work, we propose ZHE, a vHE framework with efficient Zero-Knowledge Proofs (ZKPs) that prove the correct execution of HE evaluations while protecting the server's private inputs. More precisely, we first design two new highly efficient ZKPs for modulo operations and (inverse) Number Theoretic Transforms (NTTs), two of the basic operations of HE evaluations. Then we build a customized ZKP for HE evaluations that is scalable, enjoys fast prover time, and has a non-interactive online phase. Our ZKP is applicable to all Ring-LWE-based HE schemes, such as BGV and CKKS. Finally, we implement our protocols for both BGV and CKKS and conduct extensive experiments on various HE workloads. Compared to state-of-the-art works, both our prover time and our verifier time are improved; in particular, our prover cost is only roughly 27-36x that of the underlying HE operations, two to three orders of magnitude cheaper than the state of the art.
XHMQV: Better Efficiency and Stronger Security for Signal’s Initial Handshake based on HMQV
The Signal protocol is the most widely deployed end-to-end-encrypted messaging protocol. Its initial handshake protocol X3DH allows parties to asynchronously derive a shared session key without the need to be online simultaneously, while providing implicit authentication, forward secrecy, and a form of offline deniability. The X3DH protocol has been extensively studied in the cryptographic literature and is acclaimed for its strong "maximum-exposure" security guarantees, hedging against compromises of users' long-term keys and medium-term keys but also the ephemeral randomness used in the handshake. This maximum-exposure security is achieved by deriving keys from the concatenation of 3–4 Diffie–Hellman (DH) secrets, each combining two long-term, medium-term, or ephemeral DH shares.
Remarkably, X3DH's approach of concatenating plain DH combinations is sub-optimal, both in terms of maximum-exposure security and performance. Indeed, Krawczyk's well-known HMQV protocol (Crypto '05) is a high-performance, DH-based key exchange that provides strong security against long-term and ephemeral key compromise. One might hence wonder: why not base Signal's initial handshake on HMQV?
In this work, we study this question and show that a carefully adapted variant of HMQV, which we call XHMQV, indeed enables stronger security and efficiency while matching the constraints of Signal's initial handshake. Most notably, HMQV does not work as a drop-in replacement for X3DH, as the latter's asynchronicity requires the protocol to handle cases where one party runs out of ephemeral keys (pre-uploaded to the Signal server). Our XHMQV design hence augments HMQV with medium-term keys analogous to those used in X3DH. We prove that XHMQV provides security in all 3–4 compromise scenarios where X3DH does and additionally in 1–2 further scenarios, strengthening the handshake's maximum-exposure guarantees while using more efficient group operations. We further confirm that our XHMQV design achieves deniability guarantees comparable to X3DH. Our security model is the first to capture Signal's long-term key reuse between DH key exchange and signatures, which may be of independent interest.
Rugged Pseudorandom Permutations with Beyond-Birthday-Bound Security
A rugged pseudorandom permutation (RPRP) is a security notion for variable-length tweakable ciphers that is strictly weaker than the traditional notion of a strong pseudorandom permutation. Being a weaker security notion, it admits more efficient constructions; yet the notion is strong enough that any such construction lends itself to a number of practical applications. It can be used to construct onion encryption, misuse-resistant AEAD, and AEAD secure under the release of unverified plaintext. Two recent works introduced the notion, explored some of its applications, and studied a number of constructions that meet it. However, no known construction achieves this notion with beyond-birthday-bound security. Current cloud applications process amounts of data that go well beyond the traditional birthday barrier, making beyond-birthday security the new target. As such, the need for encryption with beyond-birthday-bound security has become a very practical concern. In this work, we present the first constructions of variable-length tweakable ciphers that satisfy RPRP security beyond the birthday bound. From these constructions, we readily obtain efficient AEAD schemes that are optimally secure against nonce misuse and the release of unverified plaintext.
Verifiable Decapsulation: Recognizing Faulty Implementations of Post-Quantum KEMs
Cryptographic schemes often contain verification steps that are essential for security. Yet, faulty implementations missing these steps can easily go unnoticed, as the schemes might still function correctly. A prominent instance of such a verification step is the re-encryption check in the Fujisaki-Okamoto (FO) transform, which plays a central role in the post-quantum key encapsulation mechanisms (KEMs) considered in NIST's PQC standardization process. In KEMs built from FO, decapsulation performs a re-encryption check that is essential for security but not for functionality; it will therefore go unnoticed if this step is omitted or implemented incorrectly, opening the door to key recovery attacks. Notably, such an implementation flaw was present in HQC's reference implementation and was only noticed after 19 months.
In this work, we develop a modified FO transform that binds re-encryption to functionality, ensuring that a faulty implementation which skips re-encryption will be exposed through basic correctness tests. We do so by adapting the "verifiable verification" methodology of Fischlin and Günther (CCS 2023) to the context of FO-based KEMs. More concretely, by exporting an unpredictable confirmation code from the public key encryption and embedding it into the key derivation function, we can confirm that (most of) the re-encryption step was indeed performed during decapsulation. We formalize this concept, establish modified FO transforms, and prove how unpredictable PKE confirmation codes turn into noticeable correctness errors for faulty implementations. We show how to apply this technique to ML-KEM and HQC, both with negligible overhead, by leveraging the entropy lost through ciphertext compression or truncation. We confirm that our approach works through mathematical proofs, as well as experimental data. Our experiments show that the implementation flaw in HQC's reference implementation indeed makes basic test cases fail when following our approach.
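A toy model of the mechanism (ours; the real ML-KEM/HQC details differ): derive the shared key from a confirmation code that only an honest re-encryption recovers, so a decapsulation that skips the re-encryption step computes a wrong key and fails plain correctness tests.

    import hashlib, os

    def kdf(*parts):
        return hashlib.sha256(b"".join(parts)).digest()

    def toy_enc(pk, m):
        digest = kdf(pk, m)
        ct = (m, digest[:16])      # transmitted; truncation loses entropy
        code = digest[16:]         # the lost entropy is the confirmation code
        return ct, code

    def toy_dec(sk, ct):
        return ct[0]               # toy PKE: "decryption" just recovers m

    def encaps(pk):
        m = os.urandom(32)
        ct, code = toy_enc(pk, m)
        return ct, kdf(m, code)    # shared key binds the confirmation code

    def decaps(sk, pk, ct, skip_reencryption=False):
        m = toy_dec(sk, ct)
        code = b"" if skip_reencryption else toy_enc(pk, m)[1]
        return kdf(m, code)

    pk = sk = b"toy"
    ct, key = encaps(pk)
    assert decaps(sk, pk, ct) == key                            # honest: OK
    assert decaps(sk, pk, ct, skip_reencryption=True) != key    # flaw caught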
Nominal State-Separating Proofs
State-separating proofs are a powerful tool for structuring cryptographic arguments so that they are amenable to mechanization, as has been demonstrated through implementations such as SSProve. However, the treatment of separation for heaps has never been satisfactorily addressed. In this work, we present the first comprehensive treatment of nominal state separation in state-separating proofs, using nominal sets. We provide a Rocq library, called Nominal-SSProve, that builds on nominal state separation and supports mechanized proofs that are more concise and arguably more elegant.
“Check-Before-you-Solve”: Verifiable Time-lock Puzzles
Time-lock puzzles are cryptographic primitives that guarantee to the generator that the puzzle cannot be solved in fewer than T sequential computation steps. They have recently found numerous applications, e.g., in fair contract signing and sealed-bid auctions. However, solvers have no a priori guarantee about the solution they will reveal, e.g., about its ``usefulness'' within a certain application scenario. In this work, we propose verifiable time-lock puzzles (VTLPs) that address this by having the generator publish a succinct proof that the solution satisfies certain properties (without revealing anything else about it). Solvers are thus motivated to ``commit'' resources into solving the puzzle. We propose VTLPs that support proving arbitrary NP relations about the puzzle solution.
At a technical level, to overcome the performance hurdles of the ``naive'' approach of simply solving the puzzle within a SNARK that also checks the desired relation, our scheme combines the ``classic'' RSA time-lock puzzle of Rivest, Shamir, and Wagner with novel building blocks for ``offloading'' expensive modular group exponentiations and multiplications out of the SNARK circuit. We then propose a second VTLP specifically for checking RSA-based signatures and verifiable random functions (VRFs). Our second scheme does not rely on a SNARK and can have several applications, e.g., in the context of distributed randomness generation. Along the way, we propose new constant-size proofs for modular exponent relations over hidden-order groups, which may be of independent interest. Finally, we experimentally evaluate the performance of our schemes and report the findings and comparisons with prior approaches.
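For reference, the classic RSW time-lock puzzle that the VTLP builds on works as follows: the generator, knowing phi(N), shortcuts to x^(2^T) mod N, while a solver must perform T sequential squarings. A minimal sketch with toy parameters (real deployments use 1024-bit-plus primes):

    import random
    from math import gcd

    p, q = 1000003, 1000033            # toy primes
    N, phi = p * q, (p - 1) * (q - 1)
    T = 10_000                         # the sequentiality parameter

    x = random.randrange(2, N)
    while gcd(x, N) != 1:              # ensure x is a unit mod N
        x = random.randrange(2, N)

    fast = pow(x, pow(2, T, phi), N)   # generator's shortcut via Euler's theorem
    slow = x
    for _ in range(T):                 # solver: inherently sequential squarings
        slow = slow * slow % N
    assert fast == slow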
Fabric-X: Redesigning Hyperledger Fabric Architecture for High-throughput Regulated Asset Exchange Applications
The adoption of Distributed Ledger Technology (DLT) for critical financial infrastructures like Central Bank Digital Currencies (CBDCs) is hindered by a significant performance gap. Permissioned blockchains such as Hyperledger Fabric, while conceptually suitable, are limited by architectural bottlenecks in their monolithic peer design and consensus mechanisms, preventing them from achieving the required scale.
This paper presents a fundamental re-architecture of Hyperledger Fabric that addresses these challenges end-to-end. We decompose the monolithic peer into independently scalable microservices for endorsement, validation, and committing. To maximize parallelism, we introduce a transaction dependency graph that enables the safe, concurrent validation of transactions across multiple blocks. Complementing the peer redesign, we introduce Arma, a novel sharded Byzantine Fault Tolerant (BFT) ordering service that dramatically increases throughput by ordering compact transaction digests rather than full transaction payloads. We implemented and benchmarked this framework with a UTXO-based CBDC application. Our evaluation demonstrates a peak throughput exceeding 200,000 transactions per second (TPS), a two-orders-of-magnitude improvement over the standard implementation. This work proves that permissioned DLTs can be engineered for national-scale payment systems, providing a resilient and highly performant foundation for practical CBDC deployments and the integration of advanced, computationally intensive features.
Homomorphic Field Trace Revisited: Breaking the Cubic Noise Barrier
We present a novel homomorphic trace evaluation algorithm that mitigates the phase amplification problem inherent in the definition of the field trace. Our algorithm overcomes the phase amplification with only a negligible computational overhead, thereby improving the usability of homomorphic field trace evaluation. Moreover, our tweak also improves the noise propagation, breaking the cubic variance bound of previous works.
Our experimental results, obtained by integrating the algorithm into state-of-the-art homomorphic encryption algorithms, further demonstrate its usefulness. Specifically, it improves the noise accumulation of (high-precision) circuit bootstrapping and achieves maximal speedup by replacing the costly high-precision trace evaluation. Based on the same idea, we also present a low-latency, high-precision LWE-to-GLWE packing algorithm and show that it significantly reduces the packing error without severe degradation of performance.
Homomorphic Encryption for Large Integers from Nested Residue Number Systems
Existing fully homomorphic encryption (FHE) schemes primarily support a plaintext space defined over a relatively small prime. However, in some important applications of FHE one needs arithmetic over a large prescribed prime. In this paper we construct a new FHE system that is specifically designed for this purpose. Our system composes three layers of residue number systems to enable much better performance than was previously possible. Our experiments show that for arithmetic modulo a 256-bit integer, when compared to the TFHE-rs implementation of 256-bit arithmetic, our new system achieves a factor of two thousand better multiplication throughput and a factor of twenty better latency. Moreover, for a 2048-bit prime modulus we achieve far better performance than was previously possible.
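The residue-number-system idea underlying the construction fits in a few lines: arithmetic modulo a large Q is done componentwise modulo small pairwise-coprime moduli, and the Chinese Remainder Theorem reconstructs the result (the paper nests three such layers; this sketch shows one).

    from functools import reduce

    qs = [97, 101, 103]                 # pairwise-coprime toy moduli
    Q = reduce(lambda a, b: a * b, qs)  # composite modulus Q = 97*101*103

    to_rns = lambda x: [x % q for q in qs]
    mul_rns = lambda a, b: [(x * y) % q for x, y, q in zip(a, b, qs)]

    def from_rns(r):
        """CRT reconstruction of the integer mod Q from its residues."""
        x = 0
        for ri, qi in zip(r, qs):
            Mi = Q // qi
            x += ri * Mi * pow(Mi, -1, qi)
        return x % Q

    a, b = 123456, 654321
    assert from_rns(mul_rns(to_rns(a), to_rns(b))) == (a * b) % Q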
Quantum-safe Signatureless DNSSEC
We present a backward-compatible protocol that leverages a quantum-safe KEM and a MAC to perform signature-less DNSSEC validations in a single UDP query/response exchange. Our experiments targeting NIST level I security for QTYPE A query resolution show that our protocol is practically equivalent to the presently deployed RSA-2048 in terms of bandwidth usage and resolution speed. Compared to post-quantum signatures, it substantially reduces both bandwidth consumption and resolution times. Moreover, with response sizes comparable to query sizes, it obviates the long-standing issues of IP fragmentation, TCP re-transmits, and DDoS amplification attacks.
Fairness in the Wild: Secure Atomic Swap with External Incentives
Atomic swaps enable asset exchanges across blockchains without relying on trusted intermediaries, and are a key component of decentralized finance (DeFi) ecosystems. Recently, Chung, Masserova, Shi, and Thyagarajan introduced Rapidash (Financial Cryptography 2025), an atomic swap protocol that remains incentive compatible under user-miner collusion, by ensuring that the honest strategy forms a coalition-resistant Nash equilibrium. However, their model assumes a closed system where players act solely based on internal protocol incentives. In practice, participants may be influenced by external incentives such as off-chain rewards or adversarial bribes, which can undermine such equilibrium guarantees.
In this work, we introduce a new game-theoretic notion, bounded maximin fairness, which ensures that honest participants remain protected against rational adversaries with arbitrary but bounded external incentives. We construct an atomic swap protocol that satisfies this notion, while preserving the equilibrium properties of prior work in the absence of external influence.
As we show, our protocol is easy to implement and can be instantiated even in Bitcoin’s limited scripting language.
SmallWood: Hash-Based Polynomial Commitments and Zero-Knowledge Arguments for Relatively Small Instances
Zero-knowledge proofs (ZKPs) are a fundamental building block in cryptography, enabling powerful privacy-preserving and verifiable computations. In the post-quantum era, hash-based ZKPs have emerged as a promising direction due to their conjectured resistance to quantum attacks, along with their simplicity and efficiency.
In this work, we introduce SmallWood, a hash-based polynomial commitment scheme (PCS) and zero-knowledge argument system optimized for relatively small instances. Building on the recent degree-enforcing commitment scheme (DECS) from the Threshold-Computation-in-the-Head (TCitH) framework, we refine its formalization and combine it with techniques from Brakedown. This results in a new hash-based PCS that is particularly efficient for polynomials of relatively small degree, outperforming existing approaches in this range.
Leveraging this new PCS, we design a hash-based zero-knowledge argument system that outperforms the state of the art in terms of proof size for small to moderately sized witnesses. Additionally, we present exact zero-knowledge arguments for lattice-based problems using SmallWood, demonstrating highly competitive performance: our scheme yields proof sizes under 25 KB across a wide range of lattice parameters, including Kyber and Dilithium instances.