All papers in 2025 (1112 results)

Last updated:  2025-06-13
Computational Attestations of Polynomial Integrity Towards Verifiable Back-Propagation
Dustin Ray and Caroline El Jazmi
Recent advancements in machine learning accuracy and utility have been driven by the effective combination of sophisticated models with high-performance computational scaling. As the development of large-scale models shifts away from commodity hardware to outsourced computation, it becomes paramount to ensure that the training process is executed with integrity and transparency. This encompasses verifying that adequate computational resources were expended and that the resulting model is accurate, rather than the product of skipped steps or resource-saving shortcuts by the external provider. Building on our previous efforts, which demonstrated the computational feasibility of using this system to argue correctness for differentially-private linear regression, we extend those results to achieve fully provable back-propagation—a cornerstone operation in modern machine learning training. Our system achieves complete zero-knowledge, revealing nothing about the input data during training, and ensures quantum security by relying on no weak cryptographic primitives. Efficiency is substantially increased through the use of a fixed-point decimal representation, reducing the computational overhead typically associated with floating-point arithmetic. Notably, our solution is doubly efficient, achieving a logarithmic-time verifier and a linear-time prover. Implemented entirely in Rust without reliance on external machine learning libraries, and executed within a cryptographically secure virtual machine, this work represents a significant advancement toward verifiable, secure, and efficient outsourced machine learning computations.
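As an illustrative aside, the fixed-point idea mentioned above can be sketched in a few lines. This is our own toy example (16 fractional bits and a hypothetical gradient-descent update), not the paper's Rust implementation or its actual parameters:

```python
# Toy fixed-point arithmetic with 16 fractional bits (illustrative only).
SCALE = 1 << 16

def to_fixed(x: float) -> int:
    # Encode a real number as an integer multiple of 2^-16.
    return round(x * SCALE)

def from_fixed(f: int) -> float:
    return f / SCALE

def fx_mul(a: int, b: int) -> int:
    # Rescale after multiplying so the result keeps 16 fractional bits.
    return (a * b) >> 16

# A gradient-descent style update w -= lr * g done entirely in integers,
# the kind of arithmetic that avoids floating-point overhead in a proof system.
w, lr, g = to_fixed(0.5), to_fixed(0.01), to_fixed(2.0)
w = w - fx_mul(lr, g)
print(from_fixed(w))  # close to 0.48
```

The integer-only arithmetic is what makes such operations cheap to express inside an arithmetic circuit.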
Last updated:  2025-06-13
Hydrangea: Optimistic Two-Round Partial Synchrony with One-Third Fault Resilience
Nibesh Shrestha, Aniket Kate, and Kartik Nayak
We present Hydrangea, a partially synchronous Byzantine fault-tolerant state machine replication protocol that achieves a latency of two rounds optimistically while maintaining high adversarial resilience. In particular, for a system of parties, if up to parties are faulty, then the protocol can obtain a latency of two rounds. Otherwise, the protocol can obtain a latency of three rounds while tolerating Byzantine faults and crash faults simultaneously.
Last updated:  2025-06-13
SEAF: Secure Evaluation on Activation Functions with Dynamic Precision for Secure Two-Party Inference
Hao Guo, Zhaoqian Liu, Ximing Fu, and Zhusen Liu
Secure evaluation of non-linear functions is one of the most expensive operations in secure two-party computation, particularly for activation functions in privacy-preserving machine learning (PPML). This work introduces SEAF, a novel framework for efficient Secure Evaluation on Activation Functions. SEAF is based on the linear-approximation approach, but enhances it with two key innovations: Trun-Eq-based interval test protocols and linear approximation with dynamic precision, both of which have the potential for broader applicability. Furthermore, we classify common activation functions into several categories and present specialized methods to evaluate them using our enhanced techniques. Our implementation of SEAF demonstrates substantial speedups on activation functions compared to SirNN (S&P'21), and it outperforms Iron (NeurIPS'22) and Bolt (S&P'24). For end-to-end secure inference on BERT, activation-function evaluation accounts for a significant share of the total runtime in Iron and Bolt; our optimizations reduce these proportions, eliminating activation functions as a bottleneck in secure inference.
Last updated:  2025-06-12
A Framework for Compiling Custom Languages as Efficiently Verifiable Virtual Machines
Assimakis A. Kattis, Brian Klatt, Philip Quirk, and Logan Allen
In this work, we develop a framework for compiling languages into efficient Interactive Oracle Proofs (IOPs, or ‘circuits’), motivated by applications in verifiable Virtual Machine (zkVM) design. We provide a set of sufficient conditions on a language under which it can be compiled into an efficient IOP, alongside corresponding performance costs. We identify a subclass of languages, which we denote as traversable, and demonstrate how traversable languages can be efficiently compiled as circuits using established techniques. To demonstrate the efficacy of our compilation framework, we develop a zkVM for the Nock programming language by (1) formalizing the existing Nock specification, and (2) applying our techniques to design an efficient IOP representation for the Nock VM. The resulting circuit is small, on par with existing state-of-the-art zkVM designs and can be generated for any traversable language in a generic way.
Last updated:  2025-06-12
Kahrobaei-Koupparis DSS: universal forgery
Alexander Ushakov
Regardless of the choice of parameters, knowledge of a single signed message, i.e., a message/signature pair produced by the Kahrobaei-Koupparis digital signature scheme, is sufficient to forge a valid signature for any other message.
Last updated:  2025-06-12
Laconic PSI on Authenticated Inputs and Applications
James Bartusek, Sanjam Garg, Abhishek Jain, and Guru-Vamsi Policharla
A common issue with using secure computation in practice is that its security does not place any restrictions on what an adversary can use as input in the protocol. In this work, we focus on the practically-motivated setting of (two-message, labeled) private set intersection (PSI), and advocate for a clean and versatile solution to this problem: PSI on authenticated inputs. Our central contributions are summarized as follows. - We formulate a novel definition of PSI on authenticated inputs that has the potential for use in several applications, from content moderation in end-to-end encrypted systems to watchlists in anonymous e-cash systems. - We design a concretely-efficient and laconic (i.e., the size of the receiver's message is independent of its set size) protocol for PSI on authenticated inputs. - We build on our PSI protocol to obtain the first laconic set pre-constrained group signature scheme, improving on that of Bartusek et al. (EUROCRYPT '23). We also explore various optimizations to our basic protocol, including reducing the receiver's concrete run time, and a tradeoff between CRS size and message size.
Last updated:  2025-06-12
Early Stopping is Cheap
Fatima Elsheimy, Simon Holmgaard Kamp, and Julian Loss
Minimizing both the round and communication complexity of Byzantine agreement (BA) is a fundamental question in distributed computing. A long line of work has focused on early-stopping deterministic protocols that can terminate within a number of synchronous rounds proportional to the actual number of corruptions in an execution, as opposed to an upper bound on that number. This can lead to major benefits when far fewer corruptions occur than the bound allows. A very different style of randomized protocol has focused on player-replaceable BA protocols with communication complexity linear in the number of parties and adaptive security, which rely on only a small and rotating subcommittee of parties to ever speak in the protocol. One downside of existing player-replaceable protocols is the number of rounds they require to terminate with overwhelming probability. For applications demanding high security guarantees, this can easily become the bottleneck of a player-replaceable protocol. Motivated by this gap in the literature, we give the first protocol that is simultaneously player-replaceable and early stopping (with overwhelming probability). Our protocol is secure against adaptive Byzantine corruptions and, with overwhelming probability, terminates in a number of rounds proportional to the actual number of corruptions. Moreover, our protocol has constant expected round complexity and correspondingly bounded communication.
Last updated:  2025-06-12
b4M: Holistic Benchmarking for MPC
Karl W. Koch, Dragos Rotaru, and Christian Rechberger
Secure Multi-Party Computation (MPC) is becoming more and more usable in practice. This practicality stems primarily from well-established general-purpose MPC frameworks, such as MP-SPDZ. However, to evaluate the practicality of an MPC program in the envisioned environments, many benchmarks still need to be run. We identified three challenges in the context of performance evaluations within the MPC domain: first, the cumbersome process of holistically benchmarking MPC programs; second, the difficulty of finding the best-possible MPC setting for a given task and envisioned environment; and third, achieving consistent evaluations of the same task or problem area across projects and papers. In this work, we address the gap of tedious and complex benchmarking of MPC. Related works so far mostly provide a comparison of certain programs across different engines. To the best of our knowledge, this is the first time the whole benchmarking pipeline has been automated; it is provided by our open-sourced framework Holistic Benchmarking for MPC (b4M). b4M is easy to configure using TOML files, outputs ready-to-use graphs, and even provides the MPC engine itself as its own benchmark dimension. Furthermore, it takes three relatively easy steps to add further engines: first, integrate engine-specific commands into b4M's runner class; second, output performance metrics in b4M's format; third, provide a Docker container for the engine's parties. To showcase b4M, we provide an exemplary evaluation for the computation of the dot product and logistic regression using a real-world dataset. With this work, we move towards fully-automated evaluations of MPC programs, protocols, and engines, which streamlines the setup process and the comparison of various trade-offs. Hence, b4M advances MPC development by improving its benchmarking usability.
Last updated:  2025-06-12
Low-cost anonymous reputation update for IoT applications
Alex Shafarenko
This paper presents a novel approach to zero-trust anonymous reputation update in crowd sensing IoT applications. We use a suite of cryptographic functions to achieve anonymity, including unlinkability of sensing reports to the principals that submit them and to one another, while enabling the infrastructure to reliably quantify the degree of trust expressed as a reputation level. The protocol is low-cost for the anonymous participant due to the use of cheap standard algorithms: low-exponent modular exponentiation and cryptographic hashing, which makes it quite suitable for IoT.
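The two "cheap standard algorithms" above can be illustrated with a toy sketch. The primes, exponent, and message below are our own illustrative choices, not the paper's protocol or key sizes:

```python
import hashlib

# Toy sketch of hashing plus low-exponent modular exponentiation.
# A low public exponent (e = 3) makes verification only two modular
# multiplications, which is why it suits constrained IoT devices.
p, q = 10007, 10037                  # toy primes; real keys are >= 2048 bits
n, e = p * q, 3
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent, held by the signer

def digest(msg: bytes) -> int:
    # Hash the message and reduce into the modulus range.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

m = digest(b"anonymous sensing report")
sig = pow(m, d, n)                   # expensive step, done by the signer
assert pow(sig, e, n) == m           # cheap verification with exponent 3
```

Real deployments would of course use padded hashing and full-size moduli; the point is only the asymmetry between signing and low-exponent verification cost.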
Last updated:  2025-06-12
Better GBFV Bootstrapping and Faster Encrypted Edit Distance Computation
Robin Geelen and Frederik Vercauteren
We propose a new iterative method to convert a ciphertext from the Generalized BFV (GBFV) scheme to the regular BFV scheme. In particular, our conversion starts from an encrypted plaintext that lives in a large cyclotomic ring modulo a small-norm polynomial, and gradually changes the encoding to a smaller cyclotomic ring modulo a larger integer. Previously, only a trivial conversion method was known, which did not change the underlying cyclotomic ring. Using our improved conversion algorithm, we can bootstrap the GBFV scheme almost natively, in the sense that only a very small fraction of the operations is computed inside regular BFV. Specifically, we evaluate (an adapted version of) the slot-to-coefficient transformation entirely in the GBFV scheme, whereas the previous best method used the BFV scheme for that transformation. This insight allows us to bootstrap either with less noise growth, or much faster than the state of the art. We implement our new bootstrapping in Microsoft SEAL. Our experiments show that, for the same remaining noise budget, our bootstrapping runs in only 800 ms when working with ciphertexts containing 1024 slots, which is faster than the state of the art. Finally, we use our improved GBFV bootstrapping in an application that computes an encrypted edit distance. Compared to the recent TFHE-based Leuvenshtein algorithm, our GBFV version is almost two orders of magnitude faster in the amortized sense.
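For context, the edit distance computed under encryption is the standard recurrence; a plaintext sketch is below (the homomorphic evaluation in the paper is, of course, far more involved):

```python
def edit_distance(a: str, b: str) -> int:
    # Classic Wagner-Fischer dynamic program; this is the recurrence that
    # the encrypted computation evaluates, which is what makes it costly.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # 3
```

Every min and comparison in the inner loop becomes a non-trivial homomorphic operation, which is why bootstrapping speed dominates the encrypted runtime.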
Last updated:  2025-06-12
Universally Composable Succinct Vector Commitments and Applications
Ran Canetti and Megan Chen
We develop a toolbox for modular construction and analysis of succinct, non-interactive commitments and vector commitments in the random oracle model, while guaranteeing universally composable security. To demonstrate its power, we use the toolbox to construct and analyze a modular variant of the Kilian-Micali ZK-SNARK. Along the way we also propose a new UC formulation of a global random oracle, that avoids a weakness in existing formulations and also enables expressing more nuanced, session-specific abstractions. We hope that this toolbox will be useful for building secure applications in settings where both succinctness and non-interactivity are key.
Last updated:  2025-06-12
TEEMS: A Trusted Execution Environment based Metadata-protected Messaging System
Sajin Sasy, Aaron Johnson, and Ian Goldberg
Ensuring privacy of online messaging remains a challenge. While the contents or data of online communications are often protected by end-to-end encryption, the metadata of communications are not. Metadata such as who is communicating with whom, how much, and how often, are leaked by popular messaging systems today. In the last four decades we have witnessed a rich literature of designs towards metadata-protecting communications systems (MPCS). While recent MPCS works often target metadata-protected messaging systems, no existing construction simultaneously attains four desirable properties for messaging systems, namely (i) low latency, (ii) high throughput, (iii) horizontal scalability, and (iv) asynchronicity. Existing designs often capture disjoint subsets of these properties. For example, PIR-based approaches achieve low latency and asynchronicity but have low throughput and lack horizontal scalability, mixnet-based approaches achieve high throughput and horizontal scalability but lack asynchronicity, and approaches based on trusted execution environments (TEEs) achieve high throughput and asynchronicity but lack horizontal scalability. In this work, we present TEEMS, the first MPCS designed for metadata-protected messaging that simultaneously achieves all four desirable properties. Our distributed TEE-based system uses an oblivious mailbox design to provide metadata-protected messaging. TEEMS presents novel oblivious routing protocols that adapt prior work on oblivious distributed sorting. Moreover, we introduce the notion of ID and token channels to circumvent shortcomings of prior designs. We empirically demonstrate TEEMS' ability to support clients engaged in metadata-protected conversations in under 1 s, with 205 cores, achieving an 18× improvement over prior work for latency and throughput, while supporting significantly better scalability and asynchronicity properties.
Last updated:  2025-06-12
A Note on One Authentication and Key Agreement Scheme for UAV-Assisted VANETs for Emergency Rescue
Zhengjun Cao and Lihua Liu
We show that the key agreement scheme [IEEE TNSE, 1454--1468, 2004] is insecure against ephemeral secret leakage attack and smart card loss attack, not as claimed. This failure results from its simple encryption, in which only bitwise XOR operations are used to encrypt the messages.
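The weakness of XOR-only encryption noted above is easy to demonstrate: masking two values with the same secret lets anyone cancel the secret by XORing the ciphertexts. The messages and key below are our own toy values, not those of the attacked scheme:

```python
# Minimal demonstration of why XOR-only "encryption" fails.
def xor_mask(msg: bytes, key: bytes) -> bytes:
    return bytes(m ^ k for m, k in zip(msg, key))

key = b"\x13\x37\xc0\xde\x13\x37\xc0\xde"   # reused secret mask
c1 = xor_mask(b"SK=cafe!", key)
c2 = xor_mask(b"PIN=0042", key)

# XORing the two ciphertexts cancels the key entirely: leak == m1 XOR m2.
leak = bytes(a ^ b for a, b in zip(c1, c2))
# Knowing (or guessing) either message then reveals the other, no key needed.
recovered = bytes(a ^ b for a, b in zip(leak, b"SK=cafe!"))
print(recovered)  # b'PIN=0042'
```

This is the structural problem behind ephemeral-secret-leakage and smart-card-loss attacks on XOR-based schemes: any known or leaked value peels the mask off everything it was combined with.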
Last updated:  2025-06-11
Tanuki: New Frameworks for (Concurrently Secure) Blind Signatures from Post-Quantum Groups Actions
Lucjan Hanzlik, Yi-Fu Lai, Marzio Mula, Eugenio Paracucchi, Daniel Slamanig, and Gang Tang
Blind signatures are fundamental cryptographic primitives enabling privacy-preserving authentication and have seen renewed interest in the post-quantum literature. Existing efficient constructions predominantly rely on Fischlin’s generic paradigm instantiated over lattice assumptions, while blinding techniques for sigma-protocol-based blind signatures remain sparse beyond lattices. Moreover, achieving provable concurrent security under polynomially many sessions has been a longstanding open challenge for this approach in the post-quantum literature, as evidenced by the recent attacks at EC'24 and PKC'24. This work broadens the landscape of post-quantum blind signatures by introducing novel techniques and proposing four frameworks based on general cryptographic group actions, without requiring commutativity. Our constructions admit instantiations under diverse post-quantum assumptions, including CSIDH (isogeny-based), LESS (code-based, NIST round-two), and more. These frameworks offer flexible trade-offs in assumptions (from interactive one-more to the standard inversion problem) and key/signature sizes, and culminate in a construction that achieves security under polynomially many concurrent sessions. This enables the first efficient blind signatures from isogenies and codes with provable concurrent security, with signature sizes of 3.9 KB and 56 KB, respectively. We also outline several directions for optimization and further instantiations for future work.
Last updated:  2025-06-11
Lattice-Based Accumulator and Application to Anonymous Credential Revocation
Victor Youdom Kemmoe, Anna Lysyanskaya, and Ngoc Khanh Nguyen
An accumulator is a cryptographic system for compactly representing a set of elements such that every element in the set has a short membership witness. A dynamic accumulator, furthermore, allows elements to be added to and deleted from the accumulator. Camenisch and Lysyanskaya (CRYPTO'02) constructed the first dynamic accumulator under the strong-RSA assumption and showed how it can be used to enable revocation of anonymous credentials. In this paper, we give a lattice-based dynamic accumulator tailor-made for enabling revocation of post-quantum anonymous credential systems. As a concrete example, we instantiate our dynamic accumulator on top of the anonymous credential system implemented in the LaZer library (ACM CCS 2024).
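For intuition, the strong-RSA-style dynamic accumulator cited above (Camenisch-Lysyanskaya) can be sketched with toy parameters. This illustrates only the accumulate/witness/verify interface; the paper's lattice-based construction is entirely different, and the modulus below is a toy whose factorization is known:

```python
# Toy RSA-style accumulator (illustrative parameters only).
N = 10007 * 10037      # in practice an RSA modulus of unknown factorization
g = 2                  # public base

elements = [3, 5, 7]   # accumulated elements (primes, in the real scheme)
acc = g
for x in elements:
    acc = pow(acc, x, N)   # accumulator value g^(3*5*7) mod N

# Membership witness for 5: the accumulator over all *other* elements.
wit = g
for x in elements:
    if x != 5:
        wit = pow(wit, x, N)

assert pow(wit, 5, N) == acc   # verifier check: wit^5 == acc
```

Deletion (the "dynamic" part) requires updating acc and all witnesses, which is exactly the operation revocation systems exercise when a credential is revoked.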
Last updated:  2025-06-11
Isogeny-based key exchange from orientations of large discriminant
Marc Houben
We describe an algorithm to efficiently evaluate class group actions on supersingular elliptic curves that are oriented by an imaginary quadratic order of arbitrarily large discriminant. Contrary to CSIDH, this makes it possible to increase the post-quantum security of the group action without increasing the size of the base field. In particular, we describe instances where Kuperberg's algorithm loses to generic supersingular isogeny path finding. Our algorithm is fully deterministic, strictly constant time, dummy free, and can be implemented without conditional branches. We show that the (restricted effective) group action can be employed in a non-interactive key exchange protocol, which we argue is asymptotically more efficient than CSIDH.
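The non-interactive key-exchange pattern from a commutative group action can be sketched with modular exponentiation as a toy stand-in for the isogeny-based action (all parameters below are illustrative, and the discrete-log group is only a pedagogical substitute):

```python
# Toy non-interactive key exchange from a commutative "group action":
# exponentiation mod a prime stands in for the class group action on curves.
p, g = 2 ** 13 - 1, 3            # toy group; 8191 is prime

def act(secret: int, elem: int) -> int:
    # The "action" of a secret on a public element.
    return pow(elem, secret, p)

a, b = 1234, 5678                # Alice's and Bob's long-term secrets
pub_a, pub_b = act(a, g), act(b, g)   # published once, no interaction needed

# Commutativity gives both parties the same shared key: a.(b.g) == b.(a.g).
shared_a = act(a, pub_b)
shared_b = act(b, pub_a)
assert shared_a == shared_b
```

In CSIDH-style schemes the elements are oriented supersingular curves and the secrets are class group elements; the commuting-actions pattern is identical.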
Last updated:  2025-06-11
Oracle-Based Multistep Strategy for Solving Polynomial Systems Over Finite Fields and Algebraic Cryptanalysis of the Aradi Cipher
Roberto La Scala and Sharwan K. Tiwari
The multistep solving strategy consists of a divide-and-conquer approach: when a multivariate polynomial system is computationally infeasible to solve directly, one variable is assigned over the elements of the base finite field, and the procedure is recursively applied to the resulting simplified systems. In a previous work by the same authors (among others), this approach proved effective in the algebraic cryptanalysis of the Trivium cipher. In this paper, we present a new implementation of the corresponding algorithm based on a depth-first search strategy, along with a novel complexity analysis leveraging tree structures. We further introduce the notion of an "oracle function" as a general predictive tool for deciding whether the evaluation of a new variable is necessary to simplify the current polynomial system. This notion allows us to unify all previously proposed variants of the multistep strategy, including the classical hybrid approach, by appropriately selecting the oracle function. Finally, we apply the multistep solving strategy to the cryptanalysis of the low-latency block cipher Aradi, recently introduced by the NSA. We present the first full-round algebraic attack, raising concerns about the cipher's actual security with respect to its key length.
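The multistep idea can be sketched as a depth-first search that guesses one variable at a time. The toy GF(2) system below and the trivial "always keep guessing" policy are our own illustration; a real oracle function would predict when the simplified system is tractable and hand off to a direct algebraic solver instead of recursing to the leaves:

```python
# Depth-first multistep guessing over GF(2) (illustrative sketch).
def solve(system, n, assignment=()):
    if len(assignment) == n:
        # Leaf: check whether every equation f(v) = 0 is satisfied.
        return assignment if all(f(assignment) == 0 for f in system) else None
    # Oracle stand-in: always branch on the next variable. A real oracle
    # would stop guessing once the remaining system is easy enough.
    for bit in (0, 1):
        sol = solve(system, n, assignment + (bit,))
        if sol is not None:
            return sol
    return None

# Toy system: x0*x1 + x2 = 1,  x0 + x1 = 1,  x1*x2 = 0
system = [
    lambda v: (v[0] * v[1] + v[2] + 1) % 2,
    lambda v: (v[0] + v[1] + 1) % 2,
    lambda v: (v[1] * v[2]) % 2,
]
print(solve(system, 3))  # (1, 0, 1)
```

The complexity of the real attack is governed by the shape of this search tree, which is what the paper's tree-based analysis quantifies.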
Last updated:  2025-06-11
CuFDFB: Fast and Private Computation on Non-Linear Functions Using FHE
Shutong Jin, Shiyu Shen, Hao Yang, Donglong Chen, Wangchen Dai, and Ray C. C. Cheung
Privacy-preserving neural network inference using Fully Homomorphic Encryption (FHE) faces significant challenges in efficiently evaluating non-polynomial functions, such as activation functions, which are critical for introducing non-linearity in neural networks. Full-Domain Functional Bootstrap (FDFB) algorithms provide a promising solution by enabling the evaluation of arbitrary functions while simultaneously refreshing ciphertexts to manage noise accumulation. Despite their theoretical advantages, the practicality of FDFB algorithms has been limited by excessive computational overhead, often exceeding 1000 ms per ciphertext, which restricts their scalability for large neural networks. To overcome the computational bottlenecks of FDFB, we have re-engineered the algorithms for massively parallel execution on GPUs. Our primary contribution is a hierarchical parallelization strategy that exploits concurrency at the thread, stream, and device levels. A key optimization involves the use of CUDA streams to create a data pipeline that effectively mitigates the overhead of memory transfers between the host and device. This optimized architecture achieves a significant speedup of up to 524× compared to CPU-based implementations. Our implementation maintains full precision for evaluating various activation functions, confirming its viability for large-scale, privacy-preserving machine learning tasks and paving the way for practical FHE-based deep learning.
Last updated:  2025-06-13
Ideally HAWKward: How Not to Break Module-LIP
Clémence Chevignard and Guilhem Mureau
The module-Lattice Isomorphism Problem (module-LIP) was introduced by Ducas et al. (ASIACRYPT '22, ePrint 2022/1155) and used within the signature scheme and NIST candidate HAWK. Mureau et al. (EUROCRYPT '24) pointed out that over certain number fields the problem can be reduced to enumerating the solutions of a certain equation, and that one can always reduce to a similar equation which has only few solutions. This key insight led to a heuristic polynomial-time algorithm for solving module-LIP on those specific instances. Yet this result does not threaten HAWK, for which the problem reduces to enumerating the solutions of a different equation. We show that, in all likelihood, solving this equation requires enumerating a set too large for the search to be feasible, thereby making a straightforward adaptation of the approach of Mureau et al. irrelevant.
Last updated:  2025-06-11
Key Updatable Identity-Based-Signature Schemes
Tobias Guggemos and Farzin Renan
Identity-based signature (IBS) schemes eliminate the need for certificate management, reducing both signature size and verification time. However, the challenge of updating stolen identity-based signature keys (or revoking and re-issuing them) has received limited attention. Existing generic solutions, such as managing revocation lists or periodically renewing user keys, are inefficient and introduce significant networking overhead in dynamic environments. In this work, we address this gap by introducing a symmetric element that enables key updates in IBS schemes via a single multicast message. The network overhead of our solution scales logarithmically with the number of system users, while computation and memory overhead are constant. Furthermore, we generalize our method by proposing a framework to transform any IBS scheme into a key-updatable signature scheme, and we define the token security (unforgeability), forward security, and post-compromise security requirements for such transformations. We demonstrate the versatility of our framework by providing five instantiations based on Schnorr-type, pairing-based, and isogeny-based IBS schemes. Finally, we analyze the security of these instantiations.
Last updated:  2025-06-11
On the Concrete Security of BBS/BBS+ Signatures
Rutchathon Chairattana-Apirom and Stefano Tessaro
BBS/BBS+ signatures are the most promising solution for instantiating practical and lightweight anonymous credentials. They underlie standardization efforts by the W3C and the IRTF. Due to their potential for large-scale deployment, it is paramount to understand their concrete security, but a number of questions have been left open by prior works. To this end, the security proofs by Au et al. (SCN '06), Camenisch et al. (TRUST '16), and Tessaro and Zhu (EUROCRYPT '23) show reductions from q-SDH in groups of prime order, where q is the number of issued signatures. However, these prior works left open the possibility that BBS/BBS+ is "even more secure" than what such proofs can guarantee. Indeed, while the q-SDH assumption is subject to an attack (Cheon, EUROCRYPT '06) that beats generic discrete logarithm computation for several choices of q, no attack with a similar complexity appears to affect either BBS+ or "deterministic" BBS, for which the best known attacks amount to recovering the secret key by breaking the discrete logarithm problem. The assumption that this attack is best possible also seemingly justifies the choice of parameters in practice. Our result shows that this expectation is not true. We present new attacks against BBS+ and deterministic BBS which, after seeing q signatures, allow us to recover the secret key with the same complexity as solving the q-Discrete Logarithm problem, which is cheaper than a generic discrete logarithm computation for many choices of q. Further, we extend the attack to a reduction showing that the security of BBS+ and deterministic BBS implies the q-SDH assumption.
Last updated:  2025-06-11
OwlC: Compiling Security Protocols to Verified, Secure, High-Performance Libraries
Pratap Singh, Joshua Gancher, and Bryan Parno
Cryptographic security protocols, such as TLS or WireGuard, form the foundation of a secure Internet; hence, a long line of research has shown how to formally verify their high-level designs. Unfortunately, these formal guarantees have not yet reached real-world implementations of these protocols, which still rely on testing and ad-hoc manual audits for security and correctness. This gap may be explained, in part, by the substantial performance and/or development overhead imposed by prior efforts to verify implementations. To make it more practical to deploy verified implementations of security protocols, we present OwlC, the first fully automated, security-preserving compiler for verified, high-performance implementations of security protocols. From a high-level protocol specification proven computationally secure in the Owl language, OwlC emits an efficient, interoperable, side-channel resistant Rust library that is automatically formally verified to be correct. We produce verified libraries for all previously written Owl protocols, and we also evaluate OwlC on two new verified case studies: WireGuard and Hybrid Public-Key Encryption (HPKE). Our verified implementations interoperate with existing implementations, and their performance matches unverified industrial baselines on end-to-end benchmarks.
Last updated:  2025-06-13
Quantum Computing without the Linear Algebra
Aws Albarghouthi
Quantum computing is often introduced through the lens of linear algebra with notation that is inherited from quantum mechanics. In this paper, we take an operational view of quantum computing that is easy to demonstrate programmatically. The hope is that this viewpoint will (1) demystify quantum computing and make it more accessible to a wider audience, particularly computer science students and software engineers, and (2) possibly serve as the basis of a formal foundation for automatically reasoning about quantum programs. We treat the state of a quantum computer as a set and present the operations of a quantum computer—quantum gates and measurements—using familiar functional set transformations (think map, filter, fold, etc.). By the end of the paper, we will have implemented a simple quantum circuit simulator that can be used to simulate small quantum circuits. The code is available at https://github.com/qqq-wisc/qwla.
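In the spirit of the paper's operational view (the state as a mapping from basis bitstrings to amplitudes, and gates as set/map transformations), here is a sketch of our own, not the paper's code:

```python
from math import sqrt

def hadamard(state, q):
    # "Map" each basis string to a two-element superposition on qubit q,
    # then merge amplitudes that land on the same string.
    out = {}
    s = 1 / sqrt(2)
    for bits, amp in state.items():
        for nb in (0, 1):
            key = bits[:q] + (nb,) + bits[q + 1:]
            sign = -1.0 if bits[q] == 1 and nb == 1 else 1.0
            out[key] = out.get(key, 0.0) + sign * s * amp
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def cnot(state, ctrl, tgt):
    # A pure permutation of basis strings: flip tgt wherever ctrl is 1.
    out = {}
    for bits, amp in state.items():
        if bits[ctrl] == 1:
            bits = bits[:tgt] + (1 - bits[tgt],) + bits[tgt + 1:]
        out[bits] = out.get(bits, 0.0) + amp
    return out

state = {(0, 0): 1.0}                    # |00>
state = cnot(hadamard(state, 0), 0, 1)   # Bell state (|00> + |11>)/sqrt(2)
print(state)
```

No vectors or matrices appear: a gate is just a transformation of a finite mapping, which is the pedagogical point the paper makes.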
Last updated:  2025-06-10
Concrete Treatment of Signal Handshake’s Deniability: Efficient Post-Quantum Deniable Ring Signature
Shuichi Katsumata, Guilhem Niot, Ida Tucker, and Thom Wiggers
The Signal protocol relies on a handshake protocol, formerly X3DH and now PQXDH, to set up secure conversations. One of its privacy properties, of value to Signal, is deniability, allowing users to deny participation in communications. Prior analyses of deniability for these protocols, including post-quantum variants, use models highly tailored to the individual protocols and generally make ad-hoc adaptations to "standard" AKE definitions, obscuring the concrete deniability guarantees and complicating comparisons across protocols. Building on Hashimoto et al.'s abstraction for Signal handshake protocols (USENIX '25), we address this gap by presenting a unified framework for analyzing their deniability. We analyze Signal's classically secure X3DH and harvest-now-decrypt-later-secure PQXDH, and show that PQXDH is deniable against harvest-now-judge-later attacks, where a quantum judge retrospectively assesses the participation of classical users. We further analyze post-quantum alternatives like RingXKEM, whose deniability relies on ring signatures (RS). By introducing a novel metric inspired by differential privacy, we provide relaxed, pragmatic guarantees for deniability. We also use this metric to define deniability for RS, a relaxation of anonymity, allowing us to build an efficient RS from NIST-standardized Falcon (and MAYO), which is not anonymous, but is provably deniable.
Last updated:  2025-06-10
Rugged Pseudorandom Permutations with Beyond-Birthday-Bound Security
Nilanjan Datta, Jean Paul Degabriele, Avijit Dutta, Vukašin Karadžić, and Hrithik Nandi
A rugged pseudorandom permutation (RPRP) is a security notion for variable-length tweakable ciphers that is strictly weaker than the traditional notion of a strong pseudorandom permutation. Being a weaker security notion, it admits more efficient constructions. Yet the notion is strong enough that any such construction can lend itself to a number of practical applications. It can be used to construct onion encryption, misuse-resistant AEAD, and AEAD secure under the release of unverified plaintext. Two recent works have introduced the notion, explored some of its applications, and studied a number of constructions that meet it. However, no constructions are known to achieve this notion with beyond-birthday-bound security. Current cloud applications are processing amounts of data that go well beyond the traditional birthday barrier, and beyond-birthday security is becoming the new target. As such, the need for encryption with beyond-birthday-bound security has become a very practical concern. In this work, we present the first constructions for variable-length tweakable ciphers that satisfy RPRP security beyond the birthday bound. From these constructions, we readily obtain efficient AEAD schemes that are optimally secure against nonce misuse and the release of unverified plaintext.
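As a back-of-the-envelope aside (our own illustration, not the paper's analysis), the birthday barrier is easy to quantify: among q random n-bit blocks, a collision appears with probability roughly q^2 / 2^(n+1):

```python
from math import exp

def birthday_bound(q: int, n: int) -> float:
    # Collision probability among q uniformly random n-bit values,
    # using the standard approximation 1 - exp(-q(q-1)/2^(n+1)).
    return 1.0 - exp(-q * (q - 1) / 2 ** (n + 1))

# At the birthday bound q = 2^(n/2), the collision probability is already
# about 39%; security degrades long before that many blocks are processed.
print(birthday_bound(2 ** 64, 128))
```

This is why schemes whose security bounds collapse at roughly 2^(n/2) processed blocks are a real concern for modern data volumes, and why beyond-birthday-bound constructions matter.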
Last updated:  2025-06-10
Homomorphic Field Trace Revisited : Breaking the Cubic Noise Barrier
Kang Hoon Lee and Ji Won Yoon
We present a novel homomorphic trace evaluation algorithm that mitigates the phase amplification problem inherent in the definition of the field trace. Our algorithm overcomes the phase amplification with only negligible computational overhead, thereby improving the usability of homomorphic field trace evaluation. Moreover, our tweak also improves the noise propagation and breaks the traditional cubic variance bound of previous works. Our experimental results, obtained by integrating our algorithm into state-of-the-art homomorphic encryption algorithms, further demonstrate its usefulness. Specifically, it improves the noise accumulation of (high-precision) circuit bootstrapping, and achieves maximal speedup by replacing the costly high-precision trace evaluation. Also, based on our idea, we present a low-latency, high-precision LWE-to-GLWE packing algorithm. We show that our packing algorithm significantly reduces the packing error without severe degradation of performance.
Last updated:  2025-06-11
Cryptography meets worst-case complexity: Optimal security and more from iO and worst-case assumptions
Rahul Ilango and Alex Lombardi
We study several problems in the intersection of cryptography and complexity theory based on the following high-level thesis. 1) Obfuscation can serve as a general-purpose worst-case to average-case reduction, reducing the existence of various forms of cryptography to corresponding worst-case assumptions. 2) We can therefore hope to overcome barriers in cryptography and average-case complexity by (i) making worst-case hardness assumptions beyond , and (ii) leveraging worst-case hardness reductions, either proved by traditional complexity-theoretic methods or facilitated further by cryptography. Concretely, our results include: - Optimal hardness. Assuming sub-exponential indistinguishability obfuscation, we give fine-grained worst-case to average-case reductions for circuit-SAT. In particular, if finding an -witness requires nearly brute-force time in the worst case, then the same is true for some efficiently sampleable distribution. In fact, we show that under these assumptions, there exist families of one-way functions with optimal time-probability security tradeoffs. Under an additional, stronger assumption -- the optimal non-deterministic hardness of refuting circuit-SAT -- we construct additional cryptographic primitives such as PRGs and public-key encryption that have such optimal time-advantage security tradeoffs. - Direct Product Hardness. Again assuming and optimal non-deterministic hardness of SAT refutation, we show that the ``(search) -fold SAT problem'' -- the computational task of finding satisfying assignments to circuit-SAT instances simultaneously -- has (optimal) hardness roughly for time algorithms. In fact, we build ``optimally secure one-way product functions'' (Holmgren-Lombardi, FOCS '18), demonstrating that optimal direct product theorems hold for some choice of one-way function family. - Single-Input Correlation Intractability. 
Assuming either or , we show a worst-case to average-case reduction for strong forms of single-input correlation intractability. That is, powerful forms of correlation-intractable hash functions exist provided that a collection of \emph{worst-case} ``correlation-finding'' problems are hard. - Non-interactive Proof of Quantumness. Assuming sub-exponential and OWFs, we give a non-interactive proof of quantumness based on the worst-case hardness of the white-box Simon problem. In particular, this proof of quantumness result does not explicitly assume quantum advantage for an average-case task. To help prove our first two results, we show along the way how to improve the Goldwasser-Sipser ``set lower bound'' protocol to have communication complexity quadratically smaller in the multiplicative approximation error .
Last updated:  2025-06-09
Fairness in the Wild: Secure Atomic Swap with External Incentives
Hao Chung, Elisaweta Masserova, Elaine Shi, and Sri AravindaKrishnan Thyagarajan
Atomic swaps enable asset exchanges across blockchains without relying on trusted intermediaries, and are a key component of decentralized finance (DeFi) ecosystems. Recently, Chung, Masserova, Shi, and Thyagarajan introduced Rapidash (Financial Cryptography 2025), an atomic swap protocol that remains incentive compatible under user-miner collusion, by ensuring that the honest strategy forms a coalition-resistant Nash equilibrium. However, their model assumes a closed system where players act solely based on internal protocol incentives. In practice, participants may be influenced by external incentives such as off-chain rewards or adversarial bribes, which can undermine such equilibrium guarantees. In this work, we introduce a new game-theoretic notion, bounded maximin fairness, which ensures that honest participants remain protected against rational adversaries with arbitrary but bounded external incentives. We construct an atomic swap protocol that satisfies this notion, while preserving the equilibrium properties of prior work in the absence of external influence. As we show, our protocol is easy to implement and can be instantiated even in Bitcoin’s limited scripting language.
Last updated:  2025-06-09
SmallWood: Hash-Based Polynomial Commitments and Zero-Knowledge Arguments for Relatively Small Instances
Thibauld Feneuil and Matthieu Rivain
Zero-knowledge proofs (ZKPs) are a fundamental building block in cryptography, enabling powerful privacy-preserving and verifiable computations. In the post-quantum era, hash-based ZKPs have emerged as a promising direction due to their conjectured resistance to quantum attacks, along with their simplicity and efficiency. In this work, we introduce SmallWood, a hash-based polynomial commitment scheme (PCS) and zero-knowledge argument system optimized for relatively small instances. Building on the recent degree-enforcing commitment scheme (DECS) from the Threshold-Computation-in-the-Head (TCitH) framework, we refine its formalization and combine it with techniques from Brakedown. This results in a new hash-based PCS that is particularly efficient for polynomials of relatively small degree, typically up to , outperforming existing approaches in this range. Leveraging this new PCS, we design a hash-based zero-knowledge argument system that outperforms the state of the art in terms of proof sizes for witness sizes ranging from to . Additionally, we present exact zero-knowledge arguments for lattice-based problems using SmallWood, demonstrating highly competitive performance: our scheme yields proof sizes under 25 KB across a wide range of lattice parameters, including Kyber and Dilithium instances.
Last updated:  2025-06-12
How to (not) combine Oblivious Pseudorandom Functions
Sebastian Faller and Julia Hesse
An oblivious pseudorandom function (OPRF) is a cryptographic tool that enables fast and secure authentication and key derivation from passwords. In the past few years, the adoption of OPRFs has flourished and today they are at the core of the PIN-protected backup methods of WhatsApp and Signal, and of privacy-enhancing browser technologies. All vendors deploy the so-called 2Hash-Diffie-Hellman (2HashDH) OPRF, which relies on discrete-logarithm-type assumptions that are standard yet known to be prone to quantum attacks. Recent advancements in cryptographic research (e.g., Beullens et al., Eurocrypt 2025) have brought up post-quantum OPRFs that are fast enough to deploy them in the setting of, e.g., WhatsApp or Signal. Yet none of these constructions are based on standard assumptions. In this work, we investigate combiners for OPRFs, namely a "best-of-both'' combination of a classical and a post-quantum OPRF that is secure as long as one of them is. First, we give formal evidence that so-called black-box combiners do not exist, indicating that combining OPRFs is subtle and bears similarities with other powerful yet hard-to-combine cryptographic primitives like oblivious transfer (OT). We then give a (non-black-box) combiner for OPRFs and show that it can be instantiated with 2HashDH and the currently most efficient post-quantum OPRFs based on Legendre symbols. In particular, the reliance on the less standard Legendre-based hardness assumption does not harm the security of 2HashDH. This gives vendors a viable path to lift the security of their OPRF deployments to a post-quantum level.
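The 2HashDH OPRF named above has a compact textbook description: F_k(x) = H2(x, H1(x)^k), evaluated obliviously by blinding H1(x) with a random exponent. The sketch below illustrates only the blind/evaluate/unblind round trip over a toy subgroup; the tiny parameters and the naive hash-to-group map are placeholders, not a secure instantiation.

```python
import hashlib

# Toy sketch of the 2HashDH OPRF flow: F_k(x) = H2(x, H1(x)^k).
# Tiny illustrative parameters; NOT a secure instantiation.
p = 1019          # safe prime, p = 2*q + 1
q = 509           # order of the subgroup of squares mod p
g = 4             # a square mod p, generating the order-q subgroup

def H1(x: bytes) -> int:
    """Naive hash-to-group: g^(H(x) mod q). Illustration only."""
    e = int.from_bytes(hashlib.sha256(x).digest(), "big") % q
    return pow(g, e, p)

def H2(x: bytes, y: int) -> bytes:
    return hashlib.sha256(x + y.to_bytes(2, "big")).digest()

def blind(x: bytes, r: int) -> int:          # client: hides x from server
    return pow(H1(x), r, p)

def evaluate(blinded: int, k: int) -> int:   # server: holds OPRF key k
    return pow(blinded, k, p)

def unblind(x: bytes, b: int, r: int) -> bytes:  # client
    y = pow(b, pow(r, -1, q), p)             # b^(r^-1) = H1(x)^k
    return H2(x, y)
```

The client learns F_k(x) without revealing x, and the server learns nothing about x; this is the flow that a classical/post-quantum combiner must preserve on both branches.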
Last updated:  2025-06-09
The complexity of the SupportMinors Modeling for the MinRank Problem
Giulia Gaggero, Elisa Gorla, and Daniel Cabarcas
In this note, we provide proven estimates for the complexity of the SupportMinors Modeling, mostly confirming the heuristic complexity estimates contained in the original article.
Last updated:  2025-06-09
Treebeard: A Scalable and Fault Tolerant ORAM Datastore
Amin Setayesh, Cheran Mahalingam, Emily Chen, and Sujaya Maiyya
We present Treebeard, the first scalable and fault-tolerant Oblivious RAM (ORAM) based datastore designed to protect applications from access pattern attacks. Current ORAM systems face challenges in practical adoption due to their limited ability to handle concurrent workloads, scale effectively, and ensure fault tolerance. We address all three limitations in Treebeard by utilizing a multi-layer architecture that scales horizontally, handling thousands of requests in parallel, while replicating the data to prevent data loss upon failures. Experimental evaluation demonstrates Treebeard's ability to scale linearly, achieving a throughput of 160K ops/sec with 16 machines; this behavior is similar to the enclave-based state-of-the-art, Snoopy. Being fault-tolerant, Treebeard recovers from failures with close to zero downtime and achieves 13.8x the throughput of QuORAM, the latest fault-tolerant ORAM system, even without scaling.
Last updated:  2025-06-10
FABLE: Batched Evaluation on Confidential Lookup Tables in 2PC
Zhengyuan Su, Qi Pang, Simon Beyzerov, and Wenting Zheng
Secure two-party computation (2PC) is a cryptographic technique that enables two mutually distrusting parties to jointly evaluate a function over their private inputs. We consider a 2PC primitive called confidential lookup table (LUT) evaluation, which is useful in privacy-preserving ML inference and data analytics. In this setting, a server holds a confidential LUT and evaluates it over an input secret-shared between a client and the server, producing a secret-shared output. Existing approaches for 2PC LUT evaluation suffer from high asymptotic complexity and practical inefficiency, with some designs lacking confidentiality guarantees for the LUT. Recognizing that many applications involving confidential LUT evaluation require processing multiple inputs with the same LUT, we propose FABLE, a system designed to efficiently evaluate a LUT on a large batch of queries simultaneously. Compared to the state-of-the-art confidential LUT evaluation methods, FABLE achieves up to 28.46-101.47 speedup in LAN environments and up to 50.10-392.93 speedup in WAN environments.
Last updated:  2025-06-09
Leftover Hash Lemma(s) Over Cyclotomic Rings
Katharina Boudgoust and Oleksandra Lapiha
In this work, we propose a novel systematic approach for obtaining leftover hash lemmas (LHLs) over cyclotomic rings. Such LHLs form a fundamental tool in lattice-based cryptography, both in theoretical reductions as well as in the design of cryptographic primitives. The scattered set of prior works makes it difficult to navigate the landscape and requires a substantial effort to understand the mathematical constraints under which the LHL holds over cyclotomic rings. This is especially painful if one’s given setting does not fit exactly into prior studies. We argue that all prior approaches boil down to two different proof strategies, resulting in two main theorems. From there on, we are able to recover all previous flavours of seemingly independent LHLs as corollaries. Moreover, we showcase the power of our interpretation by providing new statements, covering mathematical settings not considered before. Our work further proves LHLs in the presence of leakage for both approaches and provides novel bounds for wide families of leakage functions.
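For reference, the classical leftover hash lemma over bit strings, which the ring-based variants discussed above generalize, can be stated as follows (this is the standard textbook formulation, not one of the paper's theorems):

```latex
% Classical leftover hash lemma. Let \mathcal{H} = \{h : \{0,1\}^n \to \{0,1\}^\ell\}
% be a universal hash family and let X be a source with min-entropy
% H_\infty(X) \ge k. Then, for H uniform over \mathcal{H} and U_\ell
% uniform over \{0,1\}^\ell,
\Delta\bigl((H, H(X)),\,(H, U_\ell)\bigr) \;\le\; \tfrac{1}{2}\sqrt{2^{\ell - k}}.
```

The cyclotomic-ring versions replace the bit-string hash family with multiplication by random ring elements, which is where the algebraic constraints the paper systematizes come from.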
Last updated:  2025-06-13
Revisiting Discrete Logarithm Reductions
Maiara F. Bollauf, Roberto Parisella, and Janno Siim
A reduction showing that the hardness of the discrete logarithm () assumption implies the hardness of the computational Diffie-Hellman () assumption in groups of order , where is smooth, was first presented by den Boer [Crypto '88]. We also consider groups of prime order , where is somewhat smooth (say, every prime that divides is less than ). Several practically relevant groups satisfy this condition. 1. We present a concretely efficient version of the reduction for such groups. In particular, among practically relevant groups, we obtain the most efficient and tightest reduction in the literature for BLS12-381, showing that = . 2. By generalizing the reduction, we show that in these groups the -Power (-) assumption implies the -Diffie-Hellman Exponent (-) assumption, where is polynomial in the security parameter. On the negative side, we show there is no generic reduction, which could demonstrate that - implies the -Generalized Diffie-Hellman Exponent (-) assumption. This is in stark contrast with the algebraic group model, where this implication holds.
Last updated:  2025-06-09
A Theoretical Perspective on the Formal Verification of IoT Protocols Using LTL and Rewriting Logic in Maude
Delia-Iustina Grigoriță
As Internet of Things (IoT) systems become increasingly complex, verifying communication protocols is essential to guarantee their security and correctness. This paper provides a brief introduction to the theoretical concepts of using formal techniques, LTL and rewriting logic within the Maude system, to verify the security and proper behavior of such protocols. While rewriting logic describes how various events occur over time, LTL describes how a property can be formulated. This theoretical perspective is intended to inform future applications and research.
Last updated:  2025-06-11
Shorter VOLE-in-the-Head-based Signatures from Vector Semi-Commitment
Seongkwang Kim, Byeonghak Lee, and Mincheol Son
The VOLE-in-the-Head (VOLEitH) paradigm transforms VOLE-based zero-knowledge proofs into post-quantum signature schemes by allowing public verification. We introduce reduced VOLE-in-the-Head (rVOLEitH), which incorporates the Vector Semi-Commitment (VSC) technique. VSC, originally developed for MPC-in-the-Head (MPCitH) schemes, reduces commitment size while maintaining security by relaxing the binding property. We adapt the ideal cipher version of VSC (IC-VSC) into the VOLEitH framework, leading to a reduction in signature size. Our security analysis proves that rVOLEitH achieves existential unforgeability under chosen-message attacks (EUF-CMA) in the ideal cipher model. Compared to existing VOLEitH-based signatures, our approach reduces signature size by up to 6.0% while improving computational efficiency. Furthermore, we analyze the impact of eliminating individual seed commitments and demonstrate a practical attack against a recently proposed VOLEitH variant that lacks such commitments. Our results establish rVOLEitH as an optimized and secure alternative for post-quantum signatures, improving both efficiency and security in the VOLEitH paradigm.
Last updated:  2025-06-09
Weight reduction in distributed protocols: new algorithms and analysis
Anatoliy Zinovyev
We study the problem of minimizing the total weight of (potentially many) participants of a distributed protocol, a necessary step when the original values are large but the scheme to be deployed scales poorly with the weights. We assume that fraction of the original weights can be corrupted and we must output new weights with at most adversarial fraction, for . This problem can be viewed through the prism of electing a small committee that does the heavy work, a powerful tool for making distributed protocols scalable. We solve the variant that requires giving parties potentially multiple seats in the committee and counting each seat towards the cost of the solution. Moreover, we focus on the ``deterministic'' version of the problem where the computed committee must be secure for any subset of parties that can be corrupted by the adversary; such a committee can be smaller than a randomly sampled one in some cases and is useful when security against adaptive corruptions is desired but parties in the sub-protocol speak multiple times. Presented are new algorithms for the problem as well as analysis of prior work. We give two variants of the algorithm Swiper (PODC 2024), one that significantly improves the running time without sacrificing the quality of the output and the other improving the output for a reasonable increase in the running time. We prove, however, that all known algorithms, including our two variants of Swiper, have worst-case approximation ratio . To counter that, we give the first polynomial-time algorithm with approximation factor and also the first sub-exponential-time exact algorithm, practical for some real-world inputs. Of theoretical interest is another polytime algorithm that we present, based on linear programming, whose output is no worse than an optimal solution to the problem with slightly different parameters.
We implemented and tested previous and new algorithms, comparing them on the stake distributions of popular proof-of-stake blockchains, and found that our second variant of Swiper computes solutions extremely close to the optimal, confirmed by our exact algorithm.
Last updated:  2025-06-09
Secure and Practical Cold (and Hot) Staking
Mario Larangeira
The stake delegation technique is what turns the general Proof of Stake (PoS) into a practical protocol for a large number of participants, ensuring the security of the distributed system, in what is known as Delegated PoS (DPoS). Karakostas et al. (SCN ’20) formalized the delegation method, paving the way for a whole industry of stake pools, by proposing a formal definition for a wallet as a universal composable (UC) functionality and introducing a corresponding protocol. On the other hand, a widely used technique named hot/cold wallet was formally studied by Das et al. (CCS ’19 and ’21), and Groth and Shoup (Eurocrypt ’22) for different key derivation methods in the Proof of Work (PoW) setting, but not PoS. Briefly, while hot wallets are exposed to the risks of the network, the cold wallet is kept offline, thus more secure. However, this may impair some capabilities, given that the cold wallet is kept indefinitely offline. It is straightforward to observe that this “double wallet” design is not naturally portable to the setting where delegation is paramount, i.e., DPoS. This work identifies challenges for PoS Hot/Cold Wallet and proposes a secure and practical protocol.
Last updated:  2025-06-09
Multiparty Distributed Point Functions
Aarushi Goel, Mingyuan Wang, and Zhiheng Wang
We present the first construction of multiparty distributed point functions based on one-way functions, where the share sizes remain sublinear in the domain size and grow {\em only polynomially} with the number of parties. In contrast, existing multiparty distributed point function constructions in Minicrypt have share sizes that grow {\em exponentially} with the number of parties.
Last updated:  2025-06-08
LAPWN: A Lightweight User–Server Authentication Protocol for Wireless Networks
Sajjad Alizadeh and Reza Hooshmand
The Internet of Things (IoT) is composed of interconnected devices that exchange data over a network, enabling applications in healthcare, transportation, and smart environments. As IoT ecosystems expand, ensuring security and privacy remains a critical challenge. Many IoT devices rely on wireless networks for data transmission, making them vulnerable to eavesdropping, tracking, and tampering. This highlights the need for robust authentication mechanisms. To address these concerns, numerous authentication protocols have been proposed. However, many fail to ensure adequate security against both passive and active attacks. In this research, we introduce LAPWN, a lightweight protocol for user–server communication, specifically designed for constrained environments, ensuring a balance between security and efficiency. The proposed protocol is implemented as a fully functional Python application, demonstrating its practical usability and evaluating its efficiency in real-world scenarios. To validate its security, we perform both informal and formal analyses, utilizing Scyther, ProVerif, and the Real-or-Random (RoR) model. The results confirm that LAPWN provides a secure, lightweight, and efficient authentication solution with low computational and communication overhead. Furthermore, performance evaluations show that it surpasses existing authentication protocols, making it a highly effective solution for secure user–server interactions in constrained environments.
Last updated:  2025-06-07
How to Model Unitary Oracles
Mark Zhandry
We make the case for modeling unitary oracles by allowing for controlled access to the oracle as well as its conjugate transpose (inverse), but also its conjugate and transpose. Controlled access and conjugate transposes are common, if not standard, but conjugates and transposes appear to be non-standard. In order to justify our modeling, we give several formal examples of what goes wrong or is missed when using a more restrictive modeling. We also argue that our model is the "right" level of granularity, and that other transformations likely do not correspond to efficient computation. We also discuss other modeling choices, such as ancillas and approximation error. Through our exploration, we uncover interesting phenomena. Examples include an attack on the recent pseudorandom unitary construction of Ma and Huang (STOC'25) if used incorrectly as a publicly evaluatable unitary, and a quantum complexity-theoretic separation that follows from a purely classical separation.
Last updated:  2025-06-07
PICS: Private Intersection over Committed (and reusable) Sets
Aarushi Goel, Peihan Miao, Phuoc Van Long Pham, and Satvinder Singh
Private Set Intersection (PSI) enables two parties to compute the intersection of their private sets without revealing any additional information. While maliciously secure PSI protocols prevent many attacks, adversaries can still exploit them by using inconsistent inputs across multiple sessions. This limitation stems from the definition of malicious security in secure multiparty computation, but is particularly problematic in PSI because: (1) real-world applications---such as Apple’s PSI protocol for CSAM detection and private contact discovery in messaging apps---often require multiple PSI executions over consistent inputs, and (2) the PSI functionality makes it relatively easy for adversaries to infer additional information. We propose {\em Private Intersection over Committed Sets (PICS)}, a new framework that enforces input consistency across multiple sessions via committed sets. Building on the state-of-the-art maliciously secure PSI framework (i.e., VOLE-PSI [EUROCRYPT 2021]), we present an efficient instantiation of PICS in the random oracle model using lightweight cryptographic tools. We implement our protocol to demonstrate concrete efficiency. Compared to VOLE-PSI, for input sets of size , our communication overhead is as low as . Our end-to-end performance overhead is in the LAN setting and decreases to in the WAN setting with bandwidths ranging from to Mbps.
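To fix intuition for the PSI functionality itself, the sketch below implements the classic textbook Diffie-Hellman-based PSI (hash items to a group, exponentiate in both orders, compare), not the VOLE-PSI/PICS construction from the paper; the modulus and hash-to-group map are illustrative placeholders.

```python
import hashlib

# Toy sketch of classic DH-based PSI: both parties hash items into a
# group, exponentiate with their own secret keys in both orders, and
# compare the resulting H(x)^(ab) values. Illustration only.

p = 2**127 - 1  # illustration-only modulus (a Mersenne prime)

def h2g(item: str) -> int:
    """Naive hash-to-group-element map. Illustration only."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % p

def dh_psi(set_a, set_b, a, b):
    """Returns the items of set_a that also lie in set_b."""
    # Party A sends H(x)^a for its items; B raises them to b.
    a_once = {pow(h2g(x), a, p): x for x in set_a}
    a_twice = {pow(v, b, p): x for v, x in a_once.items()}   # done by B
    # B sends H(y)^b for its items; A raises them to a.
    b_twice = {pow(pow(h2g(y), b, p), a, p) for y in set_b}  # done by A
    # Matching H(.)^(ab) values identify the intersection.
    return {x for v, x in a_twice.items() if v in b_twice}
```

The input-consistency problem PICS addresses is visible even here: nothing binds a party to reuse the same `set_a` across two runs.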
Last updated:  2025-06-07
Zeus: Defending against Fee Stealing and Griefing Attacks in Multi-Hop Payments
Jingyu Liu, Yingjie Xue, Di Wu, Jian Liu, and Xuechao Wang
Payment Channel Networks (PCNs) are the most scalable and trust-minimized solution to Bitcoin's scalability challenges. Within PCNs, connected payer and payee can make arbitrary off-chain transactions through multi-hop payments (MHPs) over payment channel paths, while intermediate relays charge relay fees by providing liquidity. However, current MHP protocols face critical security threats including fee-stealing attacks and griefing attacks. In this paper, we identify new fee-stealing attacks targeting most existing MHP protocols. Second, we prove that eliminating griefing attacks in current MHP protocols is impossible by reducing the problem to fair secret exchange. Finally, we introduce Zeus, the first Bitcoin-compatible MHP protocol that is secure against fee-stealing attacks and offers bounded griefing protection against -cost-sensitive adversaries—those who only launch griefing attacks when the expected damage exceeds a fraction of their own cost. These guarantees are established through rigorous proofs in the Global Universal Composability (GUC) framework. Our comprehensive evaluation demonstrates that Zeus reduces worst-case griefing damage to 28% and 75% compared to MHP schemes such as AMHL (NDSS'19) and Blitz (USENIX SEC'21), respectively. Our results further show that, even under the most adverse configurations within the Lightning Network, Zeus imposes costs on adversaries that are at least ten times greater than their potential damage.
Last updated:  2025-06-07
PRESENT Full Round Emulation: Structural Flaws and Predictable Outputs
Gopal Singh
The Internet of Things (IoT) has become integral to modern life, enabling smart cities, healthcare, and industrial automation. However, the increasing connectivity of IoT devices exposes them to various cyber threats, necessitating robust encryption methods. The PRESENT cipher, a lightweight block cipher, is well-suited for resource-constrained IoT environments, offering strong security with minimal computational overhead. This paper explores the application of deep learning (DL) techniques in cryptanalysis, specifically using an Aggregated Perceptron Group (APG) Model, which employs a Multi-Layer Perceptron (MLP) to predict input-output relations for each round of the PRESENT cipher’s encryption, excluding the key. This approach focuses solely on emulating the cipher's Substitution-Permutation Network (SPN), capturing its essential structure and revealing the structural flaws in the way data is transformed through rounds. The models are chained together to generate the final ciphertext for any 64-bit plaintext with high accuracy, reducing the probability from a random guess of . The results demonstrate the potential of DL models in cryptanalysis, providing insights into the security of lightweight ciphers in IoT communications and highlighting the practicality of deep learning for cryptographic analysis on standard computing systems.
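The structure the MLPs emulate, one unkeyed SPN round of PRESENT, is compact enough to sketch directly. The S-box table and bit permutation below follow the published PRESENT specification; key addition and the key schedule, which the paper's models deliberately exclude, are omitted here too.

```python
# One keyless round of the PRESENT SPN (sBoxLayer + pLayer) on a
# 64-bit state. S-box and permutation per the published spec;
# key addition and key schedule intentionally omitted.

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sbox_layer(state: int) -> int:
    """Apply the 4-bit S-box to each of the 16 nibbles."""
    out = 0
    for nib in range(16):
        out |= SBOX[(state >> (4 * nib)) & 0xF] << (4 * nib)
    return out

def p_layer(state: int) -> int:
    """Bit i moves to position 16*i mod 63 (bit 63 is fixed)."""
    out = 0
    for i in range(64):
        j = 63 if i == 63 else (16 * i) % 63
        out |= ((state >> i) & 1) << j
    return out

def spn_round(state: int) -> int:
    """One unkeyed substitution-permutation round."""
    return p_layer(sbox_layer(state))
```

Because both layers are bijections, the round is fully deterministic and invertible, which is precisely the regularity a round-by-round learned emulation can exploit.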
Last updated:  2025-06-06
Efficient Modular Multiplication Using Vector Instructions on Commodity Hardware
Simon Langowski and Srini Devadas
Modular arithmetic is the computational backbone of many cryptographic and scientific algorithms. In particular, modular multiplication in a large prime field is computationally expensive and dictates the runtime of many algorithms. While it is relatively easy to utilize vectorization to accelerate batches of independent modular multiplications, our goal is to reduce the latency of a modular multiplication under a generic prime using vectorization, while maintaining constant-time execution. We investigate the technique of Residue Number System (RNS) Montgomery modular multiplication. We first contribute a unified view of algorithmic optimizations in prior art. This view enables us to further reduce the number of elementwise multiplications in an algorithm with a simplified structure that we prove correct. We explore AVX512 acceleration on CPUs, and show how to map our algorithm to vector instructions. We implement our algorithm in C++ and achieve speedup, which is nearly maximal, for -bit primes on a CPU with AVX512 over optimized library implementations of standard Montgomery modular multiplication algorithms. GPUs contain vector cores that each support tens of physical threads. We show how to intelligently map our algorithm to threads in a vector core, ``overparallelizing'' to minimize latency. We show substantial speedups over a commercial library implementation of standard modular multiplication algorithms across a wide range of prime sizes.
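As a baseline for the RNS variant discussed above, plain Montgomery multiplication with the standard REDC reduction can be sketched in a few lines; the 61-bit prime used in the usage note is an illustrative placeholder, whereas the paper targets large (e.g., 256-bit) primes with vectorized implementations.

```python
# Plain (non-RNS) Montgomery multiplication: the REDC step that the
# RNS technique decomposes across residues. Illustrative sketch only.

def montgomery_setup(n: int, r_bits: int):
    """Precompute R^2 mod n and n' = -n^{-1} mod R for odd n < R."""
    r = 1 << r_bits
    n_prime = (-pow(n, -1, r)) % r
    return r, r * r % n, n_prime

def redc(t: int, n: int, r_bits: int, n_prime: int) -> int:
    """Montgomery reduction: returns t * R^{-1} mod n, for t < n*R."""
    r_mask = (1 << r_bits) - 1
    m = (t & r_mask) * n_prime & r_mask   # m = t * n' mod R
    u = (t + m * n) >> r_bits             # exactly divisible by R
    return u - n if u >= n else u

def montmul(a_bar: int, b_bar: int, n: int, r_bits: int, n_prime: int) -> int:
    """Multiply values in Montgomery form (x_bar = x*R mod n)."""
    return redc(a_bar * b_bar, n, r_bits, n_prime)
```

Converting into Montgomery form is itself a `redc` call against R^2, and converting back is a `redc` of the Montgomery-form value, so a long chain of multiplications pays the conversion cost only twice.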
Last updated:  2025-06-06
Full Anonymity in the Asynchronous Setting from Peony Onion Encryption
Megumi Ando, Miranda Christ, Kashvi Gupta, Tal Malkin, and Dane Smith
Onion routing is a popular practical approach to anonymous communication, and the subject of a growing body of foundational theoretical work aiming to design efficient schemes with provable anonymity, the strongest notion of which is full anonymity. Unfortunately, all previous schemes that achieve full anonymity assume the synchronous communication setting, which is unrealistic as real networks may experience message loss and timing attacks that render such schemes insecure. Recently, Ando, Lysyanskaya, and Upfal (TCC '24) took a first step towards addressing the asynchronous setting by constructing an efficient onion routing protocol with the strictly weaker guarantee of differential privacy. Their scheme relies on a new primitive called bruisable onion encryption. In this paper, we construct the first efficient fully anonymous onion routing protocol in the asynchronous setting. To do so, we overcome two main technical challenges: First, we develop the first bruisable onion construction that does not leak information about the onion's position on the routing path. Second, we design an onion routing protocol that uses such bruisable onion encryption to achieve full anonymity (rather than just differential privacy). Along the way, we develop a new fully anonymous onion routing protocol in the synchronous setting, which improves on the state of the art in terms of communication complexity and round complexity. Both our protocols are secure against an active adversary corrupting a constant fraction of the nodes (up to <1 for the synchronous protocol, and <1/2 for the asynchronous protocol) and rely on standard cryptographic assumptions (CCA-secure public key encryption and collision-resistant hash functions).
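The core mechanic of onion routing, wrapping a message in one encryption layer per relay so that each relay can peel exactly one layer, can be illustrated with a toy sketch. The hash-based XOR keystream below is a deliberate simplification and omits everything (authentication, padding, bruising) that makes real onion encryption, including this paper's constructions, secure.

```python
import hashlib

# Toy illustration of onion layering: the sender wraps once per relay,
# each relay peels exactly one layer. NOT a secure construction.

def keystream(key: bytes, n: int) -> bytes:
    """Derive n keystream bytes from key via counter-mode SHA-256."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor_layer(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def wrap(message: bytes, route_keys: list) -> bytes:
    """Sender: apply layers innermost-first, so the first relay's
    key ends up outermost."""
    onion = message
    for key in reversed(route_keys):
        onion = xor_layer(key, onion)
    return onion

def peel(onion: bytes, key: bytes) -> bytes:
    """Relay: strip one layer (the XOR stream is its own inverse)."""
    return xor_layer(key, onion)
```

In this toy version every layer has the same length, which already hints at the leakage problem the paper tackles: naive formats can reveal how far along its path an onion is.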
Last updated:  2025-06-06
A New PUF-Based Authenticated Key Establishment Protocol for V2G Networks
Milad Seddigh, Seyed Hamid Baghestani, and Mahdi Esfahani
Vehicle-to-grid (V2G) refers to the bidirectional communication and energy flows between electric vehicles (EVs) and the power grid that allow renewable energy sources to supply supplementary electrical services. Additionally, V2G reduces environmental pollution, alleviates energy issues, and provides efficient charging services. A PUF-based, reliable, anonymous authentication and key establishment scheme for V2G networks was recently presented by Sungjin Yu et al. In this paper, we show that the Yu et al. protocol is vulnerable to tracking attacks and does not guarantee user anonymity. We also discovered that ephemeral secret leakage attacks can target their scheme. Additionally, we propose a new PUF-based authenticated key establishment scheme for V2G networks that is more effective than the most recent relevant scheme and is resistant to all known attacks. We prove that the presented scheme is semantically secure, and we also simulate our protocol using the Scyther tool.
Last updated:  2025-06-06
High-Order and Cortex-M4 First-Order Implementations of Masked FrodoKEM
François Gérard and Morgane Guerreau
The key encapsulation mechanism FrodoKEM is a post-quantum algorithm based on plain LWE. While it has not been selected by the NIST for standardization, FrodoKEM shares a lot of similarities with the lattice-based standard ML-KEM and offers strong security assumptions by relying on the unstructured version of the LWE problem. This leads FrodoKEM to be recommended by the European agencies ANSSI and BSI as a possible choice to obtain post-quantum security. In this paper, we discuss the practical aspects of incorporating side-channel protections in FrodoKEM by describing a fully masked version of the scheme based on several previous works on LWE-based KEMs. Furthermore, we propose an arbitrary-order C implementation based on the reference code and a Cortex-M4 implementation with gadgets specialized at order 1 in low-level assembly code that incorporates bespoke modifications to thwart (micro-)architectural leakages. Finally, we validate our order-1 gadgets by performing TVLA on a ChipWhisperer.
Last updated:  2025-06-06
From Signature-Based Witness Encryption to RAM Obfuscation: Achieving Blockchain-Secured Cryptographic Primitives
Lev Stambler
Goyal and Goyal demonstrated that extractable witness encryption, when combined with smart-contract equipped proof-of-stake blockchains, can yield powerful cryptographic primitives such as one-time programs and pay-to-use programs. However, no standard model construction for extractable witness encryption is known, and instantiations from alternatives like indistinguishability obfuscation are highly inefficient. This paper circumvents the need for extractable witness encryption by combining signature-based witness encryption (Döttling et al.) with witness encryption for KZG commitments (Fleischhacker et al.). Inspired by Goyal et al., we introduce -Extractable Witness Encryption for Blockchains (-eWEB), a novel primitive that encrypts a secret, making its decryption contingent upon the subsequent block's state. Leveraging -eWEBs, we then build a conditional one-time memory, leading to a one-time program (-OTP) also conditional on the next block state. Finally, using our -OTP, we develop a conditional RAM obfuscation scheme where program execution can be contingent on the blockchain state, thereby enabling applications like pay-to-use programs. Despite its theoretical value, our construction is impractical due to a "bit-by-bit" signing requirement for the state root and an inefficient method for storing validator keys. We thus posit the construction of a practical -OTP as a significant open problem. This work provides the first theoretical pathway for building such primitives without extractable witness encryption, representing a novel step for blockchain-secured cryptography.
Last updated:  2025-06-06
MIZAR: Boosting Secure Three-Party Deep Learning with Co-Designed Sign-Bit Extraction and GPU Acceleration
Ye Dong, Xudong Chen, Xiangfu Song, Yaxi Yang, Tianwei Zhang, and Jin-Song Dong
Three-party secret sharing-based computation has emerged as a promising approach for secure deep learning, benefiting from its high throughput. However, it still faces persistent challenges in computing complex operations such as secure Sign-Bit Extraction, particularly in high-latency and low-bandwidth networks. A recent work, Aegis (Lu et al., Cryptology ePrint'2023), made significant strides by proposing a constant-round DGK-style Sign-Bit Extraction protocol with GPU acceleration on Piranha (Watson et al., USENIX Security'2022). However, Aegis exhibits two critical limitations: it \romannumeral1) overlooks the use of \textit{bit-wise prefix-sum}, and \romannumeral2) inherits non-optimized modular arithmetic over prime fields and excessive memory overhead from the underlying GPU-based MPC framework. This results in suboptimal performance in terms of communication, computation, and GPU memory usage. Driven by the limitations of Aegis, we propose an optimized constant-round secure Sign-Bit Extraction protocol with communication and GPU-specific optimizations. Concretely, we construct a new masked randomized list by exploiting the upper bound of bit-wise prefix-sum to reduce online communication by up to , and integrate fast modular-reduction and kernel fusion techniques to enhance GPU utilization in MPC protocols. In addition, we propose specific optimizations for secure piecewise polynomial approximations and Maxpool computation in neural network evaluations. Finally, we instantiate these protocols as the framework MIZAR and report their improved performance over state-of-the-art GPU-based solutions: \romannumeral1) For secure Sign-Bit Extraction, we achieve a speedup of -- and reduce communication by --. \romannumeral2) Furthermore, we improve the performance of secure evaluation of nonlinear functions and neural networks by --. \romannumeral3) Lastly, our framework achieves -- GPU memory savings.
Last updated:  2025-06-06
TrafficProof: Privacy-Preserving Reliable Traffic Information Sharing in Social Internet of Vehicles
Stefan Dziembowski, Shahriar Ebrahimi, Parisa Hassanizadeh, and Susil Kumar Mohanty
In the Social Internet of Vehicles (SIoV), effective data sharing is essential for applications including road safety, traffic management, and situational awareness. However, the decentralized and open nature of SIoV presents significant challenges in simultaneously ensuring data integrity, user privacy, and system accountability. This paper presents a protocol for secure and location-accurate traffic data sharing that fully preserves the anonymity and privacy of participating witnesses. The protocol leverages zero-knowledge proofs (ZKPs) to allow vehicles to broadcast redacted traffic information—such as images—tied to specific geographic locations, while withholding both the original content and the identity of the reporting vehicle. To ensure the authenticity of the redacted content and the legitimacy of the witness, an additional ZKP is used to privately validate both elements. Upon receiving a report, the verifying node checks the submitted proofs, aggregates validated inputs, and publishes the resulting metadata to both IPFS and a blockchain. This design ensures public verifiability, tamper resistance, and the reliability of the shared data, while maintaining strong privacy guarantees through cryptographic anonymity. To improve the efficiency of proof generation on resource-constrained devices, the protocol employs folding-based ZKP constructions. We conduct a formal security and soundness analysis of the protocol and implement a proof-of-concept, which is publicly available as open-source software. Experimental evaluations on commodity hardware demonstrate that the protocol is computationally efficient and introduces less than 1.5\% communication overhead relative to the size of the shared traffic data, indicating its suitability for real-world deployment.
Last updated:  2025-06-06
On the Adaptive Security of FROST
Elizabeth Crites, Jonathan Katz, Chelsea Komlo, Stefano Tessaro, and Chenzhi Zhu
FROST and its variants are state-of-the-art protocols for threshold Schnorr signatures that are used in real-world applications. While static security of these protocols has been shown by several works, the security of these protocols under adaptive corruptions—where an adversary can choose which parties to corrupt at any time based on information it learns during protocol executions—has remained a notorious open problem that has received renewed attention due to recent standardization efforts for threshold schemes. We show adaptive security (without erasures) of FROST and several variants under different corruption thresholds and computational assumptions. Let n be the total number of parties, t+1 the signing threshold, and t_c an upper bound on the number of corrupted parties. 1. We prove adaptive security when t_c = t/2 in the random oracle model (ROM) based on the algebraic one-more discrete logarithm assumption (AOMDL)—the same conditions under which FROST is proven statically secure. 2. We introduce the low-dimensional vector representation (LDVR) problem, parameterized by t_c, t, and n, and prove adaptive security in the algebraic group model (AGM) and ROM based on the AOMDL assumption and the hardness of the LDVR problem for the corresponding parameters. In some regimes (including some t_c > t/2) we show the LDVR problem is unconditionally hard, while in other regimes (in particular, when t_c = t) we show that hardness of the LDVR problem is necessary for adaptive security to hold. In fact, we show that hardness of the LDVR problem is necessary for proving adaptive security of a broad class of threshold Schnorr signatures.
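As background for the threshold notation above (a signing threshold of t+1 out of n parties), the following toy sketch shows Shamir secret sharing, the standard way a signing key is distributed in threshold Schnorr protocols such as FROST. This is an illustrative fragment, not FROST itself: the prime P and the interpolation-at-zero routine are generic stand-ins, and real protocols never reconstruct the key in the clear.

```python
import random

P = 2**61 - 1  # a Mersenne prime, a stand-in for the group order

def share(secret, t, n):
    """Shamir-share `secret` with threshold t+1 among n parties,
    using a random degree-t polynomial with constant term `secret`."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at 0 from any t+1 shares."""
    s = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P
    return s

secret = 42
shares = share(secret, t=2, n=5)
assert reconstruct(shares[:3]) == secret  # any t+1 = 3 shares suffice
```

In FROST the same Lagrange coefficients are applied "in the exponent" to combine partial signatures, so the secret polynomial is never interpolated directly.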
Last updated:  2025-06-06
Uniform Black-Box Separations via Non-Malleable Extractors
Marshall Ball and Dana Dachman-Soled
We construct -non-malleable extractors---which allow an attacker to tamper with a source times---for high min-entropy sources samplable by poly-time hierarchy circuits and for tampering classes corresponding to poly-time hierarchy functions from derandomization-type assumptions. We then show an application of this new object to ruling out constructions of succinct non-interactive arguments (SNARGs) secure against \emph{uniform} adversaries from \emph{uniform} falsifiable assumptions via a class of black-box reductions that has not been previously considered in the literature. This class of black-box reductions allows the reduction to arbitrarily set the \emph{coins}, as well as the input, of the uniform adversary it interacts with. The class of reductions we consider is restricted in allowing only non-adaptive queries to the adversary.
Last updated:  2025-06-06
Post-Quantum Security of Keyed Sponge-Based Constructions through a Modular Approach
Akinori Hosoyamada
Sponge-based constructions have seen widespread adoption, as represented by the standardization of SHA-3 and Ascon by NIST. Yet, their provable security against quantum adversaries has received little attention. This paper studies the post-quantum security of some keyed sponge-based constructions in the quantum ideal permutation model, focusing on the Ascon AEAD mode and KMAC as concrete instances. For the Ascon AEAD mode, we prove the post-quantum security in the single-user setting up to about queries, where is the capacity and is the key length. Unlike the recent work by Lang et al.~(ePrint 2025/411), we do not need non-standard restrictions on nonce sets or the number of forgery attempts. In addition, our result guarantees even non-conventional security notions such as the nonce-misuse resilience confidentiality and authenticity under release of unverified plaintext. For KMAC, we show the security up to about queries, where is the rate, ignoring some small factors. In fact, we prove the security not only for KMAC but also for general modes such as the inner-, outer-, and full-keyed sponge functions. We take a modular proof approach, adapting the ideas of several works in the classical ideal permutation model to the quantum setting: For the Ascon AEAD mode, we observe that it can be regarded as an iterative application of a Tweakable Even-Mansour (TEM) cipher with a single low-entropy key, and we give the security bound as the sum of the post-quantum TPRP advantage of TEM and the classical security advantage of Ascon when TEM is replaced with a secret random object. The proof for keyed sponges is obtained analogously by regarding them as built on an Even-Mansour (EM) cipher with a single low-entropy key. The post-quantum security of (T)EM has been proven by Alagic et al. (Eurocrypt 2022 and Eurocrypt 2024). However, they show the security only when the keys are uniformly random. Moreover, the proof techniques, the so-called resampling lemmas, are inapplicable to our case of a low-entropy key. Thus, we introduce and prove a modified resampling lemma, thereby showing the security of (T)EM with a low-entropy key.
Last updated:  2025-06-06
Adaptive TDF from PKE with Randomness Recoverability and Pseudorandom Ciphertext Property
Fuyuki Kitagawa and Takahiro Matsuda
We present a generic construction of adaptive trapdoor function (TDF) from any public-key encryption (PKE) scheme that satisfies two properties: the randomness recoverability and the pseudorandom ciphertext property. As a direct corollary, we can obtain adaptive TDF from any trapdoor permutation (TDP) whose domain is both recognizable and sufficiently dense. In our construction, we can prove that the function's output is indistinguishable from uniform even when an adversary has access to the inversion oracle.
Last updated:  2025-06-06
Efficient Mixed-Mode Oblivious RAMs
Wenhao Zhang, Xiao Wang, and Chenkai Weng
Oblivious RAMs (ORAMs) allow data outsourcing to servers so that the access pattern to the outsourced data is kept private. It is also a crucial building block to enable private RAM access within secure multi-party computation (MPC). In recent years, schemes that match the ORAM lower bound have been proposed in both the outsourcing setting and the RAM-model MPC setting, seemingly bringing the theory of ORAM to a close. In this paper, we initiate a study of mixed-mode ORAMs, where accesses to the ORAM are a mix of both public and private accesses. Although existing ORAMs can support public access by treating public accesses as private ones, achieving better efficiency is highly non-trivial. - We present a mixed-mode ORAM algorithm, assuming the existence of private information retrieval (PIR). When the PIR scheme is communication-efficient, this ORAM achieves the best possible outcome: it has a bandwidth blowup of for private accesses and for public accesses. This construction can be easily extended to the MPC setting, achieving circuit size for private accesses to -sized blocks and circuit size for public accesses to the same array. - We instantiate the above protocol in the three-party computation (3PC) setting with more concrete optimizations, yielding a protocol that performs almost as efficiently as state-of-the-art RAM-3PC protocols for private accesses while being more efficient for public accesses in the LAN setting.
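As background for the PIR building block assumed above, the simplest information-theoretic PIR, the classic two-server XOR scheme, can be sketched as follows. This is an illustrative toy, not the paper's communication-efficient PIR: each server sees a uniformly random query vector, so neither learns the target index, and the client XORs the two answers.

```python
import secrets

def pir_query(n, idx):
    """Client side: two random-looking subset queries differing only at idx."""
    q1 = [secrets.randbelow(2) for _ in range(n)]
    q2 = q1.copy()
    q2[idx] ^= 1  # the difference singles out the target record
    return q1, q2

def pir_answer(db, q):
    """Server side: XOR of all selected records."""
    a = 0
    for bit, rec in zip(q, db):
        if bit:
            a ^= rec
    return a

db = [5, 17, 42, 99]
q1, q2 = pir_query(len(db), 2)
# All terms except db[2] cancel in the XOR of the two answers.
assert pir_answer(db, q1) ^ pir_answer(db, q2) == 42
```

The scheme is private as long as the two servers do not collude; single-server PIR, as used in the paper's setting, instead requires cryptographic assumptions.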
Last updated:  2025-06-06
Private Signaling Secure Against Actively Corrupted Servers
Haotian Chu, Xiao Wang, and Yanxue Jia
Private signaling allows servers to identify a recipient's messages on a public bulletin board without knowing the recipient's metadata. It is a central tool for systems like privacy-preserving blockchains and anonymous messaging. However, apart from TEE-based designs, current constructions all assume that the servers are only passively corrupted, which significantly limits their practical relevance. In this work, we present a TEE-free simulation-secure private signaling protocol assuming two non-colluding servers, either of which can be actively corrupted. Crucially, we convert signal retrieval into a problem similar to private set intersection and use custom-built zero-knowledge proofs to ensure consistency with the public bulletin board. As a result, our protocol achieves lower server-to-server communication overhead and a much smaller digest compared to the state-of-the-art semi-honest protocol. For example, for a board size of messages, the resulting digest size is only 33.57KB. Our protocol is also computationally efficient: retrieving private signals only takes about 2 minutes, using 16 threads on a LAN.
Last updated:  2025-06-06
Single-server Stateful PIR with Verifiability and Balanced Efficiency
Pranav Shriram Arunachalaramanan and Ling Ren
Recent stateful private information retrieval (PIR) schemes have significantly improved amortized computation and amortized communication while aiming to keep client storage minimal. However, all the schemes in the literature still suffer from a poor tradeoff between client storage and computation. We present BALANCED-PIR, a stateful PIR scheme that effectively balances computation and client storage. For a database of a million entries, each of 8 bytes, our scheme requires 0.2 MB of client storage, 0.2 ms of amortized computation, and 11.14 KB of amortized communication. Compared with the state-of-the-art scheme using a similar storage setting, our scheme is almost 9x better in amortized computation and 40x better in offline computation. Verifiable private information retrieval has been gaining more attention recently. However, all existing schemes require linear amortized computation and huge client storage. We present Verifiable BALANCED-PIR, a verifiable stateful PIR scheme with sublinear amortized computation and small client storage. In fact, our Verifiable BALANCED-PIR adds modest computation, communication, and storage costs on top of BALANCED-PIR. Compared with the state-of-the-art verifiable scheme, the client storage of our scheme is 100x smaller, the amortized computation is 15x less, and the amortized communication is 2.5x better.
Last updated:  2025-06-05
Rewardable Naysayer Proofs
Gennaro Avitabile, Luisa Siniscalchi, and Ivan Visconti
Combining verifiable computation with optimistic approaches is a promising direction to scale blockchain applications. The basic idea consists of saving computation by avoiding the verification of proofs unless there are complaints. A key tool to design systems in the above direction was recently proposed by Seres, Glaeser and Bonneau [FC'24], who formalized the concept of a Naysayer proof: an efficient-to-verify proof disproving a more demanding-to-verify original proof. In this work, we discuss the need to reward naysayer provers, the risks deriving from front-running attacks, and the failures of generic approaches trying to defeat them. Next, we introduce the concept of verifiable delayed naysayer proofs and show a construction leveraging proofs of sequential work, without relying on any additional infrastructure.
Last updated:  2025-06-05
Breaking the 1/λ-Rate Barrier for Arithmetic Garbling
Geoffroy Couteau, Carmit Hazay, Aditya Hegde, and Naman Kumar
Garbled circuits, introduced in the seminal work of Yao (FOCS, 1986), have received considerable attention in the boolean setting due to their efficiency and application to round-efficient secure computation. In contrast, arithmetic garbling schemes have received much less scrutiny. The main efficiency measure of garbling schemes is their rate, defined as the bit size of each gate's output divided by the size of the (amortized) garbled gate. Despite recent progress, state-of-the-art garbling schemes for arithmetic circuits suffer from important limitations: all existing schemes are either restricted to -bounded integer arithmetic circuits (a computational model where the arithmetic is performed over and correctness is only guaranteed if no intermediate computation exceeds the bound ) and achieve constant rate only for very large bounds , or have a rate at most otherwise, where denotes a security parameter. In this work, we improve this state of affairs in both settings. - As our main contribution, we introduce the first arithmetic garbling scheme over modular rings with rate , breaking for the first time the -rate barrier for modular arithmetic garbling. Our construction relies on the power-DDH assumption. - As a secondary contribution, we introduce a new arithmetic garbling scheme for -bounded integer arithmetic that achieves a constant rate for bounds as low as . Our construction relies on a new non-standard KDM-security assumption on Paillier encryption with small exponents.
Last updated:  2025-06-05
How to Trace Viral Content in End-to-End Encrypted Messaging
Pedro Branco, Matthew Green, Aditya Hegde, Abhishek Jain, and Gabriel Kaptchuk
We study the problem of combating *viral* misinformation campaigns in end-to-end encrypted (E2EE) messaging systems such as WhatsApp. We propose a new notion of Hop Tracking Signatures (HTS) that allows for tracing originators of messages that have been propagated on long forwarding paths (i.e., gone viral), while preserving anonymity of everyone else. We define security for HTS against malicious servers. We present both negative and positive results for HTS: on the one hand, we show that HTS does not admit succinct constructions if tracing and anonymity thresholds differ by exactly one "hop". On the other hand, by allowing for a larger gap between tracing and anonymity thresholds, we can build succinct HTS schemes where the signature size does not grow with the forwarding path. Our positive result relies on streaming algorithms and strong cryptographic assumptions. Prior works on tracing within E2EE messaging systems either do not achieve security against malicious servers or focus only on tracing originators of pre-defined banned content.
Last updated:  2025-06-05
Synergy: A Lightweight Block Cipher with Variable Bit Rotation Feistel Network
Anders Lindman
Synergy is a lightweight block cipher designed for resource-constrained environments such as IoT devices, embedded systems, and mobile applications. It is built around a 16-round Feistel network in which 8 independent pseudorandom number generators (PRNGs) generate unique per-block round keys, ensuring strong diffusion and confusion. With a 1024-bit key and a 64-bit block size, Synergy mitigates vulnerabilities to ML-based cryptanalysis by combining a large key size with key- and data-dependent bit rotations, which reduce statistical biases and increase unpredictability. By utilizing 32-bit arithmetic for efficient processing, Synergy achieves high throughput, low latency, and low power consumption, providing performance and security for applications where both are critical.
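A generic 16-round Feistel network over a 64-bit block with key- and data-dependent rotations, the structure described above, can be sketched as follows. This is an illustrative toy, not Synergy's actual round function or key schedule; the round keys below are placeholders for the per-block keys Synergy derives from its PRNGs.

```python
def rotl32(x, r):
    """Rotate a 32-bit word left by r positions."""
    r &= 31
    return ((x << r) | (x >> (32 - r))) & 0xFFFFFFFF

def feistel_encrypt(block, round_keys):
    """Feistel network over a 64-bit block split into two 32-bit halves.
    The rotation amount depends on both the round key and the data."""
    L, R = block >> 32, block & 0xFFFFFFFF
    for k in round_keys:
        L, R = R, L ^ rotl32((R + k) & 0xFFFFFFFF, (R ^ k) & 31)
    return (L << 32) | R

def feistel_decrypt(block, round_keys):
    """Inverse: same round function, keys applied in reverse order."""
    L, R = block >> 32, block & 0xFFFFFFFF
    for k in reversed(round_keys):
        L, R = R ^ rotl32((L + k) & 0xFFFFFFFF, (L ^ k) & 31), L
    return (L << 32) | R

keys = list(range(1, 17))  # placeholder round keys, 16 rounds
ct = feistel_encrypt(0x0123456789ABCDEF, keys)
assert feistel_decrypt(ct, keys) == 0x0123456789ABCDEF
```

Note the Feistel property used here: decryption reuses the round function unchanged, so even a non-invertible round function yields an invertible cipher.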
Last updated:  2025-06-05
Integral Resistance of Block Ciphers with Key Whitening by Modular Addition
Christof Beierle, Phil Hebborn, Gregor Leander, and Yevhen Perehuda
Integral attacks exploit structural weaknesses in symmetric cryptographic primitives by analyzing how subsets of inputs propagate to produce outputs with specific algebraic properties. For the case of (XOR) key-alternating block ciphers using (independent) round keys, at ASIACRYPT'21, Hebborn et al. established the first non-trivial lower bounds on the number of rounds required for ensuring integral resistance in a quite general sense. For the case of adding keys by modular addition, no security arguments are known so far. Here, we present a unified framework for analyzing the integral resistance of primitives using (word-wise) modular addition for key whitening, allowing us not only to fill this gap in the security arguments but also to overcome the heavy computational cost inherent in the XOR-whitening case.
Last updated:  2025-06-10
XHMQV: Better Efficiency and Stronger Security for Signal’s Initial Handshake based on HMQV
Rune Fiedler, Felix Günther, Jiaxin Pan, and Runzhi Zeng
The Signal protocol is the most widely deployed end-to-end-encrypted messaging protocol. Its initial handshake protocol X3DH allows parties to asynchronously derive a shared session key without the need to be online simultaneously, while providing implicit authentication, forward secrecy, and a form of offline deniability. The X3DH protocol has been extensively studied in the cryptographic literature and is acclaimed for its strong "maximum-exposure" security guarantees, hedging against compromises of users' long-term keys and medium-term keys but also the ephemeral randomness used in the handshake. This maximum-exposure security is achieved by deriving keys from the concatenation of 3–4 Diffie–Hellman (DH) secrets, each combining two long-term, medium-term, or ephemeral DH shares. Remarkably, X3DH's approach of concatenating plain DH combinations is sub-optimal, both in terms of maximum-exposure security and performance. Indeed, Krawczyk's well-known HMQV protocol (Crypto '05) is a high-performance, DH-based key exchange that provides strong security against long-term and ephemeral key compromise. One might hence wonder: why not base Signal's initial handshake on HMQV? In this work, we study this question and show that a carefully adapted variant of HMQV, which we call XHMQV, indeed enables stronger security and efficiency while matching the constraints of Signal's initial handshake. Most notably, HMQV does not work as a drop-in replacement for X3DH, as the latter's asynchronicity requires the protocol to handle cases where one party runs out of ephemeral keys (pre-uploaded to the Signal server). Our XHMQV design hence augments HMQV with medium-term keys analogous to those used in X3DH. We prove that XHMQV provides security in all 3–4 compromise scenarios where X3DH does and additionally in 1–2 further scenarios, strengthening the handshake's maximum-exposure guarantees while using more efficient group operations. 
We further confirm that our XHMQV design achieves deniability guarantees comparable to X3DH. Our security model is the first to capture Signal's long-term key reuse between DH key exchange and signatures, which may be of independent interest.
Last updated:  2025-06-04
One-way multilinear functions of the second order with linear shifts
Stanislav Semenov
We introduce and analyze a novel class of binary operations on finite-dimensional vector spaces over a field , defined by second-order multilinear expressions with linear shifts. These operations generate polynomials whose degree increases linearly with each iterated application, while the number of distinct monomials grows combinatorially. We demonstrate that, despite being non-associative and non-commutative in general, these operations exhibit power associativity and internal commutativity when iterated on a single vector. This allows for well-defined exponentiation . Crucially, the absence of a simple closed-form expression for suggests a one-way property: computing from and is straightforward, but recovering from (the Discrete Iteration Problem) appears computationally hard. We propose a Diffie–Hellman-like key exchange protocol utilizing these properties over finite fields, defining an Algebraic Diffie–Hellman Problem (ADHP). The proposed structures are of interest for cryptographic primitives, algebraic dynamics, and computational algebra.
Last updated:  2025-06-04
Orient Express: Using Frobenius to Express Oriented Isogenies
Wouter Castryck, Riccardo Invernizzi, Gioella Lorenzon, Jonas Meers, and Frederik Vercauteren
In this paper we study supersingular elliptic curves primitively oriented by an imaginary quadratic order, where the orientation is determined by an endomorphism that factors through the Frobenius isogeny. In this way, we partly recycle one of the main features of CSIDH, namely the fact that the Frobenius orientation can be represented for free. This leads to the most efficient family of ideal-class group actions in a range where the discriminant is significantly larger than the field characteristic . Moreover, if we orient with a non-maximal order and we assume that it is feasible to compute the ideal-class group of the maximal order, then also the ideal-class group of is known and we recover the central feature of SCALLOP-like constructions. We propose two variants of our scheme. In the first one, the orientation is by a suborder of the form for some coprime to , so this is similar to SCALLOP. In the second one, inspired by the work of Chenu and Smith, the orientation is by an order of the form where is square-free and not a multiple of . We give practical ways of generating parameters, together with a proof-of-concept SageMath implementation of both variants, which shows the effectiveness of our construction.
Last updated:  2025-06-04
A Quasi-polynomial Time Algorithm for the Extrapolated Dihedral Coset Problem over Power-of-Two Moduli
Shi Bai, Hansraj Jangir, Elena Kirshanova, Tran Ngo, and William Youmans
The Learning With Errors (LWE) problem, introduced by Regev (STOC'05), is one of the fundamental problems in lattice-based cryptography, believed to be hard even for quantum adversaries. Regev (FOCS'02) showed that LWE reduces to the quantum Dihedral Coset Problem (DCP). Later, Brakerski, Kirshanova, Stehlé and Wen (PKC'18) showed that LWE reduces to a generalization known as the Extrapolated Dihedral Coset Problem (EDCP). We present a quasi-polynomial time quantum algorithm for the EDCP problem over power-of-two moduli using a quasi-polynomial number of samples, which also applies to the SLWE problem defined by Chen, Liu, and Zhandry (Eurocrypt'22). Our EDCP algorithm can be viewed as a provable variant of the "Simon-meets-Kuperberg" algorithm introduced by Bonnetain and Naya-Plasencia (Asiacrypt'18), adapted to the EDCP setting. We stress that our algorithm does not affect the security of LWE with standard parameters, as the reduction from standard LWE to EDCP limits the number of samples to be polynomial.
Last updated:  2025-06-04
Constrained Verifiable Random Functions Without Obfuscation and Friends
Nicholas Brandt, Miguel Cueto Noval, Christoph U. Günther, Akin Ünal, and Stella Wohnig
CVRFs are PRFs that unify the properties of verifiable and constrained PRFs. Since they were introduced concurrently by Fuchsbauer and Chandran-Raghuraman-Vinayagamurthy in 2014, it has been an open problem to construct CVRFs without using heavy machinery such as multilinear maps, obfuscation or functional encryption. We solve this problem by constructing a prefix-constrained verifiable PRF that does not rely on the aforementioned assumptions. Essentially, our construction is a verifiable version of the Goldreich-Goldwasser-Micali PRF. To achieve verifiability we leverage degree-2 algebraic PRGs and bilinear groups. In short, proofs consist of intermediate values of the Goldreich-Goldwasser-Micali PRF raised to the exponents of group elements. These outputs can be verified using pairings since the underlying PRG is of degree 2. We prove the selective security of our construction under the Decisional Square Diffie-Hellman (DSDH) assumption and a new assumption, which we dub recursive Decisional Diffie-Hellman (recursive DDH). We prove the soundness of recursive DDH in the generic group model assuming the hardness of the Multivariate Quadratic (MQ) problem and a new variant thereof, which we call MQ+. Last, in terms of applications, we observe that our CVRF is also an exponent (C)VRF in the plain model. Exponent VRFs were recently introduced by Boneh et al. (Eurocrypt’25) with various applications to threshold cryptography in mind. In addition to that, we give further applications for prefix-CVRFs in the blockchain setting, namely, stake-pooling and compressible randomness beacons.
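The construction above is a verifiable version of the Goldreich-Goldwasser-Micali (GGM) PRF, whose plain tree structure, and why it supports prefix constraints, can be sketched in a few lines. This is an illustrative toy, not the paper's scheme: SHA-256 stands in for a length-doubling PRG, and the verifiability layer (pairings over degree-2 PRG outputs) is omitted entirely.

```python
import hashlib

def prg(seed: bytes):
    """Length-doubling PRG stand-in: 32-byte seed -> two 32-byte halves."""
    return (hashlib.sha256(b'\x00' + seed).digest(),
            hashlib.sha256(b'\x01' + seed).digest())

def ggm_prf(key: bytes, x: str) -> bytes:
    """GGM PRF: walk the binary tree from the root, choosing the left or
    right PRG half according to each bit of the input x."""
    node = key
    for bit in x:
        node = prg(node)[int(bit)]
    return node

def prefix_constrain(key: bytes, prefix: str) -> bytes:
    """A prefix-constrained key is simply the subtree root at `prefix`:
    it evaluates the PRF on all inputs starting with that prefix."""
    return ggm_prf(key, prefix)

k = b'\x00' * 32
# Evaluating with the constrained key on the suffix matches full evaluation.
assert ggm_prf(prefix_constrain(k, '01'), '10') == ggm_prf(k, '0110')
```

The paper's contribution can be read against this sketch: making each tree traversal step publicly checkable without revealing the key.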
Last updated:  2025-06-04
When Threshold Meets Anamorphic Signatures: What is Possible and What is Not!
Hien Chu, Khue Do, Lucjan Hanzlik, and Sri AravindaKrishnan Thyagarajan
Anamorphic signatures allow covert communication through signatures in environments where encryption is restricted. They enable trusted recipients with a double key to extract hidden messages while the signature remains indistinguishable from a fresh and regular one. However, the traditional notion of anamorphic signatures suffers from vulnerabilities, particularly when a single recipient or sender is compromised, exposing all hidden messages and providing undeniable proof that citizens are part of the anamorphic exchange. To address these limitations, we explore a threshold-based approach to distribute trust among multiple recipients, preventing adversaries from decrypting anamorphic messages even if some recipients are compromised. Our first contribution is the formalization of the notion of \emph{threshold-recipient anamorphic signatures}, where decryption is possible only through collaboration among a subset of recipients. We then explore a \emph{stronger model} where the dictator controls the key generation process through which it learns all secret keys and how citizens store cryptographic keys. A particular example of this model in the real world is a dictator providing citizens with electronic identity documents (eIDs) and blocking all other usage of cryptography. We demonstrate that anamorphic communication is still possible even in such a scenario. Our construction is secure against post-quantum adversaries and does not rely on any computational assumptions except the random oracle model. Finally, we show an \emph{impossibility result} for encoding anamorphic messages with a threshold-sender model when using many existing threshold signature schemes and the adversary is part of the signing group. Our work outlines both the possibilities and limitations of extending anamorphic signatures with threshold cryptography, offering new insights into improving the security and privacy of individuals under authoritarian regimes.
Last updated:  2025-06-04
Designing QC-MDPC Public Key Encryption Schemes with Niederreiter's Construction and a Bit Flipping Decoder with Bounded DFR
Alessandro Annechini, Alessandro Barenghi, Gerardo Pelosi, and Simone Perriello
Post-quantum public key encryption (PKE) schemes employing Quasi-cyclic (QC) sparse parity-check matrix codes are enjoying significant success, thanks to their good performance profile and reduction to believed-hard problems from coding theory. However, using QC sparse parity-check matrix codes (i.e., QC-MDPC/LDPC codes) comes with a significant challenge: determining their decoding failure rate (DFR) in closed form, as decoding failures are known to leak information on the private key. Furthermore, there is no formal proof that changing the (constant) rate of the employed codes does not change the nature of the underlying hard problem, nor is the hardness of decoding random QC codes formally related to that of decoding random codes. In this work, we address and solve these challenges, providing a novel closed-form estimation of the decoding failure rate for three-iteration bit flipping decoders, and proving computational equivalences among the aforementioned problems. This allows us to systematically design a Niederreiter-style QC-MDPC PKE, enjoying the flexibility granted by freely choosing the code rate, and the significant improvements in tightness of our DFR bound. We report a improvement in public key and ciphertext size w.r.t. the previous best cryptosystem design with DFR closed-form bounds, LEDAcrypt-KEM. Furthermore, we show that our PKE parameters yield % smaller public key size and smaller ciphertexts w.r.t. HQC, which is the key encapsulation method employing a code-based PKE, recently selected by the US NIST for standardization.
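For intuition about the decoder whose failure rate is being bounded, a plain bit-flipping decoder for a sparse parity-check code can be sketched in a few lines. This toy flips every bit involved in a strict majority of unsatisfied checks; it is not the paper's three-iteration decoder or its DFR analysis, and the tiny cycle code below is purely illustrative.

```python
def bit_flip_decode(H, s, max_iters=3):
    """Toy bit-flipping decoder: given parity-check matrix H (list of rows)
    and syndrome s, flip every bit involved in a strict majority of
    unsatisfied checks. Returns the error estimate, or None on failure."""
    r, n = len(H), len(H[0])
    e = [0] * n
    for _ in range(max_iters):
        # unsatisfied parity checks of the current estimate
        syn = [(sum(H[i][j] & e[j] for j in range(n)) % 2) ^ s[i]
               for i in range(r)]
        if not any(syn):
            return e
        for j in range(n):
            deg = sum(H[i][j] for i in range(r))        # column weight
            upc = sum(syn[i] for i in range(r) if H[i][j])  # unsatisfied
            if deg and upc > deg // 2:
                e[j] ^= 1
    return None  # decoding failure: the event whose rate (DFR) must be bounded

H = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1],
     [1, 0, 0, 1]]
assert bit_flip_decode(H, [1, 0, 0, 1]) == [1, 0, 0, 0]  # single error at bit 0
```

The `None` branch is exactly why closed-form DFR bounds matter: each such failure is observable by an attacker and leaks information on the private key.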
Last updated:  2025-06-04
Crowhammer: Full Key Recovery Attack on Falcon with a Single Rowhammer Bit Flip
Calvin Abou Haidar, Quentin Payet, and Mehdi Tibouchi
The Rowhammer attack is a fault-injection technique leveraging the density of RAM modules to trigger persistent hardware bit flips that can be used for probing or modifying protected data. In this paper, we show that Falcon, the hash-and-sign signature scheme over NTRU lattices selected by NIST for standardization, is vulnerable to an attack using Rowhammer. Falcon's Gaussian sampler is the core component of its security, as it provably decorrelates the short basis used for signing from the generated signatures. Other schemes lacking this guarantee (such as NTRUSign, GGH or, more recently, Peregrine) were proven insecure. However, performing efficient and secure lattice Gaussian sampling has proved to be a difficult task, fraught with numerous potential vulnerabilities. To avoid timing attacks, a common technique is to use distribution tables that are traversed to output a sample. The official Falcon implementation uses this technique, employing a hardcoded reverse cumulative distribution table (RCDT). Using Rowhammer, we target Falcon's RCDT to trigger a very small number of targeted bit flips, and prove that the resulting distribution is sufficiently skewed to perform a key recovery attack. Namely, we show that a single targeted bit flip suffices to fully recover the signing key, given a few hundred million signatures, with more bit flips enabling key recovery with fewer signatures. Interestingly, the Nguyen–Regev parallelepiped learning attack that broke NTRUSign, GGH and Peregrine does not readily adapt to this setting unless the number of bit flips is very large. However, we show that combining it with principal component analysis (PCA) yields a practical attack. This vulnerability can also be triggered with other types of persistent fault attacks on memory, such as optical faults. We suggest cheap countermeasures that largely mitigate it, including rejecting signatures that are unusually short.
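The table-based sampling that the attack targets can be sketched as follows (the table values and resolution below are illustrative placeholders, not Falcon's actual 18-entry, 72-bit RCDT): a uniform draw is compared against every entry of a descending reverse cumulative distribution table, and the output is the number of entries exceeding the draw.

```python
import random

# Hypothetical reverse cumulative distribution table (RCDT): entry i holds a
# scaled approximation of P(sample >= i + 1). Values are illustrative only.
RCDT = [6031371, 1580863, 276206, 31521, 2345, 114, 4, 0]
SCALE = 2**24  # resolution of the uniform draw (illustrative)

def base_sampler(rng):
    """Return a non-negative sample: count table entries exceeding a uniform draw."""
    u = rng.randrange(SCALE)
    z = 0
    for threshold in RCDT:
        if u < threshold:
            z += 1
    return z

rng = random.Random(1)
counts = [0] * (len(RCDT) + 1)
for _ in range(100_000):
    counts[base_sampler(rng)] += 1
# A Rowhammer bit flip in one RCDT entry shifts one of these thresholds,
# skewing the output distribution; the paper shows the skew leaks the key.
```

Because the table is descending, the histogram `counts` decays monotonically, mimicking the half-Gaussian shape that a flipped table bit would distort.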
Last updated:  2025-06-04
Rubato: Provably Post-Quantum Secure and Batched Asynchronous Randomness Beacon
Linghe Yang, Jian Liu, Jingyi Cui, Guangquan Xu, Yude Bai, and Wei Wang
Distributed Randomness Beacons (DRBs) provide secure, unbiased random numbers for decentralized systems. However, existing protocols face critical limitations. Most rely on cryptographic assumptions which are vulnerable to quantum attacks, risking long-term security in asynchronous networks where unbounded delays may allow attackers time to exploit these weaknesses. Many achieve low beacon generation rates, often below 100 beacons per minute in moderate-scale networks (e.g., Spurt IEEE S&P’22), hindering their use in applications requiring high-throughput randomness. Additionally, traditional Verifiable Secret Sharing (VSS)-based DRBs, using a share-consensus-reconstruct paradigm, are unsuitable for asynchronous networks due to circular dependencies between beacon generation and consensus. Given these limitations, we propose Rubato, the first provably post-quantum secure DRB for asynchronous environments, incorporating a lattice-based batched Asynchronous Verifiable Secret Sharing scheme (bAVSS-PQ). Rubato supports batching of secrets with communication complexity and tolerates Byzantine faults in up to one-third of the nodes. Integrated with DAG-based consensus protocols like Bullshark or Tusk, its epoch-staggered architecture resolves circular dependencies, enabling efficient and secure randomness generation. Evaluations across 10 to 50 nodes show Rubato generates 5200 to 350 beacons per minute with per-beacon latencies of 11.60 to 96.37 milliseconds, achieving a consensus throughput of 186,088 transactions per second with a latency of 16.78 seconds at 30 nodes. Rubato offers robust post-quantum security and high performance for small-to-medium-scale decentralized systems.
Last updated:  2025-06-03
Weave: Efficient and Expressive Oblivious Analytics at Scale
Mahdi Soleimani, Grace Jia, and Anurag Khandelwal
Many distributed analytics applications that are offloaded to the cloud operate on sensitive data. Even when the computations for such analytics workloads are confined to trusted hardware enclaves and all stored data and network communications are encrypted, several studies have shown that they are still vulnerable to access pattern attacks. Prior efforts towards preventing access pattern leakage often incur network and compute overheads that are logarithmic in dataset size, while also limiting the functionality of supported analytics jobs. We present Weave, an efficient, expressive, and secure analytics platform that scales to large datasets. Weave employs a combination of noise injection and hardware memory isolation via enclave page caches to reduce the network and compute overheads for oblivious analytics to a constant factor. Weave also employs several optimizations and extensions that exploit dataset and workload-specific properties to ensure performance at scale without compromising on functionality. Our evaluations show that Weave reduces the end-to-end execution time for a wide range of analytics jobs on large real-world datasets by -- compared to prior state-of-the-art while providing strong obliviousness guarantees.
Last updated:  2025-06-03
Unbounded Distributed Broadcast Encryption and Registered ABE from Succinct LWE
Hoeteck Wee and David J. Wu
We construct distributed broadcast encryption and registered attribute-based encryption (ABE) that support an arbitrary polynomial number of users from the succinct LWE assumption. Specifically, if we take to be the security parameter and to be the number of users, we obtain the following: * We obtain a distributed broadcast encryption scheme where the size of the public parameters, user public/secret keys, and ciphertexts are optimal (i.e., have size ). Security relies on the -succinct LWE assumption. Previously, this was only known from indistinguishability obfuscation or witness encryption. All constructions that did not rely on these general tools could only support an a priori bounded number of users. * We obtain a key-policy registered ABE scheme that supports arbitrary bounded-depth Boolean circuit policies from the -succinct LWE assumption in the random oracle model, where is the depth of the circuit computing the policy. The public parameters, user public/secret keys, and ciphertexts have size , which are optimal up to the factor. This is the first registered ABE scheme with nearly-optimal parameters. All previous schemes (including constructions based on indistinguishability obfuscation, witness encryption, or evasive LWE) either have ciphertexts that scale with the policy size and attribute length, or can only support a bounded number of users (with long public parameters and public keys that scale with the number of users).
Last updated:  2025-06-03
Security of Operations on Random Numbers: A Review
Tejas Sharma and Ashish Kundu
Random numbers are often used in cryptography algorithms, protocols, and in several security and non-security applications. Such usages often apply Arithmetic and Boolean operations on pseudorandom numbers, such as addition, XOR, NOT, bit shifts, and other operations, in order to achieve the desired amount of entropy and desired level of security. In this paper, we have reviewed, studied, and analyzed the security properties of these operations on random numbers: do Arithmetic and Boolean operations and other related operations on cryptographically secure pseudorandom numbers lead to cryptographically secure pseudorandom numbers, or do they lead to a loss of entropy?
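The question the review studies can be made concrete with a small experiment (our illustration, not from the paper): XOR of two independent uniform values remains uniform, while plain addition without modular reduction concentrates probability mass and loses uniformity.

```python
import random
import collections

rng = random.Random(0)
N = 200_000
xor_counts = collections.Counter()
add_counts = collections.Counter()
for _ in range(N):
    a = rng.randrange(16)   # two independent 4-bit uniform values
    b = rng.randrange(16)
    xor_counts[a ^ b] += 1  # XOR: output stays uniform over 0..15
    add_counts[a + b] += 1  # plain addition: triangular distribution over 0..30

# The XOR histogram is flat; the addition histogram peaks at 15 and is
# nearly empty at the extremes, i.e., the output is far from uniform.
```

This is exactly the kind of entropy-preservation question the survey asks operation by operation: XOR with an independent uniform input is entropy-preserving, whereas non-modular addition is not.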
Last updated:  2025-06-03
Committed Vector Oblivious Linear Evaluation and Its Applications
Yunqing Sun, Hanlin Liu, Kang Yang, Yu Yu, Xiao Wang, and Chenkai Weng
We introduce the notion of committed vector oblivious linear evaluation (C-VOLE), which allows a party holding a pre-committed vector to generate VOLE correlations with multiple parties on the committed value. It is a unifying tool useful in zero-knowledge proofs (ZKPs) of committed values, actively secure multi-party computation, private set intersection (PSI), etc. To achieve the best efficiency, we design a tailored commitment scheme and matching C-VOLE protocols, both based on the learning parity with noise (LPN) assumption. In particular, exploiting the structure of the carefully designed LPN-based commitment minimizes the cost of ensuring consistency between the committed vector and the VOLE correlation. As a result, we achieve a 28× improvement over the protocol proposed in prior work (Usenix 2021) that uses ZKP to prove the correct opening of the commitment. We also apply C-VOLE to design a PSI protocol that allows one server to run PSI repeatedly with multiple clients while ensuring that the same set is used across all executions. Compared with the state-of-the-art PSI (CCS 2024) with similar security requirements, our protocol reduces the communication overhead by a factor of 35.
Last updated:  2025-06-03
A Critique on Average-Case Noise Analysis in RLWE-Based Homomorphic Encryption
Mingyu Gao and Hongren Zheng
Homomorphic encryption schemes based on the Ring-Learning-with-Errors problem require accurate ciphertext noise analysis to ensure correctness and security. However, ring multiplications during homomorphic computations make the noise in the resulting ciphertexts difficult to characterize. Existing average-case noise analyses derive a bound on the noise by either assuming it follows a Gaussian distribution, or giving empirical formulae, with strong independence assumptions and the Central Limit Theorem extensively applied. In this work, we question the validity of these methods, by showing that the noise exhibits a heavy-tailed distribution via exact calculation of its variance and kurtosis, for both independent and dependent noises. The heavy-tailedness suggests the failure probability of bounds derived from these methods may not be negligible, and we experimentally demonstrate several cases where the noise growth is underestimated.
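A quick numerical sanity check (our illustration, not the paper's exact calculation) shows why multiplied noise is heavy-tailed: a single product of two independent Gaussian noise terms already has kurtosis 9 rather than the Gaussian value of 3.

```python
import random
import statistics

rng = random.Random(42)
N = 200_000
# Coefficient-wise, a ring multiplication sums products of noise terms;
# even one product X*Y of independent N(0,1) variables is heavy-tailed.
prod = [rng.gauss(0, 1) * rng.gauss(0, 1) for _ in range(N)]
m2 = statistics.fmean(x * x for x in prod)
m4 = statistics.fmean(x ** 4 for x in prod)
kurtosis = m4 / (m2 * m2)
# A Gaussian has kurtosis 3; E[X^4 Y^4] / (E[X^2 Y^2])^2 = 9 here, so tail
# bounds derived from a Gaussian model understate the failure probability.
```

The empirical kurtosis lands near 9, matching the closed-form value for a product of independent standard normals and illustrating the paper's point that Gaussian-based average-case bounds can fail more often than claimed.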
Last updated:  2025-06-03
Continuous Group-Key Agreement: Concurrent Updates without Pruning
Benedikt Auerbach, Miguel Cueto Noval, Boran Erol, and Krzysztof Pietrzak
Continuous Group Key Agreement (CGKA) is the primitive underlying secure group messaging. It allows a large group of users to maintain a shared secret key that is frequently rotated by the group members in order to achieve forward secrecy and post-compromise security. The group messaging scheme Messaging Layer Security (MLS) standardized by the IETF makes use of a CGKA called TreeKEM which arranges the group members in a binary tree. Here, each node is associated with a public key, each user is assigned one of the leaves, and a user knows the corresponding secret keys from their leaf to the root. To update the key material known to them, a user must just replace keys at nodes, which requires them to create and upload ciphertexts. Such updates must be processed sequentially by all users, which for large groups is impractical. To allow for concurrent updates, TreeKEM uses the ``propose and commit'' paradigm, where multiple users can concurrently propose to update (by just sampling a fresh leaf key), and a single user can then commit to all proposals at once. Unfortunately, this process destroys the binary tree structure as the tree gets pruned and some nodes must be ``blanked'' at the cost of increasing the in-degree of others, which makes the commit operation, as well as future commits, more costly. In the worst case, the update cost (in terms of uploaded ciphertexts) per user can grow from to . In this work we provide two main contributions. First, we show that MLS' communication complexity is bad not only in the worst case but also if the proposers and committers are chosen at random: even if there is just one update proposal for every commit the expected cost is already over , and it approaches as this ratio changes towards more proposals. Our second contribution is a new variant of propose and commit for TreeKEM which for moderate amounts of update proposals per commit provably achieves an update cost of assuming the proposers and committers are chosen at random.
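The update cost in an intact tree can be sketched with a simplified model (ours, ignoring MLS wire-format details): in a complete binary tree a user refreshes one key per level on their leaf-to-root path, so the number of uploaded ciphertexts is logarithmic in the group size, and it is this logarithmic cost that blanking erodes.

```python
import math

def update_cost(num_leaves: int) -> int:
    """Ciphertexts uploaded for one update in an intact complete binary tree:
    one per level on the leaf-to-root path (simplified cost model)."""
    return math.ceil(math.log2(num_leaves))

# A member of a 1024-user group re-keys its path with 10 ciphertexts; once
# pruning blanks internal nodes, the in-degree of surviving nodes grows and
# the per-update cost can degrade toward the number of members.
costs = {n: update_cost(n) for n in (2, 1024, 2**20)}
```

The gap between this logarithmic baseline and the near-linear cost after blanking is exactly the degradation the paper quantifies for random proposers and committers.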
Last updated:  2025-06-03
JANUS: Enhancing Asynchronous Common Subset with Trusted Hardware
Liangrong Zhao, Hans Schmiedel, Qin Wang, and Jiangshan Yu
Asynchronous common subset (ACS) has been extensively studied since the asynchronous Byzantine fault tolerance (BFT) framework was introduced by Ben-Or, Kelmer, and Rabin (BKR). The line of work (i.e., HoneyBadgerBFT, BEAT, EPIC) uses parallel reliable broadcast (RBC) and asynchronous binary agreement (ABA) instances to reach agreement on a subset of proposed transactions. In this paper, we further advance the BKR paradigm by presenting Janus, the first hybrid ACS protocol leveraging trusted hardware components. Janus is the first ACS protocol that tolerates a minority of Byzantine processes and has O(n^2) message complexity. Supported by trusted hardware components, we introduce a provable broadcast primitive to replace RBC, and develop a resilient binary agreement protocol. Messages for concurrent instances of agreement are aggregated into vectors. Our experimental results demonstrate significant performance improvements over predominant ACS constructions, with a 92%+ increase compared to HoneyBadgerBFT and a 47%+ increase compared to BEAT. Additionally, we provide a comparison with open-source hybrid BFT protocols that operate under a partially synchronous network, highlighting the performance enhancement compared to previous hybrid protocols that also tolerate a Byzantine minority (e.g., MinBFT and Damysus, by 49%+).
Last updated:  2025-06-03
Trusted Hardware-Assisted Leaderless Byzantine Fault Tolerance Consensus
Liangrong Zhao, Jérémie Decouchant, Joseph K. Liu, Qinghua Lu, and Jiangshan Yu
Byzantine Fault Tolerance (BFT) consensus protocols with trusted hardware assistance have been extensively explored for their improved resilience to tolerate more faulty processes. Nonetheless, the potential of trusted hardware has been scarcely investigated in leaderless BFT protocols. RedBelly is considered the first blockchain network whose consensus is based on a truly leaderless BFT algorithm. This paper proposes a trusted hardware-assisted leaderless BFT consensus protocol by offering a hybrid solution for the set BFT problem defined in the RedBelly blockchain. Drawing on previous studies, we present two crucial trusted services: the counter and the collector. Based on these two services, we introduce two primitives to formulate our leaderless BFT protocol: a hybrid verified broadcast (VRB) protocol and a hybrid binary agreement. The hybrid VRB protocol enhances the hybrid reliable broadcast protocol by integrating a verification function. This addition ensures that a broadcast message is verified not only for authentication but also for the correctness of its content. Our hybrid BFT consensus is integrated with these broadcast protocols to deliver binary decisions on all proposals. We prove the correctness of the proposed hybrid protocol and demonstrate its enhanced performance in comparison to the prior trusted BFT protocol.
Last updated:  2025-06-03
Constant-Round Asynchronous MPC with Optimal Resilience and Linear Communication
Junru Li and Yifan Song
In this work, we consider secure multiparty computation (MPC) in the asynchronous network setting. MPC allows parties to compute a public function on their private inputs against an adversary corrupting at most of them. We consider both communication complexity and round complexity of asynchronous MPC (AMPC) with the optimal resilience . Without fully homomorphic encryptions, the best-known result in this setting is achieved by Coretti, Garay, Hirt, and Zikas (ASIACRYPT 2016), which requires bits of communication assuming one-way functions, where is the security parameter. On the other hand, the best-known non-constant-round AMPC by Goyal, Liu, and Song (CRYPTO 2024) can achieve communication even in the information-theoretic setting. In this work, we give the first construction of a constant-round AMPC with bits of communication that achieves malicious security with abort assuming random oracles. We provide new techniques for adapting the MPC-in-the-head framework in the asynchronous network to compute a constant-size garbled circuit.
Last updated:  2025-06-03
Quasidifferential Saves Infeasible Differential: Improved Weak-Key Key-Recovery Attacks on Round-Reduced GIFT
Chengcheng Chang, Meiqin Wang, Wei Wang, and Kai Hu
GIFT, including GIFT-64 and GIFT-128, is a family of lightweight block ciphers with outstanding implementation performance and high security, and is a popular underlying primitive chosen by many AEADs such as SUNDAE. Currently, differential cryptanalysis is the best key-recovery attack on both ciphers, but attacks have been stuck at 21 and 27 rounds for GIFT-64 and GIFT-128, respectively. Recently, Beyne and Rijmen proposed the quasidifferential transition matrix for differential cryptanalysis at CRYPTO 2022 and showed that the fixed-key probability of a differential (characteristic) can be expressed as the sum of correlations of all quasidifferential trails corresponding to this differential (characteristic). As pointed out by Beyne and Rijmen in their paper, the quasidifferential methodology is useful in identifying weak-key differential attacks. In this paper, we apply Beyne and Rijmen's method to GIFT. Some differential characteristics with small (average) probabilities can have much larger probabilities when weak-key conditions hold. Improved weak-key differential attacks on GIFT-64 and GIFT-128 are thus obtained. For GIFT-64, the probability of a 13-round differential is improved from to with 4 bits of weak-key conditions; an improved differential key-recovery attack on 21-round GIFT-64 is then obtained with time/data complexities. The probability of a 13-round multiple differential (containing 33 characteristics) is improved from to with 4 bits of weak-key conditions; an improved multiple differential key-recovery attack on 21-round GIFT-64 is then obtained with time/data complexities. For GIFT-128, the probability of a 20-round differential is improved from to with 6 bits of weak-key conditions; the probability of a 21-round multiple differential (containing 2 differentials) is improved from to with 4 bits of weak-key conditions.
Improved (multiple) differential weak-key key-recovery attacks are obtained for 27 and 28 rounds of GIFT-128 with / and / time/data complexities, respectively. As far as we know, this is the first time a (weak-key) key-recovery attack has reached 28 rounds of GIFT-128. Additionally, as an independent interest, we perform the first differential attack on SUNDAE. The differential used in this attack is checked with quasidifferential trails, so its probability is reliable. Our attack is nonce-respecting and has significantly better complexities than the currently best attack.
Last updated:  2025-06-03
Everlasting Anonymous Rate-Limited Tokens
Rutchathon Chairattana-Apirom, Nico Döttling, Anna Lysyanskaya, and Stefano Tessaro
Anonymous rate-limited tokens are a special type of credential that can be used to improve the efficiency of privacy-preserving authentication systems like Privacy Pass. In such a scheme, a user obtains a "token dispenser" by interacting with an issuer, and the dispenser allows the user to create up to a pre-determined number of unlinkable and publicly verifiable tokens. Unlinkable means that one should not be able to tell that two tokens originate from the same dispenser, but also they cannot be linked to the interaction that generated the dispenser. Furthermore, we can limit the rate at which these tokens are created by linking each token to a context (e.g., the service we are authenticating to), and imposing a limit such that seeing more than tokens for the same context will reveal the identity of the user. Constructions of such tokens were first given by Camenisch, Hohenberger and Lysyanskaya (EUROCRYPT '05) and Camenisch, Hohenberger, Kohlweiss, Lysyanskaya, and Meyerovich (CCS '06). In this work, we present the first construction of \emph{everlasting} anonymous rate-limited tokens, for which unlinkability holds against computationally unbounded adversaries, whereas other security properties (e.g., unforgeability) remain computational. Our construction relies on pairings. While several parameters in our construction unavoidably grow with , the key challenge we resolve is ensuring that the complexity of dispensing a token is independent of the parameter . We are motivated here by the goal of providing solutions that are robust to potential future quantum attacks against the anonymity of previously stored tokens. A construction based on post-quantum secure assumptions (e.g., based on lattices) would be rather inefficient---instead, we take a pragmatic approach dispensing with post-quantum security for properties not related to privacy.
Last updated:  2025-06-03
Improved Key Recovery Attacks of Ascon
Shuo Peng, Kai Hu, Jiahui He, and Meiqin Wang
Ascon, a family of algorithms that support hashing and Authenticated Encryption with Associated Data (AEAD), is the final winner of the NIST Lightweight Cryptography Project. As a research hotspot, Ascon has received substantial third-party security evaluation. Among all the results on Ascon-128 (the primary AEAD recommendation), key recovery has only been achieved by reducing the initialization phase to 7 rounds or fewer, regardless of whether the attack violates the security claims made by the designers (i.e., misuses the nonce or exceeds data limits). In this paper, we improve the key recovery attack on Ascon-128 using the cube attack method, in both the misuse-free and nonce-misuse settings. First, we present a faster method to recover the superpolies for a 64-dimensional cube in the output bits of the 7-round initialization, enabling us to recover the secret key with a time complexity of and a data complexity of . Our 7-round key recovery attack, based on the full key space, greatly improves the time complexity, making it the best result to date. Additionally, we utilize several techniques to extend state recovery to key recovery, answering the open problem of transitioning from full state recovery in the encryption phase to key recovery for Ascon-128 (ToSC Vol 4, 2022). By combining encryption-phase state recovery with initialization-phase key recovery, we can achieve 8-round and 9-round initialization-phase key recovery in the nonce-misuse scenario, with time complexities of and , respectively. This represents an improvement of two rounds over previous results in the misused setting. Our first key recovery attack is also applicable to Ascon-128a, achieving the same result. In cases where the full state, prior to the encryption phase, can be recovered in other Ascon AEAD modes, our second key recovery attack will also be useful.
It is worth noting that this work does not threaten the security of the full 12-round Ascon, but we expect that our results provide new insights into the security of Ascon.
Last updated:  2025-06-02
Group Key Progression: Strong Security for Shared Persistent Data
Matilda Backendal, David Balbás, and Miro Haller
End-to-end encryption allows data to be outsourced and stored on an untrusted server, such as in the cloud, without compromising data privacy. In the setting when this data is shared between a group of users, members also all share access to the same static key material used for data encryption. When the group membership changes, access control is only enforced by the server: security breaches or compelled disclosure would allow even a removed member to decrypt the current shared data. We propose to move away from static keys and instead use a group key progression (GKP) scheme, a novel primitive that enables a dynamic group of users to agree on a persistent sequence of keys while keeping a compact local state. GKP ensures that group members can only derive keys within a certain interval of the sequence, a notion that we call interval access control (IAC), and also provides post-compromise security. Our GKP construction, called Grappa, combines continuous group key agreement (CGKA, by Alwen et al., 2020) with a new abstraction called interval scheme. The latter is a symmetric-key primitive that can derive a sequence of keys from a compact state while preserving IAC. We explore different interval scheme constructions and simulate their storage and communication costs when used in group settings. The most efficient of them is a generalization of dual key regression (Shafagh et al., 2020), which we formalize and prove secure. Overall, our protocols offer a practical and robust solution to protect persistent data shared by a group.
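The dual key regression idea behind the interval scheme can be sketched with two hash chains (a simplification of ours, not the paper's construction; names and domain-separation tags are illustrative): a forward chain lets a holder move to later epochs, a backward chain to earlier ones, so holding one element of each grants exactly an interval of keys.

```python
import hashlib

def H(x: bytes, tag: bytes) -> bytes:
    """Domain-separated hash step (tags are our illustrative choice)."""
    return hashlib.sha256(tag + x).digest()

n = 16
fwd = [b"fwd-seed"]                  # fwd[t+1] = H(fwd[t]): fwd[i] yields t >= i
for _ in range(n - 1):
    fwd.append(H(fwd[-1], b"f"))

chain = [b"bwd-seed"]
for _ in range(n - 1):
    chain.append(H(chain[-1], b"b"))
bwd = chain[::-1]                    # bwd[t] = H(bwd[t+1]): bwd[j] yields t <= j

def key(t: int) -> bytes:
    return H(fwd[t] + bwd[t], b"k")

def derive(fwd_i: bytes, i: int, bwd_j: bytes, j: int, t: int) -> bytes:
    """A member holding (fwd[i], bwd[j]) can recompute key(t) iff i <= t <= j."""
    assert i <= t <= j, "outside the granted interval"
    f, b = fwd_i, bwd_j
    for _ in range(t - i):
        f = H(f, b"f")
    for _ in range(j - t):
        b = H(b, b"b")
    return H(f + b, b"k")
```

The compact state is just the pair of chain elements, which is what lets the scheme enforce interval access control without storing every key.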
Last updated:  2025-06-02
Parallel Repetition for Post-Quantum Arguments
Andrew Huang and Yael Tauman Kalai
In this work, we show that parallel repetition of public-coin interactive arguments reduces the soundness error at an exponential rate even in the post-quantum setting. Moreover, we generalize this result to hold for threshold verifiers, where the parallel repeated verifier accepts if and only if at least of the executions are accepted (for some threshold ). Prior to this work, these results were known only when the cheating prover was assumed to be classical. We also prove a similar result for three-message private-coin arguments. Previously, Bostanci, Qian, Spooner, and Yuen (STOC 2024) proved such a parallel repetition result in the more general setting of quantum protocols, where the verifier and communication may be quantum. We consider only protocols where the verifier is classical, but obtain a simplified analysis, and for the more general setting of threshold verifiers.
Last updated:  2025-06-02
Malicious Security in Collaborative zk-SNARKs: More than Meets the Eye
Sanjam Garg, Aarushi Goel, Abhishek Jain, Bhaskar Roberts, and Sruthi Sekar
Collaborative zk-SNARKs (Ozdemir and Boneh, USENIX’22) are a multiparty variant of zk-SNARKs where multiple, mutually distrustful provers, each holding a private input, jointly compute a zk-SNARK using their combined inputs. A sequence of works has proposed efficient constructions of collaborative zk-SNARKs using a common template that involves designing secure multiparty computation (MPC) protocols to emulate a zk-SNARK prover without making non-black-box use of cryptography. To achieve security against malicious adversaries, these works adopt compilers from the MPC literature that transform semi-honest MPC into malicious-secure MPC. In this work, we revisit this design template. • Pitfalls: We demonstrate two pitfalls in the template, which can lead to a loss of input privacy. We first show that it is possible to compute collaborative proofs on invalid witnesses, which in turn can leak the inputs of honest provers. Next, we show that using state-of-the-art malicious security compilers as-is for proof computation is insecure, in general. Finally, we discuss mitigation strategies. • Malicious Security Essentially for Free: As our main technical result, we show that in the honest-majority setting, one can forego malicious security checks performed by state-of-the-art malicious security compilers during collaborative proof generation of several widely used zk-SNARKs. In other words, we can avoid the overheads of malicious security compilers, enabling faster proof generation. To the best of our knowledge, this is the first example of non-trivial computations where semi-honest MPC protocols achieve malicious security. The observations underlying our positive results are general and may have applications beyond collaborative zkSNARKs.
Last updated:  2025-06-02
Secure Noise Sampling for Differentially Private Collaborative Learning
Olive Franzese, Congyu Fang, Radhika Garg, Somesh Jha, Nicolas Papernot, Xiao Wang, and Adam Dziedzic
Differentially private stochastic gradient descent (DP-SGD) trains machine learning (ML) models with formal privacy guarantees for the training set by adding random noise to gradient updates. In collaborative learning (CL), where multiple parties jointly train a model, noise addition occurs either (i) before or (ii) during secure gradient aggregation. The first option is deployed in distributed DP methods, which require greater amounts of total noise to achieve security, resulting in degraded model utility. The second approach preserves model utility but requires a secure multiparty computation (MPC) protocol. Existing methods for MPC noise generation require tens to hundreds of seconds of runtime per noise sample because of the number of parties involved. This makes them impractical for collaborative learning, which often requires thousands or more samples of noise in each training step. We present a novel protocol for MPC noise sampling tailored to the collaborative learning setting. It works by constructing an approximation of the distribution of interest which can be efficiently sampled by a series of table lookups. Our method achieves significant runtime improvements and requires much less communication compared to previous work, especially at higher numbers of parties. It is also highly flexible – while previous MPC sampling methods tend to be optimized for specific distributions, we prove that our method can generically sample noise from statistically close approximations of arbitrary discrete distributions. This makes it compatible with a wide variety of DP mechanisms. Our experiments demonstrate the efficiency and utility of our method applied to a discrete Gaussian mechanism for differentially private collaborative learning. 
For 16 parties, we achieve a runtime of 0.06 seconds and 11.59 MB total communication per sample, a 230× runtime improvement and 3× less communication compared to the prior state-of-the-art for sampling from discrete Gaussian distribution in MPC.
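The table-lookup idea can be sketched in the clear (all parameter values are our illustrative choices; in the protocol the lookup runs obliviously over secret-shared randomness inside MPC): build a cumulative table approximating the target discrete distribution, then sample by a single search over the table.

```python
import bisect
import math
import random

def build_table(sigma: float, tail: int, resolution: int):
    """Cumulative table approximating a discrete Gaussian on [-tail, tail]."""
    support = list(range(-tail, tail + 1))
    weights = [math.exp(-z * z / (2 * sigma * sigma)) for z in support]
    total = sum(weights)
    cum, acc = [], 0.0
    for w in weights:
        acc += w
        cum.append(round(acc / total * resolution))
    return support, cum

support, cum = build_table(sigma=4.0, tail=24, resolution=2**16)

def sample(rng):
    # One uniform draw, one table search; in MPC this becomes a lookup on
    # shared values rather than a cleartext bisect.
    u = rng.randrange(2**16)
    return support[bisect.bisect_right(cum, u)]

rng = random.Random(7)
draws = [sample(rng) for _ in range(50_000)]
mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / len(draws)
```

Because the table only approximates the distribution to the chosen resolution, the samples come from a statistically close approximation rather than the exact discrete Gaussian, which matches the paper's generic guarantee for arbitrary discrete distributions.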
Last updated:  2025-06-02
Towards Trustless Provenance: A Privacy-Preserving Framework for On-chain Media Verification
Piotr Mikołajczyk, Parisa Hassanizadeh, and Shahriar Ebrahimi
As generative models continue to evolve, verifying the authenticity, provenance, and integrity of digital media has become increasingly critical—particularly for domains like journalism, digital art, and scientific documentation. In this work, we present a decentralized verifiable media ecosystem for managing, verifying, and transacting authentic digital media using zero-knowledge proofs (ZKPs). Building on VIMz (Dziembowski et al., PETS'25), we extend the framework in three key directions. First, we generalize the model to support arbitrary image regions to achieve selective transformations support such as redaction and regional blurring—features commonly required in privacy-preserving applications. Second, we introduce performance optimizations that yield up to an 18% improvement in off-chain proof generation, and enhance the framework to support cost-efficient on-chain verification. Third, we design and implement a modular smart contract architecture to support a wide range of decentralized media applications. As a flagship use case, we develop a decentralized media marketplace that enables permissionless licensing, ownership transfer, and verifiable attribution. In this setting, users can share transformed media—such as cropped, blurred, or resized previews—alongside ZKPs that prove derivation from a signed original, eliminating the need to trust the seller. Unlike prior fair exchange protocols, which rely on trusted descriptions or encrypted payload delivery, our system enables verifiable public previews and origin-bound proofs without revealing the full content. This approach unlocks new applications beyond marketplaces, including automated infringement dispute resolution and photography contests with verifiable criteria.
Last updated:  2025-06-02
Universal Channel Rebalancing: Flexible Coin Shifting in Payment Channel Networks
Stefan Dziembowski, Shahriar Ebrahimi, Omkar Gavhane, and Susil Kumar Mohanty
Payment Channel Networks (PCNs) enhance blockchain scalability by enabling off-chain transactions. However, repeated unidirectional multi-hop payments often cause channel imbalance or depletion, limiting scalability and usability. Existing rebalancing protocols, such as Horcrux [NDSS’25] and Shaduf [NDSS’22], rely on on-chain operations, which hinders efficiency and broad applicability. We propose Universal Channel Rebalancing (UCRb), a blockchain-agnostic, fully off-chain framework that ensures correct behavior among untrusted participants without on-chain interaction. UCRb incorporates the following core innovations: (1) a fair and reliable incentive-compatible mechanism that encourages voluntary user participation in off-chain channel rebalancing, (2) integration of Pedersen commitments to achieve atomic off-chain payments and rebalancing operations, while ensuring balance security, and (3) zero-knowledge proofs to enable privacy-preserving channel initialization and coin shifting, ensuring that user identities and fund allocations remain hidden throughout the process. We evaluate UCRb using a real-world Lightning Network dataset and compare its performance against state-of-the-art solutions including Horcrux, Shaduf, and Revive [CCS'17]. UCRb improves the success ratio by 15% to 50%, while also reducing the required user deposits by 72%--92%. It maintains an almost negligible rate of channel depletion. Additionally, the long-term performance of UCRb is roughly 1.5 times that of its short-term performance, suggesting that continuous operation leads to improved efficiency. We implement a prototype for UCRb smart contracts and demonstrate its practicality through extensive evaluation. As CoinShift operations require no on-chain interaction, the protocol incurs minimal gas costs. For instance, opening and closing channels with 10 neighbors costs only 130K-160K gas, significantly lower than comparable solutions.
Last updated:  2025-06-02
Burn Your Vote: Decentralized and Publicly Verifiable Anonymous Voting at Scale
Stefan Dziembowski, Shahriar Ebrahimi, Haniyeh Habibi, Parisa Hassanizadeh, and Pardis Toolabi
Secure and trustworthy electronic voting requires more than correctness and censorship resistance: it must also ensure voter privacy, vote confidentiality, and protection against coercion. Prior works attempt to address these challenges using heavyweight cryptographic primitives such as homomorphic encryption, time-lock puzzles, or multi-party computation. These approaches often involve complex computations, depend on trusted parties, and typically do not scale well. We propose a lightweight, fully on-chain anonymous voting protocol based on a novel application of the proof-of-burn (PoB) mechanism. Voters anonymously commit to their votes by burning tokens to pseudorandom addresses and later submit zero-knowledge proofs attesting to their valid participation. Our design achieves vote integrity, coercion resistance, and unlinkability without relying on encrypted ballots, trusted third parties, or centralized tallying. The tallying process is public and operates on plaintext votes that are authenticated yet unlinkable to voters. This enables flexible voting models, including token-weighted and quadratic voting, with minimal on-chain overhead. We formally analyze the protocol’s security guarantees and demonstrate support for a broad range of voting models. We implement the protocol as an open-source library fully compatible with the Ethereum Virtual Machine (EVM), and our experimental evaluation confirms its high scalability and improved efficiency compared to the state-of-the-art.
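The core commitment step, burning tokens to a pseudorandom address derived from the vote, can be sketched as follows. The derivation below is a toy stand-in (names and encoding are illustrative): the address is a hash image with no known private key, so funds sent to it are unspendable, and the later zero-knowledge proof of valid participation is omitted.

```python
import hashlib

def burn_address(vote: str, nonce: bytes) -> str:
    """Derive a pseudorandom 'burn' address from a vote and a secret nonce.

    Because the address is the image of a hash with no corresponding
    private key, tokens sent to it are provably destroyed.  The nonce
    keeps the address unlinkable to the vote until the voter chooses to
    prove participation in zero knowledge (not shown here).
    """
    return hashlib.sha256(vote.encode() + nonce).hexdigest()[:40]
```

The same (vote, nonce) pair always yields the same address, which is what lets the voter later prove, in zero knowledge, that a specific burn corresponds to a valid ballot.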
Last updated:  2025-06-02
Black-Box Crypto is Useless for Pseudorandom Codes
Sanjam Garg, Sam Gunn, and Mingyuan Wang
A pseudorandom code is a keyed error-correction scheme with the property that any polynomial number of encodings appear random to any computationally bounded adversary. We show that the pseudorandomness of any code tolerating a constant rate of random errors cannot be based on black-box reductions to almost any generic cryptographic primitive: for instance, anything that can be built from random oracles, generic multilinear groups, and virtual black-box obfuscation. Our result is optimal, as Ghentiyala and Guruswami (2024) observed that pseudorandom codes tolerating any sub-constant rate of random errors exist using a black-box reduction from one-way functions. The key technical ingredient in our proof is the hypercontractivity theorem for Boolean functions, which we use to prove our impossibility in the random oracle model. It turns out that this easily extends to an impossibility in the presence of "crypto oracles," a notion recently introduced---and shown to be capable of implementing all the primitives mentioned above---by Lin, Mook, and Wichs (EUROCRYPT 2025).
Last updated:  2025-06-06
Separating Pseudorandom Codes from Local Oracles
Nico Döttling, Anne Müller, and Mahesh Sreekumar Rajasree
Pseudorandom codes (PRCs) are error-correcting codes with the distinguishing feature that their codewords are computationally indistinguishable from random strings. Introduced by Christ and Gunn (CRYPTO 2024), PRCs have found applications in areas such as AI watermarking, where both robustness and pseudorandomness are essential. All known constructions of PRCs rely on coding-theoretic hardness assumptions. In this work, we study how inherent the use of coding-theoretic hardness is in the construction of pseudorandom codes. We show that there is no black-box construction of PRCs with binary alphabets capable of decoding from a constant fraction of Bernoulli noise from a class of oracles we call local oracles. The class of local oracles includes random oracles and trapdoor permutation oracles, and can be interpreted as a meaningful notion of oracles that are not resilient against noise. Our separation result is cast in the Impagliazzo-Rudich framework and crucially relies on the Bonami-Beckner hypercontractivity theorem on the Boolean hypercube. As a complementary result, we show that PRCs with large alphabets that can tolerate high error rates can indeed be constructed in a black-box manner from one-way functions.
Last updated:  2025-06-02
Silent Splitter: Privacy for Payment Splitting via New Protocols for Distributed Point Functions
Margaret Pierce and Saba Eskandarian
In a world where financial transactions are primarily performed or recorded online, protecting sensitive transaction details has become crucial. Roommates sharing housing costs or friends splitting travelling expenses may use applications such as Splitwise to easily track debts and minimize the number of individual repayments. However, these apps reveal potentially sensitive financial transaction activity to their operators. In this paper, we present Silent Splitter, a privacy-preserving payment splitting system which enables users to securely set up groups, perform transactions within those groups, and "settle up" without revealing group membership or any sensitive transaction details (such as the users involved or the amount of money exchanged) to the system itself. Silent Splitter operates in the two-server setting and uses Distributed Point Functions (DPFs) to securely record transactions. Of independent interest, we also present new protocols for proving knowledge of properties of DPFs as part of our system.
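A distributed point function shares a function that is zero everywhere except at one secret point. The sketch below, with illustrative names, shares the truth table additively between two servers; a real DPF compresses these keys to logarithmic size, and the compressed keys are what the paper's new proof protocols reason about.

```python
import secrets

def share_point_function(n: int, alpha: int, beta: int, mod: int = 2**32):
    """Additively share the point function f(alpha) = beta, f(x) = 0
    elsewhere, over a domain of size n.

    Each key alone is a uniformly random vector, so a single server
    learns nothing about (alpha, beta).  Real DPFs achieve the same
    correctness with keys of size O(log n); this linear-size version
    only illustrates the correctness property.
    """
    k0 = [secrets.randbelow(mod) for _ in range(n)]
    k1 = [(-k0[x]) % mod for x in range(n)]
    k1[alpha] = (beta - k0[alpha]) % mod
    return k0, k1

def eval_share(key, x):
    """Each server evaluates its own key independently at point x."""
    return key[x]
```

Summing the two servers' evaluations recovers f(x), which is how a transaction can be recorded at a hidden index without either server seeing it.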
Last updated:  2025-06-02
MT-TMVP: Modular Tiled TMVP-based Polynomial Multiplication for Post-Quantum Cryptography on FPGAs
Shekoufeh Neisarian and Elif Bilge Kavun
As quantum technology advances, developing cryptographic solutions resistant to quantum attacks is crucial. Post-Quantum Cryptography (PQC) provides a practical approach by running on classical computers. PQC schemes rely on hard mathematical problems, with lattice-based cryptography being one of the National Institute of Standards and Technology (NIST)-recognized approaches, known for its small key sizes. Hardware implementation of these schemes faces challenges due to the computational intensity of operations like polynomial multiplication, especially for resource-constrained devices. This paper proposes a novel Modular Tiled Toeplitz Matrix-Vector Polynomial Multiplication (MT-TMVP) for lattice-based PQC algorithms and presents a resource-optimized Field Programmable Gate Array (FPGA) architecture. The proposed implementation significantly reduces resource utilization and Area-Delay Product (ADP) compared to state-of-the-art polynomial multipliers. It utilizes 99.68% and 84.22% fewer Look-Up Tables (LUTs) on Artix-7 and Zynq Ultrascale+ FPGAs, respectively, and achieves 99.94% and 80.02% ADP improvements on these FPGAs compared to the best results in the literature. By leveraging Block RAM (BRAM), the proposed architecture offers robustness against timing-based Side-Channel Attacks (SCAs), and the design is modular and scalable to any polynomial degree.
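The TMVP formulation expresses a polynomial product as a Toeplitz matrix-vector product. As a minimal software reference (not the tiled FPGA design), the sketch below multiplies two polynomials in Z_q[x]/(x^n + 1), the negacyclic ring common in lattice-based schemes; the function name is illustrative.

```python
def negacyclic_tmvp(a, b, q):
    """Multiply polynomials a, b (coefficient lists, low degree first) in
    Z_q[x]/(x^n + 1) via a Toeplitz matrix-vector product.

    Row i, column j of the implicit Toeplitz matrix holds a[i - j];
    wrap-around entries are negated because x^n = -1 in this ring.  A
    schoolbook O(n^2) reference, which the hardware design tiles into
    smaller Toeplitz blocks.
    """
    n = len(a)
    c = [0] * n
    for i in range(n):
        acc = 0
        for j in range(n):
            coeff = a[i - j] if i - j >= 0 else -a[n + i - j]
            acc += coeff * b[j]
        c[i] = acc % q
    return c
```

Because every row of the matrix is a shift of the previous one, the full product needs only the n coefficients of a, which is what makes tiled TMVP attractive for area-constrained FPGA implementations.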
Last updated:  2025-06-02
Using the Schur Product to Solve the Code Equivalence Problem
Michele Battagliola, Rocco Mora, and Paolo Santini
Given two linear codes, the Code Equivalence Problem asks to find (if it exists) an isometry between them. A special case is the Permutation Equivalence Problem (PEP), where the isometry is a permutation. The hardness of PEP depends crucially on the hull of a code, that is, the intersection between a code and its dual. For random codes, with large probability the hull is trivial, i.e., has dimension 0: this allows for efficient attacks. However, when the hull is large enough, all known attacks take exponential time and PEP is deemed hard. In this paper we study how the so-called Schur product between linear codes can be employed to solve PEP. The main idea is to transform a given PEP instance by computing powers of the given codes. We show that, while squaring a pair of equivalent codes preserves the equivalence, the new pair of codes has a trivial hull with high probability. This allows us to identify many new weak instances of PEP, namely those whose hull dimension is small enough relative to the code parameters. With some technical caveats, our solver runs in average polynomial time. As a concrete application, we consider the updatable encryption scheme proposed by Albrecht, Benčina and Lai at Eurocrypt 2025. All the recommended instances fall into the range of weak PEP instances we identify in this paper, hence are susceptible to our attack. We successfully recover the secret permutation for one of the instances claiming 128 bits of security. As a fix, instances with a sufficiently large hull dimension shall be employed.
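The Schur product of two codewords is their componentwise product, and the Schur square of a code is spanned by the pairwise products of its generator rows. The toy computation below, over GF(7) with hypothetical helper names, shows the dimension growth under squaring that the approach exploits.

```python
def rank_mod_p(rows, p):
    """Rank of an integer matrix over GF(p), by Gaussian elimination."""
    rows = [r[:] for r in rows]
    rank, ncols = 0, len(rows[0])
    for col in range(ncols):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col] % p), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)  # p prime, pivot nonzero
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col] % p:
                f = rows[i][col]
                rows[i] = [(x - f * y) % p for x, y in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def schur_square_basis(gen, p):
    """Spanning set of the Schur square C * C: componentwise products of
    all unordered pairs of generator rows."""
    return [[(u[k] * v[k]) % p for k in range(len(u))]
            for i, u in enumerate(gen) for v in gen[i:]]
```

For a random [n, k] code, the Schur square typically has dimension min(k(k+1)/2, n), so squaring tends to push the code toward generic behavior, including a trivial hull.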
Last updated:  2025-06-02
Leader Election with Poly-logarithmic Communication Per Party
Amey Bhangale, Chen-Da Liu-Zhang, Julian Loss, Kartik Nayak, and Sravya Yandamuri
The leader election problem requires a set of n parties, up to t of which can be Byzantine, to elect a leader uniformly at random such that no two parties disagree on the elected leader and an honest leader is elected with constant probability. The Scalable Leader Election protocol published in SODA 2006 is an important breakthrough in solving this problem efficiently for all but a small fraction of the parties. It achieves a protocol tolerating a constant fraction of Byzantine parties in the full-information setting such that every party only sends polylog(n) bits. In this paper, we revisit that work and show that there are subtleties in the protocol that are not dealt with in the analysis. In particular, two mechanisms related to "silencing" parties and dealing with "bad nodes" are at odds with each other, which is why the existing analysis is insufficient. We present these concerns in detail and subsequently present a modification to their protocol with a corresponding analysis to solve leader election with the desired metrics.
Last updated:  2025-06-02
Zero-Knowledge Polynomial Commitment in Binary Fields
Benjamin E. Diamond
In recent work, Diamond and Posen ('24) introduce a polynomial commitment scheme for large binary fields, adapting BaseFold (CRYPTO '24). In this note, we devise a zero-knowledge variant of Diamond and Posen's scheme. Our construction reprises a few ideas from Aurora (EUROCRYPT '19). We carry through those ideas in characteristic 2, and moreover show that they are compatible with BaseFold.
Last updated:  2025-06-02
How to Make Any Computational Secret Sharing Scheme Adaptively Secure
George Lu and Brent Waters
Secret sharing (SS) is a foundational cryptographic primitive with diverse applications, including secure multiparty computation and conditional disclosure of secrets. While traditional schemes have primarily emphasized information-theoretic security, recent advancements have increasingly leveraged computational assumptions to achieve more efficient constructions and support broader access policies. Despite these successes, most existing computational secret sharing (CSS) schemes are limited to a static security model, where adversaries must commit to their choice of corrupted participants at the outset. A critical challenge in CSS lies in achieving adaptive security, where adversaries can dynamically select participants to corrupt, better reflecting real-world threat models. In this paper, we present a novel transformation that converts any statically secure CSS scheme into an adaptively secure one while preserving the original access policy and computational assumptions, providing a framework for bridging the gap between static and adaptive security. Our construction introduces a multiplicative share size overhead that scales with the number of parties n. Additionally, we explore trade-offs in efficiency and security, offering more efficient adaptive CSS constructions for specific, restricted policy classes. This work addresses key limitations in the current landscape of CSS and paves the way for broader adoption of adaptively secure secret sharing in cryptographic applications.
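As background for the static setting the transformation starts from, a classic computational secret sharing scheme (in the spirit of Krawczyk's scheme) encrypts the long secret under a short key and shares only the key. The toy n-out-of-n sketch below uses a one-block hash stream as the cipher; all names are illustrative, and the paper's adaptive compiler is not reproduced here.

```python
import hashlib
import secrets

def css_share(secret: bytes, n: int):
    """Toy computational secret sharing: encrypt the secret under a short
    key, then split the key with n-out-of-n XOR sharing.  Any n-1 shares
    are uniformly random, so security reduces to that of the cipher --
    the computational assumption that distinguishes CSS from
    information-theoretic sharing.
    """
    key = secrets.token_bytes(32)
    pad = hashlib.sha256(key).digest()  # toy one-block stream cipher
    assert len(secret) <= len(pad)
    ct = bytes(s ^ p for s, p in zip(secret, pad))
    shares = [secrets.token_bytes(32) for _ in range(n - 1)]
    last = key
    for sh in shares:  # last share forces the XOR of all shares to equal key
        last = bytes(a ^ b for a, b in zip(last, sh))
    shares.append(last)
    return ct, shares

def css_reconstruct(ct, shares):
    """XOR all shares to recover the key, then decrypt the ciphertext."""
    key = bytes(32)
    for sh in shares:
        key = bytes(a ^ b for a, b in zip(key, sh))
    pad = hashlib.sha256(key).digest()
    return bytes(c ^ p for c, p in zip(ct, pad))
```

In the static model the adversary fixes its corrupted subset before `css_share` runs; the paper's question is how to keep security when corruptions may depend on shares seen so far.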