Papers updated in last 365 days (3085 results)

Last updated:  2025-06-30
Computing Asymptotic Bounds for Small Roots in Coppersmith's Method via Sumset Theory
Yansong Feng, Hengyi Luo, Qiyuan Chen, Abderrahmane Nitaj, and Yanbin Pan
Coppersmith's method is a well-known and practical method for solving polynomial modular equations involved in some cryptosystems such as RSA. An important but tedious task in this method is computing the asymptotic bounds. In this work, we address the challenge of computing such asymptotic bounds by introducing sumset theory from additive combinatorics as a new analytical tool, which significantly streamlines manual calculations. More precisely, we develop the first provable algorithm for determining these asymptotic bounds, whereas the recent methods based on simple Lagrange interpolation are heuristic. Moreover, our experiments show that our method is much more efficient than the previous method in practice. We also employ our method to improve the cryptanalytic results for the Commutative Isogeny Hidden Number Problem. Our approach may deepen the understanding of Coppersmith's method and inspire further security analysis methodologies.
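The combinatorial object behind the abstract's analysis is elementary to state: the sumset (Minkowski sum) $A + B = \{a + b : a \in A, b \in B\}$ of two sets of monomial exponent vectors describes which monomials can occur in a product of polynomials. A minimal Python sketch for intuition only (the exponent sets are invented for illustration; this is not the paper's algorithm):

```python
from itertools import product

def sumset(A, B):
    """Minkowski sum of two sets of integer exponent vectors."""
    return {tuple(a + b for a, b in zip(u, v)) for u, v in product(A, B)}

# Monomial exponent sets of two bivariate polynomials, e.g.
# f(x, y) with monomials {1, x, y} and g(x, y) with monomial set {1, xy}.
A = {(0, 0), (1, 0), (0, 1)}
B = {(0, 0), (1, 1)}

S = sumset(A, B)  # exponents that can appear in the product f * g
```

Note that $|A + B| \le |A| \cdot |B|$, and additive combinatorics studies precisely how much smaller the sumset can be — the kind of structure that can replace case-by-case manual bound calculations.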
Last updated:  2025-06-30
Tricycle: Private Transformer Inference with Tricyclic Encodings
Lawrence Lim, Vikas Kalagi, Divyakant Agrawal, and Amr El Abbadi
The growing adoption of Large Language Models in privacy-sensitive domains necessitates secure inference mechanisms that preserve data confidentiality. Homomorphic encryption offers a promising pathway by enabling computation on encrypted inputs, yet existing approaches struggle to scale efficiently to full transformer models due to limitations in packing schemes, which must efficiently support a wide range of operations, including matrix multiplications, row-wise nonlinear operations, and self-attention. In this work, we present Tricycle, a framework for private transformer inference built on our novel packing scheme, called tricyclic encodings, which are designed to efficiently support these core operations. Tricyclic encodings are a generalization of bicyclic encodings, enabling privacy-preserving batch matrix multiplications with optimal multiplicative depth in order to facilitate parallelized multi-head self-attention. We optimize our matrix multiplications by incorporating Baby-Step Giant-Step optimizations to reduce ciphertext rotations and presenting new ciphertext-plaintext matrix multiplication techniques that relax prior limitations. A further contribution of our work is a lightweight and effective approach for stabilizing the softmax function via statistical max estimation. Our end-to-end implementation on a BERT-Tiny model shows that Tricycle achieves a \(1.5 \times\) to \(3 \times\) speedup over previous approaches, marking a step toward practical and scalable private LLM inference without sacrificing model fidelity.
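For intuition about why ciphertext rotations dominate the cost of encrypted linear algebra, here is a plaintext sketch of the classical diagonal encoding of a matrix-vector product, the setting in which Baby-Step Giant-Step grouping pays off. This illustrates the general technique only, not Tricycle's tricyclic encodings:

```python
def rotate(v, k):
    """Cyclic left rotation, the analogue of a ciphertext rotation."""
    k %= len(v)
    return v[k:] + v[:k]

def diag_matvec(M, v):
    """Matrix-vector product using only rotations and elementwise
    multiply/add, as in the diagonal encoding used for HE linear algebra.
    This naive version uses n rotations; BSGS-style grouping of the
    diagonals would cut that to O(sqrt(n))."""
    n = len(v)
    out = [0] * n
    for i in range(n):
        # i-th generalized diagonal of M: d[j] = M[j][(j + i) % n]
        d = [M[j][(j + i) % n] for j in range(n)]
        r = rotate(v, i)
        out = [o + a * b for o, a, b in zip(out, d, r)]
    return out

M = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
v = [1, 0, 2]
```

Under homomorphic encryption each `rotate` is an expensive key-switching operation, which is why reducing the rotation count is a central optimization target.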
Last updated:  2025-06-30
RingSG: Optimal Secure Vertex-Centric Computation for Collaborative Graph Processing
Zhenhua Zou, Zhuotao Liu, Jinyong Shan, Qi Li, Ke Xu, and Mingwei Xu
Collaborative graph processing refers to the joint analysis of inter-connected graphs held by multiple graph owners. To honor data privacy and support various graph processing algorithms, existing approaches employ secure multi-party computation (MPC) protocols to express the vertex-centric abstraction. Yet, due to certain computation-intensive cryptography constructions, state-of-the-art (SOTA) approaches are asymptotically suboptimal, imposing significant overheads in terms of computation and communication. In this paper, we present RingSG, the first system to attain optimal communication/computation complexity within the MPC-based vertex-centric abstraction for collaborative graph processing. This optimal complexity is attributed to Ring-ScatterGather, a novel computation paradigm that can avoid exceedingly expensive cryptography operations (e.g., oblivious sort), and simultaneously ensure the overall workload can be optimally decomposed into parallelizable and mutually exclusive MPC tasks. Within Ring-ScatterGather, RingSG improves the concrete runtime efficiency by incorporating 3-party secure computation via share conversion, and optimizing the most cost-heavy part using a novel oblivious group aggregation protocol. Finally, unlike prior approaches, we instantiate RingSG into two end-to-end applications to effectively obtain application-specific results from the protocol outputs in a privacy-preserving manner. We developed a prototype of RingSG and extensively evaluated it across various graph collaboration settings, including different graph sizes, numbers of parties, and average vertex degrees. The results show RingSG reduces the system running time of SOTA approaches by up to 15.34× and per-party communication by up to 10.36×. Notably, RingSG excels in processing sparse global graphs collectively held by more parties, consistent with our theoretical cost analysis.
Last updated:  2025-06-30
Efficient SPA Countermeasures using Redundant Number Representation with Application to ML-KEM
Rishub Nagpal, Vedad Hadžić, Robert Primas, and Stefan Mangard
Simple power analysis (SPA) attacks and their extensions, profiled and soft-analytical side-channel attacks (SASCA), represent a significant threat to the security of cryptographic devices and remain among the most powerful classes of passive side-channel attacks. In this work, we analyze how numeric representations of secrets can affect the amount of exploitable information leakage available to the adversary. We present an analysis of how mutual information changes as a result of the integer ring size relative to the machine word-size. Furthermore, we study the Redundant Number Representation (RNR) countermeasure and show that its application to ML-KEM can resist the most powerful SASCA attacks and provides a low-cost alternative to shuffling. We evaluate the performance of RNR-ML-KEM with both simulated and practical SASCA experiments on the ARM Cortex-M4 based on a worst-case attack methodology. We show that RNR-ML-KEM sufficiently renders these attacks ineffective. Finally, we evaluate the performance of the RNR-ML-KEM NTT and INTT and show that SPA security can be achieved with a 62.8% overhead for the NTT and 0% overhead for the INTT relative to the ARM Cortex-M4 reference implementation used.
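The RNR idea itself is simple to state: a value modulo $q$ is stored as $s + rq$ for a fresh random $r$ inside a larger machine word, so the stored word (and hence its leakage) varies while the arithmetic value is unchanged. A hedged plaintext sketch, with the word size and sampling policy chosen for illustration (the paper's actual parameterization may differ):

```python
import random

Q = 3329          # the ML-KEM modulus
WORD = 1 << 16    # 16-bit machine word (illustrative choice)

def rnr_encode(s, rng):
    """Store s (mod Q) redundantly as s + r*Q inside a 16-bit word.
    r is capped so the sum never wraps the word, keeping decoding exact."""
    r = rng.randrange((WORD - s) // Q)
    return s + r * Q

def rnr_decode(x):
    """Strip the redundancy: the arithmetic value is just x mod Q."""
    return x % Q

rng = random.Random(0)
s = 1234
encodings = {rnr_encode(s, rng) for _ in range(100)}
```

Because the same secret coefficient appears as many different machine words across executions, a template built on the word's leakage (e.g. its Hamming weight) conveys less information about $s$ than in the canonical representation.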
Last updated:  2025-06-30
A search to distinguish reduction for the isomorphism problem on direct sum lattices
Daniël van Gent and Wessel van Woerden
At Eurocrypt 2003, Szydlo presented a search to distinguish reduction for the Lattice Isomorphism Problem (LIP) on the integer lattice $\mathbb{Z}^n$. Here the search problem asks to find an isometry between $\mathbb{Z}^n$ and an isomorphic lattice, while the distinguish variant asks to distinguish between a list of auxiliary lattices related to $\mathbb{Z}^n$. In this work we generalize Szydlo's search to distinguish reduction in two ways. Firstly, we generalize the reduction to any lattice isomorphic to $\Gamma^n$, where $\Gamma$ is a fixed base lattice. Secondly, we allow $\Gamma$ to be a module lattice over any number field. Assuming the base lattice $\Gamma$ and the number field $K$ are fixed, our reduction is polynomial in $n$. As a special case we consider the module lattice $\mathcal{O}_K^2$ used in the module-LIP based signature scheme HAWK, and we show that one can solve the search problem, leading to a full key recovery, with less than $2d^2$ distinguishing calls on two lattices each, where $d$ is the degree of the power-of-two cyclotomic number field and $\mathcal{O}_K$ its ring of integers.
Last updated:  2025-06-30
Solve Approximate CVP via Variants of Nearest-Colattice
Wenwen Xia, Geng Wang, and Dawu Gu
The approximate Closest Vector Problem (CVP) is a core computational problem underlying many post-quantum lattice-based signature schemes, including Dilithium, one-more-ISIS, and HuFu. While the security of these schemes is typically expressed in terms of the Inhomogeneous Short Integer Solution (ISIS) problem, it is well known that ISIS can be efficiently reduced to approximate CVP. Despite its foundational role, approximate CVP with non-negligible approximation factors remains far less explored than other lattice problems such as SVP or LWE, creating a critical gap in both theory and practice. In this work, we bridge this gap by advancing the Colattice framework for solving approximate CVP with large approximation factors. More concretely, (1) we define a practical version of the Colattice algorithm and propose a randomized Nearest Colattice for generating more than one approximate closest vector; (2) we define a formal strategy space for blockwise approximate CVP; (3) we propose a polynomial-time strategy selection algorithm and prove its correctness under standard lattice heuristics; (4) building on this, we design an efficient security estimator for approximate CVP in both the Euclidean and infinity norms, and extend it to approximate batch-CVP attack settings; (5) by applying this estimator, we perform concrete security evaluations of Dilithium, HuFu, and one-more-ISIS. Our results reveal that the security of Dilithium is at least 10 $\log_2({\rm gates})$ lower than the required security thresholds at NIST levels 3 and 5, and almost none of the evaluated schemes withstand approximate batch-CVP attacks with $2^{64}$ queries. (6) We integrate a slicer and Colattice into G6K-CPU, leveraging the Locality-Sensitive Hashing (LSH) technique for nearest neighbor search (NNS). This is the first practical implementation of an NNS-accelerated slicer.
Our results demonstrate the practical efficiency of approximate CVP and batch-CVP attacks, underscoring the importance of accurate approximate CVP modeling and calling for a reassessment of current parameter sets in post-quantum signature schemes.
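As background, the simplest approximate-CVP algorithm, Babai's rounding, fits in a few lines: express the target in the lattice basis, round the coordinates, and map back. The paper's Colattice and slicer machinery is far more sophisticated; this sketch (2-dimensional, with an explicit inverse to stay dependency-free) is intuition only:

```python
def babai_round_cvp(B, t):
    """Approximate CVP by Babai's rounding: solve c1*b1 + c2*b2 = t over
    the reals, round (c1, c2) to integers, and return the lattice vector.
    B is a pair of row basis vectors of a 2-dimensional lattice."""
    (a, b), (c, d) = B
    det = a * d - b * c
    # real coordinates of t in the basis
    c1 = (d * t[0] - c * t[1]) / det
    c2 = (a * t[1] - b * t[0]) / det
    k1, k2 = round(c1), round(c2)
    return (k1 * a + k2 * c, k1 * b + k2 * d)

B = ((5, 0), (0, 3))        # rows are basis vectors
t = (7.2, 4.9)
v = babai_round_cvp(B, t)   # a lattice vector near t
```

The approximation quality of rounding degrades badly on skewed bases, which is why practical attacks use block reduction and sieving-based slicers instead.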
Last updated:  2025-06-30
Brief Comments on Rijndael-256 and the Standard RISC-V Cryptography Extensions
Markku-Juhani O. Saarinen
We evaluate the implementation aspects of Rijndael-256 using the ratified RISC-V Vector Cryptography extension Zvkn. A positive finding is that Rijndael-256 can be implemented in constant time with the existing RISC-V ISA as the critical AES and fixed crossbar permutation instructions are in the DIEL (data-independent execution latency) set. Furthermore, simple tricks can be used to expand the functionality of key expansion instructions to cover the additional round constants required. However, due to the required additional byte shuffle in each round, Rijndael-256 will be significantly slower than AES-256 in terms of throughput. Without additional ISA modifications, the instruction count will be increased by the required switching of the EEW (``effective element width'') parameter in each round between 8 bits (byte shuffle) and 32 bits (AES round instructions). Instruction counts for 1-kilobyte encryption and decryption with Rijndael-256 are factor $2.66\times$ higher than with AES-256. The precise amount of throughput slowdown depends on the microarchitectural details of a particular RISC-V ISA hardware instantiation, but it may be substantial with some high-performance vector AES architectures due to the breakdown of AES pipelining and the relative slowness of crossbar permutation instructions.
Last updated:  2025-06-29
Revisiting SIOT protocol with new security assumptions
Paulo Barreto, Glaucio Oliveira, and Waldyr Benits
Oblivious transfer (OT) is one of the most important building blocks in cryptography and is widely used to construct secure protocols. The advent of quantum computing has spurred research and development of cryptographic protocols that resist quantum attacks. In 2018, the SIOT (Supersingular Isogeny Oblivious Transfer) protocol was presented as the first post-quantum OT scheme based on supersingular elliptic-curve isogenies. It originally combined an OT construction with the cryptographic primitives of the SIDH (Supersingular Isogeny Diffie-Hellman) protocol, and was built in its simplest configuration, against semi-honest adversaries. Subsequent scrutiny revealed the need for new security proofs, and, almost in parallel, efficient cryptanalytic attacks on SIDH emerged that compromised the security foundations of SIOT. The revised security proofs relate parameters of the OT functionality of SIOT to security assumptions on computational isogeny problems. We then analyse the security countermeasures of the M-SIDH (Masked Supersingular Isogeny Diffie-Hellman) protocol and incorporate them into SIOT. The result is an OT protocol based on elliptic-curve isogenies that resists quantum attacks.
Last updated:  2025-06-28
Arithmetic PCA for Encrypted Data
Jung Hee Cheon, Hyeongmin Choe, Saebyul Jung, Duhyeong Kim, Dahoon Lee, and Jai Hyun Park
Reducing the size of large dimensional data is a critical task in machine learning (ML) that often involves using principal component analysis (PCA). In privacy-preserving ML, data confidentiality is of utmost importance, and reducing data size is a crucial way to cut overall costs. This work focuses on minimizing the number of normalization processes in the PCA algorithm, which is a costly procedure in encrypted PCA. By modifying Krasulina's algorithm, non-polynomial operations were eliminated, except for a single delayed normalization at the end. Our PCA algorithm demonstrated similar performance to conventional PCA algorithms in face recognition applications. We also implemented it using the CKKS (Cheon-Kim-Kim-Song) homomorphic encryption scheme and obtained the first 6 principal components of a 128$\times$128 real matrix in 7.85 minutes using 8 threads.
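The benefit of a single delayed normalization can be seen already with plain power iteration, shown here instead of the paper's Krasulina-based algorithm, purely for intuition: division and square root are non-polynomial and therefore expensive under CKKS, so the loop body is kept polynomial and normalization happens exactly once at the end.

```python
import math

def top_eigvec_delayed(A, v, iters):
    """Power iteration with all normalizations deferred to one final step.
    The loop body uses only additions and multiplications, i.e. it is
    polynomial in the (conceptually encrypted) data; the single division
    and square root happen once, after the loop."""
    for _ in range(iters):
        v = [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]
    norm = math.sqrt(sum(x * x for x in v))   # single delayed normalization
    return [x / norm for x in v]

A = [[2.0, 0.0], [0.0, 1.0]]                  # top eigenvector is (1, 0)
v = top_eigvec_delayed(A, [1.0, 1.0], 20)
```

In practice the iterate's magnitude grows with the dominant eigenvalue, so a real encrypted implementation must also manage scale; the sketch ignores this and only shows where the non-polynomial operations can be confined.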
Last updated:  2025-06-27
End-to-End Encrypted Git Services
Ya-Nan Li, Yaqing Song, Qiang Tang, and Moti Yung
Git services such as GitHub have been widely used to manage projects and enable collaborations among multiple entities. Just as in messaging and cloud storage, where end-to-end security has been gaining increased attention, such a level of security is also demanded for Git services. Content in the repositories (and the data/code supply chain facilitated by Git services) can be highly valuable, whereas the threat of system breaches has become routine nowadays. However, existing studies of Git security to date (mostly open source projects) suffer in two ways: they provide only very weak security, and they incur a large overhead. In this paper, we initiate the needed study of efficient end-to-end encrypted Git services. Specifically, we formally define the syntax and critical security properties, and then propose two constructions that provably meet those properties. Moreover, our constructions have the important property of platform-compatibility: they are compatible with current Git servers and preserve all basic Git operations, and thus can be directly tested and deployed on top of existing platforms. Furthermore, the overhead we achieve is only proportional to the actual difference caused by each edit, instead of the whole file (or even the whole repository) as is the case with existing works. We implemented both constructions and tested them directly on several public GitHub repositories. Our evaluations show (1) the effectiveness of platform-compatibility, and (2) the significant efficiency improvement we achieve (while provably providing much stronger security than prior ad-hoc treatments).
Last updated:  2025-06-27
Stealth and Beyond: Attribute-Driven Accountability in Bitcoin Transactions
Alberto Maria Mongardini, Daniele Friolo, and Giuseppe Ateniese
Bitcoin enables decentralized, pseudonymous transactions, but balancing privacy with accountability remains a challenge. This paper introduces a novel dual accountability mechanism that enforces both sender and recipient compliance in Bitcoin transactions. Senders are restricted to spending Unspent Transaction Outputs (UTXOs) that meet specific criteria, while recipients must satisfy legal and ethical requirements before receiving funds. We enhance stealth addresses by integrating compliance attributes, preserving privacy while ensuring policy adherence. Our solution introduces a new cryptographic primitive, Identity-Based Matchmaking Signatures (IB-MSS), which supports streamlined auditing. Our approach is fully compatible with existing Bitcoin infrastructure and does not require changes to the core protocol, preserving both privacy and decentralization while enabling transaction auditing and compliance.
Last updated:  2025-06-27
Copy-Protection from UPO, Revisited
Prabhanjan Ananth, Amit Behera, and Zikuan Huang
Quantum copy-protection is a foundational notion in quantum cryptography that leverages the governing principles of quantum mechanics to tackle the problem of software anti-piracy. Despite progress in recent years, the class of functionalities that can be copy-protected is still not precisely characterized. Two recent works, by [Coladangelo and Gunn, STOC 2024] and [Ananth and Behera, CRYPTO 2024], showed that puncturable functionalities can be copy-protected. Both works have significant caveats with regard to the underlying cryptographic assumptions and additionally restrict the output length of the functionalities to be copy-protected. In this work, we make progress towards simultaneously addressing both caveats. We show the following: - Revisiting Unclonable Puncturable Obfuscation (UPO): We revisit the notion of UPO introduced by [Ananth and Behera, CRYPTO 2024]. We present a new approach to construct UPO and a variant of UPO, called independent-secure UPO. Unlike UPO, we show how to base the latter notion on well-studied assumptions. - Copy-Protection from Independent-secure UPO: Assuming independent-secure UPO, we show that any m-bit, for m ≥ 2, puncturable functionality can be copy-protected. - Copy-Protection from UPO: Assuming UPO, we show that any 1-bit puncturable functionality can be copy-protected. The security of copy-protection holds against identical challenge distributions.
Last updated:  2025-06-27
New Upper and Lower Bounds for Perfectly Secure MPC
Ivan Damgård, Shravani Patil, Arpita Patra, and Lawrence Roy
We consider perfectly secure MPC for $n$ players and $t$ malicious corruptions. We ask whether requiring only security with abort (rather than guaranteed output delivery, GOD) can help to achieve protocols with better resilience, communication complexity or round complexity. We show that for resilience and communication complexity, abort security does not help: one still needs $3t< n$ for a synchronous network and $4t< n$ in the asynchronous case. And, in both cases, a communication overhead of $O(n)$ bits per gate is necessary. When $O(n)$ overhead is inevitable, one can ask whether this overhead can be pushed to the preprocessing phase so that the online phase achieves $O(1)$ overhead. This result was recently achieved in the synchronous setting, in fact with the GOD guarantee. We show the same result in the asynchronous setting. This was previously open, since the main standard approach to getting constant overhead in a synchronous online phase fails in the asynchronous setting. In particular, this shows that we do not need to settle for abort security to get an asynchronous perfectly secure protocol with overheads $O(n)$ and $O(1)$. Lastly, in the synchronous setting, we show that perfectly secure MPC with abort requires only 2 rounds, in contrast to protocols with GOD that require 4 rounds.
Last updated:  2025-06-27
Generic Construction of Threshold Ring Signatures and Lattice-based Instantiations
Hao Lin, Mingqiang Wang, Weiqiang Wen, Shi-Feng Sun, and Kaitai Liang
A t-out-of-n threshold ring signature allows $t$ parties to jointly sign a message on behalf of $n$ parties without revealing the identities of the signers. In this paper, we introduce a new generic construction for threshold ring signatures, called GCTRS, which can be built on top of a selection of identification schemes, commitment schemes, and a new primitive called a t-out-of-n proof protocol, which is a special type of zero-knowledge proof. In general, our design enables a group of $t$ signers to first generate an aggregated signature by interacting with each other; they are then able to compute a t-out-of-n proof to convince the verifier that the aggregated signature is indeed produced by $t$ individuals among a particular set. The signature is succinct, as the final signature contains only one aggregated signature and one proof. We define all the properties required for the building blocks to capture the security of GCTRS and provide a detailed security proof. Furthermore, we propose two lattice-based instantiations of GCTRS, named LTRS and CTRS, respectively. Notably, the CTRS scheme is the first scheme that has a logarithmic signature size relative to the ring size. Additionally, during the instantiation process, we construct two t-out-of-n proof protocols, which may be of independent interest.
Last updated:  2025-06-27
Breaking The Authenticated Encryption scheme HiAE
Xichao Hu, Lin Jiao, Dengguo Feng, Yonglin Hao, Senpeng Wang, Yongqiang Li, and Xinxin Gong
HiAE is the fastest AEAD solution on ARM chips to date, utilizing AES round functions while also setting a new performance benchmark on the latest x86 processors. In this paper, we employ algebraic techniques to investigate the security of HiAE. Our findings reveal that HiAE is vulnerable. Firstly, we employ the meet-in-the-middle technique and guess-and-determine technique to recover the state and derive a key-related equation resulting from two layers of AES round functions. Secondly, by adopting an algebraic approach to study the properties of the round function, we decompose the equation into byte-level equations for divide-and-conquer. Finally, we utilize the guess-and-determine technique to recover the key. Collectively, these techniques enable us to present the first full key-recovery attack on HiAE. Our attack achieves a data complexity of $2^{130}$ and a time complexity of approximately $2^{209}$, leveraging both encryption and decryption oracles with a success probability of 1. In a single-key and nonce-respecting scenario, the attack fully recovers the 256-bit key, breaking the claimed 256-bit security against key-recovery attacks.
Last updated:  2025-06-27
t-Probing (In-)Security - Pitfalls on Noise Assumptions
Dina Hesse, Jakob Feldtkeller, Tim Güneysu, Julius Hermelink, Georg Land, Markus Krausz, and Jan Richter-Brockmann
The ongoing transition to post-quantum cryptography has led to a surge of research in side-channel countermeasures tailored to these schemes. A prominent method to prove security in the context of side-channel analysis is the utilization of the well-established t-probing model. However, recent studies by Hermelink et al. at CCS 2024 demonstrate a simple and practical attack on a provably secure implementation of the Fujisaki-Okamoto transform that raises concerns regarding the practical security of t-probing secure schemes. In this paper, we present an unsupervised single-trace side-channel attack on a tenth order masked implementation of fixed-weight polynomial sampling, which has also been proven to be secure in the t-probing model. Both attacks reveal a mismatch between the correct, well-understood theory of the t-probing model and its practical application, since the security proofs are valid, yet the attacks still succeed at high noise levels. Therefore, we take a closer look at the underlying causes and the assumptions that are made for transferring t-probing security to practice. In particular, we investigate the amount of noise required for this transfer. We find that, depending on the design decisions made, this can be very high and difficult to achieve. Consequently, we examine the factors impacting the required amount of noise and that should be considered for practically secure implementations. In particular, non-uniformly distributed shares - a setting that is increasingly encountered in post-quantum cryptographic algorithms - could lead to an increased noise requirement, and thus it could reduce the security level of the masking scheme. Our analysis then allows us to provide practical guidelines for implementation designers, thereby facilitating the development of practically secure designs.
Last updated:  2025-06-27
Securely Computing One-Sided Matching Markets
James Hsin-Yu Chiang, Ivan Damgård, Claudio Orlandi, Mahak Pancholi, and Mark Simkin
Top trading cycles (TTC) is a famous algorithm for trading indivisible goods between a set of agents such that all agents are as happy as possible about the outcome. In this paper, we present a protocol for executing TTC in a privacy preserving way. To the best of our knowledge, it is the first of its kind. As a technical contribution of independent interest, we suggest a new algorithm for determining all nodes in a functional graph that are on a cycle. The algorithm is particularly well suited for secure implementation in that it requires no branching and no random memory access. Finally, we report on a prototype implementation of the protocol based on somewhat homomorphic encryption.
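One classical observation underlies branch-free cycle detection here: in a functional graph on $n$ nodes every tail has length less than $n$, so the image of the iterate $f^m$ for any $m \geq n$ is exactly the set of on-cycle nodes, and such an iterate can be computed with $\lceil\log_2 n\rceil$ data-independent self-compositions (pointer doubling). A plaintext sketch of that combinatorial fact only; the paper's secure algorithm and its memory-access pattern are different:

```python
def cycle_nodes(f):
    """Nodes of the functional graph i -> f[i] that lie on a cycle.
    Computes g = f^(2^k) with 2^k >= n by repeated self-composition;
    since every tail is shorter than n, the image of g is exactly the
    set of on-cycle nodes (f permutes the cycle nodes, so each of them
    stays in the image)."""
    n = len(f)
    g = list(f)
    steps = 1
    while steps < n:
        g = [g[g[i]] for i in range(n)]   # g := g o g
        steps *= 2
    return set(g)

# 0 -> 1 -> 2 -> 0 is a 3-cycle; 4 -> 3 -> 2 feeds into it; 5 is a fixed point.
f = [1, 2, 0, 2, 3, 5]
```

The fixed, data-independent schedule of compositions is what makes this structure attractive for secure computation, where branching on secret data must be avoided.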
Last updated:  2025-06-27
BitBatSPIR: Efficient Batch Symmetric Private Information Retrieval from PSI
Shuaishuai Li, Liqiang Peng, Weiran Liu, Cong Zhang, Zhen Gu, and Dongdai Lin
Private Information Retrieval (PIR) allows a client to retrieve an entry from a database held by a server without leaking which entry is being requested. Symmetric PIR (SPIR) is a stronger variant of PIR with database privacy so that the client knows nothing about the database other than the retrieved entry. This work studies SPIR in the batch setting (BatchSPIR), where the client wants to retrieve multiple entries. In particular, we focus on the case of bit entries, which has important real-world applications. We set up the connection between bit-entry information retrieval and set operation, and propose a black-box construction of BatchSPIR from Private Set Intersection (PSI). By applying an efficient PSI protocol with asymmetric set sizes, we obtain our BatchSPIR protocol named $\mathsf{BitBatSPIR}$. We also introduce several optimizations for the underlying PSI. These optimizations improve the efficiency of our concrete BatchSPIR construction as well as the PSI protocol. We implement $\mathsf{BitBatSPIR}$ and compare the performance with the state-of-the-art PIR protocol in the batch setting. Our experimental results show that $\mathsf{BitBatSPIR}$ not only achieves a stronger security guarantee (symmetric privacy) but also has a better performance for large databases, especially in the Wide Area Network (WAN) setting.
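The connection between bit-entry retrieval and set operations can be stated in a few lines of plaintext Python: represent the database as the set $S$ of indices holding a 1; then a batch of bit lookups at indices $Q$ is answered by the intersection $Q \cap S$, and a PSI protocol computing that intersection obliviously yields the BatchSPIR functionality. A sketch of this reduction only, with no privacy:

```python
def batch_bit_retrieval(db_bits, queries):
    """Answer a batch of bit lookups via a set intersection:
    with S = indices of 1-bits, db[i] == 1 iff i is in (queries ∩ S).
    Replacing the plaintext intersection with a PSI protocol (with
    asymmetric set sizes) gives the private BatchSPIR functionality."""
    S = {i for i, b in enumerate(db_bits) if b == 1}
    hits = set(queries) & S
    return {i: int(i in hits) for i in queries}

db = [0, 1, 1, 0, 1, 0, 0, 1]
ans = batch_bit_retrieval(db, [0, 2, 7])
```

Symmetric privacy corresponds to the PSI guarantee that the client learns nothing about $S$ beyond the intersection, i.e. nothing about database bits it did not query.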
Last updated:  2025-06-27
Extending Groth16 for Disjunctive Statements
Xudong Zhu, Xinxuan Zhang, Xuyang Song, Yi Deng, Yuanju Wei, and Liuyu Yang
The two most common ways to design non-interactive zero-knowledge (NIZK) proofs are based on Sigma ($\Sigma$)-protocols (an efficient way to prove algebraic statements) and zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARK) protocols (an efficient way to prove arithmetic statements). However, in cryptocurrency applications such as privacy-preserving credentials, privacy-preserving audits, and blockchain-based voting systems, the zk-SNARKs for general statements are usually implemented with encryption, commitment, or other algebraic cryptographic schemes. Moreover, zk-SNARKs for many different arithmetic statements may also be required to be implemented together. A typical solution is to extend the zk-SNARK circuit to include the code for the algebraic part. However, complex cryptographic operations in the algebraic algorithms significantly increase the circuit size, which leads to impractically large proving time and CRS size. Thus, we need a sufficiently flexible proof system for composite statements that include both algebraic and arithmetic components. Unfortunately, while the conjunction of zk-SNARKs is relatively natural and numerous effective solutions are currently available (e.g., by utilizing the commit-and-prove technique), the disjunction of zk-SNARKs is rarely discussed in detail. In this paper, we focus on disjunctive statements for Groth16, and we propose a Groth16 variant---CompGroth16, which provides a framework for Groth16 to prove disjunctive statements that consist of a mix of algebraic and arithmetic components. Specifically, we can directly combine CompGroth16 with a $\Sigma$-protocol, or even CompGroth16 with CompGroth16, just like the logical composition of $\Sigma$-protocols. From this, we gain many good properties, such as broader expressiveness, better prover efficiency and a shorter CRS.
In addition, for the combination of CompGroth16 and $\Sigma$-protocol, we also present two representative application scenarios to demonstrate the practicality of our construction.
Last updated:  2025-06-27
HypSCA: A Hyperbolic Embedding Method for Enhanced Side-channel Attack
Kaibin Li, Yihuai Liang, Zhengchun Zhou, and Shui Yu
Deep learning-based side-channel attack (DLSCA) has become the dominant paradigm for extracting sensitive information from hardware implementations due to its ability to learn discriminative features directly from raw side-channel traces. A common design choice in DLSCA involves embedding traces in Euclidean space, where the underlying geometry supports conventional objectives such as classification or contrastive learning. However, Euclidean space is fundamentally limited in capturing the multi-level hierarchical structure of side-channel traces, which often exhibit both coarse-grained clustering patterns (e.g., Hamming weight similarities) and fine-grained distinctions (e.g., instruction-level variations). These limitations adversely affect the discriminability and generalization of learned representations, particularly across diverse datasets and leakage models. In this work, we propose HypSCA, a dual-space representation learning method that embeds traces in hyperbolic space to exploit its natural ability to model hierarchical relationships through exponential volume growth. In contrast to existing approaches, HypSCA jointly combines hyperbolic structure modeling with local discriminative learning in Euclidean space, enabling the preservation of global hierarchies while enhancing fine-grained feature separation. Extensive experiments on multiple public datasets demonstrate that HypSCA achieves up to 51.6% improvement in attack performance over state-of-the-art DLSCA methods, consistently enhancing generalization across diverse datasets and leakage models.
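The hyperbolic geometry referred to is typically the Poincaré ball model, whose distances grow without bound toward the boundary; that exponential volume growth is what lets tree-like hierarchies embed with low distortion. A minimal distance function for intuition (the choice of model is our assumption; the abstract does not specify the paper's exact embedding):

```python
import math

def poincare_dist(u, v):
    """Geodesic distance in the Poincare ball model of hyperbolic space:
    d(u, v) = arcosh(1 + 2*||u-v||^2 / ((1-||u||^2)(1-||v||^2)))."""
    du = sum(x * x for x in u)
    dv = sum(x * x for x in v)
    duv = sum((x - y) ** 2 for x, y in zip(u, v))
    return math.acosh(1 + 2 * duv / ((1 - du) * (1 - dv)))

# From the origin, d((0,0), (r,0)) = 2*artanh(r); points near the boundary
# are far apart even when Euclidean-close, leaving room for fine-grained
# distinctions deep in a hierarchy.
d_origin = poincare_dist((0.0, 0.0), (0.5, 0.0))   # equals 2*artanh(0.5) = ln 3
```

Coarse clusters (e.g. Hamming-weight classes) can sit near the origin while fine-grained variants spread toward the boundary, which is the hierarchy-preserving behavior HypSCA exploits.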
Last updated:  2025-06-27
Hydrangea: Optimistic Two-Round Partial Synchrony
Nibesh Shrestha, Aniket Kate, and Kartik Nayak
We introduce Hydrangea, a partially synchronous Byzantine fault-tolerant state machine replication protocol that offers strong fault tolerance and achieves a fast, two-round commit in optimistic scenarios. Specifically, for a system of $n = 3f + 2c + k + 1$ parties, Hydrangea achieves an optimistic good-case latency of two rounds when the number of faulty parties (Byzantine or crash) is at most $p = \lfloor \frac{c+k}{2} \rfloor$ for a parameter $k \geq 0$. In more adversarial settings with up to $f$ Byzantine faults and $c$ crash faults, Hydrangea obtains a good-case latency of three rounds. Furthermore, we prove a matching lower bound: no protocol can achieve two-round optimistic commit under this fault model if $p > \lfloor \frac{c+k+2}{2} \rfloor$.
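The parameter relations in the abstract are easy to sanity-check numerically; the concrete values below are invented for illustration:

```python
def hydrangea_params(f, c, k):
    """Party count and optimistic fault budget from the abstract:
    n = 3f + 2c + k + 1 parties, with two-round commits tolerated while
    the number of faulty parties is at most p = floor((c + k) / 2)."""
    n = 3 * f + 2 * c + k + 1
    p = (c + k) // 2
    return n, p

# Example: tolerate f = 1 Byzantine and c = 2 crash faults with slack k = 1.
n, p = hydrangea_params(f=1, c=2, k=1)   # n = 9 parties, p = 1
```

Setting $c = k = 0$ recovers the familiar $n = 3f + 1$ bound of partially synchronous BFT, with no two-round optimistic budget ($p = 0$).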
Last updated:  2025-06-26
SubLogarithmic Linear Time SNARKs from Compressed Sum-Check
Nitin Singh and Sikhar Patranabis
We leverage recently proposed multilinear polynomial commitment schemes with linear-time prover and constant proof size to reduce the communication complexity of the classical sum-check protocol for multivariate polynomials. Specifically, we consider degree $d$ multivariate polynomials in $\mu$ variables which can be decomposed into $\ell$ multilinear polynomials. We exhibit a new multivariate sum-check protocol with $O(\ell + d\log \log n)$ communication for $n = 2^\mu$. Our protocol retains the $O(n)$ prover cost (where the precise constant depends on $\ell$, $d$ and the multivariate form). Thus we improve on the $O(\log n)$ communication inherent in all applications of the existing multivariate sum-check protocol. Multivariate sum-check is a key ingredient in the design of several prover-efficient SNARKs, such as HyperPlonk (EUROCRYPT 2023), Spartan (CRYPTO 2020), Hyrax (IEEE S&P 2018), Libra (CRYPTO 2019), Gemini (EUROCRYPT 2022), and Virgo (S&P 2020). All of these SNARKs incur at least $O(\log n)$ proof size, with the smallest concrete proof size being $\approx 7$ KB for circuits of size $2^{25}$. Our improved multivariate sum-check protocol improves the proof size of all of the above SNARKs while retaining the $O(n)$ prover cost. In particular, plugging our sum-check protocol into the HyperPlonk multilinear PIOP yields $\mathsf{HybridPlonk}$ -- the first SNARK that simultaneously achieves $O(n)$ prover, sublogarithmic proof size of $O(\log\log n)$, and $O(\log n)$ verifier. Concretely, the proof size of $\mathsf{HybridPlonk}$ is about 2 KB for circuit sizes up to $2^{30}$. We note that SNARKs with smaller proof size than $\mathsf{HybridPlonk}$ are based on univariate polynomials, and are not prover-efficient as they inherently incur $O(n\log n)$ prover cost due to polynomial multiplications. Moreover, $\mathsf{HybridPlonk}$ avoids proof recursion techniques and non-black-box usage of cryptographic primitives. 
We believe that our improved multivariate sum-check protocol is of independent interest, and could have applications beyond SNARKs.
Last updated:  2025-06-26
How to Copy-Protect All Puncturable Functionalities Without Conjectures: A Unified Solution to Quantum Protection
Alper Çakan and Vipul Goyal
Quantum copy-protection (Aaronson, CCC'09) is the problem of encoding a functionality/key into a quantum state to achieve an anti-piracy security notion that guarantees that the key cannot be split into two keys that both still work. Most works so far have focused on constructing copy-protection for specific functionalities. The only exceptions are the work of Aaronson, Liu, Liu, Zhandry, Zhang (CRYPTO'21) and Ananth and Behera (CRYPTO'24). The former constructs copy-protection for all functionalities in the classical oracle model and the latter constructs copy-protection for all circuits that can be punctured at a uniformly random point with negligible security, assuming a new unproven conjecture about simultaneous extraction from entangled quantum adversaries, on top of assuming subexponentially-secure indistinguishability obfuscation (iO) and hardness of Learning with Errors (LWE). In this work, we show that the construction of Aaronson et al. (CRYPTO'21), when the oracles are instantiated with iO, satisfies copy-protection security in the plain model for all cryptographically puncturable functionalities (instead of only puncturable circuits) with arbitrary success threshold (e.g. we get CPA-style security rather than unpredictability for encryption schemes), without any unproven conjectures, assuming only subexponentially secure iO and one-way functions (we do not assume LWE). Thus, our work resolves the five-year-old open question of Aaronson et al., and further, our work encompasses/supersedes and significantly improves upon all existing plain-model copy-protection results. Since puncturability has a long history of being studied in cryptography, our result immediately allows us to obtain copy-protection schemes for a large set of advanced functionalities for which no previous copy-protection scheme existed. 
Further, even for any functionality F that has not already been considered, through our result, constructing copy-protection for F essentially becomes a classical cryptographer's problem. Going further, we show that our scheme also satisfies secure leasing (Ananth and La Placa, EUROCRYPT'21), unbounded/LOCC leakage-resilience and intrusion-detection security (Cakan, Goyal, Liu-Zhang, Ribeiro, TCC'24), giving a unified solution to the problem of quantum protection.
Last updated:  2025-06-26
Limits on the Power of Private Constrained PRFs
Mengda Bi, Chenxin Dai, and Yaohua Ma
Private constrained PRFs (PCPRFs) are constrained PRFs where the constrained key hides information about the predicate circuit. Although there are many constructions and applications of PCPRFs, their relationship to basic cryptographic primitives, such as one-way functions and public-key encryption, has been unclear. For example, we don't know whether one-way functions imply PCPRFs for general predicates, nor do we know whether 1-key secure PCPRFs for all polynomial-sized predicates imply public-key primitives such as public-key encryption and secret-key agreement. In this work, we prove a black-box separation between a 1-key secure PCPRF for any predicate and secret-key agreement, which is the first black-box separation result about PCPRFs. Specifically, we prove that there exists an oracle relative to which 1-key secure PCPRFs exist while secret-key agreement does not. Our proof is based on the simulation-based technique proposed by Impagliazzo and Rudich (STOC '89). The main technical challenge in generalizing the simulation-based technique to PCPRFs is the issue of \textit{unfaithfulness} of Eve's simulation to the real world, because our oracle is more complicated than a random oracle. We introduce a new technique which we call the ``weighting" technique and show how to leverage it to circumvent the issue of unfaithfulness in the proof framework of Impagliazzo and Rudich.
Last updated:  2025-06-26
A Theoretical Take on a Practical Consensus Protocol
Victor Shoup
The Asynchronous Common Subset (ACS) problem is a fundamental problem in distributed computing. Very recently, Das et al. (2024) developed a new ACS protocol with several desirable properties: (i) it provides optimal resilience, tolerating up to $t < n/3$ corrupt parties out of $n$ parties in total, (ii) it does not rely on a trusted setup, (iii) it utilizes only "lightweight" cryptography, which can be instantiated using just a hash function, and (iv) it has expected round complexity $O(1)$ and expected communication complexity $O(\kappa n^3)$, where $\kappa$ is the output-length of the hash function. The purpose of this paper is to give a detailed, self-contained exposition and analysis of this protocol from the point of view of modern theoretical cryptography, fleshing out a number of details of the definitions and proofs, providing a complete security analysis based on concrete security assumptions on the hash function (i.e., without relying on random oracles), and developing all of the underlying theory in the universal composability framework.
Last updated:  2025-06-26
Drifting Towards Better Error Probabilities in Fully Homomorphic Encryption Schemes
Olivier Bernard, Marc Joye, Nigel P. Smart, and Michael Walter
There are two security notions for FHE schemes: the traditional notion of IND-CPA, and a more stringent notion of IND-CPA$^D$. The notions are equivalent if the FHE schemes are perfectly correct; however, for schemes with negligible failure probability, the FHE parameters needed to obtain IND-CPA$^D$ security can be much larger than those needed to obtain IND-CPA security. This paper uses the notion of ciphertext drift in order to understand the practical difference between IND-CPA and IND-CPA$^D$ security in schemes such as FHEW, TFHE, and FINAL. This notion allows us to define a modulus switching operation (the main culprit for the difference in parameters) such that one does not require adapting IND-CPA cryptographic parameters to meet the IND-CPA$^D$ security level. Further, the extra cost incurred by the new techniques has no noticeable performance impact in practical applications. The paper also formally defines a stronger version of IND-CPA$^D$ security called sIND-CPA$^D$, which is proved to be strictly separated from the IND-CPA$^D$ notion. A criterion for turning an IND-CPA$^D$-secure public-key encryption scheme into an sIND-CPA$^D$-secure one is also provided.
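As a plaintext-arithmetic illustration of why modulus switching is "the main culprit": rescaling a coefficient from a large modulus Q down to a small modulus q necessarily rounds it, and such per-coefficient rounding errors are the kind of noise that accumulates as ciphertext drift (a toy sketch with made-up parameters, not the paper's algorithm):

```python
# Toy modulus switching: rescale a coefficient from modulus Q to q.
# The rounding error after scaling back up, at most Q/(2q) per
# coefficient, models the noise that accumulates as "drift".
Q, q = 2**20, 2**10

def mod_switch(c):
    return round(c * q / Q) % q

# Away from the wrap-around at Q, the round trip loses at most Q/(2q).
for c in [0, 123456, 654321, 999999]:
    c_back = mod_switch(c) * Q // q
    assert abs(c - c_back) <= Q // (2 * q)
```

Larger parameters (bigger Q relative to q) shrink the relative error, which is one intuition for why closing the IND-CPA vs. IND-CPA$^D$ gap naively inflates parameter sizes.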
Last updated:  2025-06-26
Breaking Parallel ROS: Implication for Isogeny and Lattice-based Blind Signatures
Shuichi Katsumata, Yi-Fu Lai, and Michael Reichle
Many of the three-round blind signatures based on identification protocols are only proven to be $\ell$-concurrently unforgeable for $\ell = \mathsf{polylog}(\lambda)$. It was only recently shown in a seminal work by Benhamouda et al. (EUROCRYPT'21) that this is not just a limitation of the proof technique. They proposed an elegant polynomial time attack against the $\ell$-concurrent unforgeability of the classical blind Schnorr protocol for $\ell = \mathsf{poly}(\lambda)$. However, there are still many blind signatures following a similar recipe to blind Schnorr where the attack by Benhamouda et al. does not apply. This includes for instance the isogeny-based blind signature CSI-Otter by Katsumata et al. (CRYPTO'23), and the lattice-based blind signatures Blaze+ by Alkeilani et al. (ACISP'20) and BlindOR by Alkeilani et al. (CANS'20). In this work, we provide a simple and novel attack on blind signatures based on identification protocols performing parallel repetition to reduce the soundness error. Our attack translates to a polynomial time break for the $\ell$-concurrent unforgeability of CSI-Otter, Blaze+, and BlindOR for $\ell = \mathsf{poly}(\lambda)$. More formally, we define an intermediate problem called the Parallel Random inhomogeneities in an Overdetermined Solvable system of linear equations (pROS) problem and show that an attack against pROS implies an attack on the above blind signatures. One takeaway of our finding is that while parallel repetition allows one to exponentially reduce the soundness error of an identification protocol, this has minimal effect on the resulting blind signature. Our attack is concretely very efficient and, for instance, breaks $4$-concurrent unforgeability of CSI-Otter in time roughly $2^{34}$ hash computations.
Last updated:  2025-06-26
XBOOT: Free-XOR Gates for CKKS with Applications to Transciphering
Chao Niu, Zhicong Huang, Zhaomin Yang, Yi Chen, Liang Kong, Cheng Hong, and Tao Wei
The CKKS scheme is traditionally recognized for approximate homomorphic encryption of real numbers, but BLEACH (Drucker et al., JoC 2024) extends its capabilities to handle exact computations on binary or small integer numbers. Despite this advancement, BLEACH's approach of simulating XOR gates via $(a-b)^2$ incurs one multiplication per gate, which is computationally expensive in homomorphic encryption. To address this, we introduce XBOOT, a new framework built upon BLEACH's blueprint but allowing for almost free evaluation of XOR gates. The core concept of XBOOT involves lazy reduction, where XOR operations are simulated with the less costly addition operation, $a+b$, leaving the management of potential overflows to later stages. We carefully handle the modulus chain and scale factors to ensure that the overflows would be conveniently rounded during the CKKS bootstrapping phase without extra cost. We use AES-CKKS transciphering as a benchmark to test the capability of XBOOT, and achieve a throughput exceeding one kilobyte per second, which represents a $2.5\times$ improvement over the state-of-the-art (Aharoni et al., HES 2023). Moreover, XBOOT enables the practical execution of tasks with extensive XOR operations that were previously challenging for CKKS. For example, we can do Rasta-CKKS transciphering at over two kilobytes per second, more than $10\times$ faster than the baseline without XBOOT.
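The cost difference between the two XOR encodings can be seen already at the plaintext level (our own sketch; no homomorphic encryption involved): BLEACH's $(a-b)^2$ spends a multiplication per gate, while lazy addition defers a single mod-2 reduction to the end, as XBOOT does during bootstrapping.

```python
from functools import reduce

def xor_bleach(a, b):
    """BLEACH-style XOR on bits: one (homomorphic) multiplication per gate."""
    return (a - b) ** 2

def xor_lazy(bits):
    """XBOOT-style lazy XOR: cheap additions only, with one overflow
    reduction at the end (absorbed into CKKS bootstrapping in the paper)."""
    return sum(bits) % 2

# Both encodings compute the same XOR chain on cleartext bits.
for bits in [(0, 1, 1, 0, 1), (1, 1, 1), (0, 0)]:
    assert reduce(xor_bleach, bits) == xor_lazy(bits)
```

A chain of $k$ XORs thus drops from $k-1$ multiplications to $k-1$ additions plus one deferred reduction.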
Last updated:  2025-06-26
On symbolic computations and Post Quantum Cryptography with Lie Geometries.
Vasyl Ustimenko
Assume that the global density of a multivariate map over a commutative ring is the total number of its coefficients. In the case of a finite commutative ring K with multiplicative group K* containing more than 2 elements, we suggest multivariate public keys in n variables with a public rule of global density O(n) and degree O(1). Other public keys use a public rule of global density O(n) and degree O(n), together with the space of plaintexts (K*)^n and the space of ciphertexts K^n. We consider examples of protocols of Noncommutative Cryptography implemented on the platform of endomorphisms, which allow the conversion of the multivariate public keys mentioned above into protocol-based cryptosystems of El Gamal type. The cryptosystems and protocols are designed in terms of analogues of geometries of Chevalley groups over commutative rings and their temporal versions.
Last updated:  2025-06-26
Private coins extension with verifiable encryption
Oleg Fomenko
This paper introduces a protocol for verifiable encryption of values committed using Pedersen commitments. It enables a recipient to decrypt the hidden amount while proving its consistency with the original commitment, without revealing the value publicly. The construction combines symmetric encryption with zero-knowledge proofs and is made non-interactive via the Fiat-Shamir heuristic. The protocol is particularly useful in blockchain settings where confidential but verifiable value transfers are required.
Last updated:  2025-06-26
Non-Homomorphic Key Blinding from Symmetric Primitives
Thomas Bellebaum
Key Blinding Signature Schemes allow one to derive so-called blinded keys from public keys, which can be used to verify signatures created with the secret key. At the same time, neither the blinded keys nor their signatures disclose from which public key they were derived, effectively implementing pseudonyms for one’s identity. In search of conservative schemes, we deviate from the homomorphism-based re-randomization approach in favor of a novel proof-of-knowledge-based approach. To authenticate a message, a signer proves that they know an original keypair and a valid way to commit to the corresponding verification key to derive a given blinded key. We provide a framework for such constructions and indicate how MPC-friendly block ciphers and one-way functions may be used for efficient instantiations. While the general framework’s security arguments are stated in the random oracle model, we show a natural instantiation approach whose security can be based on collision-resistance and pseudorandomness instead. The result is the first standard model construction of key blinding. Using our framework, we identify a shortcoming in the usual definition of unlinkability for key blinding signature schemes, which we rectify by considering an additional notion called targeted unlinkability.
Last updated:  2025-06-26
PrivacyGo: Privacy-Preserving Ad Measurement with Multidimensional Intersection
Jian Du, Haohao Qian, Shikun Zhang, Wen-jie Lu, Donghang Lu, Yongchuan Niu, Bo Jiang, Yongjun Zhao, and Qiang Yan
In digital advertising, accurate measurement is essential for optimizing ad performance, requiring collaboration between advertisers and publishers to compute aggregate statistics, such as total conversions, while preserving user privacy. Traditional secure two-party computation methods allow joint computation on single-identifier data without revealing raw inputs, but they fall short when multidimensional matching is needed and leak the intersection size, exposing sensitive information to privacy attacks. This paper tackles the challenging and practical problem of multi-identifier private user profile matching for privacy-preserving ad measurement, a cornerstone of modern advertising analytics. We introduce a comprehensive cryptographic framework leveraging reversed Oblivious Pseudorandom Functions (OPRF) and novel blind key rotation techniques to support secure matching across multiple identifiers. Our design prevents cross-identifier linkages and includes a differentially private mechanism to obfuscate intersection sizes, mitigating risks such as membership inference attacks. We present a concrete construction of our protocol that achieves both strong privacy guarantees and high efficiency. It scales to large datasets, offering a practical and scalable solution for privacy-centric applications like secure ad conversion tracking. By combining rigorous cryptographic principles with differential privacy, our work addresses a critical need in the advertising industry, setting a new standard for privacy-preserving ad measurement frameworks.
Last updated:  2025-06-26
Khatam: Reducing the Communication Complexity of Code-Based SNARKs
Hadas Zeilberger
Two techniques have recently emerged in the construction of Succinct Non-Interactive Arguments of Knowledge (SNARKs) that yield extremely fast provers; The use of multilinear (instead of univariate) polynomial commitment schemes (PCS) and the construction of PCS from error-correcting codes. Recently, BaseFold (Crypto 2024) introduced a family of PCS that combine these two techniques, thereby achieving a better trade-off between prover time and verifier costs than the state of the art. Despite its impressive overall efficiency, BaseFold suffered from larger proof sizes than its univariate counterparts, due to unproven claims about linear codes, which were not relevant in the univariate setting. This work closes this gap by proving a new fact about linear codes -- that if $\pi_L, \pi_R$ are two vectors in $\mathbb{F}^{n}$ and if $\pi_L + r \pi_R$ is close to a codeword in $C$, then $\pi_L, \pi_R$ and $(\pi_L + r \pi_R)$ all agree with codewords at positions in the same set $S \subset [n]$, except with negligible probability over $r \leftarrow \mathbb{F}$. Our result holds as long as $|S| > (1 - \Delta_C + \epsilon)^{1/3} + \eta$, for $\epsilon, \eta \in [0,1]$ and with failure probability smaller than $\frac{1}{\epsilon\eta |\mathbb{F}|}$, where $\Delta_C$ is the minimum distance of the code. Furthermore, our results extend to any finite field and any linear code.
Last updated:  2025-06-26
Rhombus: Fast Homomorphic Matrix-Vector Multiplication for Secure Two-Party Inference
Jiaxing He, Kang Yang, Guofeng Tang, Zhangjie Huang, Li Lin, Changzheng Wei, Ying Yan, and Wei Wang
We present $\textit{Rhombus}$, a new secure matrix-vector multiplication (MVM) protocol in the semi-honest two-party setting, which is able to be seamlessly integrated into existing privacy-preserving machine learning (PPML) frameworks and serve as the basis of secure computation in linear layers. $\textit{Rhombus}$ adopts RLWE-based homomorphic encryption (HE) with coefficient encoding, which allows messages to be chosen from not only a field $\mathbb{F}_p$ but also a ring $\mathbb{Z}_{2^\ell}$, where the latter supports faster computation in non-linear layers. To achieve better efficiency, we develop an input-output packing technique that reduces the communication cost incurred by HE with coefficient encoding by about $21\times$, and propose a split-point picking technique that reduces the number of rotations to that sublinear in the matrix dimension. Compared to the recent protocol $\textit{HELiKs}$ by Balla and Koushanfar (CCS'23), our implementation demonstrates that $\textit{Rhombus}$ improves the whole performance of an MVM protocol by a factor of $7.4\times \sim 8\times$, and improves the end-to-end performance of secure two-party inference of ResNet50 by a factor of $4.6\times \sim 18\times$.
Last updated:  2025-06-26
Rapidash: Atomic Swaps Secure under User-Miner Collusion
Hao Chung, Elisaweta Masserova, Elaine Shi, and Sri AravindaKrishnan Thyagarajan
Cross-chain trading is fundamental to blockchains and Decentralized Finance (DeFi). A way to achieve such trading in a truly decentralized manner, i.e., without trusted third parties, is by using atomic swaps. However, recent works revealed that the Hashed Time-Lock Contract (HTLC), a key building block of existing atomic swaps, is entirely insecure in the presence of user-miner collusion. Specifically, a user can bribe the miners of the blockchain to help it cheat. In this work, we give the first and rigorous formal treatment of fair trading on blockchains, where users and miners may enter arbitrary binding contracts on the side. We propose Rapidash, a new atomic swap protocol, and prove its incentive-compatibility in the presence of user-miner collusion. Specifically, we show that Rapidash satisfies a coalition-resistant Nash equilibrium absent external incentives. We give instantiations of Rapidash that are compatible with Bitcoin and Ethereum, and incur only minimal overheads in terms of costs for the users.
Last updated:  2025-06-25
A Polynomial Public-Key Cryptosystem Based on Jacobian-Preserving Composition
Saimon Ahmed
We propose a public-key cryptosystem based on Jacobian-preserving polynomial compositions, using algebraically invertible polynomial maps whose composition is hard to invert. The construction utilizes polynomial maps over $\mathbb{Z}_p$, where $p$ is a prime number, with Jacobian determinant equal to 1 to ensure invertibility. The public key function $H : \mathbb{Z}_p^n \to \mathbb{Z}_p^n$ is defined as the composition of invertible polynomial maps $f_1, f_2, \dots, f_k$, each with Jacobian determinant 1, while the private key consists of the individual components used in the composition. Although the composition is invertible in principle, inverting it without knowledge of the factors is computationally infeasible. This system incorporates both triangular and affine polynomial maps. We discuss the construction, provide formal correctness proofs, analyze hardness assumptions, and present a Python-based prototype with benchmark results.
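A minimal toy of the idea (our own illustration, not the paper's prototype; the specific maps and modulus are made up): composing a unit-Jacobian triangular map with a determinant-1 affine map yields a map that is easy to invert given the factors, since each factor inverts trivially.

```python
# Toy Jacobian-preserving composition over Z_p (illustrative only).
p = 101

def tri(v):            # triangular map (x, y) -> (x, y + g(x)): Jacobian det = 1
    x, y = v
    return (x, (y + 3 * x * x + 7) % p)

def tri_inv(v):        # invert by subtracting g(x) back off
    x, y = v
    return (x, (y - 3 * x * x - 7) % p)

def aff(v):            # affine map with matrix [[1, 1], [0, 1]], det = 1
    x, y = v
    return ((x + y) % p, y)

def aff_inv(v):
    x, y = v
    return ((x - y) % p, y)

def H(v):              # "public" map: composition of the private pieces
    return aff(tri(v))

def H_inv(v):          # inversion requires knowing the factors and their order
    return tri_inv(aff_inv(v))

assert H_inv(H((5, 42))) == (5, 42)
```

The hardness assumption in the paper is that recovering a usable factorization of $H$ from its (expanded) public coefficients alone is infeasible; this sketch only shows the easy direction.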
Last updated:  2025-06-25
Towards AI-driven Optimization of Robust Probing Model-compliant Masked Hardware Gadgets Using Evolutionary Algorithms
David S. Koblah, Dev M. Mehta, Mohammad Hashemi, Fatemeh Ganji, and Domenic Forte
Side-channel analysis (SCA) is a persistent threat to security-critical systems, enabling attackers to exploit information leakage. To mitigate its harmful impacts, masking serves as a provably secure countermeasure that performs computing on random shares of secret values. As masking complexity, required effort, and cost increase dramatically with design complexity, recent techniques rely on designing and implementing smaller building blocks, so-called “gadgets.” Existing work on optimizing gadgets has primarily focused on latency, area, and power as their objectives. To the best of our knowledge, the most up-to-date ASIC-specific masking gadget optimization frameworks require significant manual effort. This paper is inspired by previous work introducing open-source academic tools to leverage aspects of artificial intelligence (AI) in electronic design automation (EDA) to attempt to optimize and enhance existing gadgets and overall designs. We concentrate on evolutionary algorithms (EA), optimization techniques inspired by biological evolution and natural selection, to find optimal or near-optimal solutions. In this regard, our goal is to improve gadgets in terms of power and area metrics. The primary objective is to demonstrate the effectiveness of our methods by integrating compatible gates from a technology library to generate an optimized and functional design without compromising security. Our results show a significant reduction in power consumption and promising area improvements, with values reduced by 15% in some cases, compared to the naïve synthesis of masked designs. We evaluate our results using industry-standard synthesis and pre-silicon side-channel verification tools.
Last updated:  2025-06-25
Anamorphic Encryption, Revisited
Fabio Banfi, Konstantin Gegier, Martin Hirt, Ueli Maurer, and Guilherme Rito
An anamorphic encryption scheme allows two parties who share a so-called double key to embed covert messages in ciphertexts of an established PKE scheme. This protects against a dictator that can force the receiver to reveal the secret keys for the PKE scheme, but who is oblivious about the existence of the double key. We identify two limitations of the original model by Persiano, Phan, and Yung (EUROCRYPT 2022). First, in their definition a double key can only be generated once, together with a key-pair. This has the drawback that a receiver who wants to use the anamorphic mode after a dictator comes to power, needs to deploy a new key-pair, a potentially suspicious act. Second, a receiver cannot distinguish whether or not a ciphertext contains a covert message. In this work we propose a new model that overcomes these limitations. First, we allow to associate multiple double keys to a key-pair, after its deployment. This also enables deniability in case the double key only depends on the public key. Second, we propose a natural robustness notion, which guarantees that anamorphically decrypting a regularly encrypted message results in a special symbol indicating that no covert message is contained, which also eliminates certain attacks. Finally, to instantiate our new, stronger definition of anamorphic encryption, we provide generic and concrete constructions. Concretely, we show that ElGamal and Cramer-Shoup satisfy a new condition, selective randomness recoverability, which enables robust anamorphic extensions, and we also provide a robust anamorphic extension for RSA-OAEP.
Last updated:  2025-06-25
Outsourced Cloud Data Privacy-Preserving Framework: An Efficient Broadcast Encrypted Search Realization
Yibo Cao, Shiyuan Xu, Xiu-Bo Chen, Gang Xu, Siu-Ming Yiu, and Zongpeng Li
The development of cloud networks facilitates data outsourcing, sharing, and storage, but it has also raised several security concerns. Public key authenticated encryption with keyword search (PAEKS) enables the encrypted search over cloud data while resisting the insider keyword guessing attacks (IKGAs). However, existing PAEKS schemes are limited to a single receiver, restricting application prospects in cloud networks. In addition, quantum computing attacks and key leakage issues further threaten data security, which has attracted extensive attention from researchers. Therefore, designing an encrypted search scheme that resists the above-mentioned attacks remains an open problem. In this paper, we first propose BroSearch, an outsourced data privacy-preserving framework through efficient broadcast encrypted search for cloud networks. It utilizes lattice sampling algorithms to authenticate the keyword and offers searchability over broadcasting ciphertext while enjoying IKGAs-resistance in a quantum setting. To get around key leakage issues, we then incorporate the minimal cover set technique and lattice basis extension algorithm to construct FS-BroSearch as an enhanced version. Furthermore, we give a rigorous security analysis and a comprehensive performance evaluation of BroSearch and FS-BroSearch. Specifically, BroSearch consumes only 61.11%, 81.82%, and 83.33% of the execution time compared to prior art in terms of ciphertext calculation, trapdoor generation, and search procedures, which is practical and efficient in cloud networks.
Last updated:  2025-06-25
Robust Non-Interactive Zero-Knowledge Combiners
Michele Ciampi, Lorenzo Magliocco, Daniele Venturi, and Yu Xia
A $t$-out-of-$n$ robust non-interactive zero-knowledge (NIZK) combiner is a construction that, given access to $n$ candidate instantiations of a NIZK for some language, itself implements a NIZK for the same language. Moreover, the combiner is secure, assuming at least $t$ of the given candidates are secure. In this work, we provide the first definition of combiners for NIZK, and prove that no robust NIZK combiner exists assuming $t \le \lfloor n/2 \rfloor$ (unless the polynomial hierarchy collapses). On the positive side, we provide different constructions of robust NIZK combiners for $t > \lfloor n/2 \rfloor$. In particular, we show how to obtain: 1) A black-box combiner working for a special class of {\em homomorphic} languages where $n,t$ are polynomial and $t > \lfloor n/2 \rfloor$. 2) A non-black-box combiner working for any language, where $n,t$ are constant and $t > \lfloor n/2 \rfloor$. 3) A non-black-box combiner working for any language, where $n,t$ are polynomial and $t > \lfloor 2n/3 \rfloor$.
Last updated:  2025-06-25
Easy-ABE: An Easy Ciphertext-Policy Attribute-Based Encryption
Ahmad Khoureich Ka
Attribute-Based Encryption is widely recognized as a leap forward in the field of public key encryption. It allows one to enforce access control on encrypted data. Decryption time in ABE schemes can be long depending on the number of attributes and pairing operations. This drawback hinders their adoption on a broader scale. In this paper, we propose a non-monotone CP-ABE scheme that has no restrictions on the size of attribute sets and policies, allows fast decryption and is adaptively secure under the CBDH-3 assumption. To achieve this, we approached the problem from a new angle, namely using a set membership relation for the access structure. We have implemented our scheme using the Charm framework and the source code is available on GitHub. Easy-ABE performs better than FAME and FABEO.
Last updated:  2025-06-25
Carousel: Fully Homomorphic Encryption with Bootstrapping over Automorphism Group
Intak Hwang, Seonhong Min, and Yongsoo Song
Fully Homomorphic Encryption (FHE) enables the secure computation of functions on ciphertexts without requiring decryption. Specifically, AP-like HE schemes exploit an intrinsic bootstrapping method called blind rotation. In existing blind rotation methods, a look-up table is homomorphically evaluated on the input ciphertext through iterative multiplication of monomials. However, the algebraic structure of the multiplicative group of monomials imposes a limitation on the input plaintext space, as it can bootstrap only a fraction of the input plaintext space. In this work, we introduce a new FHE scheme, Carousel, that solves this problem. The key idea of our approach is to utilize the automorphism group instead of monomials. More specifically, the look-up table is encoded into a single polynomial that can be rotated via a series of homomorphic multiplications and automorphisms. We instantiate Carousel with the subring encoding proposed by Arita and Handa (ICISC ’17) and provide a proof-of-concept implementation. Our benchmark result shows that Carousel can bootstrap a 4-bit integer in under 30ms.
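To see the automorphism action that replaces monomial multiplication, here is a toy coefficient-level implementation of $X \mapsto X^t$ over $\mathbb{Z}_q[X]/(X^N+1)$ (our own illustration with tiny made-up parameters; Carousel itself works with subring encodings):

```python
# Automorphism sigma_t on R = Z_q[X]/(X^N + 1): coefficient i moves to
# index t*i mod 2N, picking up a sign when it wraps past N.
N, q = 8, 97

def automorphism(poly, t):
    assert t % 2 == 1                 # t must be a unit mod 2N
    out = [0] * N
    for i, c in enumerate(poly):
        j = (i * t) % (2 * N)
        if j < N:
            out[j] = (out[j] + c) % q
        else:                         # X^(N + k) = -X^k in this ring
            out[j - N] = (out[j - N] - c) % q
    return out

# sigma_t followed by sigma_{t^{-1} mod 2N} is the identity, so the
# rotations are invertible, unlike a generic monomial multiplication.
poly = [3, 1, 4, 1, 5, 9, 2, 6]
t, t_inv = 3, 11                      # 3 * 11 = 33 = 1 (mod 16)
assert automorphism(automorphism(poly, t), t_inv) == poly
```

Because the odd residues mod $2N$ form a group under multiplication, these maps compose cleanly, which is what lets a single encoded look-up table be "rotated" through a series of automorphisms.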
Last updated:  2025-06-25
Performance and Privacy: A Low-Latency Secure Anonymous Authentication Protocol with OPRF
Wenjv Hu, Yanping Ye, and Yin Li
Performing privacy-preserving queries, particularly anonymous authentication, against large-scale datasets presents critical tradeoffs between security, latency, and scalability. Existing cryptographic solutions often impose linear computation or communication overheads. This paper introduces a novel, efficient protocol for secure anonymous authentication, uniquely combining matrix partitioning via hash prefixes with Oblivious Pseudorandom Functions in a three-server semi-honest model. Crucially, compared to our previous work published at TrustCom 2024, this enhanced protocol eliminates the dependency on a designated fully trusted server, achieving security when any single server is corrupted. Furthermore, our protocol demonstrates significant performance improvements over current state-of-the-art methods. It achieves sub-linear online communication complexity. Evaluations show that for datasets of size $m \approx 10^6$, our protocol reduces online communication by at least 30% compared to other sub-linear schemes, while maintaining competitive online computation times. Security is proven via simulation, and comprehensive experiments confirm practicality for datasets up to $m = 10^8$.
Last updated:  2025-06-25
Depth-Optimized Quantum Implementation of CHAM
Kyungbae Jang, Yujin Oh, and Hwajeong Seo
Security weaknesses in the symmetric-key components of a cipher can compromise its overall security assurances. With the rapid progress in quantum computing in recent years, there is a growing focus on assessing the resilience of symmetric-key cryptography against possible quantum attacks (e.g., Grover's algorithm). This paper is dedicated to examining the quantum attack resistance of CHAM, a family of lightweight block ciphers developed by a Korean research group. We provide an optimized quantum circuit implementation of CHAM and evaluate its complexity metrics, such as the number of qubits, gate count, and circuit depth, within the context of Grover's search algorithm. For Grover's key search, minimizing the quantum circuit depth is the key optimization goal, particularly when parallel search capabilities are taken into account. Our approach enhances parallelism for a low-depth quantum circuit of the CHAM block cipher, significantly reducing the full circuit depth compared to previous works. For example, in the case of CHAM-128/128, our implementation achieves a full depth of 14,772, compared to a depth of 37,768 in the best known prior work. This highlights the substantial depth reduction enabled by our parallelism-oriented design, which facilitates more practical quantum attacks.
Last updated:  2025-06-24
Ligerito: A Small and Concretely Fast Polynomial Commitment Scheme
Andrija Novakovic and Guillermo Angeris
In this note we present Ligerito, a small and practically fast polynomial commitment and inner product scheme. For the case of univariate and multilinear polynomial evaluations, the scheme has a proof size of $\sim \log(N)^2/\log\log(N)$ up to constants and for a large enough field, where $N$ is the size of the input. Ligerito is also fast on consumer hardware: when run on an M1 MacBook Pro for a polynomial with $2^{24}$ coefficients over a 32-bit binary field, our Julia prover implementation has a proving time of 1.3 seconds and a proof size of 255 KiB. Ligerito is also relatively flexible: any linear code for which the rows of the generator matrix can be efficiently evaluated can be used. Such codes include Reed–Solomon and Reed–Muller codes, among others. This, in turn, allows for a high degree of flexibility on the choice of field and can likely give further efficiency gains in specific applications.
Last updated:  2025-06-24
Unconditional Individual Verifiability with Receipt Freeness via Post-Cast Isolation
Janis Erdmanis
We introduce a trapdoorless tracker construction for electronic voting that fundamentally reimagines verifiability through information flow control. Unlike existing E2E verifiable systems where receipt-freeness compromises individual verifiability, our approach achieves both simultaneously by requiring only temporary isolation of the voting calculator between ballot casting and verification—when voters enter unique challenges to compute trackers for locating their votes on the public tally board. Our construction leverages perfectly hiding Pedersen commitments and a unique tracker challenge mechanism to simultaneously achieve unconditional individual verifiability, practical everlasting privacy, and receipt-freeness while relying only on standard cryptographic assumptions. When verification failures occur, our system provides transparent accountability by precisely identifying whether the voting calculator or voting device is responsible. The system maintains security even with partial compliance with isolation procedures and offers robust protection against various adversaries while requiring minimal trust assumptions.
Last updated:  2025-06-24
Analysis of REDOG: The Pad Thai Attack
Alex Pellegrini and Marc Vorstermans
This paper introduces the Pad Thai message recovery attack on REDOG, a rank-metric code-based encryption scheme selected for the second round of evaluation in the Korean Post-Quantum Cryptography (KPQC) competition. The attack exploits the low rank weight of a portion of the ciphertext to construct multiple systems of linear equations, one of which is noise-free and can be solved to recover the secret message. The Pad Thai attack significantly undermines the security of REDOG, revealing that its provided security is much lower than originally claimed.
Last updated:  2025-06-24
Faster Hash-based Multi-valued Validated Asynchronous Byzantine Agreement
Hanwen Feng, Zhenliang Lu, Tiancheng Mai, and Qiang Tang
Multi-valued Validated Byzantine Agreement (MVBA) is vital for asynchronous distributed protocols like asynchronous BFT consensus and distributed key generation, making performance improvements a long-standing goal. Existing communication-optimal MVBA protocols rely on computationally intensive public-key cryptographic tools, such as non-interactive threshold signatures, which are also vulnerable to quantum attacks. While hash-based MVBA protocols have been proposed to address these challenges, their higher communication overhead has raised concerns about practical performance. We present a novel MVBA protocol with adaptive security, relying exclusively on hash functions to achieve post-quantum security. Our protocol delivers near-optimal communication, constant round complexity, and significantly reduced latency compared to existing schemes, though it has sub-optimal resilience, tolerating up to 20% Byzantine corruptions instead of the typical 33%. For example, with $n=201$ and input size 1.75 MB, it reduces latency by 81% over previous hash-based approaches.
Last updated:  2025-06-24
From Worst-Case Hardness of $\mathsf{NP}$ to Quantum Cryptography via Quantum Indistinguishability Obfuscation
Tomoyuki Morimae, Yuki Shirakawa, and Takashi Yamakawa
Indistinguishability obfuscation (iO) has emerged as a powerful cryptographic primitive with many implications. While classical iO, combined with the infinitely-often worst-case hardness of $\mathsf{NP}$, is known to imply one-way functions (OWFs) and a range of advanced cryptographic primitives, the cryptographic implications of quantum iO remain poorly understood. In this work, we initiate a study of the power of quantum iO. We define several natural variants of quantum iO, distinguished by whether the obfuscation algorithm, the evaluation algorithm, and the description of the obfuscated program are classical or quantum. For each variant, we identify quantum cryptographic primitives that can be constructed under the assumption of quantum iO and the infinitely-often quantum worst-case hardness of $\mathsf{NP}$ (i.e., $\mathsf{NP}\not\subseteq \mathsf{i.o.BQP}$). In particular, we construct pseudorandom unitaries, QCCC quantum public-key encryption and (QCCC) quantum symmetric-key encryption, and several primitives implied by them, such as one-way state generators, (efficiently-verifiable) one-way puzzles, and EFI pairs. While our main focus is on quantum iO, even in the classical setting, our techniques yield a new and arguably simpler construction of OWFs from classical (imperfect) iO and the infinitely-often worst-case hardness of $\mathsf{NP}$.
Last updated:  2025-06-24
Hash-Based Multi-Signatures for Post-Quantum Ethereum
Justin Drake, Dmitry Khovratovich, Mikhail Kudinov, and Benedikt Wagner
With the threat posed by quantum computers on the horizon, systems like Ethereum must transition to cryptographic primitives resistant to quantum attacks. One of the most critical of these primitives is the non-interactive multi-signature scheme used in Ethereum's proof-of-stake consensus, currently implemented with BLS signatures. This primitive enables validators to independently sign blocks, with their signatures then publicly aggregated into a compact aggregate signature. In this work, we introduce a family of hash-based signature schemes as post-quantum alternatives to BLS. We consider the folklore method of aggregating signatures via (hash-based) succinct arguments, and our work is focused on instantiating the underlying signature scheme. The proposed schemes are variants of the XMSS signature scheme, analyzed within a novel and unified framework. While being generic, this framework is designed to minimize security loss, facilitating efficient parameter selection. A key feature of our work is the avoidance of random oracles in the security proof. Instead, we define explicit standard model requirements for the underlying hash functions. This eliminates the paradox of simultaneously treating hash functions as random oracles and as explicit circuits for aggregation. Furthermore, this provides cryptanalysts with clearly defined targets for evaluating the security of hash functions. Finally, we provide recommendations for practical instantiations of hash functions and concrete parameter settings, supported by known and novel heuristic bounds on the standard model properties.
Last updated:  2025-06-24
zkGPT: An Efficient Non-interactive Zero-knowledge Proof Framework for LLM Inference
Wenjie Qu, Yijun Sun, Xuanming Liu, Tao Lu, Yanpei Guo, Kai Chen, and Jiaheng Zhang
Large Language Models (LLMs) are widely employed for their ability to generate human-like text. However, service providers may deploy smaller models to reduce costs, potentially deceiving users. Zero-Knowledge Proofs (ZKPs) offer a solution by allowing providers to prove LLM inference without compromising the privacy of model parameters. Existing solutions either do not support LLM architectures or suffer from significant inefficiency and tremendous overhead. To address this issue, this paper introduces several new techniques. We propose new methods to efficiently prove linear and non-linear layers in LLMs, reducing computation overhead by orders of magnitude. To further enhance efficiency, we propose constraint fusion to reduce the overhead of proving non-linear layers and circuit squeeze to improve parallelism. We implement our efficient protocol, specifically tailored for popular LLM architectures like GPT-2, and deploy optimizations to enhance performance. Experiments show that our scheme can prove GPT-2 inference in less than 25 seconds. Compared with state-of-the-art systems such as Hao et al. (USENIX Security'24) and ZKML (Eurosys'24), our work achieves nearly $279\times$ and $185\times$ speedup, respectively.
Last updated:  2025-06-24
PA1 Security on Release of Unverified Plaintext in Encrypt-then-MAC AE Schemes
Bart Mennink and Suprita Talnikar
At ASIACRYPT 2014, Andreeva et al. put forward a definition for security of authenticated encryption under release of unverified plaintext. They introduced two notions of plaintext awareness (PA1 and its stronger sibling PA2), suggested to be used in conjunction with confidentiality in case of release of unverified plaintext, as well as the notion of integrity under release of unverified plaintext (INT-RUP). Various efforts have been made to develop a unified model (e.g., Ashur et al., CRYPTO 2017, Chang et al., ToSC 2019(4)). With respect to the analysis of existing and new modes under release of unverified plaintext, most research however has focused on INT-RUP security only. Plaintext awareness is less studied and understood. In this work, we take a detailed look at the original definitions of PA1 and PA2 security. We observe that the definitions leave too much room for interpretation, and claimed results such as PA1 security of Encrypt-then-MAC are unjustified. The core of the issue lies in the fact that PA1 security is necessarily tied to the implementation of the scheme. To resolve this, we present refined definitions of PA1 and PA2 security. We argue that even for these refined definitions, there is no implementation of Encrypt-and-MAC that is PA1 (nor PA2) secure. For MAC-then-Encrypt, results depend on the actual scheme, as we demonstrate using a negative result and a positive result (from literature, on Romulus-M). Furthermore, we formally prove for Encrypt-then-MAC that (i) there exist implementations that are PA1 insecure and (ii) there exist implementations that are PA1 secure. In other words, Encrypt-then-MAC is insecure under the old definition but secure under the new definition, provided a proper implementation is used. We apply this observation to Isap v2, finalist in the NIST Lightweight Cryptography competition, where we additionally deal with the complication that the same key is used for encryption and authentication.
Last updated:  2025-06-24
Efficient Constant-Size Linkable Ring Signatures for Ad-Hoc Rings via Pairing-Based Set Membership Arguments
Min Xie, Zhengzhou Tu, Man Ho Au, Junbin Fang, Xuan Wang, and Zoe Lin Jiang
Linkable Ring Signatures (LRS) allow users to anonymously sign messages on behalf of ad-hoc rings, while ensuring that multiple signatures from the same user can be linked. This feature makes LRS widely used in privacy-preserving applications like e-voting and e-cash. To scale to systems with large user groups, efficient schemes with short signatures and fast verification are essential. Recent works, such as DualDory (ESORICS’22) and LLRing (ESORICS’24), improve verification efficiency through offline precomputations but rely on static rings, limiting their applicability in ad-hoc ring scenarios. Similarly, constant-size ring signature schemes based on accumulators face the same limitation. In this paper, we propose a framework for constructing constant-size LRS suitable for large ad-hoc rings. We introduce a novel pairing-based Set Membership Argument (SMA) with a proof size of only three group elements. By leveraging KZG polynomial commitments, we optimize the verification to require only constant group exponentiations and pairings, as well as linear field multiplications. Utilizing the SMA, our framework achieves constant-size signatures with verification dominated by linear field operations, outperforming existing schemes that require linear group exponentiations in ad-hoc ring settings. Moreover, it exhibits strong scalability: (i) compatibility with any PKI-based cryptosystem and (ii) scoped linkability, enabling flexible definitions of linking scope. We instantiate our framework using a discrete logarithm public key structure. On the $BN254$ curve, our signature size is fixed at 687 bytes, which, to the best of our knowledge, is the shortest LRS for ring sizes larger than 32.
For a ring size of 1024, our verification cost is only 10.4 ms, achieving 48.6×, 2.6×–467×, 7.9×–13.2×, and 2.2×–102.5× improvements over Omniring (CCS’19), DualDory (with and without precomputation), LLRing-DL (with and without precomputation), and LLRing-P (with and without precomputation), respectively. Moreover, this performance gap continues to grow as the ring size increases.
Last updated:  2025-06-24
Pseudorandom Correlation Generators for Multiparty Beaver Triples over $\mathbb{F}_2$
Peihan Miao, Alice Murphy, Akshayaram Srinivasan, and Max Tromanhauser
We construct an efficient pseudorandom correlation generator (PCG) (Boyle et al., Crypto'19) for two-party programmable oblivious linear evaluation (OLE) functionality over $\mathbb{F}_2$. Our construction (i) has an efficient seed expansion phase, and (ii) comes with a concretely efficient protocol for distributing the seeds that makes black-box use of cryptography and runs in a constant number of rounds. PCGs for programmable OLE are known to imply PCGs for generating $n$-party Beaver triples over $\mathbb{F}_2$. The resultant PCG has a seed setup phase whose communication cost is $n(n-1)$ times that of the programmable OLE protocol. The per-party seed size and the seed expansion time have a multiplicative overhead of $2(n-1)$. Prior constructions for efficiently generating multiparty Beaver triples only worked for finite fields $\mathbb{F}_q$ where $q \geq 3$ or required one bit of per-party communication for each triple generated (and hence, do not satisfy the PCG definition). Thus, ours is the first concretely efficient PCG for generating Beaver triples over $\mathbb{F}_2$ in the multiparty setting. Our distributed seed generation protocol generates $N = 2^{30}$ two-party programmable OLEs in 3.5 minutes with 255 MB of communication over a LAN network. The PCG seed size is around 55 MB and the expansion phase requires 10 PRG calls and around 229 thousand XOR and AND operations per triple, producing roughly 31,000 triples per second. Our PCG for generating multiparty Beaver triples has lower concrete communication cost than the state-of-the-art for a small number of parties. When compared to the FOLEAGE protocol (Bombar et al., Asiacrypt 2024), which requires one bit of per-party communication per triple generated, our communication cost is lower by $2.4\times$ when generating $N = 2^{36}$ triples between three parties and is $1.2\times$ lower for the case of five parties.
At a conceptual level, our protocol deviates from the prior approaches which relied on variants of dual learning parity with noise (LPN) assumption. Instead, our construction combines both the primal and dual versions of LPN to achieve the aforementioned efficiency.
Last updated:  2025-06-23
UOV-Based Verifiable Timed Signature Scheme
Erkan Uslu and Oğuz Yayla
Verifiable Timed Signatures (VTS) are cryptographic primitives that enable the creation of a signature that can only be retrieved after a specific time delay, while also providing verifiable evidence of its existence. This framework is particularly useful in blockchain applications. Current VTS schemes rely on signature algorithms such as BLS, Schnorr, and ECDSA, which are vulnerable to quantum attacks due to the vulnerability of the discrete logarithm problem to Shor's Algorithm. We introduce VT-UOV, a novel VTS scheme based on the Salt-Unbalanced Oil and Vinegar (Salt-UOV) Digital Signature Algorithm. As a multivariate polynomial-based cryptographic primitive, Salt-UOV provides strong security against both classical and quantum adversaries. Adapting Salt-UOV into the VTS framework requires addressing challenges such as handling complex parameters rather than a single integer, the computational complexity of solving multivariate equations, and the integration of Time-Lock Puzzles (TLPs) for enforcing delayed signature generation. Our experimental results show that VT-UOV exhibits a unique performance profile among existing VTS constructions. This paper offers a detailed exploration of the VT-UOV scheme and its overall security and performance properties.
Last updated:  2025-06-23
FICS and FACS: Fast IOPPs and Accumulation via Code-Switching
Anubhav Baweja, Pratyush Mishra, Tushar Mopuri, and Matan Shtepel
Recent work on IOP-based succinct arguments has focused on developing IOPs that improve prover efficiency by relying on linear-time encodable codes. We present two new schemes for improving the efficiency of such succinct arguments: $\quad \bullet$ $\mathsf{FICS}$, an IOP of proximity for multilinear polynomial evaluation that, like prior work Blaze [EUROCRYPT 2025] achieves linear prover time, but additionally reduces the verifier oracle query complexity to $O(\lambda \log \log n + \log n)$ for codewords of length $n$. $\quad \bullet$ $\mathsf{FACS}$, an accumulation scheme for NP that achieves linear prover time and $O(\lambda)$ oracle queries per step of the accumulation. Both schemes support a large class of linear-time encodable codes, including systematic LDPC codes and tensor codes of linear-time encodable codes. We obtain our results by extending and formalizing the framework of Interactive Oracle Reductions (IORs) introduced by Ben-Sasson et al. [TCC 2019]. In particular, we develop new IORs for "codeswitching" tensor codes (Ron-Zewi and Rothblum [JACM 2024]), and also develop a new notion of knowledge soundness for IORs that allows us to easily compose IORs and to prove the security of our schemes in the non-interactive setting, even if the underlying codes are not known to be decodable in polynomial time.
Last updated:  2025-06-23
Cryptanalysis of HiAE
Alexander Bille and Elmar Tischhauser
We describe key recovery attacks on the authenticated stream cipher HiAE, which was recently proposed for future high-throughput communication networks such as 6G by Huawei. HiAE uses a 2048-bit state, a 256-bit key and produces 128-bit tags, targeting 256-bit security against key and state recovery. As a nonce-based AEAD scheme, it relies on the uniqueness of the nonce per key for these security claims. Our analysis indicates that a complete recovery of the 256-bit key of HiAE is possible with a complexity of $2^{128}$ data and at most $2^{129.585}$ time in the nonce-respecting attack setting, with various small tradeoffs concerning the data and time complexity. While infeasible in practice, this attack therefore violates the 256-bit security claim for HiAE. We describe further complete key-recovery attacks in the nonce-misuse and release of unverified plaintext (RUP) settings which require only a small constant number of repeated nonces or unverified decryption queries, respectively.
Last updated:  2025-06-23
Downlink (T)FHE ciphertexts compression
Antonina Bondarchuk, Olive Chakraborty, Geoffroy Couteau, and Renaud Sirdey
This paper focuses on the issue of reducing the bandwidth requirement for FHE ciphertext transmission. While this issue has been extensively studied from the uplink viewpoint (transmission of encrypted inputs towards a FHE calculation), where several approaches exist to essentially cancel FHE ciphertext expansion, the downlink case (transmission of encrypted results towards an end-user) has been the object of much less attention. In this paper, we address this latter issue with a particular focus on the TFHE scheme, for which we revisit a number of folklore methods, including several approaches for switching to more compact linearly homomorphic schemes, reducing the precision of T(R)LWE coefficients (while maintaining acceptable probabilities of decryption errors), and others. We also investigate how to use these methods in combination, depending on the number of encrypted results to transmit. We further perform extensive experiments demonstrating that the downlink TFHE ciphertext expansion factor can be practically reduced to values below 10, depending on the setup, with little additional computational burden.
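One of the folklore precision-reduction methods mentioned above can be sketched as follows (toy parameters, not the paper's concrete choices): keep only the top bits of each T(R)LWE coefficient modulo q = 2^32, accepting a bounded extra rounding error in exchange for fewer bits on the wire.

```python
q_bits, kept_bits = 32, 12          # hypothetical modulus and kept precision
shift = q_bits - kept_bits          # low-order bits dropped per coefficient

def compress(coeffs):
    # Round each coefficient to its top `kept_bits` bits (round-to-nearest).
    return [((c + (1 << (shift - 1))) >> shift) % (1 << kept_bits)
            for c in coeffs]

def decompress(small):
    # Re-embed into Z_q; reconstruction error is at most 2^(shift-1),
    # which must stay below the scheme's decryption noise budget.
    return [(c << shift) % (1 << q_bits) for c in small]

c = 123456789
r = decompress(compress([c]))[0]
# Each coefficient now ships in 12 bits instead of 32.
```

This illustrates only the rounding step; the paper's actual pipeline combines such precision reduction with switching to more compact linearly homomorphic schemes and analyzes the resulting decryption-error probabilities.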
Last updated:  2025-06-23
A Tale of Two Worlds, a Formal Story of WireGuard Hybridization
Pascal Lafourcade, Dhekra Mahmoud, Sylvain Ruhault, and Abdul Rahman Taleb
PQ-WireGuard is a post-quantum variant of WireGuard Virtual Private Network (VPN), where Diffie-Hellman-based key exchange is replaced by post-quantum Key Encapsulation Mechanisms-based key exchange. In this paper, we first conduct a thorough formal analysis of PQ-WireGuard's original design, in which we point out and fix a number of weaknesses. This leads us to an improved construction PQ-WireGuard*. Secondly, we propose and formally analyze a new protocol, based on both WireGuard and PQ-WireGuard*, named Hybrid-WireGuard, compliant with current best practices for post-quantum transition about hybridization techniques. For our analysis, we use the SAPIC+ framework that enables the generation of three state-of-the-art protocol models for the verification tools ProVerif, DeepSec and Tamarin from a single specification, leveraging the strengths of each tool. We formally prove that Hybrid-WireGuard is secure. Finally, we propose a generic, efficient and usable Rust implementation of our new protocol.
Last updated:  2025-06-23
The Pipes Model for Latency and Throughput Analysis
Andrew Lewis-Pye, Kartik Nayak, and Nibesh Shrestha
Protocols for State-Machine-Replication (sometimes called 'blockchain' protocols) generally make use of rotating leaders to drive consensus. In typical protocols (henceforth called 'single-sender' protocols), the leader is a single processor responsible for making and disseminating proposals to others. Since the leader acts as a bottleneck, apparently limiting throughput, a recent line of research has investigated the use of 'multi-sender' protocols in which many processors distribute proposals in parallel. Examples include DAG-based protocols such as DAG-Rider, Bullshark, Sailfish, Cordial Miners, Mysticeti, and variants such as Autobahn. However, existing models do not allow for a formal analysis to determine whether these protocols can actually handle higher throughputs than single-sender protocols such as PBFT, Tendermint, and HotStuff. In this paper, we describe a very simple model that allows for such an analysis. For any given protocol, the model allows one to calculate latency as a function of network bandwidth, network delays, the number of processors $n$, and the incoming transaction rate. Each protocol has a latency bottleneck: an incoming transaction rate at which latency becomes unbounded over the protocol execution, i.e., a maximum throughput that the protocol can handle without unbounded latency. With the aim of building to an analysis for state-of-the-art State-Machine-Replication (SMR) protocols, we begin by considering protocols for simpler primitives, such as Best-effort Broadcast and Reliable Broadcast. For Best-effort Broadcast, we establish a tight lower bound on latency for single-sender and multi-sender protocols when blocks are distributed without the use of techniques such as erasure coding. Perhaps unsurprisingly, a key difference between the single-sender and multi-sender approaches in this case is a factor $n$ in the point at which the latency bottleneck appears. 
However, for other primitives such as Reliable Broadcast, our results may be more surprising: the factor $n$ difference now disappears, and maximum throughput for the two approaches differs by a constant factor, while multi-sender approaches will generally have latency that grows more quickly with $n$. For state-of-the-art SMR protocols, the picture that emerges is one with seemingly inherent trade-offs. If one compares single-sender protocols that use pipelining and erasure coding, such as DispersedSimplex, with DAG-based protocols such as Sailfish or Bullshark, the former are seen to have lower latency for a wide range of throughputs, while the benefit of the latter protocols is that they have a latency bottleneck which is higher by a constant factor.
Last updated:  2025-06-23
Engel p-adic Supersingular Isogeny-based Cryptography over Laurent series
Ilias Cherkaoui, Ciaran Clarke, and Indrakshi Dey
This paper builds the foundation for a cryptosystem based on p-adic representations of supersingular elliptic curve isogenies generated through Engel expansions of Laurent series. This mathematical framework manifests as a lightweight encryption scheme implemented on ESP32 microcontrollers for IoT applications. Efficient isogeny paths are constructed for quantum-resistant primitives secured against Shor's algorithm by decomposing elements into Engel sequences. Performance analysis confirms linear computational scaling with message size and speed gain at a higher clock rate, along with power trace signatures corroborating theoretical computational models. Consequently, we confirm the practical feasibility of our proposed p-adic isogeny cryptography on resource-constrained embedded systems while offering rigorous post-quantum security assurances.
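The paper's construction works over Laurent series and p-adics, but the classical Engel expansion over the rationals illustrates the decomposition it builds on (a sketch of the standard algorithm, not the paper's p-adic variant):

```python
from fractions import Fraction
from math import ceil

def engel_expansion(x, max_terms=16):
    # Classical Engel expansion of a rational x in (0, 1]:
    #   x = 1/a1 + 1/(a1*a2) + 1/(a1*a2*a3) + ...
    # with a_k = ceil(1/u_k) and u_{k+1} = u_k * a_k - 1.
    terms = []
    u = Fraction(x)
    while u > 0 and len(terms) < max_terms:
        a = ceil(1 / u)
        terms.append(a)
        u = u * a - 1
    return terms

def from_engel(terms):
    # Reconstruct the rational from its Engel digits.
    total, prod = Fraction(0), 1
    for a in terms:
        prod *= a
        total += Fraction(1, prod)
    return total

print(engel_expansion(Fraction(3, 8)))  # [3, 8], since 3/8 = 1/3 + 1/(3*8)
```

For rational inputs the expansion terminates, and the digit sequence uniquely determines the number; the paper applies the analogous decomposition to elements arising from supersingular isogeny computations.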
Last updated:  2025-06-23
MT-TMVP: Modular Tiled TMVP-based Polynomial Multiplication for Post-Quantum Cryptography on FPGAs
Shekoufeh Neisarian and Elif Bilge Kavun
As quantum technology advances, developing cryptographic solutions resistant to quantum attacks is crucial. Post-Quantum Cryptography (PQC) provides a practical approach by running on classical computers. PQC schemes rely on hard mathematical problems, with lattice-based constructions being among the National Institute of Standards and Technology (NIST)-recognized schemes and known for their small key sizes. Hardware implementation of these schemes faces challenges due to the computational intensity of operations like polynomial multiplication, especially for resource-constrained devices. This paper proposes a novel Modular Tiled Toeplitz Matrix-Vector Polynomial Multiplication (MT-TMVP) for lattice-based PQC algorithms and presents a resource-optimized Field Programmable Gate Array (FPGA) architecture. The proposed implementation significantly reduces resource utilization and Area-Delay Product (ADP) compared to state-of-the-art polynomial multipliers. It utilizes 99.68% and 84.22% fewer Look-Up Tables (LUTs) on Artix-7 and Zynq Ultrascale+ FPGAs, respectively, and achieves 99.94% and 80.02% ADP improvements on these FPGAs compared to the best results in the literature. By leveraging Block RAM (BRAM), the proposed architecture offers robustness against timing-based Side-Channel Attacks (SCAs), and the design is modular and scalable to any polynomial degree.
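The TMVP equivalence underlying the scheme can be shown in a toy form (plain schoolbook version, not the paper's modular tiled FPGA construction): the coefficients of a(x) * b(x) equal a Toeplitz matrix built from a's coefficients multiplied by the vector of b's coefficients.

```python
def toeplitz_from_poly(a, n):
    # (2n-1) x n Toeplitz matrix T with T[i][j] = a[i-j] (0 if out of range).
    return [[a[i - j] if 0 <= i - j < len(a) else 0 for j in range(n)]
            for i in range(2 * n - 1)]

def tmvp(T, v):
    # Plain matrix-vector product; tiling would decompose this into
    # smaller TMVPs to trade multiplications for additions.
    return [sum(t * x for t, x in zip(row, v)) for row in T]

a, b = [1, 2, 3], [4, 5, 6]  # a(x) = 1 + 2x + 3x^2, b(x) = 4 + 5x + 6x^2
prod = tmvp(toeplitz_from_poly(a, 3), b)
print(prod)  # [4, 13, 28, 27, 18]
```

Casting polynomial multiplication as a TMVP is what enables the tiled decompositions and the regular memory access patterns exploited by the FPGA architecture.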
Last updated:  2025-06-23
On the BUFF Security of ECDSA with Key Recovery
Keita Emura
In the usual syntax of digital signatures, the verification algorithm takes a verification key in addition to a signature and a message, whereas in ECDSA with key recovery, which is used in Ethereum, no verification key is input to the verification algorithm. Instead, a verification key is recovered from a signature and a message. In this paper, we explore BUFF security of ECDSA with key recovery (KR-ECDSA), where BUFF stands for Beyond UnForgeability Features (Cremers et al., IEEE S&P 2021). As a result, we show that KR-ECDSA provides BUFF security, except weak non-resignability (wNR). We note that the verification algorithm of KR-ECDSA takes an Ethereum address addr as input, defined as the rightmost 160 bits of the Keccak-256 hash of the corresponding ECDSA verification key, and checks that the hash of the recovered verification key equals addr. Our security analysis shows that this procedure is mandatory to provide BUFF security. We also discuss whether wNR is mandatory in Ethereum or not. To clarify that the above equality check is mandatory to provide BUFF security in KR-ECDSA, we show that the original ECDSA does not provide any BUFF security. As a by-product of the analysis, we show that one of our BUFF attacks also works against Aumayr et al.'s ECDSA-based adaptor signature scheme (ASIACRYPT 2021) and Qin et al.'s blind adaptor signature scheme (IEEE S&P 2023), which is based on the Aumayr et al. scheme. We emphasize that the attack is positioned outside of their security models.
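The equality check described above can be sketched as follows (hypothetical names throughout; note that Ethereum uses Keccak-256, for which the stdlib's SHA3-256 is substituted here purely as a stand-in, since they are different functions):

```python
import hashlib

def address_of(pubkey: bytes) -> bytes:
    # Rightmost 160 bits (20 bytes) of the hash of the verification key.
    # Ethereum uses Keccak-256; hashlib's sha3_256 (NIST SHA-3) stands in
    # only because the standard library has no Keccak-256.
    return hashlib.sha3_256(pubkey).digest()[-20:]

def verify_with_recovery(recover, sig, msg, addr) -> bool:
    # `recover` is a hypothetical routine returning the verification key
    # recovered from (sig, msg); the comparison below is the equality
    # test the analysis identifies as mandatory for BUFF security.
    return address_of(recover(sig, msg)) == addr

# Toy demo with a dummy recovery function returning a fixed "key".
dummy_key = b"\x04" + b"\x11" * 64
addr = address_of(dummy_key)
accepted = verify_with_recovery(lambda s, m: dummy_key, b"sig", b"msg", addr)
rejected = verify_with_recovery(lambda s, m: b"other key", b"sig", b"msg", addr)
```

Dropping the final comparison, so that verification accepts whatever key is recovered, is exactly the kind of implementation the paper shows loses BUFF guarantees.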
Last updated:  2025-06-23
Key-Insulated and Privacy-Preserving Signature Scheme with Publicly Derived Public Key, Revisited: Consistency, Outsider Strong Unforgeability, and Generic Construction
Keita Emura
Liu et al. (EuroS&P 2019) introduced the Key-Insulated and Privacy-Preserving Signature Scheme with Publicly Derived Public Key (PDPKS) to enhance the security of stealth addresses and deterministic wallets. In this paper, we point out that the current security notions are insufficient in practice, and introduce a new security notion which we call consistency. Moreover, we strengthen unforgeability to strong unforgeability for outsiders, which captures the situation in which nobody, except the payer and the payee, can produce a valid signature. From the viewpoint of cryptocurrency functionality, this allows us to implement a refund functionality. Currently, there is essentially no way to refund a coin that has mistakenly been sent to an address. Our functionality rescues this case, even in the stealth environment that hides information about the payer. Note that the refund functionality only works before the payee transfers the coin to their own wallet, which prevents double spending. Finally, we propose a generic construction of PDPKS that provides consistency and outsider strong unforgeability. The design is conceptually much simpler than known PDPKS constructions. It is particularly noteworthy that the underlying strongly unforgeable signature scheme is required to provide strong conservative exclusive ownership (S-CEO) security (Cremers et al., IEEE S&P 2021). Since we explicitly require the underlying signature scheme to be S-CEO secure, our security proof offers a new insight into exclusive ownership security, which may be of independent interest.
Last updated:  2025-06-23
Mind the Gap: Securing QKD Interfaces with Post-Quantum Proxies
Sayan Das, Aarav Varshney, Prasanna Ravi, and Anupam Chattopadhyay
Quantum Key Distribution (QKD) is a promising technology that enables information-theoretic secure key exchange using quantum principles. It is being increasingly deployed in critical sectors through emerging Quantum Key-as-a-Service (QKaaS) models. However, current standards like ETSI GS QKD 014 assume that QKD keys are consumed within trusted environments—an assumption that breaks down in real-world deployments where keys are delivered over classical networks to remote, potentially untrusted endpoints. This creates a security gap at the interface between QKD systems and key-consuming applications. In this paper, we identify this gap and propose a proxy-based solution that secures QKD key delivery using post-quantum cryptography (PQC). Our proxy transparently applies PQC-based signatures and key encapsulation to ETSI-compliant QKD APIs, without requiring changes to existing infrastructure. It supports cryptographic agility, allowing runtime switching between multiple PQC schemes. We benchmark our design using both QKD simulators and production-grade QKD hardware, and show that it introduces minimal overhead with efficient NIST PQC algorithms. Our findings highlight the need for stronger protection of the QKD interface in practical deployments. We advocate for a revision to ETSI GS QKD 014 to include an addendum that addresses this critical gap and promotes end-to-end quantum-safe integration.
Last updated:  2025-06-22
On the Adaptive Security of FROST
Elizabeth Crites, Jonathan Katz, Chelsea Komlo, Stefano Tessaro, and Chenzhi Zhu
FROST and its variants are state-of-the-art protocols for threshold Schnorr signatures that are used in real-world applications. While static security of these protocols has been shown by several works, the security of these protocols under adaptive corruptions—where an adversary can choose which parties to corrupt at any time based on information it learns during protocol executions—has remained a notorious open problem that has received renewed attention due to recent standardization efforts for threshold schemes. We show adaptive security (without erasures) of FROST and several variants under different corruption thresholds and computational assumptions. Let n be the total number of parties, t+1 the signing threshold, and t_c an upper bound on the number of corrupted parties. 1. We prove adaptive security when t_c = t/2 in the random oracle model (ROM) based on the algebraic one-more discrete logarithm assumption (AOMDL)—the same conditions under which FROST is proven statically secure. 2. We introduce the low-dimensional vector representation (LDVR) problem, parameterized by t_c, t, and n, and prove adaptive security in the algebraic group model (AGM) and ROM based on the AOMDL assumption and the hardness of the LDVR problem for the corresponding parameters. In some regimes (including some t_c > t/2) we show the LDVR problem is unconditionally hard, while in other regimes (in particular, when t_c = t) we show that hardness of the LDVR problem is necessary for adaptive security to hold. In fact, we show that hardness of the LDVR problem is necessary for proving adaptive security of a broad class of threshold Schnorr signatures.
Last updated:  2025-06-22
Quantum Analysis of AES
Kyungbae Jang, Anubhab Baksi, Hyunji Kim, Gyeongju Song, Hwajeong Seo, and Anupam Chattopadhyay
Quantum computing is considered one of the next big leaps in computational science. While a fully functional quantum computer is still in the future, there is an ever-growing need to evaluate the security of symmetric key ciphers against a potent quantum adversary. Keeping this in mind, our work explores the key recovery attack using Grover's search on the three variants of AES (-128, -192, -256). We develop a pool of 26 implementations per AES variant (thus totaling 78), by taking the state-of-the-art advancements in the relevant fields into account. In a nutshell, we present the least Toffoli depth and full depth implementations of AES, thereby improving upon Zou et al.'s Asiacrypt'20 paper by more than 97 percent for each variant of AES. We show that the qubit count - Toffoli depth product is reduced from theirs by more than 87 percent. Furthermore, we analyze Jaques et al.'s Eurocrypt'20 implementations in detail, fix the bugs (arising from a problem in the quantum computing tool used, not from their coding), and report corrected benchmarks. To the best of our knowledge, our work improves upon all previous works (including the Asiacrypt'22 paper by Huang and Sun, the Asiacrypt'23 paper by Liu et al. and the Asiacrypt'24 paper by Shi and Feng) in terms of various quantum circuit complexity metrics (Toffoli depth, full depth, Toffoli/full depth - qubit count product, full depth - gate count product, etc.). Also, our bug-fixing of Jaques et al.'s Eurocrypt'20 implementations seems to improve upon the authors' own bug-fixing, thanks to our architecture consideration. Equipped with the basic AES implementations, we further investigate the prospect of Grover's search. We propose four new implementations of the S-box and one new implementation of the MixColumn, as well as five new architectures (one is motivated by the architecture of Jaques et al. in Eurocrypt'20, and the remaining four are entirely our own innovation).
Under the MAXDEPTH constraint (specified by NIST), the circuit depth metrics (Toffoli depth, T-depth and full depth) become crucial factors and parallelization often becomes necessary. We provide the least depth implementation in this respect, offering the best performance in terms of circuit complexity metrics (e.g., depth-squared - qubit count product, depth - gate count product). Thus, to our knowledge, we estimate the currently best-known quantum attack complexities for AES-128 ($2^{156.2630}$), AES-192 ($2^{221.5801}$) and AES-256 ($2^{286.0731}$); these correspond to the NIST-specified quantum security levels 1, 3 and 5, respectively. Additionally, we achieve the least Toffoli depth - qubit count product for AES-128 ($121920$, improving upon $130720$ by Shi and Feng in Asiacrypt'24), AES-192 ($161664$, improving upon $188880$ by Liu et al. in Asiacrypt'23) and AES-256 ($206528$, improving upon $248024$ by Liu et al. in Asiacrypt'23) so far.
Last updated:  2025-06-22
LWE with Quantum Amplitudes: Algorithm, Hardness, and Oblivious Sampling
Yilei Chen, Zihan Hu, Qipeng Liu, Han Luo, and Yaxin Tu
The learning with errors problem (LWE) is one of the most important building blocks for post-quantum cryptography. To better understand the quantum hardness of LWE, it is crucial to explore quantum variants of LWE. To this end, Chen, Liu, and Zhandry [Eurocrypt 2022] defined S|LWE> and C|LWE> problems by encoding the error of LWE samples into quantum amplitudes, and showed efficient quantum algorithms for a few interesting amplitudes. However, algorithms and hardness results for the most interesting amplitude, Gaussian, had not been addressed before. In this paper, we show new algorithms, hardness results and applications for S|LWE> and C|LWE> with real Gaussian, Gaussian with linear or quadratic phase terms, and other related amplitudes. Let $n$ be the dimension and $q$ be the modulus of LWE samples. Our main results are: 1. There is a $2^{\Theta(\sqrt{n \log q})}$-time algorithm for S|LWE> with Gaussian amplitude with known phase, given $2^{\Theta(\sqrt{n \log q})}$ many quantum samples. The algorithm is modified from Kuperberg's sieve, and in fact works for more general amplitudes as long as the amplitudes and phases are completely known. 2. There is a polynomial time quantum algorithm for solving S|LWE> and C|LWE> for Gaussian with quadratic phase amplitudes, where the sample complexity is as small as $\tilde{O}(n)$. As an application, we give a quantum oblivious LWE sampler where the core quantum sampler requires only quasi-linear sample complexity. This improves upon the previous oblivious LWE sampler given by Debris-Alazard, Fallahpour, Stehlé [STOC 2024], whose core quantum sampler requires $\tilde{O}(nr)$ sample complexity, where $r$ is the standard deviation of the error. 3. There exist polynomial time quantum reductions from standard LWE or worst-case GapSVP to S|LWE> with Gaussian amplitude with small unknown phase, and arbitrarily many samples.
Compared to the first two items, the appearance of the unknown phase term places a barrier to designing efficient quantum algorithms for solving standard LWE via S|LWE>.
Last updated:  2025-06-22
(Interleaved) Extended Gabidulin Codes, More Attacks on Rank Decoding Problem, and Their Applications to Cryptosystems
Yongcheng Song, Rongmao Chen, Fangguo Zhang, Xinyi Huang, Jian Weng, and Huaxiong Wang
In this paper, we investigate the Extended Gabidulin (EG) codes and the Interleaved EG (IEG) codes, develop more powerful attacks on variants of the Rank Decoding (RD) problem, and enhance rank-based cryptosystems such as RQC and ROLLO. First, we develop a general decoding algorithm for the (I)EG codes by solving the Linear Reconstruction (LR) problem. We find that the (I)EG codes can be probabilistically decoded by a Welch-Berlekamp-like algorithm, can achieve an arbitrarily small decoding failure rate, and can decode up to the rank Gilbert-Varshamov bound (even close to the minimal distance). Our conclusion intrinsically shows that it is not necessary to require that the generators be linearly independent, as in Gabidulin codes, when designing decodable codes from $q$-polynomials. An interesting and important byproduct is that we demonstrate that decoding interleaved Gabidulin codes can be achieved deterministically by solving the LR problem. It has long been believed that there are only probabilistic decoding algorithms for interleaved Gabidulin codes (IEEE TIT 2011, DCC 2014, DCC 2024). Second, we develop the Blockwise Puncturing (BP) strategy for attacking the Blockwise RD (BRD) problem (Asiacrypt 2023, IEEE TIT 2025) and the Non-Homogeneous RD (NHRD) problem (NIST PQC 2020, IEEE TIT 2024). We find that the BP strategy can significantly speed up the overdetermined MM modeling and even underdetermined MM modelings. When the proposed attacks are applied to existing rank-based cryptosystems based on the BRD and NHRD problems, such as RQC (IEEE TIT 2025, IEEE TIT 2024, PQC 2024) and ROLLO (IEEE TIT 2025, IEEE TIT 2022), most parameter sets fall below the claimed security. This implies that these cryptosystems should enlarge their parameters to resist the MM attack with the BP strategy. Third, we apply the EG codes to RQC based on the BRD problem.
We find that the gain in decoding capacity from using the EG codes outweighs the complexity loss in solving the BRD problem with the BP strategy, which still makes it possible to design a more efficient RQC. As a result, RQC retains attractive sizes, with a bandwidth of about 2.3 KB for 128-bit security. Overall, RQC still outperforms the Hamming-metric schemes of the NIST PQC Round 4 submissions, such as HQC, BIKE, and Classic McEliece, in terms of bandwidth; in particular, it is about 65% more compact than the NIST-selected HQC.
Last updated:  2025-06-21
Security Analysis of Forward Secure Log Sealing in Journald
Felix Dörre and Astrid Ottenhues
This paper presents a security analysis of forward-secure log sealing in the journald logging system, which is installed by default in almost all modern Linux distributions. Forward-secure log sealing is a cryptographic technique used to ensure the integrity of past log entries even in the event of a full system compromise. We identify multiple security vulnerabilities in journald resulting from a gap between the model of the cryptographic primitives and their usage in a larger context. Our contribution is both theoretical and practical: as a practical contribution, we discovered attacks on the log sealing in journald and provide descriptions as well as implementations of the attacks. In particular, one vulnerability allows forging arbitrary logs for past entries without the validation tool noticing any problem. This finding completely breaks the security guarantee of log sealing. For all described vulnerabilities we provide patches; the two more serious ones have been merged into systemd version 255. As a theoretical contribution, we provide formal definitions that capture the expected security properties of log sealing. We demonstrate our attacks on the vulnerable version of journald by showing how an attacker can defeat this security definition. Furthermore, we provide a modified version of the logging scheme underlying the one in journald and prove that it satisfies our security definition. Since our patches have been merged, our logging scheme is the basis for the log sealing in journald. This work narrows the gap between theory and practice: it provides a practical example of the problems that can occur when applying cryptographic primitives to a complex real-world system, makes the logging implementation used in many Linux distributions more secure, and demonstrates the importance of rigorous security analysis of cryptographic systems.
Last updated:  2025-06-21
Nonce- and Redundancy-encrypting Modes with Farfalle
Seth Hoffert
Nonces are a fact of life for achieving semantic security. Generating a uniformly random nonce can be costly and may not always be feasible. Using anything other than uniformly random bits can result in information leakage; e.g., a timestamp can deanonymize a communication and a counter can leak the quantity of transmitted messages. Ideally, we would like to be able to efficiently encrypt the nonce to 1) avoid needing uniformly random bits and 2) avoid information leakage. This paper presents new modes built on top of Farfalle that achieve nonce and redundancy encryption in the AEAD and onion AE settings.
Last updated:  2025-06-21
Oblivious Single Access Machines: A New Model for Oblivious Computation
Ananya Appan, David Heath, and Ling Ren
Oblivious RAM (ORAM) allows a client to securely outsource memory storage to an untrusted server. It has been shown that no ORAM can simultaneously achieve small bandwidth blow-up, small client storage, and a single roundtrip of latency. We consider a weakening of the RAM model, which we call the Single Access Machine (SAM) model. In the SAM model, each memory slot can be written to at most once and read from at most once. We adapt existing tree-based ORAM to obtain an oblivious SAM (OSAM) that has $O(\log n)$ bandwidth blow-up (which we show is optimal), small client storage, and a single roundtrip. OSAM unlocks improvements to oblivious data structures/algorithms. For instance, we achieve oblivious unbalanced binary trees (e.g. tries, splay trees). By leveraging splay trees, we obtain a notion of caching ORAM, where an access in the worst case incurs amortized $O(\log^2 n)$ bandwidth blow-up and $O(\log n)$ roundtrips, but in many common cases (e.g. sequential scans) incurs only amortized $O(\log n)$ bandwidth blow-up and $O(1)$ roundtrips. We also give new oblivious graph algorithms, including computing minimum spanning trees and single source shortest paths, in which the OSAM client reads/writes $O(|E| \cdot \log |E|)$ words using $O(|E|)$ roundtrips, where $|E|$ is the number of edges. This improves over prior custom solutions by a log factor. At a higher level, OSAM provides a general model for oblivious computation. We construct a programming interface around OSAM that supports arbitrary pointer-manipulating programs such that dereferencing a pointer to an object incurs $O(\log d \log n)$ bandwidth blowup and $O(\log d)$ roundtrips, where $d$ is the number of pointers to that object. This new interface captures a wide variety of data structures and algorithms (e.g., trees, tries, doubly-linked lists) while matching or exceeding prior best asymptotic results. 
It both unifies much of our understanding of oblivious computation and allows the programmer to write oblivious algorithms combining various common data structures/algorithms and beyond.
Last updated:  2025-06-21
Perfect MPC over Layered Graphs
Bernardo David, Yuval Ishai, Anders Konring, Eyal Kushilevitz, and Varun Narayanan
The classical "BGW protocol" (Ben-Or, Goldwasser and Wigderson, STOC 1988) shows that secure multiparty computation (MPC) among $n$ parties can be realized with perfect full security if $t < n/3$ parties are corrupted. This holds against malicious adversaries in the "standard" model for MPC, where a fixed set of $n$ parties is involved in the full execution of the protocol. However, the picture is less clear in the mobile adversary setting of Ostrovsky and Yung (PODC 1991), where the adversary may periodically "move" by uncorrupting parties and corrupting a new set of $t$ parties. In this setting, it is unclear if full security can be achieved against an adversary that is maximally mobile, i.e., moves after every round. The question is further motivated by the "You Only Speak Once" (YOSO) setting of Gentry et al. (Crypto 2021), where not only the adversary is mobile but also each round is executed by a disjoint set of parties. Previous positive results in this model do not achieve perfect security, and either assume probabilistic corruption and a nonstandard communication model, or only realize the weaker goal of security-with-abort. The question of matching the BGW result in these settings remained open. In this work, we tackle the above two challenges simultaneously. We consider a layered MPC model, a simplified variant of the fluid MPC model of Choudhuri et al. (Crypto 2021). Layered MPC is an instance of standard MPC where the interaction pattern is defined by a layered graph of width $n$, allowing each party to send secret messages and broadcast messages only to parties in the next layer. We require perfect security against a malicious adversary who may corrupt at most $t$ parties in each layer. Our main result is a perfect, fully secure layered MPC protocol with an optimal corruption threshold of $t < n/3$, thus extending the BGW feasibility result to the layered setting. This implies perfectly secure MPC protocols against a maximally mobile adversary.
Last updated:  2025-06-21
On the Composition of Single-Keyed Tweakable Even-Mansour for Achieving BBB Security
Avik Chakraborti, Mridul Nandi, Suprita Talnikar, and Kan Yasuda
Observing the growing popularity of random permutation (RP)-based designs (e.g., Sponge), Bart Mennink at CRYPTO 2019 initiated an interesting line of research on RP-based pseudorandom functions (PRFs). Both of the proposed constructions are claimed to achieve beyond-the-birthday-bound (BBB) security of $2n/3$ bits ($n$ being the input block size in bits) but require two instances of RPs and can handle only one-block inputs. In this work, we extend research in this direction by providing two new BBB-secure constructions obtained by composing the tweakable Even-Mansour construction appropriately. Our first construction requires only one instance of an RP and only one key. Our second construction extends the first to a nonce-based Message Authentication Code (MAC) using a universal hash to deal with multi-block inputs. We show that the hash key can be derived from the original key when the underlying hash is Polyhash. We provide matching attacks for both constructions to demonstrate the tightness of the proven security bounds.
Last updated:  2025-06-21
Efficiently-Thresholdizable Batched Identity Based Encryption, with Applications
Amit Agarwal, Rex Fernando, and Benny Pinkas
We propose a new cryptographic primitive called "batched identity-based encryption" (Batched IBE) and its thresholdized version. The new primitive allows encrypting messages with specific identities and batch labels, where the latter can represent, for example, a block number on a blockchain. Given an arbitrary subset of identities for a particular batch, our primitive enables efficient issuance of a single decryption key that can be used to decrypt all ciphertexts having identities that are included in the subset while preserving the privacy of all ciphertexts having identities that are excluded from the subset. At the heart of our construction is a new technique that enables public aggregation (i.e. without knowledge of any secrets) of any subset of identities, into a succinct digest. This digest is used to derive, via a master secret key, a single succinct decryption key for all the identities that were digested in this batch. In a threshold system, where the master key is distributed as secret shares among multiple authorities, our method significantly reduces the communication (and in some cases, computation) overhead for the authorities. It achieves this by making their costs for key issuance independent of the batch size. We present a concrete instantiation of a Batched IBE scheme based on the KZG polynomial commitment scheme by Kate et al. (Asiacrypt'10) and a modified form of the BLS signature scheme by Boneh et al. (Asiacrypt'01). The construction is proven secure in the generic group model (GGM). In a blockchain setting, the new construction can be used for achieving mempool privacy by encrypting transactions to a block, opening only the transactions included in a given block and hiding the transactions that are not included in it. With the thresholdized version, multiple authorities (validators) can collaboratively manage the decryption process. 
Other possible applications include scalable support via blockchain for fairness of dishonest majority MPC, and conditional batched threshold decryption that can be used for implementing secure Dutch auctions and privacy preserving options trading.
Last updated:  2025-06-21
SV-LLM: An Agentic Approach for SoC Security Verification using Large Language Models
Dipayan Saha, Shams Tarek, Hasan Al Shaikh, Khan Thamid Hasan, Pavan Sai Nalluri, Md. Ajoad Hasan, Nashmin Alam, Jingbo Zhou, Sujan Kumar Saha, Mark Tehranipoor, and Farimah Farahmandi
Ensuring the security of complex system-on-chip (SoC) designs is a critical imperative, yet traditional verification techniques struggle to keep pace due to significant challenges in automation, scalability, comprehensiveness, and adaptability. The advent of large language models (LLMs), with their remarkable capabilities in natural language understanding, code generation, and advanced reasoning, presents a new paradigm for tackling these issues. Moving beyond monolithic models, an agentic approach allows for the creation of multi-agent systems where specialized LLMs collaborate to solve complex problems more effectively. Recognizing this opportunity, we introduce SV-LLM, a novel multi-agent assistant system designed to automate and enhance SoC security verification. By integrating specialized agents for tasks like verification question answering, security asset identification, threat modeling, test plan and property generation, vulnerability detection, and simulation-based bug validation, SV-LLM streamlines the workflow. To optimize their performance in these diverse tasks, agents leverage different learning paradigms, such as in-context learning, fine-tuning, and retrieval-augmented generation (RAG). The system aims to reduce manual intervention, improve accuracy, and accelerate security analysis, supporting proactive identification and mitigation of risks early in the design cycle. We demonstrate its potential to transform hardware security practices through illustrative case studies and experiments that showcase its applicability and efficacy.
Last updated:  2025-06-20
An efficient construction of Raz's two-source randomness extractor with improved parameters
Cameron Foreman, Lewis Wooltorton, Kevin Milner, and Florian J. Curchod
Randomness extractors are algorithms that distill weak random sources into near-perfect random numbers. Two-source extractors enable this distillation process by combining two independent weak random sources. Raz’s extractor (STOC '05) was the first to achieve this in a setting where one source has linear min-entropy (i.e., proportional to its length), while the other has only logarithmic min-entropy in its length. However, Raz's original construction is impractical due to a polynomial computation time of at least degree 4. Our work solves this problem by presenting an improved version of Raz's extractor with quasi-linear computation time, as well as a new analytic theorem with reduced entropy requirements. We provide comprehensive analytical and numerical comparisons of our construction with others in the literature, and we derive strong and quantum-proof versions of our efficient Raz extractor. Additionally, we offer an easy-to-use, open-source code implementation of the extractor and a numerical parameter calculation module.
Last updated:  2025-06-20
Victor Shoup
We present a scheme for verifiably encrypting a Shamir secret sharing to a committee of shareholders. Such a scheme can be used to easily implement distributed key generation (DKG) and resharing protocols used in threshold signing and decryption protocols. Our scheme is a minor variation on known techniques, and is not the most efficient in terms of communication and computational complexity. However, it is extremely simple and easy to implement. Moreover, for moderately sized shareholder committees of up to, say, 13 parties or so, and for applications where a DKG/resharing only needs to be performed occasionally, its performance should be acceptable in practice.
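The object the scheme verifiably encrypts is a standard Shamir secret sharing. As a reminder of what is being distributed to the committee, here is a minimal sketch (ours, not the paper's scheme) of Shamir sharing and reconstruction over a prime field; the modulus and function names are illustrative:

```python
# Illustrative Shamir secret sharing over a prime field (not Shoup's scheme):
# any t+1 shares reconstruct the secret; t or fewer reveal nothing about it.
import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def share(secret, t, n):
    """Split `secret` into n shares via a random degree-t polynomial f with f(0)=secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret

shares = share(123456789, t=2, n=7)
assert reconstruct(shares[:3]) == 123456789   # any t+1 = 3 shares suffice
```

In a DKG or resharing protocol, each shareholder would receive (an encryption of) one `(x, f(x))` pair, and the verifiable-encryption layer lets everyone check that the encrypted shares are consistent.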
Last updated:  2025-06-20
The Effectiveness of Differential Privacy in Real-world Settings: A Metrics-based Framework to help Practitioners Visualise and Evaluate $\varepsilon$
Akasha Shafiq, Abhishek Kesarwani, Dimitrios Vasilopoulos, and Paolo Palmieri
Differential privacy (DP) has emerged as a preferred solution for privacy-preserving data analysis, having been adopted by several leading Internet companies. DP is a privacy-preserving mechanism that protects against re-identification of individuals within aggregated datasets. It is known that the privacy budget $\varepsilon$ determines the trade-off between privacy and utility. In this paper, we propose a novel set of metrics and an easy-to-implement, step-by-step framework to facilitate the implementation of DP mechanisms on real-world datasets and guide the selection of $\varepsilon$ based on the desired privacy vs. utility trade-off. Currently, for a given query there is no widely accepted methodology for selecting $\varepsilon$ and choosing the DP mechanism that offers an optimal trade-off between privacy and utility. To address this gap, we perform experiments on three real-world datasets, aiming to identify the optimal $\varepsilon$ and the suitable mechanism (Laplace or Gaussian), per use case, for the commonly used count, sum and average queries on each dataset. Based on our experimental results, we observe that using our metrics and framework, one can analyse noise distribution charts of multiple queries and choose the suitable $\varepsilon$ and DP mechanism for achieving a balance between privacy and utility. Additionally, we show that the optimal $\varepsilon$ depends on the particular query, the desired accuracy and the context in which DP is implemented, which suggests that an arbitrary, a priori selection of $\varepsilon$ cannot provide adequate results. Our framework prioritises the plotting and visualisation of values and results in the DP analysis, making its adoption easy for a wider audience.
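As background for the privacy-utility trade-off studied here, the following minimal sketch (ours, not the paper's framework; all names are illustrative) applies the Laplace mechanism to a count query. A count query has sensitivity 1, so noise drawn from Laplace(0, 1/ε) yields ε-DP; smaller ε means more noise, i.e. more privacy and less utility:

```python
# Minimal Laplace-mechanism sketch for a count query (illustrative only).
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling of a centered Laplace distribution with parameter `scale`.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(data, predicate, epsilon):
    # Count queries have sensitivity 1, so scale = 1/epsilon gives epsilon-DP.
    true_count = sum(1 for x in data if predicate(x))
    return true_count + laplace_noise(1.0 / epsilon)

data = list(range(1000))
for eps in (0.1, 1.0, 10.0):
    noisy = dp_count(data, lambda x: x % 2 == 0, eps)
    print(f"epsilon={eps}: noisy even-count = {noisy:.1f} (true = 500)")
```

Running this repeatedly for each ε is one way to produce the kind of noise-distribution chart the framework uses to visualise the trade-off.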
Last updated:  2025-06-20
Guarding the Signal: Secure Messaging with Reverse Firewalls
Yevgeniy Dodis, Bernardo Magri, Noah Stephens-Davidowitz, and Yiannis Tselekounis
Secure messaging protocols allow users to communicate asynchronously over untrusted channels with strong guarantees of privacy, authenticity, forward secrecy, and post-compromise security. However, traditional security analyses of these protocols assume complete trust in the hardware and software of honest participants, overlooking a significant class of real-world threats known as subversion attacks. These attacks alter cryptographic algorithms to compromise security by exfiltrating secrets or creating vulnerabilities that often go undetected. The notion of reverse firewalls (EC'15) aims to protect against subversion attacks by introducing a third party, called a "reverse firewall" (RF), which sits between a party and the outside world and modifies its outgoing and incoming messages in such a way that, even if the party's machine has been corrupted (in a way that maintains functionality), security is still preserved. Importantly, the firewall shares no private information with the parties, and parties put no more trust in the firewall than they do in the communication channel. In this work, we address the existing gap between secure messaging and subversion attacks by presenting several key contributions: - We design the first subversion-resilient secure messaging protocol based on the model of RF. Our protocol is based on the Signal protocol---the current state-of-the-art in two-party secure messaging, though it lacks subversion resilience---and achieves subversion resilience with only constant overhead over Signal. - We develop a subversion-resilient version of the X3DH protocol in the RF model. X3DH is a core component that facilitates secure initial key agreement in Signal's protocol. - We introduce and formalize the notion of Continuous Key Agreement with Tamper Detection, an essential concept for subversion-resilient secure messaging.
Our notion enables parties to continuously agree on keys, even in the presence of active adversaries capable of partially tampering with the key exchange transcript. We present a construction of our notion and prove its subversion resilience in the model of RF.
Last updated:  2025-06-20
Circular Insecure Encryption: from Long Cycles to Short Cycles
Zehou Wu
A length-$n$ encryption cycle consists of a sequence of $n$ keys, each encrypting the next, forming a cycle; an encryption scheme is $n$-circular secure if a length-$n$ encryption cycle is computationally indistinguishable from encryptions of zeros. An interesting problem is whether CPA security implies circular security. This has been shown to be false: using standard cryptographic assumptions and LWE, it was shown that within the class of CPA-secure encryption schemes, for any $n$, there exists an $n$-circular insecure encryption scheme. Furthermore, there exists a particular encryption scheme that is $\ell$-circular insecure for all $\ell$. Following these results, it is natural to ask whether circular insecurity at a particular length implies circular insecurity at different lengths, or at multiple lengths. In this paper, we answer this question in the affirmative. We constructively prove that a CPA-secure encryption scheme that is insecure in the presence of encryption cycles of length $(n+1)$ implies the existence of such a scheme for encryption cycles of any length less than $(n+1)$. The constructed $(\le n)$-circular insecure scheme may have the same message space as the $(n+1)$-circular insecure encryption scheme, and our results apply to both the public-key and symmetric-key settings.
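To make the object of study concrete, the following toy sketch (ours, not from the paper) builds a length-$n$ encryption cycle using a simple hash-based symmetric scheme, Enc_k(m) = (r, H(k || r) XOR m), which is CPA secure only heuristically, with SHA-256 modeled as a random oracle:

```python
# Toy illustration of a length-n encryption cycle: keys k_1, ..., k_n with
# k_i encrypting k_{i+1} and k_n wrapping around to encrypt k_1.
import hashlib
import os

KEYLEN = 32  # key length matches the SHA-256 output so the pad covers the message

def enc(key, msg):
    r = os.urandom(KEYLEN)                      # fresh randomness per encryption
    pad = hashlib.sha256(key + r).digest()
    return r, bytes(a ^ b for a, b in zip(pad, msg))

def dec(key, ct):
    r, body = ct
    pad = hashlib.sha256(key + r).digest()
    return bytes(a ^ b for a, b in zip(pad, body))

n = 4
keys = [os.urandom(KEYLEN) for _ in range(n)]
cycle = [enc(keys[i], keys[(i + 1) % n]) for i in range(n)]

# n-circular security asks that `cycle` be indistinguishable from encryptions
# of all-zero strings; here we simply check that the cycle decrypts correctly.
assert all(dec(keys[i], cycle[i]) == keys[(i + 1) % n] for i in range(n))
```

The circular-insecurity results concern schemes deliberately constructed so that an adversary, given such a `cycle`, can distinguish it from encryptions of zeros despite the scheme being CPA secure.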
Last updated:  2025-06-20
Privacy-aware White and Black List Searching for Fraud Analysis
William J Buchanan, Jamie Gilchrist, Zakwan Jaroucheh, Dmitri Timosenko, Nanik Ramchandani, and Hisham Ali
In many areas of cybersecurity, we require access to Personally Identifiable Information (PII), such as names, postal addresses and email addresses. Unfortunately, this can lead to data breaches, especially in relation to data compliance regulations such as GDPR. An Internet Protocol (IP) address is an identifier assigned to a networked device to enable it to communicate over networks that use IP. Thus, in privacy-aware applications, we may aim to hide the IP address while still determining whether it appears on a blacklist. One solution is to use homomorphic encryption to match an encrypted version of an IP address against an encrypted version of a blacklisted network list. In this paper, we use the OpenFHE library \cite{OpenFHE} to encrypt network addresses with the BFV homomorphic encryption scheme. To assess the performance overhead of BFV, we implement a matching method using the OpenFHE library and compare it against partially homomorphic schemes, including Paillier, Damgard-Jurik, Okamoto-Uchiyama, Naccache-Stern and Benaloh. The main finding is that the BFV method compares favourably against the partially homomorphic methods in most cases.
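As a toy illustration of the matching idea (ours; the paper's implementation uses OpenFHE and BFV, and everything below, including the tiny primes and function names, is for exposition only), the following sketch uses textbook Paillier, one of the partially homomorphic schemes compared. Its additive homomorphism enables a blind equality test: Enc(x) * Enc(-b), raised to a random power, decrypts to 0 iff x == b:

```python
# Toy Paillier with small primes, plus a blind equality test for blacklist
# matching. NOT secure: real deployments need ~1536-bit or larger primes.
import ipaddress
import math
import random

p, q = 65537, 65539          # toy primes, just large enough to hold a packed IPv4
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)         # valid decryption helper because we fix g = n + 1

def enc(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    u = pow(c, lam, n2)
    return ((u - 1) // n) * mu % n

def hit(c_addr, blacklisted):
    """Blind equality test: result decrypts to 0 iff the hidden address == blacklisted."""
    diff = (c_addr * enc(n - blacklisted)) % n2   # Enc(x - b mod n), additively
    return pow(diff, random.randrange(1, n), n2)  # Enc(r*(x - b)): 0 or random-looking

addr = int(ipaddress.IPv4Address("10.0.0.1"))
blacklist = [int(ipaddress.IPv4Address(a)) for a in ("10.0.0.1", "192.168.1.7")]

c = enc(addr)                                     # the client reveals only this
print([dec(hit(c, b)) == 0 for b in blacklist])   # → [True, False]
```

The server learns only whether each blacklist entry matched, never the plaintext address; BFV supports the same pattern with batched slots, which is where the paper's performance comparison comes in.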
Last updated:  2025-06-20
Beyond LWE: a Lattice Framework for Homomorphic Encryption
Alberto Leporati, Lorenzo Rovida, and Wessel van Woerden
We suggest a generalization of homomorphic encryption (HE) schemes from a purely geometrical and lattice-based perspective. All the current reference HE schemes are based on the ring version of the Learning with Errors (LWE) problem. In this proposal, we first investigate LWE-based cryptosystems from a lattice point of view and present a framework that allows one to obtain the same result, in geometrical terms, from any lattice — as long as it contains a sufficiently short trapdoor vector. More precisely, we generalize the classical BGV (Brakerski, Gentry and Vaikuntanathan, ITCS '12) and GSW (Gentry, Sahai and Waters, CRYPTO '13) schemes to purely lattice-based variants, which we call Lattice-BGV and Lattice-GSW. By abstracting away the particular hardness assumption, our lattice framework can be instantiated with a broader range of lattices and hardness assumptions. For example, LWE gives a natural trapdoor for random $q$-ary lattices, and when plugged into our framework one obtains the original BGV and GSW schemes, while in this work we also consider an instantiation based on the Lattice Isomorphism Problem (LIP), leading to the first more advanced cryptographic scheme built from LIP$^*$. Our framework also gives a natural geometrical explanation of HE procedures and generalizes some properties, such as the ability to store many messages in a single ciphertext, one for each short trapdoor vector, without relying on any particular algebraic structure. $^*$ In a concurrent work, Branco, Malavolta and Maradni (ePrint 2025/993) propose an alternative LIP-based FHE construction.
Last updated:  2025-06-20
What Have SNARGs Ever Done for FHE?
Michael Walter
In recent years, there have been several constructions combining FHE with SNARGs to add integrity guarantees to FHE schemes. Most of these works focused on improving efficiency, while the precise security model with regard to client-side input privacy has remained understudied. Only recently was it shown by Manulis and Nguyen (Eurocrypt'24) that this combination does not yield IND-CCA1 security. An interesting open question is therefore: does the SNARG actually add any meaningful security to input privacy? We address this question in this note and give a security definition that meaningfully captures the security of the FHE-plus-SNARG construction.
Last updated:  2025-06-20
Optimized Rank Sort for Encrypted Real Numbers
Seunghu Kim, Eymen Ünay, Ayse Yilmazer-Metin, and Hyung Tae Lee
Sorting arrays encrypted under fully homomorphic encryption remains a fundamental challenge due to the high cost of private comparisons and the incompatibility of conventional sorting algorithms with the encrypted domain. Recently, Hong et al. (IEEE Transactions on Information Forensics and Security, 2021) proposed a $k$-way sorting network tailored to encrypted real numbers, but its reliance on multiple comparison stages incurs substantial multiplicative depth and significant bootstrapping overhead, even for modest array sizes. In this work, we propose a novel rank-based sorting algorithm for encrypted real numbers that performs only a single comparison stage, thereby eliminating the need for bootstrapping operations. Our empirical evaluation demonstrates that the proposed method significantly outperforms the $k$-way approach for small to medium array sizes $(n\leq 1024)$, achieving a $46.91\times$ speedup at $n=256$ with a total runtime of $79$ seconds. Furthermore, we examine the recent matrix-based rank sort method by Mazzone et al. (USENIX Security '25) and show that integrating our optimized rank construction improves its efficiency. Specifically, we achieve $1.77\times$ and $2.43\times$ performance gains for $n=128$ and $n=512$, respectively.
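The core idea of a single comparison stage can be illustrated in plaintext: compute all pairwise comparisons once, derive each element's rank (ties broken by index so ranks form a permutation), and place the element at that position. This is a hedged sketch with names of our choosing; under FHE the comparisons become homomorphic sign evaluations and the routing becomes masked sums, which this plain Python does not attempt:

```python
def rank_sort(a):
    # Rank-based sort: one stage of all pairwise comparisons yields each
    # element's rank, which is directly its position in the output.
    n = len(a)
    out = [None] * n
    for i in range(n):
        # rank = number of elements strictly smaller, with index tie-breaking
        rank = sum(1 for j in range(n)
                   if a[j] < a[i] or (a[j] == a[i] and j < i))
        out[rank] = a[i]
    return out

assert rank_sort([3.1, -2.0, 3.1, 0.5]) == [-2.0, 0.5, 3.1, 3.1]
```

Because every comparison happens in this single stage, the multiplicative depth of the encrypted analogue stays constant in the array size, which is what removes the bootstrapping that multi-stage sorting networks require.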
Last updated:  2025-06-20
Understanding Lasso: A Novel Lookup Argument Protocol
Oleg Fomenko and Anton Levochko
In 2023, Srinath Setty, Justin Thaler, and Riad Wahby published a paper describing a novel lookup argument with efficient verification, called Lasso. We present a focused and accessible overview of the Lasso lookup argument, which forms a foundational component of the Jolt zkVM. This article distills the core principles behind Lasso: the sum-check protocol, multilinear polynomials and their extensions, the Spark commitment, offline memory-checking, and Surge, the evolution of Spark. By clarifying the underlying protocols and their relationship to innovations like Spark and Surge, we aim to provide researchers and engineers with practical insights into the cryptographic foundations powering both Lasso and the Jolt virtual machine.
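Of the components listed, the multilinear extension is the easiest to make concrete: the MLE of a table $f:\{0,1\}^k\to\mathbb{F}$ is the unique multilinear polynomial agreeing with $f$ on the Boolean cube, and sum-check reasons about sums of such polynomials. A brute-force evaluator over a toy prime field (the field choice and function names are ours, not the article's):

```python
from itertools import product

P = 2**61 - 1  # toy prime field; any sufficiently large field works

def mle_eval(table, r):
    # Evaluate the multilinear extension of f: {0,1}^k -> F at r in F^k,
    # where table[idx] = f(b) and idx packs b with bit i at position i.
    k = len(r)
    acc = 0
    for b in product((0, 1), repeat=k):
        idx = sum(bit << i for i, bit in enumerate(b))
        # eq(r, b) = prod_i (r_i b_i + (1 - r_i)(1 - b_i)) selects b = r
        # on the cube and interpolates multilinearly everywhere else.
        eq = 1
        for ri, bi in zip(r, b):
            eq = eq * ((ri * bi + (1 - ri) * (1 - bi)) % P) % P
        acc = (acc + table[idx] * eq) % P
    return acc

table = [7, 1, 4, 9]                   # f on {0,1}^2
assert mle_eval(table, [0, 1]) == 4    # agrees with f at cube point (0, 1)
```

Off the cube the value interpolates multilinearly, e.g. `mle_eval(table, [2, 0])` equals $(1-2)\cdot 7 + 2\cdot 1 \bmod P$; Lasso's prover exploits the structure of such evaluations rather than this exponential-time loop.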
Last updated:  2025-06-20
On Frontrunning Risks in Batch-Order Fair Systems for Blockchains (Extended Version)
Eunchan Park, Taeung Yoon, Hocheol Nam, Deepak Maram, and Min Suk Kang
In timing-sensitive blockchain applications, such as decentralized finance (DeFi), achieving first-come-first-served (FCFS) transaction ordering among decentralized nodes is critical to prevent frontrunning attacks. Themis [CCS'23], a state-of-the-art decentralized FCFS ordering system, has become a key reference point for high-throughput fair ordering systems for real-world blockchain applications, such as rollup chains and decentralized sequencing, and has influenced the design of several subsequent proposals. In this paper, we critically analyze its core system property of practical batch-order fairness and evaluate the frontrunning resistance claim of Themis. We present the Ambush attack, a new frontrunning technique that achieves nearly 100% success against the practical batch-order fair system with only a single malicious node and negligible attack costs. This attack causes a subtle temporary information asymmetry among nodes, which is allowed due to the heavily optimized communication model of the system. We identify a fundamental trade-off between security and performance in these systems: enforcing timely dissemination of transaction information among nodes (to mitigate frontrunning) can easily lead to non-negligible network overheads (thus degrading overall throughput). We show that it is nevertheless possible to balance the two by delaying transaction dissemination to a level tolerable for frontrunning mitigation while maintaining high throughput. Our evaluation demonstrates that the proposed delayed gossiping mechanism can be seamlessly integrated into existing systems with only minimal changes.
Last updated:  2025-06-20
How to Model Unitary Oracles
Mark Zhandry
We make the case for modeling unitary oracles by allowing for controlled access to the oracle as well as its conjugate transpose (inverse), but also its conjugate and transpose. Controlled access and conjugate transposes are common, even standard, whereas conjugates and transposes appear to be non-standard. In order to justify our modeling, we give several formal examples of what goes wrong or is missed when using a more restrictive modeling. We also argue that our model is the "right" level of granularity, and that other transformations likely do not correspond to efficient computation. We also discuss other modeling choices, such as ancillas and approximation error. Through our exploration, we uncover interesting phenomena. Examples include an attack on the recent pseudorandom unitary construction of Ma and Huang (STOC'25) if it is incorrectly used as a publicly evaluatable unitary, and a quantum complexity-theoretic separation that follows from a purely classical separation.
Last updated:  2025-06-19
Security Analysis on a Public-Key Inverted-Index Keyword Search Scheme with Designated Tester
Mizuki Hayashi and Keita Emura
Gao et al. (IEEE Internet of Things Journal 2024) proposed public-key inverted-index keyword search with designated tester as an extension of public key encryption with keyword search (PEKS). In their scheme, a server (a tester) holds a secret key and uses it to run the search algorithm, due to the designated-tester setting. They proved that no information about the keyword is revealed by trapdoors under the decisional Diffie-Hellman (DDH) assumption. However, they also employed a symmetric pairing, which can be seen as a DDH solver. It is therefore to be expected that keyword information is revealed by trapdoors, since the underlying complexity assumption does not hold. In this paper, we demonstrate two attacks against the Gao et al. scheme in which keyword information is revealed by a trapdoor. The first attack requires only the server's secret key in addition to the challenge trapdoor, without any additional encryption/trapdoor queries. We remark that an adversary is not allowed to obtain the server's secret key in their security model, so our attack lies outside of that model; we therefore discuss the roles of the server and stress that our attack scenario is reasonable. The second attack does not employ the server's secret key and instead exploits the linkability of two trapdoors. Both attacks cost just two pairing computations and are thus entirely feasible.
Last updated:  2025-06-19
Threshold Signatures Reloaded: ML-DSA and Enhanced Raccoon with Identifiable Aborts
Giacomo Borin, Sofía Celi, Rafael del Pino, Thomas Espitau, Guilhem Niot, and Thomas Prest
Threshold signatures enable multiple participants to collaboratively produce a digital signature, ensuring both fault tolerance and decentralization. As we transition to the post-quantum era, lattice-based threshold constructions have emerged as promising candidates. However, existing approaches often struggle to scale efficiently, lack robustness guarantees, or are incompatible with standard schemes — most notably, the NIST-standard ML-DSA. In this work, we explore the design space of Fiat-Shamir-based lattice threshold signatures and introduce the two most practical schemes to date. First, we present an enhanced TRaccoon-based [DKM+24] construction that supports up to 64 participants with identifiable aborts, leveraging novel short secret-sharing techniques to achieve greater scalability than previous state-of-the-art methods. Second — and most importantly — we propose the first practical ML-DSA-compatible threshold signature scheme, supporting up to 6 users. We provide full implementations and benchmarks of our schemes, demonstrating their practicality and efficiency for real-world deployment as protocol messages are computed in at most a few milliseconds, and communication cost ranges from 10.5 kB to 525 kB depending on the threshold.
Last updated:  2025-06-19
Scalable Multi-Server Private Information Retrieval
Ashrujit Ghoshal, Baitian Li, Yaohua Ma, Chenxin Dai, and Elaine Shi
We revisit multi-server Private Information Retrieval (PIR), where the client interacts with $S$ non-colluding servers. Ideally, we want a *scalable* family of multi-server PIR schemes where all the performance metrics of the scheme decrease as $S$ increases. However, no prior work achieved scalability under any setting, and any hardness assumption. In this paper we construct new multi-server, information-theoretically secure *scalable* PIR schemes for three natural settings. First, we give a construction where all the performance metrics scale at equal rate. Second, we give a scalable construction that minimizes the per-query bandwidth. Third, we give a scalable construction that minimizes the per-query online bottleneck cost (the maximum of the bandwidth and computation). For the first two settings, our constructions are *doubly efficient* with only a super-constant number of servers. In comparison, the best known prior works in the information-theoretic setting required super-logarithmically many servers to achieve the doubly efficient notion. Our techniques for achieving scalable PIR also enable us to advance the state of the art in the polynomial space setting. In this setting, we show how to improve the space consumption of prior works by a polynomial factor while preserving all other metrics. Further, we show a new balancing technique that allows us to further minimize the bandwidth per query by trading off the computation and server space, thus enabling a more smooth tradeoff between the metrics and generalizing the design space.
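For context, the textbook information-theoretic two-server PIR (the $S=2$ baseline, not one of the paper's scalable constructions) can be sketched in a few lines: the client sends a uniformly random subset of indices to one server and the same subset with the target index flipped to the other; XORing the two answers cancels everything except the target record, while each server alone sees a uniform query:

```python
import os, secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def pir_query(n: int, i: int):
    # Share the singleton {i} as two masks: a random subset, and the same
    # subset with bit i flipped. Each mask alone is uniformly random.
    mask1 = [secrets.randbits(1) for _ in range(n)]
    mask2 = mask1.copy()
    mask2[i] ^= 1
    return mask1, mask2

def pir_answer(db, mask):
    # Each server XORs together the records its mask selects.
    acc = bytes(len(db[0]))
    for rec, bit in zip(db, mask):
        if bit:
            acc = xor_bytes(acc, rec)
    return acc

db = [os.urandom(8) for _ in range(16)]
q1, q2 = pir_query(len(db), 5)
# Every record except db[5] appears in both answers and cancels.
assert xor_bytes(pir_answer(db, q1), pir_answer(db, q2)) == db[5]
```

Note the per-query cost here is linear in the database for every server regardless of $S$; the paper's scalability question is precisely how to make bandwidth, computation, and space all shrink as the number of servers grows.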
Last updated:  2025-06-19
Automated Analysis and Synthesis of Message Authentication Codes
Stefan Milius, Dominik Paulus, Dominique Schröder, Lutz Schröder, and Julian Thomas
Message Authentication Codes (MACs) represent a fundamental symmetric-key primitive, serving to ensure the authenticity and integrity of transmitted data. As a building block in authenticated encryption and in numerous deployed standards, including TLS, IPsec, and SSH, MACs play a central role in practice. Consequently, MACs have been subject to extensive research, leading to prominent schemes such as HMAC, CBC-MAC, and LightMAC. Despite the existence of various MACs, there is still considerable interest in creating schemes that are more efficient, potentially parallelizable, or have specific non-cryptographic attributes, such as being patent-free. In this context, we introduce an automated method for analyzing and synthesizing MAC schemes. To achieve this goal, we have constructed a framework that restricts the class of MACs in such a way that it is sufficiently expressive to cover known constructions, yet also admits automated reasoning about the security guarantees of both known and new schemes. Our automated analysis has identified a novel category of MACs, termed "hybrid" MACs. These MACs operate by processing multiple blocks concurrently, with each block managed by a different, specified MAC scheme. A key finding is that in certain scenarios the hybrid MAC marginally outperforms the simultaneous operation of the individual MACs. This improvement is attributed to the hybrid approach exploiting the strengths and compensating for the weaknesses of each distinct MAC scheme involved. Our implementation confirms that we have identified new schemes with performance comparable to state-of-the-art schemes, which in some settings appear to be slightly more efficient.
Last updated:  2025-06-19
Parasol Compiler: Pushing the Boundaries of FHE Program Efficiency
Rick Weber, Ryan Orendorff, Ghada Almashaqbeh, and Ravital Solomon
Fully Homomorphic Encryption (FHE) is a key technology to enable privacy-preserving computation. While optimized FHE implementations already exist, the inner workings of FHE are technically complex. This makes it challenging, especially for non-experts, to develop highly-efficient FHE programs that can exploit the advanced hardware of today. Although several compilers have emerged to help in this process, due to design choices, they are limited in terms of application support and the efficiency levels they can achieve. In this work, we showcase how to make FHE accessible to non-expert developers while retaining the performance provided by an expert-level implementation. We introduce Parasol, a novel end-to-end compiler encompassing a virtual processor with a custom Instruction Set Architecture (ISA) and a low-level library that implements FHE operations. Our processor integrates with existing compiler toolchains, thereby providing mainstream language support. We extract parallelism at multiple levels via our processor design and its computing paradigm. Specifically, we champion a Circuit Bootstrapping (CBS)-based paradigm, enabling efficient FHE circuit composition with multiplexers. Furthermore, Parasol’s underlying design highlights the benefits of expressing FHE computations at a higher level—producing highly compact program representations. Our experiments demonstrate the superiority of Parasol, in terms of runtime (up to 17x faster), program size (up to 22x smaller), and compile time (up to 32x shorter) compared to the current state-of-the-art. We expect the FHE computing paradigm underlying Parasol to attract future interest since it exposes added parallelism for FHE accelerators to exploit.
Last updated:  2025-06-19
Man-in-the-Middle and Key Recovery Attacks against QP-KEM
Nick Aquina, Simon Rommel, and Idelfonso Tafur Monroy
The Q-problem has been introduced as a new post-quantum hard problem. We present two man-in-the-middle and three key recovery attacks against the key exchange protocol based on the Q-problem. The man-in-the-middle attacks take negligible time and allow the attacker to recover the exchanged key. The most effective key recovery attack has a computational complexity of $2^{40}$. We also propose countermeasures against all attacks.
Last updated:  2025-06-19
Efficient, Scalable Threshold ML-DSA Signatures: An MPC Approach
Alexander Bienstock, Leo de Castro, Daniel Escudero, Antigoni Polychroniadou, and Akira Takahashi
A threshold signature is an advanced protocol that splits a secret signing key among multiple parties, allowing any subset above a threshold to jointly generate a signature. While post-quantum (PQ) threshold signatures are actively being studied --- especially in response to NIST's recent call for threshold schemes --- most existing solutions are tailored to specially designed, threshold-friendly signature schemes. In contrast, many real-world applications, such as distributed certificate authorities and digital currencies, require signatures that remain verifiable under the standardized verification procedures already in use. Given NIST's recent standardization of PQ signatures and ongoing industry deployment efforts, designing an efficient threshold scheme that interoperates with NIST-standardized verification remains a critical open problem. In this work, we present the first efficient and scalable solution for multi-party generation of the module-lattice digital signature algorithm (ML-DSA), one of NIST's PQ signature standards. Our contributions are two-fold. First, we present a variant of the ML-DSA signing algorithm that is amenable to efficient multi-party computation (MPC) and prove that this variant achieves the same security as the original ML-DSA scheme. Second, we present several efficient & scalable MPC protocols to instantiate the threshold signing functionality. Our protocols can produce threshold signatures with as little as 100 KB (per party) of online communication per rejection-sampling round. In addition, we instantiate our protocols in the honest-majority setting, which allows us to avoid any additional public key assumptions. The signatures produced by our protocols verify under the same implementation of ML-DSA verification for all three security levels. Thus, signatures and verification keys of our scheme are (naturally) the same size as that of ML-DSA; previous lattice-based threshold schemes could not match both of these sizes. 
Overall, our solution is the only method for producing threshold ML-DSA signatures compatible with NIST-standardized verification that scales to an arbitrary number of parties, without any new assumptions.
Last updated:  2025-06-19
On generalizations of the Lai-Massey scheme
Lorenzo Grassi
In this paper, we re-investigate the Lai-Massey scheme, originally proposed in the cipher IDEA. Due to its similarity with Feistel networks, and due to the existence of invariant subspace attacks, as originally pointed out by Vaudenay at FSE 1999, the Lai-Massey scheme has received little attention from the community. As a first contribution, we propose two new generalizations of this scheme that are not (extended-)affine equivalent to any generalized Feistel network proposed in the literature so far. Then, inspired by the recent Horst construction, we propose the Amaryllises structure as a generalization of the Lai-Massey scheme, in which the linear combination in the Lai-Massey scheme can be replaced by a non-linear one. Besides proposing concrete examples of the Amaryllises construction, we analyze its cryptographic properties in the context of MPC-/HE-/ZK-friendly symmetric primitives.
Last updated:  2025-06-19
High-Performance FPGA Accelerator for the Post-quantum Signature Scheme CROSS
Patrick Karl, Francesco Antognazza, Alessandro Barenghi, Gerardo Pelosi, and Georg Sigl
A significant effort in designing and engineering post-quantum cryptosystems is currently ongoing, also as a result of the National Institute of Standards and Technology (NIST) Post-quantum Cryptography (PQC) standardization process that started in 2016 and recently completed selecting two Key Encapsulation Mechanisms (KEMs), CRYSTALS-Kyber and HQC, and three digital signatures, CRYSTALS-Dilithium, Falcon, and SPHINCS+, for standardization. In 2022, NIST launched another standardization effort for additional post-quantum digital signatures, preferably not based on the security assumptions of structured lattices, and with performance better than or equal to that of already standardized schemes (e.g., SPHINCS+). This initiative narrowed down the initial 40 candidates to 14 in October 2024, eliciting public scrutiny of their algorithms and technical evaluation of their performance figures. Among the schemes admitted to the second round of evaluation, the code-based CROSS signature scheme was praised for its speed and its noticeably smaller signatures compared to the standardized version of SPHINCS+. In this work, we present the first RTL hardware design of CROSS tailored for FPGA devices, delineating efficient implementation strategies for the critical components of the cryptographic scheme. Depending on the chosen security level, our design generates a key pair in 9 to 152 µs, signs a message in 404 µs to 5.89 ms, and verifies a signature in 322 µs to 4.38 ms on the NIST reference FPGA, a Xilinx Artix-7 device, proving competitive when compared with other candidates in the on-ramp standardization effort, namely LESS, MEDS, MAYO, Raccoon and SDitH, and comparable to the current standard-selected ML-DSA, FN-DSA, and SLH-DSA in terms of efficiency.
Last updated:  2025-06-19
Black-box Approaches to Authenticated Dictionaries: New Constructions and Lower Bounds
Francesca Falzon, Harjasleen Malvai, and Emanuel Opel
Authenticated dictionaries (ADs) enable secure lookups to a dictionary hosted by an untrusted server and are a key component of various real-world applications, including transparency systems and cryptocurrencies. Despite significant overlap in techniques for building ADs and related primitives, such as memory checkers and accumulators (i.e., authenticated sets), these relationships have yet to be formalized. In this work, we give a rigorous treatment of ADs and prove their precise connection to the latter two cornerstone primitives. We start by laying out the minimal algorithms and security properties needed in practice and introduce a new security notion for ADs called write-committing, which requires update proofs to guarantee an exact count of changes. We prove that any AD built from a black-box authenticated set (AS) makes at least $\Omega(\log n)$ AS calls per lookup and obeys a trade-off between lookups and updates. With optimal lookups, such a scheme requires at least $\Omega(\log n/\log\log n)$ AS calls per update. We also resolve the open question of constructing a secure AD from only black-box access to an AS and present two schemes adhering to the trade-off: one with optimal lookup overhead and the other with higher lookup complexity, but which only requires two AS calls for an update. Finally, we make strides towards unifying memory checkers and ADs. To this end, we present two constructions for memory checkers with black-box access to an AD: one that incurs constant overhead (but needs write-committing) and a second that only requires the AD to be lookup-secure but incurs logarithmic overhead. We then give a simple AD construction using a memory checker as a black-box, with $\mathcal{O}(1)$ overhead. Our results demonstrate the inherent limitations of ADs built from accumulators but lay the foundation for extending existing results on memory checkers and other primitives, such as vector commitments, to ADs.
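The classic lookup-secure AD baseline that such black-box results are measured against is a Merkle tree over sorted key-value pairs: the server returns a value plus a logarithmic-size path of sibling hashes, and the client checks the path against a trusted root. A minimal sketch (power-of-two leaf count assumed; the helper names are ours):

```python
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def build(leaves):
    # leaves: sorted (key, value) pairs; len(leaves) must be a power of two.
    levels = [[H(b"leaf", k, v) for k, v in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(b"node", prev[i], prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels

def prove(levels, idx):
    # Lookup proof: the sibling hash at every level on the path to the root.
    path = []
    for level in levels[:-1]:
        path.append(level[idx ^ 1])
        idx //= 2
    return path

def verify(root, key, val, idx, path):
    h = H(b"leaf", key, val)
    for sib in path:
        h = H(b"node", h, sib) if idx % 2 == 0 else H(b"node", sib, h)
        idx //= 2
    return h == root

kv = [(b"alice", b"1"), (b"bob", b"2"), (b"carol", b"3"), (b"dave", b"4")]
levels = build(kv)
root = levels[-1][0]
assert verify(root, b"bob", b"2", 1, prove(levels, 1))
```

Lookups and updates here each touch $\mathcal{O}(\log n)$ hashes; the paper's lower bounds concern what happens when the underlying primitive is instead an authenticated set accessed as a black box.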
Last updated:  2025-06-19
$\mathsf{DekartProof}$: Efficient Vector Range Proofs and Their Applications
Dan Boneh, Trisha Datta, Rex Fernando, Kamilla Nazirkhanova, and Alin Tomescu
Let $p$ be a prime and consider a committed vector $\vec{v} = (v_1, \ldots, v_m) \in \mathbb{F}_p^m$. We develop new techniques for succinctly proving in zero-knowledge that all the elements of $\vec{v}$ are in the range $\{0,1,\ldots,n\}$ for some $n<p$. We refer to this as a batched zero-knowledge range proof, or a batched ZKRP. This problem comes up often in cryptography: it is needed in publicly verifiable secret sharing (PVSS), confidential transactions, and election protocols. Our approach makes use of a multilinear polynomial commitment scheme and the sum check protocol to efficiently provide a batch range proof for the entire vector. Along the way we introduce a new type of a Polynomial Interactive Oracle Proof (PIOP) we call a Homomorphic PIOP that can be compiled into a SNARK. We use an HPIOP to construct a new efficient zero-knowledge version of the sum check protocol. We compare our new techniques with existing range proofs and lookup arguments.
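For intuition, the arithmetic identity underlying many range proofs can be checked in plain arithmetic: $v \in [0, 2^k)$ iff $v = \sum_i b_i 2^i$ with every $b_i \in \{0,1\}$. This bit-decomposition route is only one standard reduction and not necessarily this paper's technique (which works via sum-check over a multilinear commitment); the sketch below does no cryptography at all:

```python
def bit_decompose(v: int, k: int):
    return [(v >> i) & 1 for i in range(k)]

def range_check(vec, k):
    # A ZKRP proves these two facts about committed values; a batched ZKRP
    # proves them for every entry of a committed vector at once.
    for v in vec:
        bits = bit_decompose(v, k)
        assert all(b * (b - 1) == 0 for b in bits)           # booleanity
        assert v == sum(b << i for i, b in enumerate(bits))  # recomposition
    return True

assert range_check([0, 5, 255], 8)
```

An out-of-range value fails the recomposition check because its high bits are lost, which is exactly the soundness condition a range proof enforces on the committed entries.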
Last updated:  2025-06-19
Abuse Reporting for Metadata-Hiding Communication Based on Secret Sharing
Saba Eskandarian
As interest in metadata-hiding communication grows in both research and practice, a need exists for stronger abuse-reporting features on metadata-hiding platforms. While message franking has been deployed on major end-to-end encrypted platforms as a lightweight and effective abuse-reporting feature, there is no comparable technique for metadata-hiding platforms. Existing efforts to support abuse reporting in this setting, such as asymmetric message franking or the Hecate scheme, require order-of-magnitude increases in client and server computation or fundamental changes to the architecture of messaging systems. As a result, while metadata-hiding communication inches closer to practice, critical content-moderation concerns remain unaddressed. This paper demonstrates that, for broad classes of metadata-hiding schemes, lightweight abuse reporting can be deployed with minimal changes to the overall architecture of the system. Our insight is that much of the structure needed to support abuse reporting already exists in these schemes. By taking a non-generic approach, we can reuse this structure to achieve abuse reporting with minimal overhead. In particular, we show how to modify schemes based on secret-sharing user inputs to support a message-franking-style protocol. Compared to prior work, our shared franking technique more than halves the time to prepare a franked message and gives order-of-magnitude reductions in server-side message processing times, as well as in the time to decrypt a message and verify a report.
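The message-franking primitive being adapted can be sketched as a keyed commitment: the sender commits to the message under a one-time franking key that travels inside the end-to-end ciphertext, the platform sees and tags only the commitment, and a report reveals the opening. A minimal sketch with hypothetical names, not the paper's shared-franking protocol:

```python
import hmac, hashlib, os

def frank(msg: bytes):
    # One-time franking key; in a deployment it is sent to the recipient
    # inside the end-to-end ciphertext, while the tag is visible in clear.
    kf = os.urandom(32)
    tag = hmac.new(kf, msg, hashlib.sha256).digest()
    return kf, tag

def verify_report(msg: bytes, kf: bytes, tag: bytes) -> bool:
    # The moderator recomputes the commitment from the reported opening.
    return hmac.compare_digest(hmac.new(kf, msg, hashlib.sha256).digest(), tag)

kf, tag = frank(b"abusive message")
assert verify_report(b"abusive message", kf, tag)
assert not verify_report(b"forged message", kf, tag)
```

The paper's contribution is performing the commitment step over secret-shared user inputs, so that the metadata-hiding servers can tag messages without ever seeing them.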