All papers in 2025 (1224 results)
Lattice EPID with Efficient Revocation
Enhanced Privacy Identification (EPID) is one of the anonymous authentication mechanisms that have found their way into industry, being deployed in billions of chips and standardized at ISO. The linchpin of EPID lies in its decentralized revocation procedure, which allows one to revoke a signer by simply placing one of its signatures on a signature revocation list (SRL). Each new signature must then include a proof that it has been generated with a key different from those used to produce the signatures on the SRL. In current post-quantum schemes, this proof of non-revocation either relies on general-purpose NIZKs or on regular zero-knowledge proofs (ZKP) with a witness dimension linear in the size of the SRL, which leads to large sizes and/or high computational complexity.
In this paper, we rethink the standard approach to non-revocation so as to avoid its heavy reliance on ZKP. Our construction combines features from different tools (such as Falcon signatures) that are unusual in this context to pull most elements out of the ZKP, leading to significant performance improvements. Providing all these elements unconcealed creates many security challenges for our construction, but we nevertheless manage to address all of them and prove security under well-understood lattice assumptions, in the strong model of Sanders-Traoré (CT-RSA'21) allowing malicious SRLs.
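The revocation mechanism above is easiest to see in the classical discrete-log instantiation of EPID-style pseudonyms, where each signature carries a pair (B, K = B^f) for the signer's secret f. The toy Python sketch below (illustrative parameters of my own choosing, with the condition checked directly rather than proved in zero knowledge as the paper requires) shows how one leaked signature on the SRL revokes a key:

```python
# Toy sketch of EPID-style signature-based revocation (discrete-log version,
# no zero knowledge): each signature carries a pseudonym pair (B, K = B^f)
# for the signer's secret f. A signer is revoked by placing one of its
# pairs on the SRL; a fresh signature must show K_i != B_i^f for every entry.
# Illustrative only; the paper's scheme is lattice-based.
import secrets

P = 2**127 - 1          # a prime modulus (toy group Z_P*, not secure)

def sign_pseudonym(f):
    """Return the (B, K) pseudonym pair embedded in a signature."""
    B = secrets.randbelow(P - 2) + 2     # fresh random base per signature
    return B, pow(B, f, P)

def non_revoked(f, srl):
    """The signer's non-revocation condition; the real scheme proves this
    in zero knowledge instead of evaluating it with f in the clear."""
    return all(pow(B_i, f, P) != K_i for (B_i, K_i) in srl)

f_honest, f_revoked = secrets.randbelow(P), secrets.randbelow(P)
srl = [sign_pseudonym(f_revoked)]        # one leaked signature revokes the key
assert non_revoked(f_honest, srl)
assert not non_revoked(f_revoked, srl)
```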
An Update to ``Polynomial Hashing over Prime Order Fields''
New state-of-the-art assembly implementations show that BRWHash is consistently faster than polyHash, t-BRWHash, and d-2LHash for all message lengths and for both primes considered.
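For readers unfamiliar with the baseline, the polyHash construction is a Horner-rule polynomial evaluation over a prime field. A minimal Python sketch (the prime below is only an example, not necessarily one benchmarked in the update):

```python
# A minimal polynomial hash (the "polyHash" baseline): interpret the message
# as field elements m_1..m_l and evaluate m_1*r^l + ... + m_l*r via Horner.
P = 2**130 - 5   # Poly1305-style prime, used here only for illustration

def poly_hash(blocks, r, p=P):
    acc = 0
    for m in blocks:
        acc = (acc + m) * r % p    # Horner's rule: one mul + one add per block
    return acc

# BRW (Bernstein-Rabin-Winograd) polynomials compute an l-block hash with
# roughly l/2 field multiplications instead of l, which is where BRWHash's
# speed advantage over polyHash comes from.
print(poly_hash([1, 2, 3], r=123456789))
```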
Efficient Pseudorandom Correlation Generators over $\mathbb{Z}_{2^k}$
Modern efficient secure multi-party computation (MPC) protocols typically follow an offline-online design, where offline protocols produce a sufficient amount of correlated randomness to be consumed during the online phases. The past decades have witnessed the maturing of efficient online protocols for computing circuits over either arbitrary finite fields or rings $\mathbb{Z}_{2^k}$. In particular, protocols tailored for $\mathbb{Z}_{2^k}$ arithmetic have achieved better concrete efficiency in most real-life applications, as it naturally captures modern CPU architectures. On the other hand, a recent paradigm of {\em pseudorandom correlation generators} (PCG) initiated by Boyle et al. (CCS'18, Crypto'19) opens a door to efficient preprocessing with sublinear communication. Since then, PCGs have been extensively studied and developed to produce various types of correlations required by online protocols. Although Li et al. (EuroCrypt'25) recently took a significant step forward and proposed efficient PCGs for arbitrary finite fields, the current state of PCGs for rings remains unsatisfactory.
Motivated by the great demand for efficiently generating correlations over rings, we investigate PCGs for general Galois rings, which simultaneously unify finite fields and integer rings modulo $2^k$. In summary, we establish the following results:
(i) We generalize the state-of-the-art PCG constructions for oblivious linear evaluations (OLE) over Galois fields to {\em arbitrary Galois rings}, based on Galois theory and the Hensel lift (see the sketch after this list). Moreover, our PCGs for Galois rings are as efficient as PCGs for fields: for OLE correlations over $\mathbb{Z}_{2^k}$, where $k$ is an arbitrary positive integer, our communication and computation costs match those of the field case. In comparison, to our best knowledge, previous approaches incur communication at least linear in $k$.
(ii) We extend the above OLE construction to provide various types of correlations over any Galois ring. One fascinating application is an efficient PCG for two-party SPD$\mathbb{Z}_{2^k}$ authenticated multiplication triples (Crypto'18). For SPD$\mathbb{Z}_{2^k}$ triples, our approach requires communication sublinear in the number of triples. Concrete evaluations show that our method significantly outperforms existing schemes based on homomorphic encryption.
(iii) In addition, our PCGs for Galois rings also enable multi-party multiplication triple generation, yielding the first efficient MPC protocol for arithmetic circuits over $\mathbb{Z}_{2^k}$ with \emph{silent} and \emph{sublinear} preprocessing. Additional applications include circuit-dependent preprocessing and matrix multiplication triples, which are of independent interest.
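For readers unfamiliar with the Hensel lift invoked in item (i), the following minimal Python sketch (with a concrete example polynomial of my own choosing, not one from the paper) shows how a simple root modulo 2 lifts to a root modulo $2^k$, which is the mechanism that lets field-based constructions transfer to Galois rings:

```python
# Hensel lifting: a simple root of f mod 2 (i.e., one where f'(r) is odd)
# lifts uniquely to a root mod 2**k via Newton iteration.
def hensel_lift(f, df, r, k):
    """Lift a root r of f mod 2 (with df(r) odd) to a root mod 2**k."""
    mod = 2
    while mod < 2**k:
        mod = min(mod * mod, 2**k)           # Newton doubles the precision
        inv = pow(df(r), -1, mod)            # df(r) is a unit mod 2**k
        r = (r - f(r) * inv) % mod
    return r

f  = lambda x: x**2 + x + 2                  # f(1) = 4 = 0 mod 2, f'(1) = 3 odd
df = lambda x: 2*x + 1
r = hensel_lift(f, df, 1, 16)
assert f(r) % 2**16 == 0                     # r is a root modulo 2^16
print(r)
```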
SoK: Reassessing Side-Channel Vulnerabilities and Countermeasures in PQC Implementations
Post-Quantum Cryptography (PQC) algorithms should remain secure even in the presence of quantum computers. Although the security of such schemes is guaranteed at the algorithmic level, real-world implementations often suffer from other vulnerabilities like Side-Channel Attacks (SCA). This Systematization of Knowledge (SoK) paper investigates side-channel attacks targeting implementations of PQC algorithms. This work categorizes attacks from an adversarial perspective to identify the most vulnerable components of the algorithms' implementations and highlights unexplored parts in current implementations. In addition, it reviews and analyzes the efficiency and efficacy of existing countermeasures to SCA in current hardware implementations. This approach helps identify countermeasures that provide broader protection and highlights characteristics needed for future secure implementations. Our findings offer guidance in strengthening existing systems and developing more efficient defenses against side-channel attacks.
EWEMrl: A White-Box Secure Cipher with Longevity
We propose the first updatable white-box secure cipher, EWEMrl, and its natural extension EWEMxl, both achieving longevity against non-adaptive read-only malware. The notion of longevity, introduced by Koike et al., addresses continuous code leakage and is stronger than incompressibility. Yoroi claimed longevity but was broken by Isobe and Todo. Given the prevalence of continuous leakage, developing such ciphers is crucial in white-box cryptography. Precisely, we have the following.
• We first introduce EWEMr (Extended WEM against non-adaptive read-only adversaries), a generalization of WEM (White-box Even-Mansour). WEM is the first (and possibly only) white-box cipher based on EM, replacing its key addition layer with a secret Sbox. EWEMr achieves a high space-hardness bound, with a new generic proof strategy, but does not provide longevity. Instead, it serves as the base for EWEMrl.
• We also present EWEMx, which uses EWEMr as subroutines and is secure in the stronger adaptive model. While EWEMx does not achieve longevity, it is the base design for EWEMxl.
• We next propose EWEMrl, which is the first cipher to achieve longevity against non-adaptive read-only adversaries. No existing ciphers, such as SPNbox and SPACE, are designed for longevity. We show that EWEMrl ensures (against non-adaptive read-only adversaries) (1) longevity, (2) high space-hardness in both known-space and chosen-space settings, and (3) security against hybrid code-lifting attacks.
• Finally, we introduce EWEMxl, a natural extension of EWEMrl with a structure similar to EWEMx. EWEMxl achieves (2) and (3) in the stronger adaptive model while maintaining (1) in the same non-adaptive and read-only setting.
In summary, EWEMrl and EWEMxl are the first ciphers providing longevity against non-adaptive read-only malware while ensuring security confidence in the black-box setting.
RoK and Roll – Verifier-Efficient Random Projection for $\tilde{O}(\lambda)$-size Lattice Arguments
Succinct non-interactive arguments of knowledge (SNARKs) based on lattice assumptions offer a promising post-quantum alternative to pairing-based systems, but have until now suffered from inherently quadratic proof sizes in the security parameter. We introduce RoK and Roll, the first lattice-based SNARK that breaks the quadratic barrier, achieving communication complexity of $\tilde{O}(\lambda)$ together with a succinct verification time. The protocol significantly improves upon the state of the art of fully-succinct argument systems established by ``RoK, Paper, SISsors'' (RPS) [ASIACRYPT'24] and hinges on two key innovations, presented as reductions of knowledge (RoKs):
- Structured random projections: We introduce a new technique for structured random projections that allows us to reduce the witness dimension while approximately preserving its norm and maintaining the desired tensor structure (see the numerical sketch after this abstract). In order to maintain succinct communication and verification, the projected image is further committed and adjoined to the original relation. This procedure is recursively repeated until the dimension of the intermediate witness becomes independent of the original witness length.
- Unstructured random projection: When the witness is sufficiently small, we let the unstructured projection (over the base coefficients) be sent in plain, as in LaBRADOR [CRYPTO'23]. We observe, however, that the strategy from prior works of immediately lifting the projection claim back to the full ring, and into our relation, would impose a quadratic communication cost. Instead, we gradually batch-and-lift the projection along the tower of intermediate ring extensions. This reduces the communication cost while maintaining a succinct verification time.
These two techniques, combined with existing RoKs from RPS, yield a succinct argument system with communication complexity $\tilde{O}(\lambda)$ and succinct verification for structured linear relations.
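As a quick sanity check of the projection idea, the numpy sketch below (an unstructured Gaussian projection of my own choosing, which ignores the tensor structure the paper preserves) verifies that projecting a long, short-norm witness down to a few hundred coordinates approximately preserves its Euclidean norm:

```python
# Random projections approximately preserve the norm of the witness
# (Johnson-Lindenstrauss flavor). Toy dense version, not the paper's
# structured projection.
import numpy as np

rng = np.random.default_rng(0)
n, m = 1 << 12, 256                     # witness dimension -> projected dimension
w = rng.integers(-3, 4, size=n)         # a short (small-norm) witness
Pi = rng.standard_normal((m, n)) / np.sqrt(m)   # E[||Pi w||^2] = ||w||^2
ratio = np.linalg.norm(Pi @ w) / np.linalg.norm(w)
print(f"norm ratio after projection: {ratio:.3f}")   # close to 1.0
```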
Foundations of Single-Decryptor Encryption
Single decryptor encryption (SDE) is public key encryption (PKE) where the decryption key is an unclonable quantum state. Coladangelo, Liu, Liu, and Zhandry (CRYPTO 2021) realized the first SDE assuming subexponentially secure indistinguishability obfuscation (iO) and one-way functions (OWFs), along with the polynomial hardness of the learning with errors (LWE) assumption. Since then, SDE has played a pivotal role in recent advances in quantum cryptography. However, despite its central importance in unclonable cryptography, many fundamental questions about SDE remain unanswered. For example, a line of works has proposed various security notions for SDE, but their relationships have hardly been discussed. Moreover, while many subsequent works have adopted the construction methodology of Coladangelo et al., none have explored its improvement, leaving the possibility of a more efficient approach to SDE.
In this work, we address these fundamental questions concerning SDE. Our contributions are threefold.
New security notion: We introduce a strengthened indistinguishability-based security notion for SDE, which we call CPA+ anti-piracy security. We show that CPA+ security unifies the existing security notions for SDE, as detailed in the third item.
New construction: We present an SDE scheme that satisfies CPA+ anti-piracy security, based solely on polynomially secure iO and OWFs. In addition to relying on weaker and more general assumptions, our SDE scheme offers a significant advantage over the scheme of Coladangelo et al., as both the construction and its security proof are much simpler.
Relationships among security notions: We demonstrate that CPA+ anti-piracy security implies all existing security notions for SDE, with the sole exception of identical challenge ciphertext security proposed by Georgiou and Zhandry (EPRINT 2020). Although we do not establish a direct implication from CPA+ anti-piracy security to identical challenge ciphertext security, we provide a generic transformation from an SDE scheme satisfying the former to one achieving the latter in the quantum random oracle model. Additionally, we establish various relationships among different security notions for SDE. By combining these results with our SDE construction, we derive several new feasibility results.
Revisiting Module Lattice-based Homomorphic Encryption and Application to Secure-MPC
Homomorphic encryption (HE) schemes have gained significant popularity in modern privacy-preserving applications across various domains. While research on HE constructions based on learning with errors (LWE) and ring-LWE has received major attention from both cryptographers and software-hardware designers alike, their module-LWE-based counterpart has remained comparatively under-explored in the literature. A recent work provides a module-LWE-based instantiation (MLWE-HE) of the Cheon-Kim-Kim-Song (CKKS) scheme and showcases several of its advantages such as parameter flexibility and improved parallelism. However, a primary limitation of this construction is the quadratic growth in the size of the relinearization keys. Our contribution is two-pronged: first, we present a new relinearization key-generation technique that addresses the issue of quadratic key size expansion by reducing it to linear growth. Second, we extend the application of MLWE-HE in a multi-group homomorphic encryption (MGHE) framework, thereby generalizing the favorable properties of the single-keyed HE to a multi-keyed setting as well as investigating additional flexibility attributes of the MGHE framework.
Cymric: Short-tailed but Mighty
Authenticated encryption (AE) is a fundamental tool in today's secure communication. Numerous designs have been proposed, including well-known standards such as GCM. While their performance for long inputs is excellent, their performance for short inputs is often problematic due to high computational overhead, revealing a gap with the real needs of IoT-like protocols, where packets are often very short. Existing dedicated short-input AEs are very scarce: the classical Encode-then-Encipher (EtE) (Bellare and Rogaway, Asiacrypt 2000) and Manx (Adomnicăi et al., CT-RSA 2023) use up to two block cipher calls. They have superior performance for (very) short inputs; however, security is only up to $n/2$ bits, where $n$ is the block size of the underlying block cipher.
This paper proposes a new family of short-input AEs, dubbed Cymric, which ensures beyond-birthday-bound (BBB) security. It supports a wider range of input space than EtE and Manx with the help of one additional block cipher call (thus three calls in total). In terms of the number of block cipher calls, Cymric is the minimum known construction of BBB-secure AEs, and we prove this is indeed minimal by presenting an impossibility result on BBB-secure AE with two calls. Finally, we present a comprehensive benchmark on microcontrollers showing its performance advantage over existing schemes.
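For context, the birthday bound referenced above comes from the usual collision argument over an n-bit block (a textbook relation, not a result of this paper):

```latex
% For an n-bit block cipher, collision-based distinguishers succeed after
% about 2^{n/2} queries; BBB-secure designs push security beyond this point.
\mathbf{Adv}^{\mathrm{birthday}}(q) \approx \frac{q^{2}}{2^{n}},
\qquad\text{so}\qquad
\mathbf{Adv} \approx 1 \ \text{ once } \ q \approx 2^{n/2}.
```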
Ring-LWR based Commitments and ZK-PoKs with Application to Verifiable Quantum-Safe Searchable Symmetric Encryption
Prior research on ensuring trust in delegated computation through lattice-based zero-knowledge proofs mostly relies on the Learning-With-Errors (LWE) assumption. In this work, we propose a zero-knowledge proof of knowledge under the Ring Learning with Rounding (RLWR) assumption for an interesting and useful class of statements: linear relations on polynomials. We begin by proposing, to the best of our knowledge, the first efficient commitment scheme in the literature based on the hardness of the RLWR assumption. We establish two properties of RLWR that aid in the construction of our commitments: (i) closure under addition with double rounding, and (ii) closure under multiplication with a short polynomial. Building upon our RLWR commitment scheme, we then design an RLWR-based protocol for proving knowledge of a single committed message under linear relations with public polynomials.
As a use case of our proposed protocol, we showcase a construction of a quantum-safe Searchable Symmetric Encryption (SSE) scheme by plugging our protocol into a prior LWR-based SSE scheme (EuroS&P 2023). Concretely, using our protocol for linear relations, we prove the correctness of an encrypted search result in a zero-knowledge manner. We implement our verifiable SSE framework and show that the overhead of an extra verification round is negligible and retains the asymptotic query execution time complexity of the original SSE scheme.
Our work establishes results on zero-knowledge proof systems that can be of independent interest. By shifting the setting from RLWE to RLWR, we gain (i) significant efficiency improvements in communication complexity (since some prior works on RLWE require rejection sampling, which inflates communication), as well as (ii) very short proofs and tighter parameters (since RLWR does not explicitly manipulate error polynomials like RLWE).
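The rounding operation that replaces explicit LWE noise is simple to state; here is a minimal scalar Python sketch (toy parameters of my own choosing; the scheme itself works over polynomial rings):

```python
# The Learning-With-Rounding idea: instead of adding an explicit error term
# as in LWE, the product is deterministically rounded from modulus q down to
# p, and the dropped low-order bits play the role of the noise.
q, p = 2**12, 2**8

def lwr_sample(a, s):
    return (p * (a * s % q) + q // 2) // q % p   # round((p/q) * (a*s mod q))

a, s = 2749, 1234
print(lwr_sample(a, s))
```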
Highly Scalable Searchable Symmetric Encryption for Boolean Queries from NTRU Lattice Trapdoors
Searchable symmetric encryption (SSE) enables query execution directly over symmetrically encrypted databases. To support realistic query executions over encrypted document collections, one needs SSE schemes capable of supporting both conjunctive and disjunctive keyword queries. Unfortunately, existing solutions are either practically inefficient (incur large storage overheads and/or high query processing latency) or are quantum-unsafe.
In this paper, we present the first practically efficient SSE scheme with fast conjunctive and disjunctive keyword searches, compact storage, and security based on the (plausible) quantum-hardness of well-studied lattice-based assumptions. We present NTRU-OQXT – a highly compact NTRU lattice-based conjunctive SSE scheme that outperforms all existing conjunctive SSE schemes in terms of search latency. We then present an extension of NTRU-OQXT that additionally supports disjunctive queries, which we call NTRU-TWINSSE. Technically, both schemes rely on a novel oblivious search protocol based on highly optimized Fast-Fourier trapdoor sampling algorithms over NTRU lattices. While such techniques have been used to design other cryptographic primitives (such as digital signatures), they have not been applied before in the context of SSE.
We present prototype implementations of both schemes, and experimentally validate their practical performance over a large real-world dataset. Our experiments demonstrate that NTRU-OQXT achieves 2× faster conjunctive keyword searches as compared to all other conjunctive SSE schemes (including the best quantum-unsafe conjunctive SSE schemes), and substantially outperforms many of these schemes in terms of storage requirements. These efficiency benefits also translate to NTRU-TWINSSE, which is practically competitive with the best quantum-unsafe SSE schemes capable of supporting both conjunctive and disjunctive queries.
Hobbit: Space-Efficient zkSNARK with Optimal Prover Time
Zero-knowledge succinct non-interactive arguments (zkSNARKs) are notorious for their large prover space requirements, which all but prohibit their use for really large instances. Space-efficient zkSNARKs aim to address this by limiting the prover's space usage without critical sacrifices to its runtime. In this work, we introduce Hobbit, the only existing space-efficient zkSNARK that achieves optimal prover time $O(|C|)$ for an arithmetic circuit $C$. At the same time, Hobbit is the first transparent and plausibly post-quantum secure construction of its kind. Moreover, our experimental evaluation shows that Hobbit outperforms all prior general-purpose space-efficient zkSNARKs in the literature across four different applications (arbitrary arithmetic circuits, inference of pruned Multi-Layer Perceptrons, batch AES128 evaluation, and select-and-aggregate SQL queries) by at least 8× in terms of prover time while requiring up to 23× less total space.
At a technical level, we introduce two new building blocks that may be of independent interest: (i) the first sumcheck protocol for products of polynomials with optimal prover time in the streaming setting, and (ii) a novel multilinear plausibly post-quantum polynomial commitment that outperforms all prior works in prover time (and can be tuned to work in a space-efficient manner). We build Hobbit by combining the above with a modified version of HyperPlonk, providing an explicit routine for streaming access to the circuit evaluation.
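To make building block (i) concrete, here is a minimal dense, in-memory sumcheck for the product of two multilinear polynomials; it shows only the round structure and none of Hobbit's streaming or space optimizations. A toy sketch under my own parameter choices:

```python
# Sumcheck for the claimed sum of f1*f2 over {0,1}^n, with f1, f2 multilinear
# over F_P and given as evaluation tables. Each round sends a degree-2
# univariate polynomial as its values at t = 0, 1, 2.
import random

P = 2**61 - 1   # a Mersenne prime field, chosen for convenience

def fold(table, t):
    """Fix the first variable of a multilinear table to t (in F_P)."""
    h = len(table) // 2
    return [((1 - t) * table[j] + t * table[j + h]) % P for j in range(h)]

def prove_and_verify(f1, f2):
    claim = sum(a * b for a, b in zip(f1, f2)) % P     # claimed sum of f1*f2
    while len(f1) > 1:
        # Prover: round polynomial h(t), sent as h(0), h(1), h(2).
        h = [sum(a * b for a, b in zip(fold(f1, t), fold(f2, t))) % P
             for t in (0, 1, 2)]
        assert (h[0] + h[1]) % P == claim              # verifier's round check
        r = random.randrange(P)                        # verifier's challenge
        # New claim h(r), by Lagrange interpolation through points 0, 1, 2.
        claim = (h[0] * (r - 1) * (r - 2) * pow(2, -1, P)
                 - h[1] * r * (r - 2)
                 + h[2] * r * (r - 1) * pow(2, -1, P)) % P
        f1, f2 = fold(f1, r), fold(f2, r)
    assert f1[0] * f2[0] % P == claim                  # final oracle check
    return True

n = 10
f1 = [random.randrange(P) for _ in range(1 << n)]
f2 = [random.randrange(P) for _ in range(1 << n)]
print(prove_and_verify(f1, f2))
```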
Tightly Secure Public-Key Encryption with Equality Test Supporting Flexible Authorization in the Standard Model
We introduce a novel Public Key Encryption with Equality Test supporting Flexible Authorization scheme offering User-Level, Ciphertext-Level, and User-Specific-Ciphertext-Level authorizations. Notably, our construction achieves security under the Decisional Diffie-Hellman (DDH) assumption with a tight reduction, whereas existing works are either not tightly secure or rely heavily on random oracles. By relying solely on the standard DDH assumption, our scheme offers practical implementation without specialized cryptographic structures.
All Proof of Work But No Proof of Play
Speedrunning is a competition that emerged from communities of early video games such as Doom (1993). Speedrunners try to finish a game in minimal time. Provably verifying the authenticity of submitted speedruns is an open problem. Traditionally, best-effort speedrun verification is conducted by on-site human observers, forensic audio analysis, or a rigorous mathematical analysis of the game mechanics. Such methods are tedious, fallible, and, perhaps worst of all, not cryptographic. Motivated by naivety and the Dunning-Kruger effect, we attempt to build a system that cryptographically proves the authenticity of speedruns. This paper describes our attempted solutions and ways to circumvent them. Through a narration of our failures, we attempt to demonstrate the difficulty of authenticating live and interactive human input in untrusted environments, as well as the limits of signature schemes, game integrity, and provable play.
May the Force Be with you: Brute-Force Resistant Biometric Authentication and Key Reconstruction
The use of biometric-based security protocols is on a steep rise. As biometrics become more popular, we witness more attacks. For example, recent BrutePrint/InfinityGauntlet attacks showed how to brute-force fingerprints stored on an Android phone in about 40 minutes. The attacks are possible because biometrics, like passwords, do not have high entropy. But unlike passwords, brute-force attacks are much more damaging for biometrics, because one cannot easily change biometrics in case of compromise. In this work, we propose a novel provably secure Brute-Force Resistant Biometrics (BFRB) protocol for biometric-based authentication and key reconstruction that protects against brute-force attacks even when the server storing biometric-related data is compromised. Our protocol utilizes a verifiable partially oblivious pseudorandom function, an authenticated encryption scheme, a pseudorandom function, and a hash function. We formally define security for a BFRB protocol and reduce the security of our protocol to the security of the building blocks. We implement the protocol and study its performance for the ND-0405 iris dataset.
A Generalized Approach to Root-based Attacks against PLWE
In the present work we address the robustness of the Polynomial Learning With Errors (PLWE) problem, extending previous results of Blanco-Chacón et al. and of Elias et al. In particular, we produce two kinds of new distinguishing attacks: (a) we generalize Blanco-Chacón et al. to the case where the defining polynomial has a root of degree up to 4, and (b) we widen and refine the most general attack of Elias et al. to the non-split case and determine further dangerous instances not previously detected. Finally, we exploit our results to show vulnerabilities of some cryptographically relevant polynomials.
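The flavor of such root-based distinguishers is easy to demonstrate: when the defining polynomial has the root 1 modulo q, evaluation at 1 is a ring homomorphism and the error term collapses to a small sum. The Python demo below (tiny, insecure parameters of my own choosing, in the spirit of the evaluation-at-a-root attacks this paper generalizes) recovers s(1) by scanning:

```python
# Toy root-based attack on PLWE with f(x) = x^n - 1, which has the root 1
# mod q: b(1) - a(1)*gamma is a small centered value exactly when gamma = s(1).
import random

q, n = 12289, 64
s = [random.randrange(q) for _ in range(n)]

def plwe_sample():
    a = [random.randrange(q) for _ in range(n)]
    e = [random.choice((-1, 0, 1)) for _ in range(n)]   # small error
    b = e[:]                                            # b = a*s + e in Z_q[x]/(x^n - 1)
    for i, ai in enumerate(a):
        for j, sj in enumerate(s):
            b[(i + j) % n] += ai * sj
    return sum(a) % q, sum(b) % q     # only the evaluations at 1 are needed

samples = [plwe_sample() for _ in range(8)]
target = sum(s) % q                   # s(1), unknown to a real attacker
def centered(x): return min(x % q, q - x % q)
# Scan all gamma; only gamma = s(1) makes every residue small.
good = [g for g in range(q)
        if all(centered(b1 - a1 * g) <= 3 * n for a1, b1 in samples)]
print(target in good, len(good))      # True, and (almost surely) exactly 1 hit
```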
RingSG: Optimal Secure Vertex-Centric Computation for Collaborative Graph Processing
Collaborative graph processing refers to the joint analysis of inter-connected graphs held by multiple graph owners. To honor data privacy and support various graph processing algorithms, existing approaches employ secure multi-party computation (MPC) protocols to express the vertex-centric abstraction. Yet, due to certain computation-intensive cryptographic constructions, state-of-the-art (SOTA) approaches are asymptotically suboptimal, imposing significant overheads in terms of computation and communication. In this paper, we present RingSG, the first system to attain optimal communication/computation complexity within the MPC-based vertex-centric abstraction for collaborative graph processing. This optimal complexity is attributed to Ring-ScatterGather, a novel computation paradigm that can avoid exceedingly expensive cryptography operations (e.g., oblivious sort), and simultaneously ensure the overall workload can be optimally decomposed into parallelizable and mutually exclusive MPC tasks. Within Ring-ScatterGather, RingSG improves the concrete runtime efficiency by incorporating 3-party secure computation via share conversion, and optimizing the most cost-heavy part using a novel oblivious group aggregation protocol. Finally, unlike prior approaches, we instantiate RingSG into two end-to-end applications to effectively obtain application-specific results from the protocol outputs in a privacy-preserving manner. We developed a prototype of RingSG and extensively evaluated it across various graph collaboration settings, including different graph sizes, numbers of parties, and average vertex degrees. The results show RingSG reduces the system running time of SOTA approaches by up to 15.34× and per-party communication by up to 10.36×. Notably, RingSG excels in processing sparse global graphs collectively held by more parties, consistent with our theoretical cost analysis.
End-to-End Encrypted Git Services
Git services, such as GitHub, have been widely used to manage projects and enable collaboration among multiple entities. Just as in messaging and cloud storage, where end-to-end security has been gaining increased attention, such a level of security is also demanded for Git services. Content in the repositories (and the data/code supply chain facilitated by Git services) can be highly valuable, whereas the threat of system breaches has become routine nowadays. However, existing studies of Git security to date (mostly open-source projects) fall short in two ways: they provide only very weak security, and they incur a large overhead.
In this paper, we initiate the needed study of efficient end-to-end encrypted Git services. Specifically, we formally define the syntax and critical security properties, and then propose two constructions that provably meet those properties. Moreover, our constructions have the important property of platform-compatibility: they are compatible with current Git servers and preserve all basic Git operations, and thus can be directly tested and deployed on top of existing platforms. Furthermore, the overhead we achieve is only proportional to the actual difference caused by each edit, instead of the whole file (or even the whole repository) as is the case in existing works. We implemented both constructions and tested them directly on several public GitHub repositories. Our evaluations show (1) the effectiveness of platform-compatibility, and (2) the significant efficiency improvement we obtain (while provably providing much stronger security than prior ad-hoc treatments).
Copy-Protection from UPO, Revisited
Quantum copy-protection is a foundational notion in quantum cryptography that leverages the governing principles of quantum mechanics to tackle the problem of software anti-piracy. Despite progress in recent years, precisely characterizing the class of functionalities that can be copy-protected is still not well understood.
Two recent works, by [Coladangelo and Gunn, STOC 2024] and [Ananth and Behera, CRYPTO 2024], showed that puncturable functionalities can be copy-protected. Both works have significant caveats with regard to the underlying cryptographic assumptions and additionally restrict the output length of the functionalities to be copy-protected. In this work, we make progress towards simultaneously addressing both caveats. We show the following:
- Revisiting Unclonable Puncturable Obfuscation (UPO): We revisit the notion of UPO introduced by [Ananth and Behera, CRYPTO 2024]. We present a new approach to construct UPO and a variant of UPO, called independent-secure UPO. Unlike UPO, we show how to base the latter notion on well-studied assumptions.
- Copy-Protection from Independent-secure UPO: Assuming independent-secure UPO, we show that any m-bit, for m ≥ 2, puncturable functionality can be copy-protected.
- Copy-Protection from UPO: Assuming UPO, we show that any 1-bit puncturable functionality can be copy-protected. The security of copy-protection holds against identical challenge distributions.
New Upper and Lower Bounds for Perfectly Secure MPC
We consider perfectly secure MPC for $n$ players and $t$ malicious corruptions. We ask whether requiring only security with abort (rather than guaranteed output delivery, GOD) can help to achieve protocols with better resilience, communication complexity, or round complexity. We show that for resilience and communication complexity, abort security does not help: one still needs $t < n/3$ for a synchronous network and $t < n/4$ in the asynchronous case. And, in both cases, a communication overhead of $\Omega(n)$ bits per gate is necessary.
When a linear overhead is inevitable, one can explore whether this overhead can be pushed to the preprocessing phase, so that the online phase can be achieved with constant overhead. This result was recently achieved in the synchronous setting, in fact with the GOD guarantee. We show the same result in the asynchronous setting. This was previously open, since the main standard approach to getting constant overhead in a synchronous online phase fails in the asynchronous setting. In particular, this shows that we do not need to settle for abort security to get an asynchronous perfectly secure protocol with linear preprocessing overhead and constant online overhead.
Lastly, in the synchronous setting, we show that perfectly secure MPC with abort requires only 2 rounds, in contrast to protocols with GOD, which require 4 rounds.
Generic Construction of Threshold Ring Signatures and Lattice-based Instantiations
A t-out-of-n threshold ring signature allows t parties to jointly sign a message on behalf of n parties without revealing the identities of the signers. In this paper, we introduce a new generic construction for threshold ring signatures, called GCTRS, which can be built on top of a selection of identification schemes, commitment schemes, and a new primitive called a t-out-of-n proof protocol, which is a special type of zero-knowledge proof. In general, our design enables a group of signers to first generate an aggregated signature by interacting with each other; they are then able to compute a t-out-of-n proof to convince the verifier that the aggregated signature is indeed produced by t individuals among a particular set. The signature is succinct, as it contains only one aggregated signature and one proof. We define all the properties required for the building blocks to capture the security of GCTRS and provide a detailed security proof. Furthermore, we propose two lattice-based instantiations of GCTRS, named LTRS and CTRS, respectively. Notably, the CTRS scheme is the first scheme with a signature size logarithmic in the ring size. Additionally, during the instantiation process, we construct two t-out-of-n proof protocols, which may be of independent interest.
A search to distinguish reduction for the isomorphism problem on direct sum lattices
At Eurocrypt 2003, Szydlo presented a search-to-distinguish reduction for the Lattice Isomorphism Problem (LIP) on the integer lattice $\mathbb{Z}^n$. Here the search problem asks to find an isometry between $\mathbb{Z}^n$ and an isomorphic lattice, while the distinguishing variant asks to distinguish between a list of auxiliary lattices related to $\mathbb{Z}^n$.
In this work we generalize Szydlo's search-to-distinguish reduction in two ways. Firstly, we generalize the reduction to any lattice isomorphic to a direct sum of copies of a fixed base lattice. Secondly, we allow the base lattice to be a module lattice over any number field. Assuming the base lattice and the number field are fixed, our reduction is polynomial in the number of copies.
As a special case we consider the module lattice used in the module-LIP based signature scheme HAWK, and we show that one can solve the search problem, leading to a full key recovery, with polynomially many distinguishing calls on two lattices each, where the underlying number field is a power-of-two cyclotomic.
Breaking The Authenticated Encryption scheme HiAE
HiAE is the fastest AEAD solution on ARM chips to date, utilizing AES round functions while also setting a new performance benchmark on the latest x86 processors. In this paper, we employ algebraic techniques to investigate the security of HiAE. Our findings reveal that HiAE is vulnerable. Firstly, we employ the meet-in-the-middle technique and the guess-and-determine technique to recover the state and derive a key-related equation resulting from two layers of AES round functions. Secondly, by adopting an algebraic approach to study the properties of the round function, we decompose the equation into byte-level equations for a divide-and-conquer strategy. Finally, we utilize the guess-and-determine technique to recover the key. Collectively, these techniques enable us to present the first full key-recovery attack on HiAE. Our attack achieves data and time complexities far below the claimed security bound, leveraging both encryption and decryption oracles with a success probability of 1. In a single-key and nonce-respecting scenario, the attack fully recovers the 256-bit key, breaking the claimed 256-bit security against key-recovery attacks.
t-Probing (In-)Security - Pitfalls on Noise Assumptions
The ongoing transition to post-quantum cryptography has led to a surge of research in side-channel countermeasures tailored to these schemes. A prominent method to prove security in the context of side-channel analysis is the utilization of the well-established t-probing model. However, recent studies by Hermelink et al. at CCS 2024 demonstrate a simple and practical attack on a provably secure implementation of the Fujisaki-Okamoto transform that raises concerns regarding the practical security of t-probing secure schemes.
In this paper, we present an unsupervised single-trace side-channel attack on a tenth-order masked implementation of fixed-weight polynomial sampling, which has also been proven secure in the t-probing model. Both attacks reveal a mismatch between the correct, well-understood theory of the t-probing model and its practical application, since the security proofs are valid, yet the attacks still succeed at high noise levels. Therefore, we take a closer look at the underlying causes and the assumptions that are made when transferring t-probing security to practice. In particular, we investigate the amount of noise required for this transfer. We find that, depending on the design decisions made, this can be very high and difficult to achieve.
Consequently, we examine the factors impacting the required amount of noise that should be considered for practically secure implementations. In particular, non-uniformly distributed shares, a setting that is increasingly encountered in post-quantum cryptographic algorithms, can lead to an increased noise requirement and thus reduce the security level of the masking scheme. Our analysis then allows us to provide practical guidelines for implementation designers, thereby facilitating the development of practically secure designs.
BitBatSPIR: Efficient Batch Symmetric Private Information Retrieval from PSI
Private Information Retrieval (PIR) allows a client to retrieve an entry from a database held by a server without leaking which entry is being requested. Symmetric PIR (SPIR) is a stronger variant of PIR with database privacy so that the client knows nothing about the database other than the retrieved entry.
This work studies SPIR in the batch setting (BatchSPIR), where the client wants to retrieve multiple entries. In particular, we focus on the case of bit entries, which has important real-world applications. We set up the connection between bit-entry information retrieval and set operations, and propose a black-box construction of BatchSPIR from Private Set Intersection (PSI). By applying an efficient PSI protocol with asymmetric set sizes, we obtain our BatchSPIR protocol, named BitBatSPIR. We also introduce several optimizations for the underlying PSI. These optimizations improve the efficiency of our concrete BatchSPIR construction as well as the PSI protocol.
We implement BitBatSPIR and compare its performance with the state-of-the-art PIR protocol in the batch setting. Our experimental results show that BitBatSPIR not only achieves a stronger security guarantee (symmetric privacy) but also performs better for large databases, especially in the Wide Area Network (WAN) setting.
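The retrieval-to-set-operation connection is the easy part to visualize; the plaintext Python sketch below (no privacy, toy data of my own choosing) shows the encoding that lets a PSI protocol answer a batch of bit retrievals:

```python
# A bit database D in {0,1}^N is identified with the set S = { i : D[i] = 1 },
# so a batch of bit retrievals is exactly a set intersection between the
# client's index set and S; a PSI protocol computes this privately.
D = [0, 1, 1, 0, 1, 0, 0, 1]
S = {i for i, bit in enumerate(D) if bit}       # server-side set encoding
queries = {1, 3, 6, 7}                          # client's batch of indices
hits = queries & S                              # what the PSI step computes
answers = {i: int(i in hits) for i in queries}  # recovered bits
print(answers)                                  # {1: 1, 3: 0, 6: 0, 7: 1}
```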
Tricycle: Private Transformer Inference with Tricyclic Encodings
The growing adoption of Large Language Models in privacy-sensitive domains necessitates secure inference mechanisms that preserve data confidentiality. Homomorphic encryption offers a promising pathway by enabling computation on encrypted inputs, yet existing approaches struggle to scale efficiently to full transformer models due to limitations in packing schemes, which must efficiently support a wide range of operations, including matrix multiplications, row-wise nonlinear operations, and self-attention. In this work, we present Tricycle, a framework for private transformer inference built on our novel packing scheme, called tricyclic encodings, which are designed to efficiently support these core operations. Tricyclic encodings are a generalization of bicyclic encodings, enabling privacy-preserving batch matrix multiplications with optimal multiplicative depth in order to facilitate parallelized multi-head self-attention. We optimize our matrix multiplications by incorporating Baby-Step Giant-Step optimizations to reduce ciphertext rotations and presenting new ciphertext-plaintext matrix multiplication techniques that relax prior limitations. A further contribution of our work is a lightweight and effective approach for stabilizing the softmax function via statistical max estimation. Our end-to-end implementation on a BERT-Tiny model shows that Tricycle achieves a substantial speedup over previous approaches, marking a step toward practical and scalable private LLM inference without sacrificing model fidelity.
HypSCA: A Hyperbolic Embedding Method for Enhanced Side-channel Attack
Deep learning-based side-channel attack (DLSCA) has become the dominant paradigm for extracting sensitive information from hardware implementations due to its ability to learn discriminative features directly from raw side-channel traces. A common design choice in DLSCA involves embedding traces in Euclidean space, where the underlying geometry supports conventional objectives such as classification or contrastive learning. However, Euclidean space is fundamentally limited in capturing the multi-level hierarchical structure of side-channel traces, which often exhibit both coarse-grained clustering patterns (e.g., Hamming weight similarities) and fine-grained distinctions (e.g., instruction-level variations). These limitations adversely affect the discriminability and generalization of learned representations, particularly across diverse datasets and leakage models. In this work, we propose HypSCA, a dual-space representation learning method that embeds traces in hyperbolic space to exploit its natural ability to model hierarchical relationships through exponential volume growth. In contrast to existing approaches, HypSCA jointly combines hyperbolic structure modeling with local discriminative learning in Euclidean space, enabling the preservation of global hierarchies while enhancing fine-grained feature separation. Extensive experiments on multiple public datasets demonstrate that HypSCA achieves up to 51.6% improvement in attack performance over state-of-the-art DLSCA methods, consistently enhancing generalization across diverse datasets and leakage models.
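Since the paper's central object is the hyperbolic embedding space, here is the standard Poincaré-ball distance (a generic textbook formula, not HypSCA's model or code) that makes the exponential-volume-growth intuition concrete:

```python
# Distance in the Poincare ball: distances blow up near the boundary, so
# tree-like hierarchies (coarse clusters near the origin, fine distinctions
# near the boundary) embed with low distortion.
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    u, v = np.asarray(u, float), np.asarray(v, float)
    nu, nv = np.sum(u * u), np.sum(v * v)             # squared norms (< 1)
    delta = np.sum((u - v) ** 2)
    x = 1 + 2 * delta / ((1 - nu) * (1 - nv) + eps)
    return np.arccosh(x)

root, child, leaf = [0.0, 0.0], [0.5, 0.0], [0.9, 0.0]
print(poincare_distance(root, child))   # ~1.10
print(poincare_distance(child, leaf))   # ~1.85, far more than Euclidean 0.4
```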
Brief Comments on Rijndael-256 and the Standard RISC-V Cryptography Extensions
We evaluate the implementation aspects of Rijndael-256 using the ratified RISC-V Vector Cryptography extension Zvkn. A positive finding is that Rijndael-256 can be implemented in constant time with the existing RISC-V ISA, as the critical AES and fixed crossbar permutation instructions are in the DIEL (data-independent execution latency) set. Furthermore, simple tricks can be used to expand the functionality of key expansion instructions to cover the additional round constants required. However, due to the required additional byte shuffle in each round, Rijndael-256 will be significantly slower than AES-256 in terms of throughput. Without additional ISA modifications, the instruction count is increased by the required switching of the EEW (``effective element width'') parameter in each round between 8 bits (byte shuffle) and 32 bits (AES round instructions). Instruction counts for 1-kilobyte encryption and decryption with Rijndael-256 are a significant factor higher than with AES-256. The precise amount of throughput slowdown depends on the microarchitectural details of a particular RISC-V ISA hardware instantiation, but it may be substantial with some high-performance vector AES architectures due to the breakdown of AES pipelining and the relative slowness of crossbar permutation instructions.
How to Copy-Protect All Puncturable Functionalities Without Conjectures: A Unified Solution to Quantum Protection
Quantum copy-protection (Aaronson, CCC'09) is the problem of encoding a functionality/key into a quantum state to achieve an anti-piracy security notion that guarantees that the key cannot be split into two keys that both still work. Most works so far have focused on constructing copy-protection for specific functionalities. The only exceptions are the works of Aaronson, Liu, Liu, Zhandry, and Zhang (CRYPTO'21) and of Ananth and Behera (CRYPTO'24). The former constructs copy-protection for all functionalities in the classical oracle model, and the latter constructs copy-protection for all circuits that can be punctured at a uniformly random point with negligible security, assuming a new unproven conjecture about simultaneous extraction from entangled quantum adversaries, on top of assuming subexponentially-secure indistinguishability obfuscation (iO) and hardness of Learning with Errors (LWE).
In this work, we show that the construction of Aaronson et al. (CRYPTO'21), when the oracles are instantiated with iO, satisfies copy-protection security in the plain model for all cryptographically puncturable functionalities (instead of only puncturable circuits) with arbitrary success threshold (e.g., we get CPA-style security rather than unpredictability for encryption schemes), without any unproven conjectures, assuming only subexponentially secure iO and one-way functions (we do not assume LWE). Thus, our work resolves the five-year-old open question of Aaronson et al., and further, our work encompasses/supersedes and significantly improves upon all existing plain-model copy-protection results.
Since puncturability has a long history of being studied in cryptography, our result immediately allows us to obtain copy-protection schemes for a large set of advanced functionalities for which no previous copy-protection scheme existed. Further, even for any functionality F that has not already been considered, through our result, constructing copy-protection for F essentially becomes a classical cryptographer's problem.
Going further, we show that our scheme also satisfies secure leasing (Ananth and La Placa, EUROCRYPT'21), unbounded/LOCC leakage-resilience and intrusion-detection security (Cakan, Goyal, Liu-Zhang, Ribeiro, TCC'24), giving a unified solution to the problem of quantum protection.
Limits on the Power of Private Constrained PRFs
Private constrained PRFs (PCPRFs) are constrained PRFs where the constrained key hides information about the predicate circuit. Although there are many constructions and applications of PCPRFs, their relationship to basic cryptographic primitives, such as one-way functions and public-key encryption, has been unclear. For example, we do not know whether one-way functions imply PCPRFs for general predicates, nor whether 1-key secure PCPRFs for all polynomial-sized predicates imply public-key primitives such as public-key encryption and secret-key agreement.
In this work, we prove a black-box separation between 1-key secure PCPRFs for any predicate and secret-key agreement, which is the first black-box separation result about PCPRFs. Specifically, we prove that there exists an oracle relative to which 1-key secure PCPRFs exist while secret-key agreement does not. Our proof is based on the simulation-based technique of Impagliazzo and Rudich (STOC '89). The main technical challenge in generalizing the simulation-based technique to PCPRFs is the \textit{unfaithfulness} of Eve's simulation to the real world, because our oracle is more complicated than a random oracle. We introduce a new technique, which we call the ``weighting'' technique, and show how to leverage it to circumvent the issue of unfaithfulness in the proof framework of Impagliazzo and Rudich.
On symbolic computations and Post Quantum Cryptography with Lie Geometries.
Define the global density of a multivariate map over a commutative ring to be the total number of its coefficients. In the case of a finite commutative ring K whose multiplicative group K* contains more than 2 elements, we suggest multivariate public keys in n variables whose public rule has global density O(n) and degree O(1). Other public keys use a public rule of global density O(n) and degree O(n), together with the space of plaintexts (K*)^n and the space of ciphertexts K^n. We consider examples of protocols of Noncommutative Cryptography, implemented on a platform of endomorphisms, which allow the conversion of the multivariate public keys mentioned above into protocol-based cryptosystems of El Gamal type. The cryptosystems and protocols are designed in terms of analogues of geometries of Chevalley groups over commutative rings and their temporal versions.
Private coins extension with verifiable encryption
This paper introduces a protocol for verifiable encryption of values committed using Pedersen commitments. It enables a recipient to decrypt the hidden amount while proving its consistency with the original commitment, without revealing the value publicly. The construction combines symmetric encryption with zero-knowledge proofs and is made non-interactive via the Fiat-Shamir heuristic. The protocol is particularly useful in blockchain settings where confidential but verifiable value transfers are required.
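As background for the construction, a Pedersen commitment C = g^v h^r is perfectly hiding, and any decrypted amount must be checked for consistency against it. A toy Python sketch (illustrative group parameters of my own choosing; real deployments use an elliptic-curve group and a zero-knowledge consistency proof rather than revealing r):

```python
# Pedersen commitment: C = g^v * h^r mod p hides v perfectly (r is random)
# and binds v as long as the discrete log of h base g is unknown.
import secrets

p = 2**255 - 19          # a toy prime modulus for Z_p* arithmetic
g, h = 2, 3              # in practice h must have unknown dlog relative to g

def commit(v, r):
    return pow(g, v, p) * pow(h, r, p) % p

v, r = 42, secrets.randbelow(p - 1)
C = commit(v, r)
assert C == commit(v, r)                 # the opening (v, r) verifies
assert C != commit(v + 1, r)             # a different value does not
```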
Non-Homomorphic Key Blinding from Symmetric Primitives
Key blinding signature schemes allow one to derive so-called blinded keys from public keys, which can be used to verify signatures created with the secret key. At the same time, neither the blinded keys nor their signatures disclose from which public key they were derived, effectively implementing pseudonyms for one's identity.
In search of conservative schemes, we deviate from the homomorphism-based re-randomization approach in favor of a novel proof-of-knowledge-based approach. To authenticate a message, a signer proves that they know an original keypair and a valid way to commit to the corresponding verification key to derive a given blinded key. We provide a framework for such constructions and indicate how MPC-friendly block ciphers and one-way functions may be used for efficient instantiations. While the general framework's security arguments are stated in the random oracle model, we show a natural instantiation approach whose security can be based on collision-resistance and pseudorandomness instead. The result is the first standard model construction of key blinding.
Using our framework, we identify a shortcoming in the usual definition of unlinkability for key blinding signature schemes, which we rectify by considering an additional notion called targeted unlinkability.
PrivacyGo: Privacy-Preserving Ad Measurement with Multidimensional Intersection
In digital advertising, accurate measurement is essential for optimizing ad performance, requiring collaboration between advertisers and publishers to compute aggregate statistics—such as total conversions—while preserving user privacy. Traditional secure two-party computation methods allow joint computation on single-identifier data without revealing raw inputs, but they fall short when multidimensional matching is needed and leak the intersection size, exposing sensitive information to privacy attacks.
This paper tackles the challenging and practical problem of multi-identifier private user profile matching for privacy-preserving ad measurement, a cornerstone of modern advertising analytics. We introduce a comprehensive cryptographic framework leveraging reversed Oblivious Pseudorandom Functions (OPRF) and novel blind key rotation techniques to support secure matching across multiple identifiers. Our design prevents cross-identifier linkages and includes a differentially private mechanism to obfuscate intersection sizes, mitigating risks such as membership inference attacks.
We present a concrete construction of our protocol that achieves both strong privacy guarantees and high efficiency. It scales to large datasets, offering a practical and scalable solution for privacy-centric applications like secure ad conversion tracking. By combining rigorous cryptographic principles with differential privacy, our work addresses a critical need in the advertising industry, setting a new standard for privacy-preserving ad measurement frameworks.
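The two primitives named above compose neatly in the classical discrete-log setting: a DH-style OPRF evaluates PRF_k(x) = H(x)^k, and blind key rotation re-keys stored PRF values without ever seeing the underlying inputs. A tiny, insecure Python sketch of the algebra (toy parameters of my own choosing; the paper's protocol is more involved):

```python
# DH-based OPRF with blind key rotation: a stored value H(x)^k can be
# updated to H(x)^k' by exponentiating with k'/k, without knowing x.
import hashlib

p, q, g = 1019, 509, 4          # p = 2q + 1; g generates the order-q subgroup

def hash_to_group(x: bytes):    # toy hash-to-group (illustration only)
    return pow(g, int.from_bytes(hashlib.sha256(x).digest(), "big") % q, p)

def oprf(k, x):                 # server-side PRF value for identifier x
    return pow(hash_to_group(x), k, p)

k_old, k_new = 123, 321
y = oprf(k_old, b"alice@example.com")
rot = k_new * pow(k_old, -1, q) % q          # rotation factor k'/k mod q
assert pow(y, rot, p) == oprf(k_new, b"alice@example.com")
```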
A Polynomial Public-Key Cryptosystem Based on Jacobian-Preserving Composition
We propose a public-key cryptosystem based on Jacobian-preserving polynomial compositions, utilizing algebraically invertible polynomial maps with a hard-to-invert composition. The construction uses polynomial maps over $\mathbb{F}_p$, where $p$ is a prime, with Jacobian determinant equal to 1 to ensure invertibility. The public key is defined as the composition of invertible polynomial maps, each with Jacobian determinant 1, while the private key consists of the individual components used in the composition. Although the composition is invertible in principle, inverting it without knowledge of the factors is computationally infeasible. The system incorporates both triangular and affine polynomial maps. We discuss the construction, provide formal correctness proofs, analyze hardness assumptions, and present a Python-based prototype with benchmark results.
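To see why Jacobian determinant 1 yields algebraic invertibility, consider the smallest possible instance: a triangular map composed with a unimodular affine map over a prime field. The Python sketch below (example maps of my own choosing, not the paper's prototype) composes and then inverts such maps:

```python
# Triangular map F(x, y) = (x + y^2, y): Jacobian determinant 1, explicit
# inverse. Composing with an affine map of determinant 1 keeps det J = 1.
P = 10007                                  # a small prime field F_P

F     = lambda x, y: ((x + y * y) % P, y)            # triangular, det J = 1
F_inv = lambda x, y: ((x - y * y) % P, y)
A     = lambda x, y: ((x + 3 * y + 5) % P, (x + 4 * y + 1) % P)  # det = 1
A_inv = lambda x, y: ((4 * (x - 5) - 3 * (y - 1)) % P,
                      (-(x - 5) + (y - 1)) % P)      # inverse of [[1,3],[1,4]]

pub = lambda x, y: A(*F(*A(*F(x, y))))                  # public composition
sec = lambda x, y: F_inv(*A_inv(*F_inv(*A_inv(x, y))))  # private inverse

pt = (1234, 5678)
assert sec(*pub(*pt)) == pt                # decryption undoes encryption
```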
Towards AI-driven Optimization of Robust Probing Model-compliant Masked Hardware Gadgets Using Evolutionary Algorithms
Side-channel analysis (SCA) is a persistent threat to security-critical systems, enabling attackers to exploit information leakage. To mitigate its harmful impacts, masking serves as a provably secure countermeasure that performs computing on random shares of secret values. As masking complexity, required effort, and cost increase dramatically with design complexity, recent techniques rely on designing and implementing smaller building blocks, so-called “gadgets.” Existing work on optimizing gadgets has primarily focused on latency, area, and power as its objectives. To the best of our knowledge, the most up-to-date ASIC-specific masking gadget optimization frameworks require significant manual effort. Inspired by previous work introducing open-source academic tools that leverage aspects of artificial intelligence (AI) in electronic design automation (EDA), this paper attempts to optimize and enhance existing gadgets and overall designs. We concentrate on evolutionary algorithms (EA), optimization techniques inspired by biological evolution and natural selection, to find optimal or near-optimal solutions. In this regard, our goal is to improve gadgets in terms of power and area metrics. The primary objective is to demonstrate the effectiveness of our methods by integrating compatible gates from a technology library to generate an optimized and functional design without compromising security. Our results show a significant reduction in power consumption and promising area improvements, with values reduced by 15% in some cases, compared to the naïve synthesis of masked designs. We evaluate our results using industry-standard synthesis and pre-silicon side-channel verification tools.
Performance and Privacy: A Low-Latency Secure Anonymous Authentication Protocol with OPRF
Performing privacy-preserving queries, particularly anonymous authentication, against large-scale datasets presents critical tradeoffs between security, latency, and scalability. Existing cryptographic solutions often impose linear computation or communication overheads. This paper introduces a novel, efficient protocol for secure anonymous authentication, uniquely combining matrix partitioning via hash prefixes with Oblivious Pseudorandom Functions (OPRFs) in a three-server semi-honest model. Crucially, compared to our previous work published at TrustCom 2024, this enhanced protocol eliminates the dependency on a designated fully trusted server, achieving security when any single server is corrupted. Furthermore, our protocol demonstrates significant performance improvements over current state-of-the-art methods. It achieves sub-linear online communication complexity. Evaluations show that for datasets of size $m \approx 10^6$, our protocol reduces online communication by at least 30% compared to other sub-linear schemes, while maintaining competitive online computation times. Security is proven via simulation, and comprehensive experiments confirm practicality for datasets up to $m = 10^8$.
Depth-Optimized Quantum Implementation of CHAM
Security weaknesses in the symmetric-key components of a cipher can compromise its overall security assurances. With the rapid progress in quantum computing in recent years, there is a growing focus on assessing the resilience of symmetric-key cryptography against possible quantum attacks (e.g., Grover's algorithm).
This paper is dedicated to examining the quantum attack resistance of CHAM, a family of lightweight block ciphers developed by a Korean research group. We provide an optimized quantum circuit implementation of CHAM and evaluate its complexity metrics, such as the number of qubits, gate count, and circuit depth, within the context of Grover's search algorithm.
For Grover's key search, minimizing the quantum circuit depth is the key optimization goal, particularly when parallel search capabilities are taken into account. Our approach enhances parallelism for a low-depth quantum circuit of the CHAM block cipher, significantly reducing the full circuit depth compared to previous works. For example, in the case of CHAM-128/128, our implementation achieves a full depth of 14,772, compared to a depth of 37,768 in the best known prior work. This highlights the substantial depth reduction enabled by our parallelism-oriented design, which facilitates more practical quantum attacks.
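For context on why circuit depth is the right metric here, recall the standard Grover cost accounting (a textbook relation, not a result of this paper):

```latex
% Grover search over a k-bit key runs sequential iterations, each containing
% the cipher circuit, so the total attack depth scales with the product of
% the iteration count and the cipher's circuit depth.
\text{iterations} \approx \left\lfloor \frac{\pi}{4}\, 2^{k/2} \right\rfloor,
\qquad
\text{attack depth} \approx \text{iterations} \times \operatorname{depth}(\text{cipher}).
```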
Ligerito: A Small and Concretely Fast Polynomial Commitment Scheme
In this note we present Ligerito, a small and practically fast polynomial commitment and inner product scheme. For univariate and multilinear polynomial evaluations, the scheme has a proof size polylogarithmic in the size of the input, up to constants and for a large enough field. Ligerito is also fast on consumer hardware: when run on an M1 MacBook Pro for a polynomial with coefficients over a 32-bit binary field, our Julia prover implementation has a proving time of 1.3 seconds and a proof size of 255 KiB. Ligerito is also relatively flexible: any linear code for which the rows of the generator matrix can be efficiently evaluated can be used. Such codes include Reed–Solomon codes and Reed–Muller codes, among others. This, in turn, allows for a high degree of flexibility in the choice of field and can likely give further efficiency gains in specific applications.
Unconditional Individual Verifiability with Receipt Freeness via Post-Cast Isolation
We introduce a trapdoorless tracker construction for electronic voting that fundamentally reimagines verifiability through information flow control. Unlike existing E2E verifiable systems where receipt-freeness compromises individual verifiability, our approach achieves both simultaneously by requiring only temporary isolation of the voting calculator between ballot casting and verification—when voters enter unique challenges to compute trackers for locating their votes on the public tally board. Our construction leverages perfectly hiding Pedersen commitments and a unique tracker challenge mechanism to simultaneously achieve unconditional individual verifiability, practical everlasting privacy, and receipt-freeness while relying only on standard cryptographic assumptions. When verification failures occur, our system provides transparent accountability by precisely identifying whether the voting calculator or voting device is responsible. The system maintains security even with partial compliance with isolation procedures and offers robust protection against various adversaries while requiring minimal trust assumptions.
From Worst-Case Hardness of to Quantum Cryptography via Quantum Indistinguishability Obfuscation
Indistinguishability obfuscation (iO) has emerged as a powerful cryptographic primitive with many implications. While classical iO, combined with the infinitely-often worst-case hardness of , is known to imply one-way functions (OWFs) and a range of advanced cryptographic primitives, the cryptographic implications of quantum iO remain poorly understood. In this work, we initiate a study of the power of quantum iO. We define several natural variants of quantum iO, distinguished by whether the obfuscation algorithm, evaluation algorithm, and description of obfuscated program are classical or quantum. For each variant, we identify quantum cryptographic primitives that can be constructed under the assumption of quantum iO and the infinitely-often quantum worst-case hardness of (i.e., ). In particular, we construct pseudorandom unitaries, QCCC quantum public-key encryption and (QCCC) quantum symmetric-key encryption, and several primitives implied by them such as one-way state generators, (efficiently-verifiable) one-way puzzles, and EFI pairs, etc. While our main focus is on quantum iO, even in the classical setting, our techniques yield a new and arguably simpler construction of OWFs from classical (imperfect) iO and the infinitely-often worst-case hardness of .
zkGPT: An Efficient Non-interactive Zero-knowledge Proof Framework for LLM Inference
Large Language Models (LLMs) are widely employed for their ability to generate human-like text. However, service providers may deploy smaller models to reduce costs, potentially deceiving users. Zero-Knowledge Proofs (ZKPs) offer a solution by allowing providers to prove LLM inference without compromising the privacy of model parameters. Existing solutions either do not support LLM architectures or suffer from significant inefficiency and tremendous overhead. To address this issue, this paper introduces several new techniques. We propose new methods to efficiently prove linear and non-linear layers in LLMs, reducing computation overhead by orders of magnitude. To further enhance efficiency, we propose constraint fusion to reduce the overhead of proving non-linear layers and circuit squeeze to improve parallelism. We implement our efficient protocol, specifically tailored for popular LLM architectures like GPT-2, and deploy optimizations to enhance performance. Experiments show that our scheme can prove GPT-2 inference in less than 25 seconds. Compared with state-of-the-art systems such as Hao et al. (USENIX Security'24) and ZKML (Eurosys'24), our work achieves nearly and speedup, respectively.
PA1 Security on Release of Unverified Plaintext in Encrypt-then-MAC AE Schemes
At ASIACRYPT 2014, Andreeva et al. put forward a definition for security of authenticated encryption under release of unverified plaintext. They introduced two notions of plaintext awareness (PA1 and its stronger sibling PA2), suggested to be used in conjunction with confidentiality in case of release of unverified plaintext, as well as the notion of integrity under release of unverified plaintext (INT-RUP). Various efforts have been made to develop a unified model (e.g., Ashur et al., CRYPTO 2017, Chang et al., ToSC 2019(4)). With respect to the analysis of existing and new modes under release of unverified plaintext, most research however has focused on INT-RUP security only. Plaintext awareness is less studied and understood.
In this work, we take a detailed look at the original definitions of PA1 and PA2 security. We observe that the definitions leave too much room for interpretation, and claimed results such as PA1 security of Encrypt-then-MAC are unjustified. The core of the issue lies in the fact that PA1 security is necessarily tied to the implementation of the scheme. To resolve this, we present refined definitions of PA1 and PA2 security. We argue that even for these refined definitions, there is no implementation of Encrypt-and-MAC that is PA1 (nor PA2) secure. For MAC-then-Encrypt, results depend on the actual scheme, as we demonstrate using a negative result and a positive result (from literature, on Romulus-M). Furthermore, we formally prove for Encrypt-then-MAC that (i) there exist implementations that are PA1 insecure and (ii) there exist implementations that are PA1 secure. In other words, Encrypt-then-MAC is insecure under the old definition but secure under the new definition, provided a proper implementation is used. We apply this observation to Isap v2, finalist in the NIST Lightweight Cryptography competition, where we additionally deal with the complication that the same key is used for encryption and authentication.
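To see concretely why plaintext awareness can hinge on the implementation rather than on the mode itself, consider this toy Python sketch of Encrypt-then-MAC (an illustrative stream cipher, not a scheme from the paper): both decryption routines agree on valid ciphertexts, but they release different information on invalid ones, which is exactly the RUP distinction.

```python
# Encrypt-then-MAC with two decryption implementations (toy cipher).
import hmac, hashlib, secrets

def keystream(ke: bytes, nonce: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(ke + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(ke, km, nonce, pt):
    ct = bytes(a ^ b for a, b in zip(pt, keystream(ke, nonce, len(pt))))
    tag = hmac.new(km, nonce + ct, hashlib.sha256).digest()
    return ct, tag

def decrypt_verify_first(ke, km, nonce, ct, tag):
    expected = hmac.new(km, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None   # nothing is released on a forgery
    return bytes(a ^ b for a, b in zip(ct, keystream(ke, nonce, len(ct))))

def decrypt_release_unverified(ke, km, nonce, ct, tag):
    # RUP behaviour: the candidate plaintext escapes before any tag check,
    # the kind of implementation detail a PA1 analysis must account for.
    return bytes(a ^ b for a, b in zip(ct, keystream(ke, nonce, len(ct))))

ke, km, nonce = secrets.token_bytes(16), secrets.token_bytes(16), b"n0"
ct, tag = encrypt(ke, km, nonce, b"attack at dawn")
assert decrypt_verify_first(ke, km, nonce, ct, tag) == b"attack at dawn"
```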
Pseudorandom Correlation Generators for Multiparty Beaver Triples over
We construct an efficient pseudorandom correlation generator (PCG) (Boyle et al., Crypto'19) for two-party programmable oblivious linear evaluation (OLE) functionality over . Our construction (i) has an efficient seed expansion phase, and (ii) comes with a concretely efficient protocol for distributing the seeds that makes black-box use of cryptography and runs in a constant number of rounds.
PCGs for programmable OLE are known to imply PCGs for generating -party Beaver triples over . The resultant PCG has a seed setup phase whose communication cost is times that of the programmable OLE protocol. The per-party seed size and the seed expansion time have a multiplicative overhead of . Prior constructions for efficiently generating multiparty Beaver triples only worked for finite fields where or required one bit of per-party communication for each triple generated (and hence, do not satisfy the PCG definition). Thus, ours is the first concretely efficient PCG for generating Beaver triples over in the multiparty setting.
Our distributed seed generation protocol generates two-party programmable OLEs in 3.5 minutes with 255 MB of communication over a LAN network. The PCG seed size is around 55 MB and the expansion phase requires 10 PRG calls and around 229 thousand XOR and AND operations per triple, producing roughly 31,000 triples per second.
Our PCG for generating multiparty Beaver triples has lower concrete communication cost than the state-of-the-art for a small number of parties. When compared to the FOLEAGE protocol (Bombar et al., Asiacrypt 2024), which requires one bit of per-party communication per triple that is generated, our communication cost is lower by when generating triples between three parties and is lower for the case of five parties.
At a conceptual level, our protocol deviates from the prior approaches which relied on variants of dual learning parity with noise (LPN) assumption. Instead, our construction combines both the primal and dual versions of LPN to achieve the aforementioned efficiency.
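For context on the object being generated, the following plaintext Python simulation shows how one Beaver triple (a, b, c = a*b) is consumed to multiply two additively shared values modulo 2^64; producing many such triples from short correlated seeds is precisely the PCG's role.

```python
# Two-party Beaver-triple multiplication over Z_{2^64} (simulated in one
# process; in real MPC each party holds only its own shares).
import secrets

MOD = 2 ** 64

def share(v):
    s0 = secrets.randbelow(MOD)
    return s0, (v - s0) % MOD

x, y = 1234567, 7654321
a, b = secrets.randbelow(MOD), secrets.randbelow(MOD)
c = (a * b) % MOD                      # the correlation the PCG supplies

x0, x1 = share(x); y0, y1 = share(y)
a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(c)

# The parties open d = x - a and e = y - b (these leak nothing about x, y).
d = (x0 - a0 + x1 - a1) % MOD
e = (y0 - b0 + y1 - b1) % MOD

# Local computation, using x*y = c + d*b + e*a + d*e.
z0 = (c0 + d * b0 + e * a0 + d * e) % MOD
z1 = (c1 + d * b1 + e * a1) % MOD
assert (z0 + z1) % MOD == (x * y) % MOD
```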
UOV-Based Verifiable Timed Signature Scheme
Verifiable Timed Signatures (VTS) are cryptographic primitives that enable the creation of a signature that can only be retrieved after a specific time delay, while also providing verifiable evidence of its existence. This framework is particularly useful in blockchain applications. Current VTS schemes rely on signature algorithms such as BLS, Schnorr, and ECDSA, which are vulnerable to quantum attacks because the discrete logarithm problem can be solved by Shor's algorithm. We introduce VT-UOV, a novel VTS scheme based on the Salt-Unbalanced Oil and Vinegar (Salt-UOV) Digital Signature Algorithm. As a multivariate polynomial-based cryptographic primitive, Salt-UOV provides strong security against both classical and quantum adversaries. Adapting Salt-UOV to the VTS framework requires addressing challenges such as complex parameters instead of an integer, the computational complexity of solving multivariate equations, and the integration of Time-Lock Puzzles (TLPs) for enforcing delayed signature generation. Our experimental results show that VT-UOV exhibits a unique performance profile among existing VTS constructions. This paper offers a detailed exploration of the VT-UOV scheme and its overall security and performance properties.
Cryptanalysis of HiAE
We describe key recovery attacks on the authenticated stream cipher HiAE, which was recently proposed for future high-throughput communication networks such as 6G by Huawei. HiAE uses a 2048-bit state, a 256-bit key and produces 128-bit tags, targeting 256-bit security against key and state recovery. As a nonce-based AEAD scheme, it relies on the uniqueness of the nonce per key for these security claims. Our analysis indicates that a complete recovery of the 256-bit key of HiAE is possible with a complexity of data and at most time in the nonce-respecting attack setting, with various small tradeoffs concerning the data and time complexity. While infeasible in practice, this attack therefore violates the 256-bit security claim for HiAE. We describe further complete key-recovery attacks in the nonce-misuse and release of unverified plaintext (RUP) settings which require only a small constant number of repeated nonces or unverified decryption queries, respectively.
A Tale of Two Worlds, a Formal Story of WireGuard Hybridization
PQ-WireGuard is a post-quantum variant of the WireGuard Virtual Private Network (VPN), where Diffie-Hellman-based key exchange is replaced by key exchange based on post-quantum Key Encapsulation Mechanisms. In this paper, we first conduct a thorough formal analysis of PQ-WireGuard's original design, in which we point out and fix a number of weaknesses. This leads us to an improved construction, PQ-WireGuard*. Secondly, we propose and formally analyze a new protocol based on both WireGuard and PQ-WireGuard*, named Hybrid-WireGuard, compliant with current best practices for the post-quantum transition regarding hybridization techniques. For our analysis, we use the SAPIC+ framework, which enables the generation of three state-of-the-art protocol models for the verification tools ProVerif, DeepSec and Tamarin from a single specification, leveraging the strengths of each tool. We formally prove that Hybrid-WireGuard is secure. Finally, we propose a generic, efficient and usable Rust implementation of our new protocol.
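As a simplified illustration of the hybridization principle (a plain concatenation combiner, not the actual Hybrid-WireGuard key schedule, which chains secrets through its handshake hashes), the derived session key remains secure as long as either input secret does:

```python
# Minimal hybrid key combiner sketch: hash the classical DH secret, the
# post-quantum KEM secret and the handshake transcript into one session key.
import hashlib

def combine(dh_secret: bytes, kem_secret: bytes, transcript: bytes) -> bytes:
    return hashlib.sha256(
        b"hybrid-kdf|" + dh_secret + b"|" + kem_secret + b"|" + transcript
    ).digest()

key = combine(b"\x01" * 32, b"\x02" * 32, b"handshake transcript hash")
```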
Engel p-adic Supersingular Isogeny-based Cryptography over Laurent series
This paper builds the foundation for a cryptosystem based on p-adic representations of supersingular elliptic curve isogenies generated through Engel expansions of Laurent series. This mathematical framework manifests as a lightweight encryption scheme implemented on ESP32 microcontrollers for IoT applications. Efficient isogeny paths are constructed for quantum-resistant primitives secured against Shor's algorithm by decomposing elements into Engel sequences. Performance analysis confirms linear computational scaling with message size and speed gain at a higher clock rate, along with power trace signatures corroborating theoretical computational models. Consequently, we confirm the practical feasibility of our proposed p-adic isogeny cryptography on resource-constrained embedded systems while offering rigorous post-quantum security assurances.
Mind the Gap: Securing QKD Interfaces with Post-Quantum Proxies
Quantum Key Distribution (QKD) is a promising technology that enables information-theoretic secure key exchange using quantum principles. It is being increasingly deployed in critical sectors through emerging Quantum Key-as-a-Service (QKaaS) models. However, current standards like ETSI GS QKD 014 assume that QKD keys are consumed within trusted environments—an assumption that breaks down in real-world deployments where keys are delivered over classical networks to remote, potentially untrusted endpoints. This creates a security gap at the interface between QKD systems and key-consuming applications. In this paper, we identify this gap and propose a proxy-based solution that secures QKD key delivery using post-quantum cryptography (PQC). Our proxy transparently applies PQC-based signatures and key encapsulation to ETSI-compliant QKD APIs, without requiring changes to existing infrastructure. It supports cryptographic agility, allowing runtime switching between multiple PQC schemes. We benchmark our design using both QKD simulators and production-grade QKD hardware, and show that it introduces minimal overhead with efficient NIST PQC algorithms. Our findings highlight the need for stronger protection of the QKD interface in practical deployments. We advocate for a revision to ETSI GS QKD 014 to include an addendum that addresses this critical gap and promotes end-to-end quantum-safe integration.
Solve Approximate CVP via Variants of Nearest-Colattice
The approximate Closest Vector Problem (CVP) is a core computational problem underlying many post-quantum lattice-based signature schemes, including Dilithium, one-more-ISIS, and HuFu. While the security of these schemes is typically expressed in terms of the Inhomogeneous Short Integer Solution (ISIS) problem, it is well-known that ISIS can be efficiently reduced to approximate CVP. Despite its foundational role, approximate CVP with non-negligible approximation factors remains far less explored than other lattice problems such as SVP or LWE, creating a critical gap in both theory and practice.
In this work, we bridge this gap by advancing the Colattice framework for solving approximate CVP with large approximation factors. More concretely, (1) We define a practical version of the Colattice algorithm and propose a randomized Nearest Colattice for generating more than one approximate closest vector. (2) We define a formal strategy space for blockwise approximate CVP. (3) We propose a polynomial-time strategy selection algorithm and prove its correctness under standard lattice heuristics. (4) Building on this, we design an efficient security estimator for approximate CVP in both Euclidean and Infinity norms, and extend it to approximate batch-CVP attack settings. (5) By applying this estimator, we perform concrete security evaluations of Dilithium, HuFu, and one-more-ISIS. Our results reveal that the security of Dilithium is at least 10 lower than the required security thresholds at NIST levels 3 and 5, and almost none of the evaluated schemes withstand approximate batch-CVP attacks with queries. (6) We integrate a slicer and Colattice into G6K-CPU, leveraging the Locality-Sensitive Hashing (LSH) technique for nearest neighbors search (NNS). This is the first practical implementation of an NNS-accelerated slicer. Our results demonstrate the practical efficiency of approximate CVP and batch-CVP attacks, highlighting the need for more accurate security estimation.
These findings underscore the practical importance of accurate approximate CVP modeling and call for a reassessment of current parameter sets in post-quantum signature schemes.
Simple VESS
We present a scheme for verifiably encrypting a Shamir secret sharing to a committee of shareholders. Such a scheme can be used to easily implement distributed key generation (DKG) and resharing protocols used in threshold signing and decryption protocols. Our scheme is a minor variation on known techniques, and is not the most efficient in terms of communication and computational complexity. However, it is extremely simple and easy to implement. Moreover, for moderately sized shareholder committees of up to, say, 13 parties, and for applications where a DKG/resharing only needs to be performed occasionally, its performance should be acceptable in practice.
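For readers unfamiliar with the underlying primitive, here is a toy Python sketch of Shamir sharing with Feldman-style commitments, one standard way of making shares publicly verifiable; the paper's scheme additionally encrypts the shares to the committee, which this sketch omits.

```python
# Shamir sharing over Z_q with Feldman commitments in an order-q subgroup
# of Z_p^*. Toy parameters, demo only.
import secrets

q, p, g = 11, 23, 4

def deal(secret, t, n):
    coeffs = [secret] + [secrets.randbelow(q) for _ in range(t - 1)]
    shares = {x: sum(c * pow(x, j, q) for j, c in enumerate(coeffs)) % q
              for x in range(1, n + 1)}
    commitments = [pow(g, c, p) for c in coeffs]   # published by the dealer
    return shares, commitments

def verify(x, share, commitments):
    # Check g^share == prod_j C_j^(x^j), i.e. the share lies on the
    # committed polynomial.
    rhs = 1
    for j, C in enumerate(commitments):
        rhs = (rhs * pow(C, pow(x, j, q), p)) % p
    return pow(g, share, p) == rhs

shares, comms = deal(secret=7, t=2, n=4)
assert all(verify(x, s, comms) for x, s in shares.items())
```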
Efficient Constant-Size Linkable Ring Signatures for Ad-Hoc Rings via Pairing-Based Set Membership Arguments
Linkable Ring Signatures (LRS) allow users to anonymously sign messages on behalf of ad-hoc rings, while ensuring that multiple signatures from the same user can be linked. This feature makes LRS widely used in privacy-preserving applications like e-voting and e-cash. To scale to systems with large user groups, efficient schemes with short signatures and fast verification are essential. Recent works, such as DualDory (ESORICS’22) and LLRing (ESORICS’24), improve verification efficiency through offline precomputations but rely on static rings, limiting their applicability in ad-hoc ring scenarios. Similarly, constant-size ring signature schemes based on accumulators face the same limitation.
In this paper, we propose a framework for constructing constant-size LRS suitable for large ad-hoc rings. We introduce a novel pairing-based Set Membership Argument (SMA) with a proof size of only three group elements. By leveraging KZG polynomial commitments, we optimize the verification to require only constant group exponentiations and pairings, as well as linear field multiplications. Utilizing the SMA, our framework achieves constant-size signatures with verification dominated by linear field operations, outperforming existing schemes that require linear group exponentiations in ad-hoc ring settings. Moreover, it exhibits strong scalability: (i) compatibility with any PKI-based cryptosystem and (ii) scoped linkability, enabling flexible definitions of linking scope.
We instantiate our framework using a discrete logarithm public key structure. On the curve, our signature size is fixed at 687 bytes, which to the best of our knowledge is the shortest LRS for ring sizes larger than 32. For a ring size of 1024, our verification cost is only 10.4 ms, achieving 48.6×, 2.6×–467×, 7.9×–13.2×, and 2.2×–102.5× improvements over Omniring (CCS'19), DualDory (with and without precomputation), LLRing-DL (with and without precomputation), and LLRing-P (with and without precomputation), respectively. Moreover, this performance gap continues to grow as the ring size increases.
The Effectiveness of Differential Privacy in Real-world Settings: A Metrics-based Framework to help Practitioners Visualise and Evaluate
Differential privacy (DP) has emerged as a preferred solution for privacy-preserving data analysis, having been adopted by several leading Internet companies. DP is a privacy-preserving mechanism that protects against re-identification of individuals within aggregated datasets. It is known that the privacy budget determines the trade-off between privacy and utility. In this paper, we propose a novel set of metrics and an easy-to-implement, step-by-step framework to facilitate the implementation of the DP mechanism on real-world datasets and guide the selection of based on the desired accuracy vs. utility trade-off. Currently, for a given query there is no widely accepted methodology on how to select and choose the best DP mechanism that offers an optimal trade-off between privacy and utility. In order to address this gap, we perform experiments on three real-world datasets, aiming to identify optimal and suitable mechanisms (Laplace or Gaussian) based on the privacy-utility trade-off per use case for the commonly used count, sum and average queries for each dataset. Based on our experimental results, we observe that using our metrics and framework, one can analyse noise distribution charts of multiple queries and choose the suitable and the DP mechanism for achieving a balance between privacy and utility. Additionally, we show that the optimal depends on the particular query, the desired accuracy and the context in which DP is implemented, which suggests that an arbitrary, a priori selection of cannot provide adequate results. Our framework prioritises the plotting and visualisation of values and results in the DP analysis, making its adoption easy for a wider audience.
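As a concrete reference point for the trade-off the framework visualises, a differentially private count query under the Laplace mechanism (sensitivity 1) is a one-liner; smaller privacy budgets inject more noise:

```python
# Laplace mechanism for a count query. The sensitivity of a count is 1, so
# the noise scale is 1/epsilon: smaller epsilon, more privacy, less utility.
import numpy as np

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

for eps in (0.1, 1.0, 10.0):
    print(eps, dp_count(1000, eps))   # noise shrinks as epsilon grows
```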
Guarding the Signal: Secure Messaging with Reverse Firewalls
Secure messaging protocols allow users to communicate asynchronously over untrusted channels with strong guarantees of privacy, authenticity, forward secrecy, and post-compromise security. However, traditional security analyses of these protocols assume complete trust in the hardware and software of honest participants, overlooking a significant class of real-world threats known as subversion attacks. These attacks alter cryptographic algorithms to compromise security, exfiltrating secrets or creating vulnerabilities that often go undetected.
The notion of reverse firewalls (EC'15) aims at protecting against subversion attacks by introducing a third party, called a "reverse firewall" (RF), which sits between a party and the outside world and modifies its outgoing and incoming messages in such a way that, even if the party's machine has been corrupted (in a way that maintains functionality), security is still preserved. Importantly, the firewall shares no private information with the parties, and parties put no more trust in the firewall than they do in the communication channel. In this work, we address the existing gap between secure messaging and subversion attacks by presenting several key contributions:
- We design the first subversion-resilient secure messaging protocol based on the model of RF. Our protocol is based on the Signal protocol, the current state-of-the-art in two-party secure messaging (though it lacks subversion resilience), and achieves subversion resilience with only constant overhead over Signal.
- We develop a subversion-resilient version of the X3DH protocol in the RF model. X3DH is a core component that facilitates secure initial key agreement in Signal's protocol.
- We introduce and formalize the notion of Continuous Key Agreement with Tamper Detection, an essential concept for subversion-resilient secure messaging. Our notion enables parties to continuously agree on keys, even in the presence of active adversaries capable of partially tampering with the key exchange transcript. We present a construction of our notion and prove its subversion resilience in the model of RF.
Beyond LWE: a Lattice Framework for Homomorphic Encryption
We suggest a generalization of homomorphic encryption (HE) schemes from a purely geometrical and lattice-based perspective. All the current reference HE schemes are based on the ring version of the Learning with Errors (LWE) problem. In this proposal, we first investigate LWE-based cryptosystems from a lattice point of view and present a framework that allows one to obtain the same result, in geometrical terms, from any lattice — as long as it contains a sufficiently short trapdoor vector.
More precisely, we generalize the classical BGV (Brakerski, Gentry and Vaikuntanathan, ITCS '12) and GSW (Gentry, Sahai and Waters, CRYPTO '13) schemes to purely lattice-based variants, which we call Lattice-BGV and Lattice-GSW.
By abstracting away the particular hardness assumption, our lattice framework can be instantiated with a broader range of lattices and hardness assumptions.
For example, LWE gives a natural trapdoor for random q-ary lattices, and when plugged into our framework one obtains the original BGV and GSW schemes, while in this work we will also consider an instantiation based on the Lattice Isomorphism Problem (LIP), leading to the first more advanced cryptographic scheme built from LIP. Our framework also gives a geometrical and natural explanation of HE procedures and generalizes some properties, such as the ability to store many messages in a single ciphertext, one for each short trapdoor vector, without relying on any particular algebraic structure.
In concurrent work, Branco, Malavolta and Maradni (ePrint 2025/993) propose an alternative LIP-based FHE construction.
Optimized Rank Sort for Encrypted Real Numbers
Sorting arrays encrypted under fully homomorphic encryption remains a fundamental challenge due to the high cost of private comparisons and the incompatibility of conventional sorting algorithms with the encrypted domain. Recently, Hong et al. (IEEE Transactions on Information Forensics and Security, 2021) proposed a -way sorting network tailored to encrypted real numbers, but its reliance on multiple comparison stages incurs substantial multiplicative depth and significant bootstrapping overhead, even for modest array sizes.
In this work, we propose a novel rank-based sorting algorithm for encrypted real numbers that performs only a single comparison stage, thereby eliminating the need for bootstrapping operations. Our empirical evaluation demonstrates that the proposed method significantly outperforms the -way approach for small to medium array sizes, achieving a speedup at with a total runtime of seconds. Furthermore, we examine the recent matrix-based rank sort method by Mazzone et al. (USENIX Security '25) and show that integrating our optimized rank construction improves its efficiency. Specifically, we achieve and performance gains for and , respectively.
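The core idea, shown below in a plaintext Python analogue, is that all pairwise comparisons fit into a single stage (in the encrypted domain, one batched comparison circuit with no bootstrapping between stages); each rank is a row sum of the comparison matrix, and values are then routed to their ranked positions.

```python
# Plaintext analogue of single-stage rank sort.
def rank_sort(xs):
    n = len(xs)
    # One comparison stage; ties are broken by index so ranks are unique.
    less = [[(xs[j], j) < (xs[i], i) for j in range(n)] for i in range(n)]
    ranks = [sum(row) for row in less]   # rank = number of smaller entries
    out = [None] * n
    for i, r in enumerate(ranks):
        out[r] = xs[i]
    return out

assert rank_sort([3.2, 1.5, 3.2, 0.7]) == [0.7, 1.5, 3.2, 3.2]
```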
Understanding Lasso: A Novel Lookup Argument Protocol
In 2023, Srinath Setty, Justin Thaler, and Riad Wahby published a paper that describes a novel lookup argument with efficient verification called Lasso. We present a focused and accessible overview of the Lasso lookup argument, which serves as the foundational component of the Jolt ZK-VM. This article distills the core principles behind Lasso: the sum-check protocol, multilinear polynomials and their extensions, the Spark commitment, offline memory-checking, and the evolution of Spark called Surge. By clarifying the underlying protocols and their relationship to innovations like Spark and Surge, we aim to provide researchers and engineers with practical insights into the cryptographic foundations powering both Lasso and the Jolt virtual machine.
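Because the sum-check protocol is the workhorse behind Lasso, a compact Python sketch of one complete sum-check interaction for a multilinear polynomial (given by its evaluation table on the Boolean hypercube) may be useful; in a real argument the challenges come from the verifier or a Fiat-Shamir transcript, and the final evaluation is supplied by a polynomial commitment rather than recomputed.

```python
# Sum-check for a multilinear polynomial over F_P, table-based prover.
import random

P = 2 ** 61 - 1   # demo field choice

def sumcheck(table, challenges):
    """Returns the claimed sum, the per-round linear polys, and f(r)."""
    A = list(table)
    claimed = sum(A) % P
    rounds = []
    for r in challenges:                 # one round per variable
        half = len(A) // 2
        g0, g1 = sum(A[:half]) % P, sum(A[half:]) % P
        rounds.append((g0, g1))          # degree-1 poly sent as (g(0), g(1))
        A = [((1 - r) * lo + r * hi) % P for lo, hi in zip(A[:half], A[half:])]
    return claimed, rounds, A[0]

v = 3
table = [random.randrange(P) for _ in range(2 ** v)]
rs = [random.randrange(P) for _ in range(v)]
claimed, rounds, final_eval = sumcheck(table, rs)

# Verifier: check g(0) + g(1) against the running claim, then reduce it.
claim = claimed
for (g0, g1), r in zip(rounds, rs):
    assert (g0 + g1) % P == claim
    claim = (g0 + r * (g1 - g0)) % P     # the claim becomes g(r)
assert claim == final_eval               # one final oracle evaluation of f
```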
On Frontrunning Risks in Batch-Order Fair Systems for Blockchains (Extended Version)
In timing-sensitive blockchain applications, such as decentralized finance (DeFi), achieving first-come-first-served (FCFS) transaction ordering among decentralized nodes is critical to prevent frontrunning attacks. Themis [CCS'23], a state-of-the-art decentralized FCFS ordering system, has become a key reference point for high-throughput fair ordering systems for real-world blockchain applications, such as rollup chains and decentralized sequencing, and has influenced the design of several subsequent proposals. In this paper, we critically analyze its core system property of practical batch-order fairness and evaluate the frontrunning resistance claim of Themis. We present the Ambush attack, a new frontrunning technique that achieves nearly 100% success against the practical batch-order fair system with only a single malicious node and negligible attack costs. This attack causes a subtle temporary information asymmetry among nodes, which is allowed due to the heavily optimized communication model of the system. A fundamental trade-off we identify is the challenge of balancing security and performance in these systems; namely, enforcing timely dissemination of transaction information among nodes (to mitigate frontrunning) can easily lead to non-negligible network overheads (thus, degrading overall throughput performance). We show that it is yet possible to balance the two by delaying transaction dissemination to a level tolerable for frontrunning mitigation while maintaining high throughput. Our evaluation demonstrates that the proposed delayed gossiping mechanism can be seamlessly integrated into existing systems with only minimal changes.
Security Analysis on a Public-Key Inverted-Index Keyword Search Scheme with Designated Tester
Gao et al. (IEEE Internet of Things Journal 2024) proposed public-key inverted-index keyword search with designated tester as an extension of public key encryption with keyword search (PEKS). In their scheme, a server (a tester) has a secret key and uses the key for running the search algorithm due to the designated tester setting. They proved that no information about keywords is revealed from trapdoors under the decisional Diffie-Hellman (DDH) assumption. However, they also employed a symmetric pairing, which can be seen as a DDH-solver. Thus, it is expected that keyword information is revealed from trapdoors since the underlying complexity assumption does not hold. In this paper, we demonstrate two attacks against the Gao et al. scheme in which keyword information is revealed from a trapdoor. The first attack requires only the server's secret key in addition to the challenge trapdoor, without any additional encryption/trapdoor queries. We remark that an adversary is not allowed to obtain the server's secret key in their security model, and our attack is outside of their security model. Thus, we discuss the roles of the server, and stress that our attack scenario is reasonable. The second attack does not employ the server's secret key and instead utilizes the linkability of two trapdoors. In both attacks, the attack complexity is just two pairing computations, which is feasible in terms of computational cost.
Threshold Signatures Reloaded: ML-DSA and Enhanced Raccoon with Identifiable Aborts
Threshold signatures enable multiple participants to collaboratively produce a digital signature, ensuring both fault tolerance and decentralization. As we transition to the post-quantum era, lattice-based threshold constructions have emerged as promising candidates. However, existing approaches often struggle to scale efficiently, lack robustness guarantees, or are incompatible with standard schemes — most notably, the NIST-standard ML-DSA.
In this work, we explore the design space of Fiat-Shamir-based lattice threshold signatures and introduce the two most practical schemes to date. First, we present an enhanced TRaccoon-based [DKM+24] construction that supports up to 64 participants with identifiable aborts, leveraging novel short secret-sharing techniques to achieve greater scalability than previous state-of-the-art methods. Second — and most importantly — we propose the first practical ML-DSA-compatible threshold signature scheme, supporting up to 6 users.
We provide full implementations and benchmarks of our schemes, demonstrating their practicality and efficiency for real-world deployment as protocol messages are computed in at most a few milliseconds, and communication cost ranges from 10.5 kB to 525 kB depending on the threshold.
Automated Analysis and Synthesis of Message Authentication Codes
Message Authentication Codes (MACs) represent a fundamental symmetric key primitive, serving to ensure the authenticity and integrity of transmitted data.
As a building block in authenticated encryption and in numerous deployed standards, including TLS, IPsec, and SSH, MACs play a central role in practice.
Due to their practical importance, MACs have been the subject of extensive research, leading to prominent schemes such as HMAC, CBC-MAC, or LightMAC.
Despite the existence of various MACs, there is still considerable interest in creating schemes that are more efficient, potentially parallelizable, or have specific non-cryptographic attributes, such as being patent-free.
In this context, we introduce an automated method for analyzing and synthesizing MAC schemes.
In order to achieve this goal, we have constructed a framework that restricts the class of MACs in such a way that it is sufficiently expressive to cover known constructions, yet also admits automated reasoning about the security guarantees of both known and new schemes.
Our automated analysis has identified a novel category of MACs, termed "hybrid" MACs. These MACs operate by processing multiple blocks concurrently, with each block managed by a different, specified MAC scheme. A key finding is that in certain scenarios, the hybrid MAC marginally outperforms the simultaneous operation of the individual MACs. This improvement is attributed to the hybrid approach exploiting the strengths and compensating for the weaknesses of each distinct MAC scheme involved.
Our implementation confirms that we have successfully identified new schemes that have comparable performance with state-of-the-art schemes and in some settings seem to be slightly more efficient.
Man-in-the-Middle and Key Recovery Attacks against QP-KEM
The Q-problem has been introduced as a new post-quantum hard problem. We present two man-in-the-middle and three key recovery attacks against the key exchange protocol based on the Q-problem. The man-in-the-middle attacks take negligible time and allow the attacker to recover the exchanged key. The most effective key recovery attack has a computational complexity of . We also propose countermeasures against all attacks.
Efficient, Scalable Threshold ML-DSA Signatures: An MPC Approach
A threshold signature is an advanced protocol that splits a secret signing key among multiple parties, allowing any subset above a threshold to jointly generate a signature. While post-quantum (PQ) threshold signatures are actively being studied — especially in response to NIST's recent call for threshold schemes — most existing solutions are tailored to specially designed, threshold-friendly signature schemes. In contrast, many real-world applications, such as distributed certificate authorities and digital currencies, require signatures that remain verifiable under the standardized verification procedures already in use. Given NIST's recent standardization of PQ signatures and ongoing industry deployment efforts, designing an efficient threshold scheme that interoperates with NIST-standardized verification remains a critical open problem.
In this work, we present the first efficient and scalable solution for multi-party generation of the module-lattice digital signature algorithm (ML-DSA), one of NIST's PQ signature standards. Our contributions are two-fold. First, we present a variant of the ML-DSA signing algorithm that is amenable to efficient multi-party computation (MPC) and prove that this variant achieves the same security as the original ML-DSA scheme. Second, we present several efficient and scalable MPC protocols to instantiate the threshold signing functionality. Our protocols can produce threshold signatures with as little as 100 KB (per party) of online communication per rejection-sampling round. In addition, we instantiate our protocols in the honest-majority setting, which allows us to avoid any additional public key assumptions.
The signatures produced by our protocols verify under the same implementation of ML-DSA verification for all three security levels. Thus, signatures and verification keys of our scheme are (naturally) the same size as that of ML-DSA; previous lattice-based threshold schemes could not match both of these sizes. Overall, our solution is the only method for producing threshold ML-DSA signatures compatible with NIST-standardized verification that scales to an arbitrary number of parties, without any new assumptions.
SV-LLM: An Agentic Approach for SoC Security Verification using Large Language Models
Ensuring the security of complex system-on-chip (SoC) designs is a critical imperative, yet traditional verification techniques struggle to keep pace due to significant challenges in automation, scalability, comprehensiveness, and adaptability. The advent of large language models (LLMs), with their remarkable capabilities in natural language understanding, code generation, and advanced reasoning, presents a new paradigm for tackling these issues. Moving beyond monolithic models, an agentic approach allows for the creation of multi-agent systems where specialized LLMs collaborate to solve complex problems more effectively. Recognizing this opportunity, we introduce SV-LLM, a novel multi-agent assistant system designed to automate and enhance SoC security verification. By integrating specialized agents for tasks like verification question answering, security asset identification, threat modeling, test plan and property generation, vulnerability detection, and simulation-based bug validation, SV-LLM streamlines the workflow. To optimize their performance in these diverse tasks, agents leverage different learning paradigms, such as in-context learning, fine-tuning, and retrieval-augmented generation (RAG). The system aims to reduce manual intervention, improve accuracy, and accelerate security analysis, supporting proactive identification and mitigation of risks early in the design cycle. We demonstrate its potential to transform hardware security practices through illustrative case studies and experiments that showcase its applicability and efficacy.
High-Performance FPGA Accelerator for the Post-quantum Signature Scheme CROSS
A significant effort in designing and engineering post-quantum cryptosystems is currently ongoing, also as a result of the National Institute of Standards and Technology (NIST) Post-quantum Cryptography (PQC) standardization process that started in 2016 and recently completed, selecting two Key Encapsulation Mechanisms (KEMs), CRYSTALS-Kyber and HQC, and three digital signatures, CRYSTALS-Dilithium, Falcon, and SPHINCS+, for standardization. In 2022, NIST launched another standardization effort for additional post-quantum digital signatures, preferably not based on the security assumptions of structured lattices, and with performance better than or equal to that of already standardized schemes (e.g., SPHINCS+). This initiative has narrowed down the initial 40 candidates to 14 in October 2024, eliciting public scrutiny of their algorithms and technical evaluation of their performance figures. Among the schemes admitted to the second round of evaluation, the code-based CROSS signature scheme was praised for its speed and its noticeably smaller signatures compared to the standardized version of SPHINCS+. In this work, we present the first RTL hardware design of CROSS tailored for FPGA devices, delineating efficient implementation strategies for the critical components of the cryptographic scheme. Depending on the chosen security level, our design generates a key pair in 9 to 152 µs, signs a message in 404 µs to 5.89 ms, and verifies a signature in 322 µs to 4.38 ms on the NIST reference FPGA, a Xilinx Artix-7 device, proving competitive when compared with other candidates in the on-ramp standardization effort, namely LESS, MEDS, MAYO, Raccoon and SDitH, and comparable to the current standard-selected ML-DSA, FN-DSA, and SLH-DSA in terms of efficiency.
Black-box Approaches to Authenticated Dictionaries: New Constructions and Lower Bounds
Authenticated dictionaries (ADs) enable secure lookups to a dictionary hosted by an untrusted server and are a key component of various real-world applications, including transparency systems and cryptocurrencies. Despite significant overlap in techniques for building ADs and related primitives, such as memory checkers and accumulators (i.e., authenticated sets), these relationships have yet to be formalized.
In this work, we give a rigorous treatment of ADs and prove their precise connection to the latter two cornerstone primitives. We start by laying out the minimal algorithms and security properties needed in practice and introduce a new security notion for ADs called write-committing, which requires update proofs to guarantee an exact count of changes.
We prove that any AD built from a black-box authenticated set (AS) makes at least AS calls per lookup and obeys a trade-off between lookups and updates. With optimal lookups, such a scheme requires at least AS calls per update.
We also resolve the open question of constructing a secure AD from only black-box access to an AS and present two schemes adhering to the trade-off: one with optimal lookup overhead and the other with higher lookup complexity, but which only requires two AS calls for an update.
Finally, we make strides towards unifying memory checkers and ADs. To this end, we present two constructions for memory checkers with black-box access to an AD: one that incurs constant overhead (but needs write-committing) and a second that only requires the AD to be lookup-secure but incurs logarithmic overhead. We then give a simple AD construction using a memory checker as a black-box, with overhead.
Our results demonstrate the inherent limitations of ADs built from accumulators but lay the foundation for extending existing results on memory checkers and other primitives, such as vector commitments, to ADs.
Let be a prime and consider a committed vector .
We develop new techniques for succinctly proving in zero-knowledge that all the elements of are in the range for some .
We refer to this as a batched zero-knowledge range proof, or a batched ZKRP.
This problem comes up often in cryptography: it is needed in publicly verifiable secret sharing (PVSS), confidential transactions, and election protocols.
Our approach makes use of a multilinear polynomial commitment scheme and the sum check protocol to efficiently provide a batch range proof for the entire vector.
Along the way we introduce a new type of Polynomial Interactive Oracle Proof (PIOP) that we call a Homomorphic PIOP (HPIOP), which can be compiled into a SNARK.
We use an HPIOP to construct a new efficient zero-knowledge version of the sum check protocol.
We compare our new techniques with existing range proofs and lookup arguments.
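For orientation, the classic arithmetization behind such range proofs, checked here in the clear with a small Python sketch, is that a value lies in [0, 2^n) exactly when some bits satisfy booleanity, b*(b-1) = 0, and recompose to the value; a batched ZKRP establishes these constraints for every entry of the committed vector at once.

```python
# Range membership via bit decomposition, checked in the clear.
p = 2 ** 61 - 1   # demo field
n = 16            # range [0, 2^n)

def satisfies(x, bits):
    boolean = all(b * (b - 1) % p == 0 for b in bits)          # b in {0, 1}
    recompose = sum(b << i for i, b in enumerate(bits)) % p == x % p
    return boolean and recompose

x = 40000
bits = [(x >> i) & 1 for i in range(n)]     # honest witness
assert satisfies(x, bits)
assert not satisfies(2 ** 16 + 5, bits)     # out-of-range value has no witness
```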
Bridging Bitcoin to Second Layers via BitVM2
A holy grail in blockchain infrastructure is a trustless bridge between Bitcoin and its second layers or other chains. We make progress toward this vision by introducing the first light-client based Bitcoin bridge. At the heart of its design lies BitVM2-core, a novel paradigm that enables arbitrary program execution on Bitcoin, combining Turing-complete expressiveness with the security of Bitcoin consensus. BitVM2-bridge advances prior approaches by reducing the trust assumption from an honest majority (t-of-n) to existential honesty (1-of-n) during setup. Liveness is guaranteed with only one rational operator, and any user can act as a challenger, enabling permissionless verification. A production-level implementation of BitVM2 has been developed and a full challenge verification has been executed on the Bitcoin mainnet.
General Multi-Prime Multi-Power RSA - A Generalization of RSA and CRT-RSA to Regular Integers Modulo
Based on a sharpening of Carmichael's theorem and Euler's theorem, we generalize RSA and CRT-RSA to arbitrary integer moduli n, while restricting messages to integers which are regular modulo n, meaning that they have a John von Neumann pseudoinverse modulo n. We prove the correctness and completeness of our encryption and decryption method for this class of messages, and show that the restriction to regular integers is negligible, since under reasonable assumptions almost all integers are regular modulo n.
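A small Python demonstration of the underlying identity may help (toy non-squarefree modulus, illustrative only): when m is regular modulo n, every prime power in n either divides m or is coprime to it, so decryption with e*d = 1 mod lambda(n) recovers m exactly.

```python
# RSA over a non-squarefree modulus, restricted to regular messages.
from math import lcm

def carmichael(factorization):
    def lam(p, e):
        l = (p - 1) * p ** (e - 1)
        return l // 2 if p == 2 and e >= 3 else l
    return lcm(*(lam(p, e) for p, e in factorization))

def is_regular(m, factorization):
    # m is regular mod n iff for each p^e || n: p does not divide m,
    # or p^e divides m (equivalently, m has a von Neumann pseudoinverse).
    return all(m % p != 0 or m % p ** e == 0 for p, e in factorization)

fact = [(2, 2), (3, 1), (5, 2)]   # n = 2^2 * 3 * 5^2 = 300
n, lam = 300, carmichael([(2, 2), (3, 1), (5, 2)])   # lambda(n) = 20
e = 3                             # gcd(e, lambda(n)) = 1
d = pow(e, -1, lam)               # decryption exponent, here 7

for m in range(n):
    if is_regular(m, fact):
        assert pow(pow(m, e, n), d, n) == m   # decryption is correct
```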
An efficient construction of Raz's two-source randomness extractor with improved parameters
Randomness extractors are algorithms that distill weak random sources into near-perfect random numbers. Two-source extractors enable this distillation process by combining two independent weak random sources. Raz’s extractor (STOC '05) was the first to achieve this in a setting where one source has linear min-entropy (i.e., proportional to its length), while the other has only logarithmic min-entropy in its length. However, Raz's original construction is impractical due to a polynomial computation time of at least degree 4. Our work solves this problem by presenting an improved version of Raz's extractor with quasi-linear computation time, as well as a new analytic theorem with reduced entropy requirements. We provide comprehensive analytical and numerical comparisons of our construction with others in the literature, and we derive strong and quantum-proof versions of our efficient Raz extractor. Additionally, we offer an easy-to-use, open-source code implementation of the extractor and a numerical parameter calculation module.
On the Security of Group Ring Learning with Errors
We propose a dimension-reducing transformation on Group Ring Learning with Errors (GRLWE) samples. We exhibit an efficiently computable isomorphism which takes samples defined over the group rings used in the construction of GRLWE to twice as many samples defined over matrix rings, in half the dimension. This is done by composing two maps: the first map is a transformation showing that the group rings used are orders in central simple algebras, and the second map takes the obtained central simple algebra to a matrix ring. When combined with lattice reduction on the resulting matrix samples, this gives an attack on the GRLWE problem. We extend this attack to other groups proposed for cryptographic use by the creators of GRLWE, and display some numerical results quantifying the effects of the transformation, using the 'Lattice Estimator'. We then give a family of groups from which GRLWE-style group rings can be constructed that are immune to our attack, namely the generalized quaternion groups. Finally, we discuss the merits and vulnerabilities of a number of different forms of structured LWE.
Evaluation of Modular Polynomials from Supersingular Elliptic Curves
We present several new algorithms to evaluate modular polynomials of level modulo a prime on an input.
More precisely, we introduce two new generic algorithms, sharing the following similarities: they are based on a CRT approach; they make use of supersingular curves and the Deuring correspondence; and, their memory requirements are optimal.
The first algorithm combines the ideas behind a hybrid algorithm of Sutherland from 2013 with a recent algorithm to compute modular polynomials using supersingular curves introduced in 2023 by Leroux. The complexity (holding under several plausible heuristic assumptions) of the resulting algorithm matches the time complexity of the best known algorithm by Sutherland, but has an optimal memory requirement.
Our second algorithm is based on a sub-algorithm that can evaluate modular polynomials efficiently on supersingular j-invariants defined over , and achieves heuristic complexity quadratic in both and , and linear in . In particular, it is the first generic algorithm with optimal memory requirement to obtain a quadratic complexity in .
Additionally, we show how to adapt our method to the computation of other types of modular polynomials such as the one stemming from Weber's function.
Finally, we provide an optimised implementation of the two algorithms detailed in this paper, though we emphasise that various modules in our codebase may find applications beyond their use here.
Privacy-aware White and Black List Searching for Fraud Analysis
In many areas of cybersecurity, we require access to Personally Identifiable Information (PII), such as names, postal addresses and email addresses. Unfortunately, this can lead to data breaches, especially in relation to data compliance regulations such as GDPR. An Internet Protocol (IP) address is an identifier that is assigned to a networked device to enable it to communicate over networks that use IP. Thus, in applications which are privacy-aware, we may aim to hide the IP address while determining whether the address comes from a blacklist. One solution is to use homomorphic encryption to match an encrypted version of an IP address against an encrypted version of a blacklisted network list. In this paper, we use the OpenFHE library [OpenFHE] to encrypt network addresses with the BFV homomorphic encryption scheme. In order to assess the performance overhead of BFV, we implement a matching method using the OpenFHE library and compare it against partial homomorphic schemes, including Paillier, Damgard-Jurik, Okamoto-Uchiyama, Naccache-Stern and Benaloh. The main finding is that the BFV method compares favourably against the partial homomorphic methods in most cases.
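To illustrate how homomorphic matching works in principle, here is a toy Paillier sketch in Python (demo-sized primes, not production parameters, and a different scheme from the BFV method benchmarked in the paper): the client sends its encrypted IP, the server returns a blinded encrypted difference against a blacklist entry, and decryption yields zero exactly on a match while revealing nothing else.

```python
# Toy Paillier-based private blacklist matching (illustration only).
import secrets
from math import lcm

p_, q_ = 1000003, 1000033             # small demo primes
n = p_ * q_; n2 = n * n; g = n + 1
lam = lcm(p_ - 1, q_ - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m):
    r = secrets.randbelow(n - 1) + 1
    return (pow(g, m % n, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

def masked_difference(c_ip, entry):
    c_diff = (c_ip * enc(-entry)) % n2   # Enc(ip - entry), additively
    r = secrets.randbelow(n - 1) + 1
    return pow(c_diff, r, n2)            # Enc(r * (ip - entry)), blinded

ip = 0xC0A80001                          # 192.168.0.1 as an integer
c_ip = enc(ip)                           # sent by the client
assert dec(masked_difference(c_ip, ip)) == 0           # blacklisted: zero
assert dec(masked_difference(c_ip, 0x08080808)) != 0   # otherwise: random
```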
ZK-ProVer: Proving Programming Verification in Non-Interactive Zero-Knowledge Proofs
Program verification ensures software correctness through formal methods but incurs substantial computational overhead. It typically encodes program execution into formulas that are verified using a SAT solver and its extensions. However, this process exposes sensitive program details and requires redundant computations when multiple parties need to verify correctness. To overcome these limitations, zero-knowledge proofs (ZKPs) generate compact, reusable proofs with fast verification times, while provably hiding the program's internal logic. We propose a two-phase zero-knowledge protocol that hides program implementation details throughout verification. Phase I uses a zero-knowledge virtual machine (zkVM) to encode programs into SAT formulas without revealing their semantics. Phase II uses PLONKish circuits to encode resolution proofs for UNSAT instances and satisfying-assignment verification circuits for SAT instances. Evaluation on the Boolector benchmark demonstrates that our method achieves efficient verification time that is independent of clause width for UNSAT instances and of formula size for SAT instances. The resulting ZKPs enable efficient verification of program properties while providing strong end-to-end privacy guarantees.
Faster signature verification with 3-dimensional decomposition
We introduce a novel technique for verifying Schnorr signatures using fast endomorphisms. Traditionally, fast endomorphisms over prime field curves are used to decompose a scalar into two scalars of half the size. This work shows that the context of signature verification allows for a decomposition into three scalars of a third of the size. We apply our technique to three scenarios: verification of a single Schnorr signature, batch verification, and verification of BLS signatures within the Blind Diffie-Hellman key exchange protocol. Our experiments on AMD x86 and ARM Cortex-M4 platforms show performance improvements of approximately 20%, 13%, and 27%, respectively. The technique can also be used to accelerate the verification of ECDSA signatures, provided that one additional bit of information is appended to the signature.
As part of our work, we analyze the performance of 3-dimensional lattice reduction algorithms, which are critical for multi-scalar decompositions. To identify the most efficient approach, we experimentally compare Semaev's algorithm — known for its best asymptotic complexity — with the simpler Lagrange's algorithm. Our results reveal that, despite its simplicity, Lagrange's algorithm is nearly twice as fast as Semaev's in practice.
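For reference, the 2-dimensional case of Lagrange's (also called Gauss's) reduction, which the paper generalizes to dimension 3, fits in a few lines of Python:

```python
# Lagrange/Gauss reduction of a rank-2 lattice basis.
def lagrange_reduce(b1, b2):
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    while True:
        if dot(b1, b1) > dot(b2, b2):
            b1, b2 = b2, b1                    # keep b1 the shorter vector
        mu = round(dot(b1, b2) / dot(b1, b1))  # nearest-integer projection
        if mu == 0:
            return b1, b2                      # reduced basis
        b2 = [x - mu * y for x, y in zip(b2, b1)]

print(lagrange_reduce([1, 0], [987654, 1]))    # -> ([1, 0], [0, 1])
```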
Lightweight Sorting in Approximate Homomorphic Encryption
Sorting encrypted values is an open research problem that plays a crucial role in the broader objective of providing efficient and practical privacy-preserving online services.
The current state-of-the-art work by Mazzone, Everts, Hahn and Peter (USENIX Security '25) proposes efficient algorithms for ranking, indexing and sorting based on the CKKS scheme, which deviates from the compare-and-swap paradigm typically used by sorting networks in favour of a permutation-based approach. This makes it possible to build shallow sorting circuits in a very simple way.
In this work, we follow up on their work and explore different approaches to approximating the nonlinear functions required by the encrypted circuit (where only additions and multiplications can be evaluated), and we propose simpler solutions that allow for faster computations and smaller memory requirements.
In particular, we drastically reduce the upper bound on the depth of the circuits from 65 to 20, making our circuits usable in relatively small rings such as , even for sorting values while preserving up to three decimal places. As an example, our circuit sorts 128 values with duplicates in roughly 20 seconds on a laptop, using roughly 1 GB of memory, maintaining a precision of 0.01.
Furthermore, we propose an implementation of a swap-based bitonic network that is not based on approximations of the sgn function, which scales linearly with the number of values, useful when the number of available slots is small.
An Efficient Encryption Scheme Based on Codes
In this paper, we propose an improvement to the McEliece encryption scheme by replacing the Goppa code with a code. Specifically, we embed the generator matrices of a split Reed-Solomon code into the generator matrix of the code. This approach disrupts the algebraic structure of Reed-Solomon codes, thereby enhancing resistance against structural attacks targeting such codes, while simultaneously preserving their excellent error-correcting capabilities. As a result, the proposed scheme achieves a significant reduction in public key size. Under the hardness assumptions of the decoding problem and the code distinguishing problem for codes, we prove that the scheme achieves indistinguishability under chosen-plaintext attacks (IND-CPA security). Finally, we provide recommended parameters for various security levels and compare the proposed scheme with other code-based public key encryption schemes.
On the Composition of Single-Keyed Tweakable Even-Mansour for Achieving BBB Security
Observing the growing popularity of random permutation (RP)-based designs (e.g., Sponge), Bart Mennink in CRYPTO 2019 initiated an interesting line of research in the direction of RP-based pseudorandom functions (PRFs). Both are claimed to achieve beyond-the-birthday-bound (BBB) security of bits ( being the input block size in bits) but require two instances of RPs and can handle only one-block inputs. In this work, we extend research in this direction by providing two new BBB-secure constructions obtained by composing the tweakable Even-Mansour construction appropriately. Our first construction requires only one instance of an RP and only one key. Our second construction extends the first to a nonce-based Message Authentication Code (MAC) using a universal hash to deal with multi-block inputs. We show that the hash key can be derived from the original key when the underlying hash is the Polyhash. We provide matching attacks for both constructions to demonstrate the tightness of the proven security bounds.
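As background, a toy Python model of a single-keyed tweakable Even-Mansour round is sketched below, with a hash-derived tweak-dependent mask around one public random permutation (illustrative only; the paper's constructions and their BBB proofs rely on carefully chosen maskings such as Polyhash-based ones).

```python
# Toy tweakable Even-Mansour: TEM(t, x) = P(x ^ mask) ^ mask, with the mask
# derived from a single key and the tweak. 16-bit blocks, demo only.
import hashlib, random

N = 16
perm = list(range(2 ** N))
random.Random(0).shuffle(perm)        # stand-in for the public RP
inv = [0] * len(perm)
for i, y in enumerate(perm):
    inv[y] = i

def mask(key: bytes, tweak: bytes) -> int:
    return int.from_bytes(hashlib.sha256(key + tweak).digest()[:2], "big")

def tem_enc(key, tweak, x):
    m = mask(key, tweak)
    return perm[x ^ m] ^ m

def tem_dec(key, tweak, y):
    m = mask(key, tweak)
    return inv[y ^ m] ^ m

k = b"k" * 16
assert tem_dec(k, b"tweak-1", tem_enc(k, b"tweak-1", 12345)) == 12345
```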
Jigsaw: Doubly Private Smart Contracts
Privacy is a growing concern for smart contracts on public ledgers.
In recent years, we have seen several practical systems for privacy-preserving smart contracts, but they only target privacy of on-chain data and rely on trusted off-chain parties with user data — for instance, a decentralized finance application (e.g. an exchange) relies on an off-chain matching engine to process client orders that get settled on-chain, where privacy only applies to the on-chain data.
Privacy conscious users demand stronger notions of privacy, for their identity and their data, from all other parties in the ecosystem.
We propose a novel framework for smart contracts that ensures doubly private execution, addressing both on-chain and off-chain privacy requirements.
In our framework, clients submit their requests in a privacy-preserving manner to a group of (potentially mutually untrusting) servers. These servers collaboratively match client requests without learning any information about the data or identities of the clients.
We then present Jigsaw, an efficient cryptographic realization of our proposed framework. Jigsaw builds on the ZEXE architecture (Bowe et al., S&P 2020), which leverages zkSNARKs, and extends Collaborative zkSNARKs (Ozdemir and Boneh, USENIX 2022) to enable proof generation by a group of servers.
In Jigsaw, we introduce a novel collaborative zkSNARK construction that achieves low latency and reduced proving time, and showcase these advantages over sample applications ranging from trading in a decentralized exchange to auctions and voting.
Our experiments demonstrate that Jigsaw is roughly x faster in proof generation and uses orders-of-magnitude less bandwidth than the naive approach of using off-the-shelf Collaborative zkSNARKs.
QV-net: Decentralized Self-Tallying Quadratic Voting with Maximal Ballot Secrecy
Decentralized e-voting enables secure and transparent elections without relying on trusted authorities, with blockchain emerging as a popular platform. It has compelling applications in Decentralized Autonomous Organizations (DAOs), where governance relies on voting with blockchain-issued tokens. Quadratic voting (QV), a mechanism that mitigates the dominance of large token holders, has been adopted by many DAO elections to enhance fairness. However, current QV systems deployed in practice publish voters' choices in plaintext with digital signatures. The open nature of all ballots compromises voter privacy, potentially affecting voters' honest participation. Prior research proposes using cryptographic techniques to encrypt QV ballots, but they work in a centralized setting, relying on a trusted group of tallying authorities to administer an election. However, in DAO voting, there is no trusted third party.
In this paper, we propose QV Network (QV-net), the first decentralized quadratic voting scheme, in which voters do not need to trust any third party other than themselves for ballot secrecy. QV-net is self-tallying with maximal ballot secrecy. Self-tallying allows anyone to compute election results once all ballots are cast. Maximal ballot secrecy ensures that what each voter learns from QV-net is nothing more than the tally and their own ballot. We provide an open-source implementation of QV-net to demonstrate its practicality based on real-world DAO voting settings, reporting only a few milliseconds for voting and a maximum of 255 milliseconds for tallying.
The exceptional efficiency of QV-net is attributed to the design of two new Zero-Knowledge Argument of Knowledge (ZKAoK) protocols for QV ballot secrecy and integrity. Previous works generally rely on pairing-friendly curves to prove the well-formedness of an encrypted QV ballot, but these incur heavy computation and large data sizes. We tackle the challenges of appropriately formalizing and proving ZKAoK relations for QV without using such curves. Specifically, we develop a succinct ZKAoK to prove a new relation: the sum of squares of a private vector's components equals a private scalar. We also introduce the first aggregated range proof to prove that values committed under different keys fall within their respective ranges. Together, these two new zero-knowledge protocols enable us to build an efficient decentralized QV scheme and are of independent interest.
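In notation of our own choosing (the paper's exact formalization may differ), the two relations just described can be written as follows; in quadratic voting, the sum of squares is exactly the credit cost of a ballot:
\[
  \mathcal{R}_{\mathrm{sq}} = \bigl\{ (v_1,\dots,v_m;\, c) \;:\; \textstyle\sum_{i=1}^{m} v_i^{2} = c \bigr\},
  \qquad
  \mathcal{R}_{\mathrm{rng}} = \bigl\{ (v_1,\dots,v_m) \;:\; v_i \in [a_i, b_i] \text{ for all } i \bigr\},
\]
where the votes $v_i$ and the total cost $c$ stay private, and in $\mathcal{R}_{\mathrm{rng}}$ each $v_i$ is committed under a different key.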
Dynamic Group Signatures with Verifier-Local Revocation
Group signatures are a fundamental cryptographic primitive that allows users to sign a message on behalf of a predefined set of users curated by the group manager. The security properties ensure that members of the group can sign anonymously and without fear of being framed. In dynamic group signatures, the group manager has finer-grained control over group updates while ensuring membership privacy (i.e., hiding when users join and leave). The only known scheme that achieves the standard security properties and membership privacy was proposed by Backes et al. (CCS 2019). However, it relies on an inefficient revocation mechanism that re-issues credentials to all active members during every group update, and users have to rely on a secure and private channel to join the group.
In this paper, we introduce a dynamic group signature that supports verifier-local revocation while achieving strong security properties, including membership privacy for users joining over a public channel. Moreover, when our scheme is paired with structure-preserving signatures over equivalence classes, it enjoys a smaller signature size than Backes et al. Finally, as a stand-alone contribution, we extend the primitive of Asynchronous Remote Key Generation (Frymann et al., CCS 2020) with trapdoors and introduce new security properties to capture this new functionality, which is fundamental to the design of our revocation mechanism.
Parasol Compiler: Pushing the Boundaries of FHE Program Efficiency
Fully Homomorphic Encryption (FHE) is a key technology for enabling privacy-preserving computation. While optimized FHE implementations already exist, the inner workings of FHE are technically complex. This makes it challenging, especially for non-experts, to develop highly efficient FHE programs that can exploit today's advanced hardware. Although several compilers have emerged to help in this process, design choices limit them in terms of application support and the efficiency levels they can achieve.
In this work, we showcase how to make FHE accessible to non-expert developers while retaining the performance provided by an expert-level implementation. We introduce Parasol, a novel end-to-end compiler encompassing a virtual processor with a custom Instruction Set Architecture (ISA) and a low-level library that implements FHE operations. Our processor integrates with existing compiler toolchains, thereby providing mainstream language support. We extract parallelism at multiple levels via our processor design and its computing paradigm. Specifically, we champion a Circuit Bootstrapping (CBS)-based paradigm, enabling efficient FHE circuit composition with multiplexers. Furthermore, Parasol's underlying design highlights the benefits of expressing FHE computations at a higher level, producing highly compact program representations. Our experiments demonstrate the superiority of Parasol in terms of runtime (up to 17x faster), program size (up to 22x smaller), and compile time (up to 32x shorter) compared to the current state-of-the-art. We expect the FHE computing paradigm underlying Parasol to attract future interest since it exposes added parallelism for FHE accelerators to exploit.
Wedges, oil, and vinegar -- An analysis of UOV in characteristic 2
The Unbalanced Oil and Vinegar construction (UOV) has been the backbone of multivariate cryptography since the fall of HFE-based schemes. In fact, 7 UOV-based schemes were submitted to the NIST additional call for signatures, and 4 of these made it to the second round. For efficiency reasons, most of these schemes are defined over a field of characteristic 2. A side effect is that the polar forms of the UOV public maps are not only symmetric, but also alternating.
In this work, we propose a new key-recovery attack on UOV in characteristic 2 that makes use of this property. We consider the polar forms of the UOV public maps as elements of the exterior algebra. We show that these are contained in a certain subspace of the second exterior power that is dependent on the oil space. This allows us to define relations between the polar forms and the image of the dual of the oil space under the Plücker embedding. With this, we can recover the secret oil space using sparse linear algebra.
This new attack has an improved complexity over previous methods and reduces the security by 4, 11, and 20 bits for uov-Ip, uov-III, and uov-V, respectively. Furthermore, the attack is applicable to MAYO and improves on the best attack by 28 bits.
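As background for the alternating property the attack exploits, here is the standard calculation, in our notation: for a homogeneous quadratic map $\mathcal{P}$, the polar form
\[
  \mathcal{P}'(x,y) \;=\; \mathcal{P}(x+y) - \mathcal{P}(x) - \mathcal{P}(y)
\]
is always symmetric and bilinear, and setting $y = x$ gives $\mathcal{P}'(x,x) = \mathcal{P}(2x) - 2\mathcal{P}(x) = 4\mathcal{P}(x) - 2\mathcal{P}(x) = 2\mathcal{P}(x)$, which vanishes in characteristic 2. The polar form is therefore additionally alternating, which is what allows it to be treated as an element of the second exterior power.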
OnionPIRv2: Efficient Single-Server PIR
This paper presents OnionPIRv2, an efficient implementation of OnionPIR incorporating standard orthogonal techniques and engineering improvements. OnionPIR is a single-server PIR scheme that improves response size and computation cost by utilizing recent advances in somewhat homomorphic encryption (SHE) and carefully composing two lattice-based SHE schemes to control the noise growth of SHE. OnionPIRv2 achieves x- x response overhead for databases with moderately large entries (around KB or above) and up to MB/s server computation throughput.
LZKSA: Lattice-Based Special Zero-Knowledge Proofs for Secure Aggregation's Input Verification
In many fields, the need to securely collect and aggregate data from distributed systems is growing. However, designs that rely solely on encrypted data transmission make it difficult to trace malicious users. To address this challenge, we enhance the secure aggregation (SA) protocol proposed by Bell et al. (CCS 2020) with verification features that ensure the compliance of user inputs and encryption processes while preserving data privacy. We present LZKSA, a quantum-safe secure aggregation system with input verification. LZKSA employs seven zero-knowledge proof (ZKP) protocols based on the Ring Learning with Errors problem, specifically designed for secure aggregation. These protocols verify that users have correctly used their SA keys and that the norms and cosine similarity of their data meet specified constraints, so as to exclude malicious users from current and future aggregation processes. The specialized ZKPs we propose significantly enhance proof efficiency. In practical federated learning scenarios, our experimental evaluations demonstrate that the proof generation time for norm constraints is reduced to a small fraction of that required by the current state-of-the-art methods, RoFL (S\&P 2023) and ACORN (USENIX 2023). For example, on CIFAR10 the proof generation/verification times of RoFL, ACORN, and LZKSA are 94s/29.9s, 78.7s/33.9s, and 0.02s/0.0062s, respectively.
Unconditionally secure encryption algorithm with unified confidentiality and integrity
One-Time Pad (OTP), introduced by Shannon, is well known as an unconditionally secure encryption algorithm and has become a cornerstone of modern cryptography. However, the unconditional security of OTP applies solely to confidentiality and does not extend to integrity. Hash functions such as SHA2, SHA3 or SM3 apply only to integrity, not to confidentiality, and cannot achieve unconditional security either. Encryption and digital signatures based on asymmetric cryptography can provide confidentiality, integrity and authentication, but they only achieve computational security. Leveraging the fundamental principles of quantum mechanics, Quantum Key Distribution (QKD) can achieve unconditional security in theory. However, due to limitations in eavesdropping detection, the use of classical channels and imperfections in quantum devices, it cannot reach unconditional security in practical applications. In this paper, based on polynomial rings and probability theory, we propose an unconditionally secure encryption algorithm with unified confidentiality and integrity. The main computation of the encryption algorithm is a Cyclic Redundancy Check (CRC). Theoretical analysis proves that the algorithm not only meets the unconditional security of confidentiality but also guarantees the unconditional security of integrity, making it especially suitable for high-security communications in fields such as finance, the military and government.
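As a point of reference (not the paper's algorithm), the classic way to obtain information-theoretic confidentiality and integrity from polynomial arithmetic is a one-time pad combined with a Krawczyk-style CRC hash over GF(2)[x]. The sketch below is our own toy rendition: it assumes a one-time pad pair and, for brevity, hard-codes a modulus where a real scheme would sample a secret random irreducible polynomial.

import secrets

def polymod(a: int, p: int) -> int:
    """Reduce the GF(2)[x] polynomial a modulo p (bit i = coefficient of x^i)."""
    dp = p.bit_length() - 1
    while a.bit_length() > dp:
        a ^= p << (a.bit_length() - 1 - dp)
    return a

def crc_hash(m: int, p: int) -> int:
    """h_p(m) = m(x) * x^deg(p) mod p(x), a one-time almost-universal hash."""
    return polymod(m << (p.bit_length() - 1), p)

# CAUTION: unconditional integrity requires the irreducible modulus p to be
# secret and random; we fix the (irreducible) GHASH modulus
# x^128 + x^7 + x^2 + x + 1 purely for demonstration.
P_POLY = (1 << 128) | (1 << 7) | (1 << 2) | (1 << 1) | 1
msg = int.from_bytes(b"attack at dawn", "big")
pad_m = secrets.randbits(128)  # one-time pad for the message (toy: <=128-bit messages)
pad_t = secrets.randbits(128)  # one-time pad for the tag

ct = msg ^ pad_m                     # OTP encryption: unconditional confidentiality
tag = crc_hash(msg, P_POLY) ^ pad_t  # one-time CRC MAC: integrity

m2 = ct ^ pad_m                      # receiver, sharing (pad_m, pad_t, P_POLY)
assert m2 == msg and crc_hash(m2, P_POLY) ^ pad_t == tag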
From Permissioned to Proof-of-Stake Consensus
This paper presents the first generic compiler that transforms any permissioned consensus protocol into a proof-of-stake permissionless consensus protocol. For each of the following properties, if the initial permissioned protocol satisfies that property in the partially synchronous setting, the consequent proof-of-stake protocol also satisfies that property in the partially synchronous and quasi-permissionless setting (with the same fault-tolerance): consistency; liveness; optimistic responsiveness; every composable log-specific property; and message complexity of a given order. Moreover, our transformation ensures that the output protocol satisfies accountability (identifying culprits in the event of a consistency violation), whether or not the original permissioned protocol satisfied it.
ZK-NR: A Layered Cryptographic Architecture for Explainable Non-Repudiation
This paper introduces ZK-NR, a modular cryptographic protocol designed to ensure privacy-preserving non-repudiation in the co-production of digital public services. By integrating Merkle commitments, zero-knowledge proofs (STARKs), threshold BLS signatures, and post-quantum Dilithium authentication, ZK-NR enables the creation of secure, verifiable, and auditable evidence across decentralized infrastructures. Unlike traditional digital signatures or blockchain-based logs, ZK-NR provides formally verifiable attestations without disclosing sensitive content, making it suitable for public finance, e-government, and regulated digital ecosystems. The protocol is modeled in Tamarin and implemented as a proof-of-concept using open cryptographic tools. This contribution offers a reproducible foundation for future infrastructures requiring long-term trust, data minimization, and legal admissibility, particularly in contexts where citizens and institutions share responsibility for digital evidence. ZK-NR addresses the tension between confidentiality and accountability, providing an interoperable and future-ready layer for trustworthy public service delivery.
This preliminary work focuses on architectural composition and implementation feasibility. It does not include formal security proofs.
Security Analysis on UOV Families with Odd Characteristics: Using Symmetric Algebra
Multivariate public key cryptosystems represent a promising family of post-quantum cryptographic schemes. Extensive research has demonstrated that multivariate polynomials are particularly well-suited for constructing digital signature schemes. Notably, the Unbalanced Oil and Vinegar (UOV) signature scheme and its variants have emerged as leading candidates in NIST's recent call for additional digital signature proposals.
Security analyses of UOV variants are typically categorized into key-recovery attacks and forgery attacks, with the XL algorithm serving as one of the most significant methods for mounting key-recovery attacks. Recently, Lars Ran introduced a new attack against UOV variants that can be seen as an XL attack using exterior algebra; nevertheless, this new algorithm is applicable only when the underlying (finite) field of the UOV variant is of characteristic 2.
In this work, we address this limitation by proposing a unified framework. Specifically, we first propose the notion of a reduced symmetric algebra over any field, whose strength can be gleaned from the fact that it is essentially the symmetric algebra when the characteristic of the underlying field is odd and the exterior algebra when the characteristic is 2. Based upon the reduced symmetric algebra, we then propose a new XL attack against all UOV variants. Our XL attack is equivalent to Lars Ran's for those UOV variants whose underlying fields are of characteristic 2; more importantly, it can also be applied to analyze UOV variants with odd characteristic, such as QR-UOV, submitted to NIST's PQC Standardization Project. It should be noted that for the 12 recommended QR-UOV instances, our XL attack does not outperform existing key-recovery counterparts; nevertheless, it is the optimal key-recovery attack for some specific UOV instances with odd characteristic.
Learning Parity with Quantization: Achieving Full-Rate Encryption by Exploiting Quantization Noise in Code-Based Cryptography
The Learning Parity with Noise (LPN) problem has become a cornerstone for building lightweight, post-quantum secure encryption schemes. Despite its widespread adoption, LPN-based constructions suffer from a fundamental efficiency limitation: the essential noise term that provides security simultaneously requires error correction coding, leading to bandwidth overhead. We introduce a variant of LPN termed Learning Parity with Quantization (LPQ). While maintaining the ``learning from noisy equations'' framework, LPQ generates Bernoulli-like noise from code-aided quantization and enables simultaneous security and compression. Formally, the problem challenges adversaries to distinguish the resulting triplet from uniform, where the noise arises from a vector quantization function based on a linear code together with a public dither. We establish the hardness of LPQ through a tight reduction from the LPN problem, maintaining equivalent security guarantees. We demonstrate LPQ's practical efficacy through a full-rate (i.e., rate-1) symmetric-key encryption scheme, where LPQ combined with an extendable output function (XOF) achieves optimal ciphertext efficiency.
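To give intuition for "Bernoulli-like noise from code-aided quantization" (with a toy code of our choosing, not the paper's instantiation): quantizing a uniformly random word to the nearest codeword of a binary code leaves a sparse residual that plays the role of LPN noise, as the small script below illustrates.

import secrets

def q_rep3(bits):
    """Nearest-codeword quantizer for the [3,1] binary repetition code
    (majority vote per 3-bit block)."""
    out = []
    for i in range(0, len(bits), 3):
        blk = bits[i:i + 3]
        maj = 1 if sum(blk) >= 2 else 0
        out.extend([maj] * len(blk))
    return out

n = 3 * 1000
y = [secrets.randbelow(2) for _ in range(n)]   # stand-in for a dithered word
e = [yi ^ qi for yi, qi in zip(y, q_rep3(y))]  # quantization residual
print("residual weight fraction:", sum(e) / n)  # concentrates near 0.25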
Keep It Unsupervised: Horizontal Attacks Meet Simple Classifiers
In recent years, Deep Learning algorithms have been explored and applied to Side-Channel Analysis in order to enhance attack performance. In some cases, the proposals came without an in-depth analysis that would allow one to understand the tool, its applicability scenarios, its limitations and the advantages it brings over classical statistical tools. As an example, a study presented at CHES 2021 proposed a corrective iterative framework to perform an unsupervised attack which achieves 100% key-bit recovery.
In this paper we analyze this iterative framework and the datasets it was applied to. The analysis suggests a much simpler and more interpretable way to both implement such an iterative framework and perform the attack using more conventional solutions, without affecting the attack's performance.
Optimal Dimensionality Reduction using Conditional Variational AutoEncoder
The benefits of using Deep Learning techniques to enhance side-channel attack performance have been demonstrated over recent years.
Most of the work carried out since then focuses on discriminative models.
However, one of their major limitations is the lack of theoretical results.
Indeed, this lack of theoretical results, especially concerning which neural network architecture to consider or which loss to prioritize in order to build an optimal model, can be problematic for both attackers and evaluators.
Recently, Zaid et al. addressed this problem by proposing a generative model that bridges conventional profiled attacks and deep learning techniques, thus providing a model that is both explainable and interpretable.
Nevertheless, the proposed model has several limitations.
Indeed, the architecture is too complex, higher-order attacks cannot be mounted, and desynchronization is not handled by this model.
In this paper, we address the first limitation, namely the architectural complexity, since without a simpler model the other limitations cannot be treated properly.
To do so, we propose a new generative model that relies on solid theoretical results.
This model is based on a conditional variational autoencoder and converges towards the optimal statistical model, i.e., it performs an optimal attack.
By building on and extending the state-of-the-art theoretical works on dimensionality reduction, we integrate into this neural network an optimal dimensionality reduction, i.e., a dimensionality reduction achieved without any loss of information.
This results in a gain in architectural complexity that grows with the dimension of the traces, compared to Zaid et al.'s neural network, while at the same time enhancing explainability and interpretability.
In addition, we propose a new attack strategy based on our neural network, which reduces the attack complexity of generative models with respect to the number of generated traces.
We validate all our theoretical results experimentally through extensive simulations and various publicly available datasets covering symmetric and asymmetric (both pre- and post-quantum) cryptographic implementations.
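To make the central object concrete, here is a minimal conditional variational autoencoder in PyTorch. The layer sizes, the conditioning via an embedded label, and the squared-error reconstruction term are illustrative assumptions of ours, not the architecture proposed in the paper.

import torch
import torch.nn as nn

D, NCLASSES, LATENT = 700, 256, 10  # trace dimension, label classes, latent size (placeholders)

class CVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NCLASSES, 16)   # condition on a sensitive label y
        self.enc = nn.Sequential(nn.Linear(D + 16, 128), nn.ReLU())
        self.mu = nn.Linear(128, LATENT)
        self.logvar = nn.Linear(128, LATENT)
        self.dec = nn.Sequential(
            nn.Linear(LATENT + 16, 128), nn.ReLU(), nn.Linear(128, D))

    def forward(self, x, y):
        c = self.embed(y)
        h = self.enc(torch.cat([x, c], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(torch.cat([z, c], dim=1)), mu, logvar

def loss_fn(xhat, x, mu, logvar):
    rec = ((xhat - x) ** 2).sum(dim=1).mean()                      # reconstruction
    kld = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1).mean()  # KL term
    return rec + kld

model = CVAE()
x = torch.randn(32, D)                  # fake traces
y = torch.randint(0, NCLASSES, (32,))   # fake labels
xhat, mu, logvar = model(x, y)
print(loss_fn(xhat, x, mu, logvar).item())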
A Note on the Rank Defect Phenomena in The Linearization Attack on Elisabeth-4
This note explains a phenomenon that appeared in the cryptanalysis of the Elisabeth-4 stream cipher, a stream cipher optimized for Torus Fully Homomorphic Encryption (TFHE). This primitive was broken in 2023 by a linearization attack. The authors of this attack observed that the rank of the linear system they generated was lower than expected. They provided a partial explanation for it using some properties of negacyclic lookup tables (NLUTs), one of the potential building blocks of ciphers optimized for TFHE. NLUTs are defined as functions over the integers modulo $2^n$ such that $L(x + 2^{n-1}) = -L(x)$ for all $x$. Their explanation of the rank defect relies on the observation that the least significant bit of $L(x)$ does not depend on the most significant bit of $x$, which prevents some monomials from appearing in the algebraic normal form (ANF) of the system. In this note, we prove a stronger property of the ANF of NLUTs and use it to give a full proof of their observation on the rank of the system.
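The observation above is easy to verify directly. The following small script (ours, with a randomly built toy NLUT) checks that the negacyclic relation forces the least significant bit of $L(x)$ to be invariant under flipping the most significant bit of $x$, since $-v \bmod 2^n$ has the same parity as $v$.

import random

n = 6
M, HALF = 1 << n, 1 << (n - 1)

# Build a random NLUT: choose L freely on [0, 2^(n-1)) and extend negacyclically.
rng = random.Random(1)
L = [rng.randrange(M) for _ in range(HALF)]
L += [(-v) % M for v in L]               # enforce L(x + 2^(n-1)) = -L(x) mod 2^n

for x in range(HALF):
    assert L[x] & 1 == L[x + HALF] & 1   # LSB unaffected by the MSB of x
print("LSB of L(x) is independent of the MSB of x")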
Foundations of Multi-Designated Verifier Signature: Comprehensive Formalization and New Constructions in Subset Simulation
A multi-designated verifier signature (MDVS) is a digital signature that empowers a signer to designate specific verifiers capable of verifying signatures. Notably, designated verifiers are allowed not only to verify signatures but also to simulate “fake” signatures indistinguishable from real ones produced by the original signer. Since this property is useful for realizing off-the-record (i.e., deniable) communication in group settings, MDVS is attracting attention in secure messaging. Recently, Damgård et al. (TCC’20) and Chakraborty et al. (EUROCRYPT’23) introduced new MDVS schemes that allow a subset of designated verifiers to simulate signatures, in contrast to the conventional notion, which requires all designated verifiers for signature simulation. They also define a stronger notion of security for such schemes. This work delves into this new form of MDVS and offers a comprehensive formalization. We identify all possible security levels of MDVS schemes in subset simulation and prove that some of them are not feasible. Furthermore, we demonstrate that MDVS schemes meeting the security notion defined by Chakraborty et al. imply IND-CCA secure public-key encryption schemes. Beyond formalization, we present new constructions of MDVS schemes in subset simulation. Notably, we introduce a new construction of strongly secure MDVS schemes based on ring signatures and public-key encryption, accompanied by a generic conversion for achieving consistency through non-interactive zero-knowledge arguments. Finally, we evaluate the efficiency of our MDVS schemes in classical and post-quantum settings, showing their practicality.
Empowering Privacy: A Zero Cost Protocol for Concealing LGBTQ Search Queries
FHE-based private information retrieval (PIR) is widely used to maintain the secrecy of client queries in a client-server architecture. There are several ways to implement FHE-based PIR, but most of them incur server computation overhead. Attempts to reduce this overhead result in 1) fetching incorrect results, 2) leakage of queries, 3) a large number of homomorphic operations (a time-consuming process), or 4) downloading the entire dataset on the client side. In this work, we design a three-server approach where the first server discusses the nature of the dataset, the second server stores the results of computations performed on the first server, and the third server stores the dataset according to our proposed technique, the restricted bin packing algorithm (RBA). The proposed three-server approach mitigates the aforementioned limitations. We implement the designed protocol using the TenSEAL library. Our protocol retrieves the data while keeping the client's query secure.
An Open-Source Framework for Efficient Side-Channel Analysis on Cryptographic Implementations
Side-channel attacks are increasingly recognized as a significant threat to hardware roots of trust. As a result, cryptographic module designers must ensure that their modules are resilient to such attacks before deployment. However, efficient evaluation of side-channel vulnerabilities in cryptographic implementations remains challenging. This paper introduces an open-source framework integrating FPGA designs, power measurement tools, and high-performance side-channel analysis libraries to streamline the evaluation process. The framework provides design templates for two widely used FPGA boards in the side-channel analysis community, enabling the Shell-Role architecture, a modern FPGA design pattern. This shell abstraction allows designers to focus on developing cryptographic modules while utilizing standardized software tools for hardware control and power trace acquisition. Additionally, the framework includes acceleration plugins for ChipWhisperer, the leading open-source side-channel analysis platform, to enhance the performance of correlation power analysis (CPA) attacks. These plugins exploit modern many-core processors and Graphics Processing Units (GPUs) to speed up analysis significantly. To showcase the capabilities of the proposed framework, we conducted multiple case studies and highlighted significant findings that advance side-channel research. Furthermore, we compare our CPA plugins with existing tools and show that our plugins achieve up to an 8.60x speedup over the state-of-the-art CPA tools.
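For readers unfamiliar with CPA, the kernel that such plugins parallelize is a large Pearson-correlation computation between a leakage model and the traces for every key guess. Below is a minimal NumPy rendition on synthetic data; the toy S-box and Hamming-weight leakage model are our own stand-ins, unrelated to the framework's actual plugin API.

import numpy as np

SBOX = np.random.RandomState(0).permutation(256)           # toy stand-in S-box
HW = np.array([bin(v).count("1") for v in range(256)])     # Hamming-weight model

def cpa(traces, plaintexts):
    """traces: (N, S) floats; plaintexts: (N,) values of one state byte.
    Returns a (256, S) array of correlations; the best key guess maximizes |rho|."""
    hyp = HW[SBOX[plaintexts[:, None] ^ np.arange(256)[None, :]]]  # (N, 256)
    t = (traces - traces.mean(0)) / traces.std(0)
    h = (hyp - hyp.mean(0)) / hyp.std(0)
    return (h.T @ t) / len(traces)                         # (256, S)

# Synthetic sanity check: leakage = HW(SBOX[pt ^ key]) + noise at sample 5.
rs = np.random.RandomState(42)
pts = rs.randint(0, 256, 2000)
key = 0x3C
traces = rs.normal(0, 1, (2000, 20))
traces[:, 5] += HW[SBOX[pts ^ key]]
rho = cpa(traces, pts)
print("recovered key:", hex(int(np.abs(rho).max(axis=1).argmax())))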
Lattice-based Obfuscation from NTRU and Equivocal LWE
Indistinguishability obfuscation (iO) renders a program unintelligible without altering its functionality and is a powerful cryptographic primitive that captures the power of most known primitives. Recent breakthroughs have successfully constructed iO from well-founded computational assumptions, yet these constructions are unfortunately insecure against quantum adversaries. In the search for post-quantum secure iO, a line of research investigates constructions from fully homomorphic encryption (FHE) and tailored decryption-hint release mechanisms. Proposals in this line mainly differ in their designs of decryption hints, yet all known attempts either cannot be proven secure from a self-contained computational assumption, or are based on novel lattice assumptions that have subsequently been cryptanalysed.
In this work, we propose a new plausibly post-quantum secure construction of iO by designing a new mechanism for releasing decryption hints. Unlike prior attempts, our decryption hints follow a public Gaussian distribution subject to decryption correctness constraints and are therefore in a sense as random as they could be. To generate such hints efficiently, we develop a general-purpose tool called primal lattice trapdoors, which allow sampling trapdoored matrices whose Learning with Errors (LWE) secret can be equivocated. We prove the security of our primal lattice trapdoors construction from the NTRU assumption. The security of the iO construction is then argued, along with other standard lattice assumptions, via a new Equivocal LWE assumption, for which we provide evidence for plausibility and identify potential targets for further cryptanalysis.
Solving LWE with Independent Hints about Secret and Errors
At CRYPTO 2020, Dachman-Soled et al. introduced a framework to analyze the security loss of Learning with Errors (LWE), which enables the incremental integration of leaked hints into lattice-based attacks. Later, at ASIACRYPT 2023, Nowakowski and May proposed a novel method capable of integrating and combining an arbitrary number of both perfect and modular hints for the LWE secret within a unified framework, which achieves better efficiency in constructing the lattice basis and makes the attacks more practical. In this paper, we consider solving LWE with independent hints about both the secret and the errors. First, we introduce a novel approach to embed the hints about the secret into the lattice by mere matrix multiplication instead of the LLL reduction used in Nowakowski and May's attack, which further reduces the time complexity of constructing the lattice basis. For example, given 234 perfect hints about CRYSTALS-KYBER 512, our method reduces the running time from 2.16 hours to 0.35 hours. Second, we show how to embed the hints about the errors into the obtained lattice basis.
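As background (in our notation, following the hint framework cited above): a perfect hint is a linear equation the secret is known to satisfy, and integrating it amounts to restricting the attack lattice to the corresponding hyperplane,
\[
  \langle \mathbf{v}, \mathbf{s} \rangle = \ell
  \quad\Longrightarrow\quad
  \Lambda' \;=\; \Lambda \cap \{\mathbf{x} \,:\, \langle \mathbf{v}, \mathbf{x} \rangle = \ell\},
\]
with each independent hint cutting the lattice dimension by one, so that lattice reduction becomes easier with every hint integrated.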
KIVR: Committing Authenticated Encryption Using Redundancy and Application to GCM, CCM, and More
Constructing a committing authenticated encryption (AE) satisfying the CMT-4 security notion is an ongoing research challenge. We propose a new mode KIVR, a black-box conversion for adding CMT-4 security to existing AEs. KIVR is a generalization of Hash-then-Enc (HtE) [Bellare and Hoang, EUROCRYPT 2022] and uses a collision-resistant hash function to generate an initial value (or nonce) and a mask for redundant bits, in addition to a temporary key. We obtain a general bound r/2 + tag-col with r-bit redundancy for a large class of CTR-based AEs, where tag-col is the security against tag-collision attacks. Unlike HtE, the security of KIVR increases linearly with r, achieving beyond-birthday-bound security. With a t-bit tag, tag-col lies in the range 0 ≤ tag-col ≤ t/2 depending on the target AE. We set tag-col = 0 for GCM, GCM-SIV, and CCM, and the corresponding bound r/2 is tight for GCM and GCM-SIV. With CTR-HMAC, tag-col = t/2, and the bound (r + t)/2 is tight.
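Since KIVR generalizes HtE, a sketch of the HtE conversion helps fix ideas: derive a fresh subkey by hashing the key together with the nonce and associated data, then run the legacy AE under that subkey. The sketch below is our simplification (it omits KIVR's derived IV and redundancy mask, and a real implementation must encode the hash input unambiguously); it uses SHA-256 and AES-GCM from the `cryptography` package.

import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def hte_encrypt(key: bytes, nonce: bytes, ad: bytes, msg: bytes) -> bytes:
    # Toy key derivation: a real scheme must encode (key, nonce, ad) unambiguously.
    subkey = hashlib.sha256(b"HtE" + key + nonce + ad).digest()
    return AESGCM(subkey).encrypt(nonce, msg, ad)

def hte_decrypt(key: bytes, nonce: bytes, ad: bytes, ct: bytes) -> bytes:
    subkey = hashlib.sha256(b"HtE" + key + nonce + ad).digest()
    return AESGCM(subkey).decrypt(nonce, ct, ad)

k, n, a = b"k" * 32, b"n" * 12, b"ad"
ct = hte_encrypt(k, n, a, b"hello")
assert hte_decrypt(k, n, a, ct) == b"hello"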
Leakage-Resilient Extractors against Number-on-Forehead Protocols
Given a sequence of independent sources, how many of them must be good (i.e., contain some min-entropy) in order to extract a uniformly random string? This question was first raised by Chattopadhyay, Goodman, Goyal and Li (STOC '20), motivated by applications in cryptography, distributed computing, and the unreliable nature of real-world sources of randomness. In their paper, they showed how to construct explicit low-error extractors for just good sources of polylogarithmic min-entropy. In a follow-up, Chattopadhyay and Goodman improved the number of good sources required to just (FOCS '21). In this paper, we finally achieve .
Our key ingredient is a near-optimal explicit construction of a new pseudorandom primitive, called a leakage-resilient extractor (LRE) against number-on-forehead (NOF) protocols. Our LRE can be viewed as a significantly more robust version of Li's low-error three-source extractor (FOCS '15), and resolves an open question put forth by Kumar, Meka, and Sahai (FOCS '19) and Chattopadhyay, Goodman, Goyal, Kumar, Li, Meka, and Zuckerman (FOCS '20). Our LRE construction is based on a simple new connection we discover between multiparty communication complexity and non-malleable extractors, which shows that such extractors exhibit strong average-case lower bounds against NOF protocols.