Papers updated in last 31 days (322 results)
Multi-Party Homomorphic Encryption with Dynamicity and Ciphertext Reusability
Homomorphic Encryption (HE) is a cryptographic primitive that enables computation on encrypted data while preserving user privacy. We explore its application in the multi-party setting, where data is stored in the cloud encrypted under several distinct keys.
A straightforward approach is to use Multi-Key Homomorphic Encryption (MKHE), which supports computation over ciphertexts encrypted under different keys. However, MKHE incurs space and computational overhead of $O(n)$ with respect to the number of users $n$, making it impractical for large-scale scenarios. In contrast, Multi-Party Homomorphic Encryption (MPHE) achieves constant overhead $O(1)$, but suffers from significant limitations: ciphertexts are tied to a fixed group of parties, cannot be reused across different contexts, and do not allow new parties to join dynamically.
To address these limitations, we consider a relaxed model where secret key owners perform limited communication before the computation phase. Consequently, the ciphertext sizes and evaluation complexities are reduced from $O(n)$ to $O(1)$, matching the efficiency of a single-key HE setting during computation. Building on this idea, we propose Reusable Dynamic Multi-Party Homomorphic Encryption (rdMPHE), a primitive better suited for real-world deployments. Within this framework, the pre-computation phase requires just a few rounds of communication, and both evaluation and space complexities remain independent of the number of users.
We construct and implement an rdMPHE scheme based on the Ring Learning with Errors (RLWE) assumption. Our asymptotic and experimental analyses demonstrate that rdMPHE attains efficiency comparable to MKHE while overcoming its scalability limitations. Additionally, our experiments verify support for the dynamic participation of new parties and the reusability of ciphertexts.
Public-Key Quantum Money From Standard Assumptions (In The Generic Model)
Our main result is a quantum polynomial-time reduction from the group action discrete logarithm problem (DLP) to a specific cloning problem. A consequence of this result is that the public-key quantum money scheme proposed by Zhandry (2024), which is based on abelian group actions, is secure in the generic group action model. Specifically, our result shows that breaking the quantum money scheme is equivalent, under quantum polynomial-time reductions, to solving the group action DLP. An immediate implication of our result concerns the relationship between cloning and preparing Fourier states: our main theorem shows that the problem of cloning group action Fourier states is equivalent to the problem of preparing them.
HiAE Remains Secure in Its Intended Model: A Clarification of Claimed Attacks
HiAE is a recently proposed high-throughput authenticated encryption algorithm that achieves exceptional performance on both x86 and ARM architectures. Following its publication, several cryptanalysis papers have claimed that HiAE’s 256-bit encryption security is broken under the nonce-respecting model. In this note, we clarify that the claimed attacks rely critically on submitting forged-tag decryption queries — a type of behavior explicitly excluded by HiAE’s original security model.
HiAE was designed under a standard nonce-based AEAD setting without decryption oracle access, offering 256-bit security against key and state recovery, and 128-bit security against forgery. This design approach follows the same principle as well-known schemes such as AEGIS and MORUS.
The conclusion that HiAE is broken is based on a misinterpretation of its security model, as the attacks rely on conditions that the design explicitly excludes.
The Concrete Security of Two-Party Computation: Simple Definitions, and Tight Proofs for PSI and OPRFs
This paper initiates a concrete-security treatment of two-party secure computation. The first step is to propose, as target, a simple, indistinguishability-based definition that we call InI. This could be considered a poor choice if it were weaker than standard simulation-based definitions, but it is not: we show that for functionalities satisfying a condition called invertibility, which we define and show is met by functionalities of practical interest like PSI and its variants, the two definitions are equivalent. Based on this, we move forward to study the concrete security of a canonical OPRF-based construction of PSI, giving a tight proof of InI security of the constructed PSI protocol based on the security of the OPRF. This leads us to the concrete security of OPRFs, where we show how different DH-style assumptions on the underlying group yield proofs of different degrees of tightness, including some that are tight, for the well-known and efficient 2H-DH OPRF, and thus for the corresponding DH-PSI protocol. We then give a new PSI protocol, called salted-DH PSI, that is as efficient as DH-PSI, yet enjoys tighter proofs.
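For concreteness, the 2H-DH OPRF mentioned above computes $F_k(x) = H_2(x, H_1(x)^k)$; below is a toy sketch of its blinded evaluation (illustrative group parameters of my choosing; real instantiations hash directly into an elliptic-curve group rather than via an exponent):

    import hashlib, random

    # Toy subgroup: order q = 1019 inside Z_2039^*, generated by g = 2.
    p, q, g = 2039, 1019, 2

    def h_int(*parts):
        return int.from_bytes(hashlib.sha256(b"|".join(parts)).digest(), "big")

    def H1(x: bytes):                  # hash to the group (toy: via the exponent)
        return pow(g, h_int(b"H1", x) % q, p)

    def H2(x: bytes, u: int):
        return hashlib.sha256(b"H2|" + x + str(u).encode()).hexdigest()

    k = random.randrange(1, q)         # server's OPRF key

    # Client blinds its input x, so the server sees neither x nor F_k(x).
    x = b"my private input"
    r = random.randrange(1, q)
    a = pow(H1(x), r, p)               # blinded element, sent to the server

    b = pow(a, k, p)                   # server: a single exponentiation

    y = pow(b, pow(r, q - 2, q), p)    # client unblinds: b^(1/r) = H1(x)^k
    assert H2(x, y) == H2(x, pow(H1(x), k, p))   # equals F_k(x)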
Circular Insecure Encryption: from Long Cycles to Short Cycles
A length-$n$ encryption cycle consists of a sequence of $n$ keys, each encrypting the next, forming a cycle, and an encryption scheme is $n$-circular secure if a length-$n$ encryption cycle is computationally indistinguishable from encryptions of zeros. An interesting problem is whether CPA security implies circular security; this has been shown not to be true. Using standard cryptographic assumptions and LWE, it was shown that, within the class of CPA-secure encryption schemes, for any $n$ there exists an $n$-circular insecure encryption scheme. Furthermore, there exists a particular encryption scheme that is $\ell$-circular insecure for all $\ell$. Following these results, it is natural to ask whether circular insecurity for a particular length implies circular insecurity for different lengths and for multiple lengths. We answer this question in the affirmative. We constructively prove that a CPA-secure encryption scheme that is insecure in the presence of encryption cycles of length $(n+1)$ implies the existence of such a scheme for encryption cycles of any length less than $(n+1)$. The constructed $(\le n)$-circular insecure scheme can have the same message space as the $(n+1)$-circular insecure encryption scheme, and our results apply to both the public-key and symmetric-key settings.
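For reference, in the notation of the abstract (formatting mine), a length-$n$ encryption cycle is the tuple
\[
\big(\mathsf{Enc}_{k_1}(k_2),\ \mathsf{Enc}_{k_2}(k_3),\ \ldots,\ \mathsf{Enc}_{k_{n-1}}(k_n),\ \mathsf{Enc}_{k_n}(k_1)\big),
\]
and $n$-circular security asks that this tuple be computationally indistinguishable from $\big(\mathsf{Enc}_{k_1}(\mathbf{0}),\ldots,\mathsf{Enc}_{k_n}(\mathbf{0})\big)$.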
Solve Approximate CVP via Variants of Nearest-Colattice
The approximate Closest Vector Problem (CVP) is a core computational problem underlying many post-quantum lattice-based signature schemes, including Dilithium, one-more-ISIS, and HuFu. While the security of these schemes is typically expressed in terms of the Inhomogeneous Short Integer Solution (ISIS) problem, it is well-known that ISIS can be efficiently reduced to approximate CVP. Despite its foundational role, approximate CVP with non-negligible approximation factors remains far less explored than other lattice problems such as SVP or LWE, creating a critical gap in both theory and practice.
In this work, we bridge this gap by advancing the Colattice framework for solving approximate CVP with large approximation factors. More concretely, (1) we define a practical version of the Colattice algorithm and propose a randomized Nearest Colattice for generating more than one approximate closest vector; (2) we define a formal strategy space for blockwise approximate CVP; (3) we propose a polynomial-time strategy selection algorithm and prove its correctness under standard lattice heuristics; (4) building on this, we design an efficient security estimator for approximate CVP in both the Euclidean and infinity norms, and extend it to approximate batch-CVP attack settings; (5) applying this estimator, we perform concrete security evaluations of Dilithium, HuFu, and one-more-ISIS, revealing that almost none of the evaluated schemes withstand approximate batch-CVP attacks with $2^{32}$ queries; (6) we integrate a slicer and Colattice into G6K-CPU, leveraging the Locality-Sensitive Hashing (LSH) technique for nearest-neighbor search (NNS). This is the first practical implementation of an NNS-accelerated slicer. Our results demonstrate the practical efficiency of approximate CVP and batch-CVP attacks, highlighting the need for more accurate security estimation.
These findings underscore the practical importance of accurate approximate CVP modeling and call for a reassessment of current parameter sets in post-quantum signature schemes.
Improving the Masked Division for the FALCON Signature
FALCON is a post-quantum signature selected by the National Institute of Standards and Technology (NIST). Although its side-channel resilience has been studied and a masking countermeasure proposed, the division is a major performance bottleneck. This work proposes a different approach to the masked FALCON division. We use the Newton-Raphson method and a convergent sequence to approximate this operation. The first term of the sequence is evaluated using pre-calculated second-order minimax polynomials. As a consequence, computing the inverse only requires 5 masked additions and 8 masked multiplications, compared to the roughly 55 masked additions required by the previous state-of-the-art. Formal security proofs using the MIMO-SNI criteria are also provided.
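As an unmasked illustration of the underlying numerics (a sketch of the general method, not the paper's masked gadget; the polynomial degree matches the abstract, but the coefficients and iteration count here are assumptions), Newton-Raphson refines an initial polynomial estimate via $x_{k+1} = x_k(2 - d\,x_k)$, which converges quadratically to $1/d$:

    import numpy as np

    # Fit a degree-2 polynomial approximation of 1/d on [1, 2), standing in
    # for the pre-calculated second-order minimax polynomial.
    ds = np.linspace(1.0, 2.0, 1001)
    c2, c1, c0 = np.polyfit(ds, 1.0 / ds, deg=2)

    def reciprocal(d, iters=2):
        x = c2 * d * d + c1 * d + c0      # initial estimate of 1/d
        for _ in range(iters):            # quadratic convergence:
            x = x * (2.0 - d * x)         # the error roughly squares each step
        return x

    for d in (1.0, 1.37, 1.99):
        print(d, abs(reciprocal(d) - 1.0 / d))   # tiny residual errors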
Baloo: Nearly Optimal Lookup Arguments
We present Baloo, a protocol for lookup tables where the prover work is linear in the number of lookups and independent of the table size. Baloo builds on previous lookup arguments and the framework for SNARKs from Ràfols and Zapico (CRYPTO'21).
Our protocol supports commit-and-prove expansions: the prover selects the subtable containing the elements used in the lookup, which is unknown to the verifier, commits to it, and later proves its relation to the committed elements. This feature makes Baloo especially suitable for proving input-output relations on hash functions, and in particular for instantiating the Ethereum Virtual Machine (EVM).
One-Step Schnorr Threshold Identification
Threshold zero-knowledge protocols have not been widely adopted, presumably due to the associated network overhead, complicated certification processes, and thus limited interoperability. In this work, we propose $\mathsf{OSST}$, a Schnorr-based threshold identification scheme that is both non-interactive and non-reliant on the public shares. Given an $(n, t)$-shared secret $x$, the proposed protocol allows any $t^* \ge t$ (but no fewer) shareholders to collectively prove that their secret keys combine to $x$ in the sense of interpolation, without revealing any information beyond that. On the one hand, the provers do not need to engage in multi-party computations and can send their packets to the verifier asynchronously. On the other hand, the verification operation involves the combined public key $y \equiv g ^ x$ alone, meaning that the verifier does not need to have validated and registered the individual member identities.
The protocol can be cheaply adopted in permissionless and dynamic environments, where no certification processes or infrastructure support are available, and can be easily integrated with existing verifiers by means of a pure software extension. No publicly trusted setup is required beyond the assumption that $x$ has been distributed by means of Shamir's secret sharing (or an equivalent distribution scheme) and that the public counterpart has been advertised correctly; in particular, the protocol is intended to be secure against impersonation attacks in the plain public-key model. We provide evidence that this is likely to hold by giving a formal security proof in the random oracle model under the one-more discrete-logarithm hardness ($\mathsf{OMDL}$) assumption.
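To illustrate the interpolation-in-the-exponent idea, here is a toy sketch (a generic aggregated Schnorr proof over Shamir shares, not the actual $\mathsf{OSST}$ protocol; the common-challenge round below is a simplification of its non-interactive design):

    import hashlib, random

    # Toy group: order-53 subgroup of Z_107^*, generated by g = 4.
    p, q, g = 107, 53, 4

    def shamir_shares(secret, t, holders):
        f = [secret] + [random.randrange(q) for _ in range(t - 1)]
        return {i: sum(c * pow(i, k, q) for k, c in enumerate(f)) % q
                for i in holders}

    def lagrange_at_zero(i, active):
        num = den = 1
        for j in active:
            if j != i:
                num = num * (-j) % q
                den = den * (i - j) % q
        return num * pow(den, q - 2, q) % q

    # Setup: x is (5, 3)-shared; only the combined key y = g^x is public.
    x = random.randrange(q)
    shares = shamir_shares(x, 3, range(1, 6))
    y = pow(g, x, p)

    # Any t* >= t shareholders jointly prove their keys interpolate to x.
    active = [1, 3, 4, 5]                        # t* = 4 participants
    r = {i: random.randrange(q) for i in active}
    R = 1
    for i in active:
        R = R * pow(g, r[i], p) % p              # aggregated commitment
    c = int.from_bytes(hashlib.sha256(f"{y}{R}".encode()).digest(), "big") % q
    s = sum(r[i] + c * lagrange_at_zero(i, active) * shares[i]
            for i in active) % q

    # Verification involves the combined public key y alone.
    assert pow(g, s, p) == R * pow(y, c, p) % p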
UTRA: Universe Token Reusability Attack and Verifiable Delegatable Order-Revealing Encryption
As datasets grow, users increasingly rely on cloud services for data storage and processing. Consequently, concerns regarding data protection and the practical use of encrypted data have emerged as significant challenges. One promising solution is order-revealing encryption (ORE), which enables efficient operations on encrypted numerical data. To support distributed environments with different users, delegatable ORE (DORE) extends this functionality to multi-client settings, enabling order comparisons between ciphertexts encrypted under different secret keys. However, Hahn et al. proposed a token forgery attack against DORE under a specific threat model and introduced the secure DORE (SEDORE) scheme as a countermeasure. Despite this enhancement, we claim that SEDORE remains vulnerable under the same threat model.
In this paper, we present a novel Universal Token Reusability Attack, which exposes a critical vulnerability in SEDORE under the same threat model. To mitigate this, we introduce the concept of verifiable delegatable order-revealing encryption (VDORE), along with a formal definition of token unforgeability. Building on this, we design a new scheme, Token Unforgeable DORE ($\mathsf{TUDORE}$), which ensures token unforgeability.
Moreover, $\mathsf{TUDORE}$ achieves 1.5× faster token generation than SEDORE with enhanced security.
QuickPool: Privacy-Preserving Ride-Sharing Service
Online ride-sharing services (RSS) have become very popular owing to increased awareness of environmental concerns and as a response to increased traffic congestion. To request a ride, users submit their locations and route information for ride matching to a service provider (SP), leading to possible privacy concerns caused by leakage of users' location data. We propose QuickPool, an efficient SP-aided RSS solution for oblivious rider-driver matching, without involving any auxiliary server. Our solution provides simultaneous multiple matching, which, to the best of our knowledge, is the first to do so. End-users, namely riders and drivers, share their route information with the SP as encryptions of the ordered set of points-of-interest (PoI) of their route from their start to end locations. The SP performs a zone-based oblivious matching of drivers and riders, based on partial route overlap as well as proximity of start and end points. QuickPool operates in the semi-honest setting and makes use of secure multi-party computation. We provide a security proof of our protocol, perform extensive testing of our implementation, and show that our protocol simultaneously matches multiple drivers and riders very efficiently. We compare the performance of QuickPool with state-of-the-art works and observe a run-time improvement of 1.7-2.2x, and a communication improvement of at least 8x.
Dynamic-FROST: Schnorr Threshold Signatures with a Flexible Committee
Threshold signatures enable any subgroup of predefined cardinality $t$ out of a committee of $n$ participants to generate a valid, aggregated signature.
Although several $(t,n)$-threshold signature schemes exist, most of them assume that the threshold $t$ and the set of participants do not change over time.
Practical applications of threshold signatures might benefit from the possibility of updating the threshold or the committee of participants. Examples of such applications are consensus algorithms and blockchain wallets.
In this paper, we present Dynamic-FROST (D-FROST, for short) that combines FROST, a Schnorr threshold signature scheme, with CHURP, a dynamic proactive secret sharing scheme. The resulting protocol is the first Schnorr threshold signature scheme that accommodates changes in both the committee and the threshold value without relying on a trusted third party.
Besides detailing the protocol, we present a proof of its security: like the original signing scheme, D-FROST preserves the property of Existential Unforgeability under Chosen-Message Attack.
UNIDLE: A Unified Framework for Deep Learning-based Side-channel Analysis
Side-channel analysis (SCA) exploits the dependency of a cryptographic device's power consumption or electromagnetic (EM) emissions on the data being processed to extract secret keys. While deep learning (DL) has advanced SCA, most existing models struggle to scale to long traces and tend to be effective only against specific countermeasures. Therefore, their applicability is limited in implementations that employ multiple countermeasures simultaneously.
This paper introduces UNIDLE, a UNIfied framework for Deep LEarning-based SCA, for building scalable and robust models for long traces. UNIDLE employs a hierarchical divide-and-conquer strategy: it segments each trace into smaller parts, which are independently processed by base models, and then aggregates their outputs using a top-level model. This approach enables efficient information extraction from long traces while making the model robust against combinations of masking and desynchronization-based countermeasures. UNIDLE also naturally supports multi-task learning, a useful DL approach for attacking countermeasures such as shuffling, while being less susceptible to the capacity bottlenecks observed in existing multi-task models.
Leveraging this framework, we develop HierNet, a novel DL model that integrates a shift-invariant base model and a transformer-based top-level model. HierNet demonstrates strong empirical performance across three datasets of masked implementations, reaching a guessing entropy of 1 with fewer than 50 attack traces, where several state-of-the-art benchmark models fail even with up to 5K attack traces. Experiments on traces protected by shuffling show that HierNet can reach a guessing entropy of 1 using 95% fewer attack traces than a state-of-the-art multi-task benchmark while requiring only one-tenth of the model size.
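A schematic sketch of the hierarchical divide-and-conquer structure described above (my PyTorch illustration, not the actual HierNet; layer shapes and the pooling choice are arbitrary assumptions):

    import torch
    import torch.nn as nn

    class HierarchicalSCA(nn.Module):
        def __init__(self, seg_len=1000, emb=128, n_classes=256):
            super().__init__()
            self.seg_len = seg_len
            self.base = nn.Sequential(           # shared, shift-tolerant base
                nn.Conv1d(1, 32, 11, padding=5), nn.ReLU(),
                nn.AdaptiveAvgPool1d(8), nn.Flatten(), nn.Linear(32 * 8, emb))
            layer = nn.TransformerEncoderLayer(d_model=emb, nhead=4,
                                               batch_first=True)
            self.top = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(emb, n_classes)  # e.g. one key-byte guess

        def forward(self, trace):                  # trace: (batch, trace_len)
            b = trace.shape[0]
            segs = trace.unfold(1, self.seg_len, self.seg_len)  # segment
            x = self.base(segs.reshape(-1, 1, self.seg_len))    # per segment
            x = self.top(x.view(b, -1, x.shape[-1]))            # aggregate
            return self.head(x.mean(dim=1))                     # class logits

    logits = HierarchicalSCA()(torch.randn(4, 10000))           # 10 segments
    print(logits.shape)                                         # (4, 256)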
Security of Fixed-Weight Repetitions of Special-Sound Multi-Round Interactive Proofs
Interactive proofs are a cornerstone of modern cryptography and, as such, used in many areas, from digital signatures to multi-party computation. Often the knowledge error $\kappa$ of an interactive proof is not small enough and thus needs to be reduced. This is usually achieved by repeating the interactive proof in parallel $t$ times.
Recently, it was shown that the $t$-fold parallel repetition of any $(k_1,\ldots,k_{\mu})$-special-sound multi-round public-coin interactive proof reduces the knowledge error from $\kappa$ to $\kappa^t$, which is optimal.
However, parallel repetitions lead to an increase in transcript size. A common technique to mitigate this drawback, often employed in digital signatures obtained via the Fiat-Shamir transform, is to use fixed-weight challenges, i.e. vectors of challenges having a constant number of entries equal to a fixed value.
While widely used, this method has not been fully assessed from a security standpoint. In particular, the effect of the technique on the knowledge error of repeated interactive proofs has remained unstudied.
In this work, we fill the gap and prove that a fixed-weight repetition of a $(k_1,\ldots,k_{\mu})$-special-sound multi-round public-coin interactive proof
is still knowledge sound. We provide an explicit bound for the knowledge error of the protocol, proving that it matches the maximum cheating probability of a dishonest prover.
Our results apply to some recently proposed digital signatures that are designed to be quantum resistant, for example the code-based signature scheme CROSS.
Fully Secure Searchable Encryption from PRFs, Pairings, and Lattices
Searchable encryption is a cryptographic primitive that allows us to perform searches on encrypted data. Searchable encryption schemes require that ciphertexts do not leak information about keywords. However, most of the existing schemes do not achieve the security notion that trapdoors do not leak information. Shen et al. (TCC 2009) proposed a security notion called full security, which includes both ciphertext privacy and trapdoor privacy, but there are few fully secure constructions. Full security is defined for the secret key settings since it is known that public key schemes cannot achieve the trapdoor privacy in principle.
In this paper, we construct a query-bounded fully secure scheme from pseudorandom functions. In addition, we propose three types of efficient (unbounded) fully secure schemes: one is based on bilinear groups, and the others are based on lattices. We then analyze the existing constructions. First, we simplify the Cheng et al. scheme (Information Sciences 2023) and prove its security; this scheme had not previously been proved secure. Second, we show that the Li-Boyen pairing-based scheme (IACR CiC 2024) does not achieve trapdoor privacy, contrary to what is claimed.
On the Estonian Internet Voting System, IVXV, SoK and Suggestions
The Estonian i-voting experience is probably the richest to analyze: the country has been considered a pioneer in digitizing both the government and the private sector since 2001, followed by online internet voting (i-voting) in 2005. However, there are still complaints, criticisms, and remarks to consider about the IVXV system. In this paper, we introduce a Systematization of Knowledge of the Estonian IVXV i-voting system and propose some added security enhancements. The presented SoK discusses applications implemented by election observers in the 2023 & 2024 elections, which, to our knowledge, have never been mentioned or analyzed in academia before; we also point to a previously unnoticed paper that presented a formal verification of IVXV, discovering some attacks and introducing some solutions. In addition, we identify and analyze recent fixes and improvements in the June 2024 version used in the European Parliament elections, connecting them to their academic sources. Finally, we discuss the current system status, propose our own suggestions for some remaining vulnerabilities, and raise the inevitable question of the approaching quantum threat.
A Generalized Approach to Root-based Attacks against PLWE
In the present work we address the robustness of the Polynomial Learning With Errors problem extending previous results in Blanco-Chacón et al. and in Elias et al. In particular, we produce two kinds of new distinguishing attacks: a) we generalize Blanco-Chacón et al. to the case where the defining polynomial has a root of degree up to 4, and b) we widen and refine the most general attack in Elias et al. to the non-split case and determine further dangerous instances previously not detected. Finally, we exploit our results in order to show vulnerabilities of some cryptographically relevant polynomials.
Multi-Key Fully Homomorphic Encryption: removing noise flooding in distributed decryption via the smudging lemma on discrete Gaussian distribution
Current Multi-key Fully Homomorphic Encryption (MKFHE) schemes need to add exponential noise in the distributed decryption phase to ensure the simulatability of partial decryptions. Such large noise causes the ciphertext modulus of the scheme to increase exponentially compared to single-key Fully Homomorphic Encryption (FHE), further reducing the efficiency of the scheme and giving the underlying lattice problem a sub-exponential approximation factor $\widetilde{O}(n\cdot2^{\sqrt{nL}})$ (which means that the security of the scheme is reduced). To address this problem, this paper analyzes in detail the noise in the partial decryptions of LWE-based MKFHE. We point out that this noise is composed of the private key and the noise in the initial ciphertext. Therefore, as long as the encryption scheme is leakage-resilient and the noise in the partial decryption is independent of the noise in the initial ciphertext, the semantic security of the ciphertext can be guaranteed. In order to make the noise in the initial ciphertext independent of the noise in the partial decryption, this paper proves a smudging lemma for the discrete Gaussian distribution and achieves this goal by multiplying the initial ciphertext by a ``dummy'' ciphertext with a plaintext of 1. Based on the above method, this paper removes the exponential noise in the distributed decryption phase for the first time and reduces the ciphertext modulus of MKFHE from $2^{\omega(\lambda L \log \lambda)}$ to $2^{O(\lambda+L)}$, the same level as single-key FHE.
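For context, the classical smudging lemma with uniform noise, which the paper adapts to the discrete Gaussian distribution, is the following standard fact: if $e \in \mathbb{Z}$ is fixed with $|e| \le B$ and $u$ is uniform over $\{-B', \ldots, B'\}$, then the statistical distance satisfies
\[
\Delta(u,\ u + e) \;=\; \frac{|e|}{2B' + 1} \;\le\; \frac{B}{B'},
\]
so hiding $e$ requires a superpolynomial ratio $B'/B$; this is precisely the exponential noise blow-up that the paper's discrete-Gaussian analogue avoids.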
Foundations of Single-Decryptor Encryption
Single decryptor encryption (SDE) is public key encryption (PKE) where the decryption key is an unclonable quantum state. Coladangelo, Liu, Liu, and Zhandry (CRYPTO 2021) realized the first SDE assuming subexponentially secure indistinguishability obfuscation (iO) and one-way functions (OWFs), along with the polynomial hardness of the learning with errors (LWE) assumption. Since then, SDE has played a pivotal role in recent advances in quantum cryptography. However, despite its central importance in unclonable cryptography, many fundamental questions about SDE remain unanswered. For example, a line of works has proposed various security notions for SDE, but their relationships have hardly been discussed. Moreover, while many subsequent works have adopted the construction methodology of Coladangelo et al., none have explored its improvement, leaving the possibility of a more efficient approach to SDE.
In this work, we address these fundamental questions concerning SDE. Our contributions are threefold.
New security notion: We introduce a strengthened indistinguishability-based security notion for SDE, which we call CPA+ anti-piracy security. We show that CPA+ security unifies the existing security notions for SDE, as detailed in the third item.
New construction: We present an SDE scheme that satisfies CPA+ anti-piracy security, based solely on polynomially secure iO and OWFs. In addition to relying on weaker and more general assumptions, our SDE scheme offers a significant advantage over the scheme of Coladangelo et al., as both the construction and its security proof are much simpler.
Relationships among security notions: We demonstrate that CPA+ anti-piracy security implies all existing security notions for SDE, with the sole exception of identical challenge ciphertext security proposed by Georgiou and Zhandry (ePrint 2020). Although we do not establish a direct implication from CPA+ anti-piracy security to identical challenge ciphertext security, we provide a generic transformation from an SDE scheme satisfying the former to one achieving the latter in the quantum random oracle model. Additionally, we establish various relationships among different security notions for SDE. By combining these results with our SDE construction, we derive several new feasibility results.
SoK: Signatures With Randomizable Keys
Digital signature schemes with specific properties have recently seen various real-world applications with a strong emphasis on privacy-enhancing technologies. They have been extensively used to develop anonymous credentials schemes and to achieve an even more comprehensive range of functionalities in the decentralized web.
Substantial work has been done to formalize different types of signatures where an allowable set of transformations can be applied to message-signature pairs to obtain new related pairs. Most of the previous work focused on transformations with respect to the message being signed, but little has been done to study what happens when transformations apply to the signing keys. A first attempt to thoroughly formalize such aspects was carried out by Derler and Slamanig (ePrint'16, Designs, Codes and Cryptography'19), followed by more recent efforts by Backes et al. (ASIACRYPT'18) and Eaton et al. (ePrint'23). However, the literature on the topic is vast and different terminology is used across contributions, which makes it difficult to compare related works and understand the range of applications covered by a given construction.
In this work, we present a unified view of signatures with randomizable keys and revisit their security properties. We focus on state-of-the-art constructions and related applications, identifying existing challenges. Our systematization allows us to highlight gaps, open questions and directions for future research on signatures with randomizable keys.
CCA-Secure Traceable Threshold (ID-based) Encryption and Application
A recent work by Boneh, Partap, and Rotem [Crypto'24] introduced the concept of traceable threshold encryption: if $t$ or more parties collude to construct a decryption box, which performs decryptions, then at least one party's identity can be traced by making a few black-box queries to the box. This has important applications, e.g., in blockchain mempool privacy, where collusion yields high financial gain through MEVs without any consequence; the possibility of tracing discourages collusion.
Nevertheless, their definitions leave room for exploitation as they only achieve CPA security and do not consider inconsistency in decryption via different participating sets.
This paper proposes stronger definitions of traceable threshold encryption, which support CCA security and consistency. Our main approach considers identity-based variants of traceable encryption (which we also define) and converts them to a CCA-secure construction by adapting two generic transformations, first using a one-time signature and then a fingerprinting code.
We put forward two efficient instantiations of our identity-based scheme with different merits: our first construction is based on Boneh-Franklin IBE [Crypto'01] and has constant-size ciphertexts but quadratic-size public keys; this is proven secure based on XDH and BDDH. Our second construction is based on Boneh-Boyen IBE [Eurocrypt'04] and supports both constant-size ciphertexts and constant-size public keys; this is proven secure based on a variant of the uber assumption over bilinear pairings. Our concrete analysis shows that the first construction's ciphertext is much (about 6x) smaller than the second construction's. Finally, we extend the definitions to support consistency and achieve it by adjoining an efficient, non-interactive proof of correct encryption.
mUOV: Masking the Unbalanced Oil and Vinegar Digital Signature Scheme at First- and Higher-Order
In the recent search for additional post-quantum designs, multivariate quadratic equations (MQE) based designs have been receiving attention due to their small signature sizes. Unbalanced Oil and Vinegar (UOV) is an MQE-based digital signature (DS) scheme proposed over two decades ago. Although the mathematical security of UOV has been thoroughly analyzed, several practical side-channel attacks (SCA) have been shown on UOV-based DS schemes.
In this work, we perform a thorough analysis to identify the variables in UOV-based DS schemes that can be exploited with passive SCA, specifically differential power analysis (DPA). Secondly, we introduce masking as a countermeasure to protect the sensitive components of UOV-based schemes. We propose efficient masked gadgets for all the critical operations, including the masked dot-product and matrix-vector multiplication. We show that our gadgets are secure in the $t$-probing model through formal proofs, mechanically verified using the maskVerif tool. We implemented and demonstrated the practical feasibility of our arbitrary-order masking algorithms for UOV-Ip and UOV-III. We show that the masked signature generation of UOV-Ip performs up to 62% better than Dilithium2 or ML-DSA and 99% better than Falcon 512 or FN-DSA. In addition, the security of our implementation is practically validated using the test vector leakage assessment (TVLA) methodology.
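To illustrate what a masked gadget looks like, here is a first-order ISW-style masked multiplication over a toy prime field (a generic sketch; the paper's gadgets target UOV's field arithmetic, support arbitrary orders, and are formally verified, none of which is modeled here):

    import random

    Q = 31                                   # toy prime field (illustration only)

    def mask(v):                             # split v into two random shares
        v0 = random.randrange(Q)
        return (v0, (v - v0) % Q)

    def unmask(shares):
        return sum(shares) % Q

    def masked_mul(a, b):
        # First-order ISW/SecMult: fresh randomness r is injected before the
        # cross terms are combined, so no intermediate value exposes a
        # recombined secret to a single probe.
        r = random.randrange(Q)
        t = (a[0] * b[1] - r) % Q            # cross term, masked by r
        t = (t + a[1] * b[0]) % Q
        c0 = (a[0] * b[0] + r) % Q
        c1 = (a[1] * b[1] + t) % Q
        return (c0, c1)

    x, y = 17, 23
    assert unmask(masked_mul(mask(x), mask(y))) == (x * y) % Q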
PEGASIS: Practical Effective Class Group Action using 4-Dimensional Isogenies
In this paper, we present the first practical algorithm to compute an effective group action of the class group of any imaginary quadratic order $\mathcal{O}$ on a set of supersingular elliptic curves primitively oriented by $\mathcal{O}$. Effective means that we can act with any element of the class group directly, and are not restricted to acting by products of ideals of small norm, as for instance in CSIDH. Such restricted effective group actions often hamper cryptographic constructions, e.g. in signature or MPC protocols.
Our algorithm is a refinement of the Clapoti approach by Page and Robert, and uses $4$-dimensional isogenies. As such, it runs in polynomial time, does not require the computation of the structure of the class group or expensive lattice reductions, and our refinements allow it to be instantiated with the orientation given by the Frobenius endomorphism. This makes the algorithm practical even at security levels as high as CSIDH-4096. Our implementation in SageMath takes 1.5s to compute a group action at the CSIDH-512 security level, 21s at the CSIDH-2048 level, and around 2 minutes at the CSIDH-4096 level. This marks the first instantiation of an effective cryptographic group action at such high security levels. For comparison, the recent KLaPoTi approach requires around 200s at the CSIDH-512 level in SageMath and 2.5s in Rust.
Dynamic Bounded-Collusion Streaming Functional Encryption from Minimal Assumptions
Streaming functional encryption (sFE), recently introduced by Guan, Korb, and Sahai [Crypto 2023], is an extension of functional encryption (FE) tailored for iterative computation on dynamic data streams. Unlike in regular FE, in an sFE scheme, users can encrypt and compute on the data as soon as it becomes available and in time proportional to just the size of the newly arrived data.
As sFE implies regular FE, all known constructions of sFE and FE for P/poly require strong cryptographic assumptions which are powerful enough to build indistinguishability obfuscation (iO). In contrast, dynamic bounded-collusion FE, in which the adversary is restricted to making at most Q function queries for some Q determined during encryption (but not fixed at time of setup), can be built from the minimal assumptions of identity-based encryption (for public-key FE) [Agrawal, Maitra, Vempati and Yamada, Crypto 2021; Garg, Goyal, Lu and Waters, Eurocrypt 2022] and one-way functions (for secret-key FE), as secret-key IBE is implied by one-way functions (folklore).
In this paper, we introduce and build dynamic bounded-collusion streaming FE from the same minimal assumptions of identity-based encryption (for public-key sFE) and one-way functions (for secret-key sFE). Similarly to the original sFE paper of Guan, Korb, and Sahai, our scheme satisfies semi-adaptive-function-selective security which is similar to standard adaptive indistinguishability-based security except that we require all functions to be queried before any of the challenge messages.
Along the way, our work also replaces a key ingredient (called OnesFE) from the original work of Guan, Korb, and Sahai with a much simpler construction based on garbled circuits. In contrast, the original approach relied on the powerful object of compact FE (which is known to imply iO) to construct this primitive.
Under What Conditions Is Encrypted Key Exchange Actually Secure?
A Password-Authenticated Key Exchange (PAKE) protocol allows two parties to agree upon a cryptographic key, in the setting where the only secret shared in advance is a low-entropy password. The standard security notion for PAKE is in the Universal Composability (UC) framework. In recent years there have been a large number of works analyzing the UC-security of Encrypted Key Exchange (EKE), the very first PAKE protocol, and its One-encryption variant (OEKE), both of which compile an unauthenticated Key Agreement (KA) protocol into a PAKE.
In this work, we present a comprehensive and thorough study of the UC-security of both EKE and OEKE in the most general setting and using the most efficient building blocks:
1. We show that among the five existing results on the UC-security of (O)EKE using a general KA protocol, all are incorrect;
2. We show that for (O)EKE to be UC-secure, the underlying KA protocol needs to satisfy several additional security properties: though some of these are closely related to existing security properties, some are new, and all are missing from existing works on (O)EKE;
3. We give UC-security proofs for EKE and OEKE using Programmable-Once Public Function (POPF), which is the most efficient instantiation to date and is around 4 times faster than the standard instantiation using Ideal Cipher (IC).
Our results in particular allow for PAKE constructions from post-quantum KA protocols such as Kyber. We also present a security analysis of POPF using a new, weakened notion of almost UC realizing a functionality, that is still sufficient for proving composed protocols to be fully UC-secure.
vr$^2$FHE: Securing FHE from Reaction-based Key Recovery Attacks
Fully Homomorphic Encryption (FHE) inherently lacks data integrity mechanisms, allowing a malicious server to arbitrarily tamper with the data associated with FHE computations on its end. A series of works named reaction attacks further demonstrates that a malicious server can exploit this lack of integrity checks to carry out interactive full-key recovery attacks on state-of-the-art FHE schemes. In this paper, we propose an efficient solution to this problem by utilizing the concept of the Merkle tree. Our solution uses cryptographic hash functions to ensure the integrity of data involved in FHE computations. The efficiency of our solution comes from the small size and ease of computation of the hash values, which subsequently leads to a reduction in both the network and computation overhead. Given that protecting the entire FHE circuit can lead to tremendous network overhead, we further perform scheme-specific optimizations by identifying a small portion of the FHE circuit whose protection suffices to thwart these reaction-based attacks. Finally, we propose a framework that evaluates different FHE schemes based on the overhead that will be incurred when protecting them through our solution. Our framework can be leveraged by application developers to choose an optimal FHE scheme in terms of both performance and overhead.
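A minimal sketch of the integrity mechanism (a generic Merkle tree over stand-in ciphertext blobs; the paper's scheme-specific optimizations and the exact protected circuit portion are not modeled here):

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        level = [h(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:                 # duplicate last node if odd
                level.append(level[-1])
            level = [h(level[i] + level[i+1]) for i in range(0, len(level), 2)]
        return level[0]

    def merkle_proof(leaves, index):
        level = [h(leaf) for leaf in leaves]
        proof = []
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            sib = index ^ 1                    # sibling index at this level
            proof.append((level[sib], sib < index))
            level = [h(level[i] + level[i+1]) for i in range(0, len(level), 2)]
            index //= 2
        return proof

    def verify(leaf, proof, root):
        node = h(leaf)
        for sibling, sibling_is_left in proof:
            node = h(sibling + node) if sibling_is_left else h(node + sibling)
        return node == root

    cts = [f"fhe-ciphertext-{i}".encode() for i in range(8)]    # stand-ins
    root = merkle_root(cts)                    # client keeps only this root
    assert verify(cts[3], merkle_proof(cts, 3), root)           # honest server
    assert not verify(b"tampered", merkle_proof(cts, 3), root)  # detected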
A proof of P≠NP (New symmetric encryption algorithm against any linear attacks and differential attacks)
The P vs. NP problem is the most important unresolved problem in the field of computational complexity. Its impact has penetrated all aspects of algorithm design, especially in the field of cryptography. The security of cryptographic algorithms based on short keys depends on whether P equals NP. In fact, Shannon strictly proved that the one-time-pad system meets unconditional security, but because the one-time-pad system requires the length of the key to be at least the length of the plaintext, how to transfer the key is a troublesome problem that restricts the use of the one-time-pad system in practice. Cryptographic algorithms used in practice are all based on short keys, and the security of the short-key mechanism is ultimately based on a one-wayness assumption. In fact, the existence of one-way functions directly leads to the important conclusion P≠NP.
In this paper, we construct a short-key block cipher algorithm. The core feature of this algorithm is that, when any plaintext-ciphertext pairs are known, the problem of cracking its key is equivalent to the problem of two independent encryption systems cracking their keys separately when only their respective ciphertexts are known. In addition, for these two independent encryption systems, verifying a possible plaintext also has exponential computational complexity when only the ciphertext is known.
Based on the above feature, we construct a problem and theoretically prove that the problem satisfies the properties of one-way functions, thereby resolving the question of the existence of one-way functions, that is, directly proving that P≠NP.
CAPSS: A Framework for SNARK-Friendly Post-Quantum Signatures
In this paper, we present a general framework for constructing SNARK-friendly post-quantum signature schemes based on minimal assumptions, specifically the security of an arithmetization-oriented family of permutations. The term "SNARK-friendly" here refers to the efficiency of the signature verification process in terms of SNARK constraints, such as R1CS constraints. Within the CAPSS framework, signature schemes are designed as proofs of knowledge of a secret preimage of a one-way function, where the one-way function is derived from the chosen permutation family. To obtain compact signatures with SNARK-friendly verification, we rely on SmallWood, a recently proposed hash-based zero-knowledge argument scheme well suited for statements arising in this context. From this proof system, which we tweak towards SNARK-friendliness, the CAPSS framework offers a generic transformation of any arithmetization-oriented permutation family into a SNARK-friendly post-quantum signature scheme. We provide concrete instances built on permutations such as Rescue-Prime, Poseidon, Griffin, and Anemoi. For the Anemoi family, achieving 128-bit security, our approach produces signatures of sizes ranging from 9 to 13.4 KB, with R1CS constraints between 19K and 29K. This represents a 4-6x reduction in signature size and a 5-8x reduction in R1CS constraints compared to Loquat (CRYPTO 2024), a SNARK-friendly post-quantum signature scheme based on the Legendre PRF.
Fast amortized bootstrapping with small keys and polynomial noise overhead
Most fully homomorphic encryption (FHE) schemes exploit a technique called single-instruction multiple-data (SIMD) to process several messages in parallel. However, they base their security on somewhat strong assumptions, such as the hardness of approximate lattice problems with superpolynomial approximation factor. On the other extreme of the spectrum, there are lightweight FHE schemes that have much faster bootstrapping but no SIMD capabilities. On the positive side, the security of these schemes is based on lattice problems with (low-degree) polynomial approximation factor only, which is a much weaker security assumption. Aiming for the best of both options, Micciancio and Sorrell (ICALP'18) proposed a new amortized bootstrapping that can process many messages at once, yielding sublinear time complexity per message, and allowing one to construct FHE based on lattice problems with polynomial approximation factor.
Some subsequent works in this line achieve near-optimal asymptotic performance; nevertheless, concrete efficiency remains mostly an open problem. The only existing implementation to date (GPV23, Asiacrypt 2023) requires keys of up to a hundred gigabytes while only providing gains for relatively large messages. In this paper, we introduce a new method for amortized bootstrapping where the number of homomorphic operations required per message is $O(h)$ and the noise overhead is $O(\sqrt{h \lambda} \log \lambda)$, where $h$ is the Hamming weight of the LWE secret key and $\lambda$ is the security parameter. This allows us to use much smaller parameters and to obtain faster running time. Our method is based on a new efficient homomorphic evaluation of sparse polynomial multiplication. We bootstrap 2- to 8-bit messages in 1.46 ms to 28.5 ms, respectively. Compared to TFHE-rs, this represents a performance improvement of 2.5 to 38.7 times while requiring bootstrapping keys up to 47.5 times smaller.
Impossibility Results for Post-Compromise Security in Real-World Communication Systems
Modern secure communication systems, such as iMessage, WhatsApp, and Signal include intricate mechanisms that aim to achieve very strong security properties. These mechanisms typically involve continuously merging fresh secrets into the keying material used to encrypt messages during communications. In the literature, these mechanisms have been proven to achieve forms of Post-Compromise Security (PCS): the ability to provide communication security even if the full state of a party was compromised some time in the past. However, recent work has shown these proofs do not transfer to the end-user level, possibly because of usability concerns. This has raised the question of whether end-users can actually obtain PCS or not, and under which conditions.
Here we show and formally prove that communication systems that need to be resilient against certain types of state loss (which can occur in practice) fundamentally cannot achieve full PCS for end-users. Whereas previous work showed that the Signal messenger did not achieve this with its current session-management layer, we isolate the exact conditions that cause this failure, and why this cannot be simply solved in communication systems by implementing a different session-management layer or an entirely different protocol. Moreover, we clarify the trade-off of the maximum number of sessions between two users (40 in Signal) in terms of failure-resilience versus security.
Our results have direct consequences for the design of future secure communication systems, and could motivate either the simplification of redundant mechanisms, or the improvement of session-management designs to provide better security trade-offs with respect to state loss/failure tolerance.
Binding Security of Implicitly-Rejecting KEMs and Application to BIKE and HQC
In this work, we continue the analysis of the binding properties of implicitly-rejecting key-encapsulation mechanisms (KEMs) obtained via the Fujisaki-Okamoto (FO) transform. These binding properties, in earlier literature known under the term robustness, thwart attacks that can arise when using KEMs in complex protocols. Recently, Cremers et al. (CCS'24) introduced a framework for binding notions, encompassing previously existing but also new ones. While implicitly-rejecting FO-KEMs have been analyzed with respect to multiple of these notions, there are still several gaps. We complete the picture by providing positive and negative results for the remaining notions. Further, we show how to apply our results to the code-based KEMs BIKE and HQC, which were round-4 candidates in NIST's PQC standardization process. Through this, we close a second gap as our results complete the analysis of the binding notions for the NIST round-4 KEMs. Finally, we give a modified version of the FO transform that achieves all binding notions.
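For reference, the implicit-rejection pattern at the heart of these FO-KEMs looks as follows (a toy sketch with a symmetric stand-in for the PKE, chosen only to make the re-encryption check runnable; BIKE and HQC instantiate this with code-based encryption):

    import hashlib, os

    def H(*parts):
        return hashlib.sha256(b"|".join(parts)).digest()

    # Stand-in deterministic encryption: NOT a real PKE (here pk = sk), just
    # enough structure to exercise the re-encryption check.
    def enc(key, m, coins):
        return coins + bytes(a ^ b for a, b in zip(m, H(key, coins)))

    def dec(key, c):
        coins, body = c[:32], c[32:]
        return bytes(a ^ b for a, b in zip(body, H(key, coins)))

    def encaps(pk):
        m = os.urandom(32)
        c = enc(pk, m, H(b"G", m))          # coins derived from m (FO)
        return c, H(b"K", m, c)

    def decaps(sk, s, c):                   # s: rejection seed, part of sk
        m = dec(sk, c)
        if enc(sk, m, H(b"G", m)) == c:     # re-encryption check
            return H(b"K", m, c)
        return H(b"K-reject", s, c)         # implicit rejection: pseudorandom
                                            # key instead of an error signal

    sk = pk = os.urandom(32)                # toy symmetric stand-in
    s = os.urandom(32)
    c, k = encaps(pk)
    assert decaps(sk, s, c) == k
    # A tampered ciphertext yields a random-looking key, not an error:
    assert decaps(sk, s, bytes([c[0] ^ 1]) + c[1:]) != k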
Discrete Gaussians Modulo Sub-Lattices: New Leftover Hash Lemmas for Discrete Gaussians
The Leftover Hash Lemma (LHL) is a powerful tool for extracting randomness from an entropic distribution, with numerous applications in cryptography. LHLs for discrete Gaussians have been explored in both integer settings by Gentry et al. (GPV, STOC'08) and algebraic ring settings by Lyubashevsky et al. (LPR, Eurocrypt'13). However, the existing LHLs for discrete Gaussians have two main limitations: they require the Gaussian parameter to be larger than certain smoothing parameters, and they cannot handle cases where fixed and arbitrary information is leaked.
In this work, we present new LHLs for discrete Gaussians in both integer and ring settings. Our results show that the Gaussian parameter can be improved by a factor of $\omega(\sqrt{\log\lambda}) $ and $ O(\sqrt{N}) $ compared to the regularity lemmas of GPV and LPR, respectively, under similar parameter choices such as the dimension and ring. Furthermore, our new LHLs can be applied to leaked discrete Gaussians, and the result can be used to establish asymptotic hardness of the extended MLWE assumptions, addressing an open question in recent works by Lyubashevsky et al. (LNP, Crypto'22). Our central techniques involve new fine-grained analyses of the min-entropy in discrete Gaussians modulo sublattices and should be of interest.
Provably Secure Butterfly Key Expansion from the CRYSTALS Post-Quantum Schemes
This work presents the first provably secure protocol for Butterfly Key Expansion (BKE) -- a tripartite protocol for provisioning users with pseudonymous certificates -- based on post-quantum cryptographic schemes. Our work builds upon the CRYSTALS family of post-quantum algorithms that have been selected for standardization by NIST. We extend those schemes by imbuing them with the additional functionality of public key expansion: a process by which pseudonymous public keys can be derived from a single public key. Our work is the most detailed analysis yet of BKE: we formally define desired properties of BKE -- unforgeability and unlinkability -- as cryptographic games, and prove that BKE implemented with our modified CRYSTALS schemes satisfies those properties. We implemented our scheme by modifying the Kyber and Dilithium algorithms from the LibOQS project, and we report on our parameter choices and the performance of the schemes.
On the Definition of Malicious Private Information Retrieval
A multi-server private information retrieval (PIR) protocol allows a client to obtain an entry of its choice from a database, held by one or more servers, while hiding the identity of the entry from small enough coalitions of servers. In this paper, we study PIR protocols in which some of the servers are malicious and may not send messages according to the prescribed protocol. In previous papers, such protocols were defined by requiring that they are correct, private, and robust to malicious servers, i.e., by listing three properties that they should satisfy. However, 40 years of experience in studying secure multiparty protocols taught us that defining the security of protocols by a list of required properties is problematic.
In this paper, we rectify this situation and define the security of PIR protocols with malicious servers using the real vs. ideal paradigm. We study the relationship between the property-based definition of PIR protocols and the real vs. ideal definition, showing the following results:
- We prove that if we require full security from PIR protocols, e.g., the client outputs the correct value of the database entry with high probability even if a minority of the servers are malicious, then the two definitions are equivalent. This implies that constructions of such protocols that were proven secure using the property-based definition are actually secure under the ``correct'' definition of security.
- We show that if we require security-with-abort from PIR protocols (called PIR protocols with error-detection in previous papers), i.e., protocols in which the user either outputs the correct value or an abort symbol, then there are protocols that are secure under the property-based definition; however, they do not satisfy the real vs. ideal definition, that is, they can be attacked allowing selective abort. This shows that the property-based definition of PIR protocols with security-with-abort is problematic.
- We consider the compiler of Eriguchi et al. (TCC 22) that starts with a PIR protocol that is secure against semi-honest servers and constructs a PIR protocol with security-with-abort; this compiler implies the best-known PIR protocols with security-with-abort. We show that applying this compiler does not result in PIR protocols that are secure according to the real vs. ideal definition. However, we prove that a simple modification of this compiler results in PIR protocols that are secure according to the real vs. ideal definition.
A Generic Construction of Tightly Secure Password-based Authenticated Key Exchange
We propose a generic construction of password-based authenticated key exchange (PAKE) from key encapsulation mechanisms (KEM). Assuming that the KEM is one-way secure against plaintext-checkable attacks (OW-PCA), we prove that our PAKE protocol is \textit{tightly secure} in the Bellare-Pointcheval-Rogaway model (EUROCRYPT 2000). Our tight security proofs require ideal ciphers and random oracles. OW-PCA security is relatively weak and can be implemented tightly with the Diffie-Hellman assumption, which generalizes the work of Liu et al. (PKC 2023), and ``almost'' tightly with lattice-based assumptions, which tightens the security loss of the work of Beguinet et al. (ACNS 2023) and allows more efficient practical implementation with Kyber. Beyond these, it opens the opportunity of constructing tight PAKEs based on various assumptions.
The supersingular endomorphism ring problem given one endomorphism
Given a supersingular elliptic curve $E$ and a non-scalar endomorphism $\alpha$ of $E$, we prove that the endomorphism ring of $E$ can be computed in classical time about $\text{disc}(\mathbb{Z}[\alpha])^{1/4}$, and in quantum subexponential time, assuming the generalised Riemann hypothesis. Previous results either had higher complexities, or relied on heuristic assumptions.
Along the way, we prove that the Primitivisation problem can be solved in polynomial time (a problem previously believed to be hard), and we prove that the action of smooth ideals on oriented elliptic curves can be computed in polynomial time (previous results of this form required the ideal to be powersmooth, i.e., not divisible by any large prime power).
Following the attacks on SIDH, isogenies in high dimension are a central ingredient of our results.
Exploring Marginal Guesswork with the Theorem of Berry-Esséen
In 2000, Pliam showed that there does not exist an upper or lower bound in terms of Shannon entropy alone for the number of guesses required to guess some randomly sampled element $s$ with certainty $0<q<1$, denoted as marginal guesswork.
In comparison, Arikan showed in 1996 that, if the distribution is defined over finite support, the expected guesswork, that is, the expected number of trials it takes to guess some random element $s$ with certainty, is bounded in terms of the R\'enyi entropy $\operatorname{H}_{1/2}$.
In this paper, we utilize the Berry-Ess\'een theorem to amend Pliam's result. We show that there exists an asymptotic expression for the marginal guesswork in terms of Shannon entropy, the variance in information, and the probit function, provided $q$ is fixed.
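A minimal sketch of the quantity in question (my illustration of the definitions above, not the paper's construction): two distributions with identical Shannon entropy can have wildly different marginal guesswork, which is exactly the gap Pliam identified.

    import math

    def marginal_guesswork(probs, q):
        # Number of guesses, in decreasing-probability order, until the
        # cumulative success probability reaches q.
        total, guesses = 0.0, 0
        for p in sorted(probs, reverse=True):   # optimal guessing order
            total += p
            guesses += 1
            if total >= q:
                return guesses
        return guesses

    def shannon_entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    uniform = [1 / 512] * 512                    # H = 9 bits
    skewed = [0.5] + [0.5 / 2**16] * 2**16       # H = 9 bits as well
    for dist in (uniform, skewed):
        print(shannon_entropy(dist), marginal_guesswork(dist, 0.5))
    # Same entropy, yet 256 guesses vs. a single guess at certainty q = 0.5.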
Low Communication Threshold FHE from Standard (Module-)LWE
Threshold fully homomorphic encryption (ThFHE) is a multi-party extension of FHE; any subset of at least $T$ out of $N$ parties can decrypt the ciphertexts by combining their decryption shares. Recently, Passelègue and Stehlé (Asiacrypt 2024) presented a ThFHE scheme with polynomially short decryption shares from the ``known-norm'' variant of learning with errors (LWE) assumption, in which the norm of the secret key is leaked to the adversary. While known-norm LWE is reduced from standard LWE, its module extension, known-covariance module-LWE (MLWE), lacks a known reduction from standard MLWE. Hence, extending their ThFHE scheme to the MLWE-based construction remains an open question.
In this paper, we address this open problem: we construct a ThFHE scheme with polynomially small decryption shares from standard LWE/MLWE. Our core technique, which we call noise padding, eliminates the need for known-norm variants of LWE. We distribute shares of a padding noise and use them to adjust the distribution of the decryption noise so that no information about the secret key is leaked. Furthermore, our ThFHE efficiently realizes arbitrary $T$-out-of-$N$ threshold decryption via simple Shamir secret sharing instead of $\{0,1\}$-linear secret sharing. Hence, the sizes of the keys, ciphertexts and decryption shares in our scheme are compact: they are $O(1)$ w.r.t. the number of parties $N$.
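For illustration, the $T$-out-of-$N$ access structure realized via Shamir sharing (a generic toy over a stand-in prime field; the scheme itself shares the LWE secret key and adds the padding noise described above):

    import random

    P = 2**61 - 1        # toy prime field; the scheme works over its modulus

    def share(secret, T, N):
        coeffs = [secret] + [random.randrange(P) for _ in range(T - 1)]
        return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
                for i in range(1, N + 1)]

    def reconstruct(shares):                 # Lagrange interpolation at 0
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = share(123456789, T=3, N=5)
    assert reconstruct(shares[:3]) == 123456789     # any T shares suffice
    assert reconstruct(shares[1:4]) == 123456789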
BitVM: Quasi-Turing Complete Computation on Bitcoin
A long-standing question in the blockchain community is which classes of computation are efficiently expressible in cryptocurrencies with limited scripting languages, such as Bitcoin Script. Such languages expose a reduced trusted computing base, thereby being less prone to hacks and vulnerabilities, but have long been believed to support only limited classes of payments.
In this work, we confute this long-standing belief by showing for the first time that arbitrary computations can be encoded in today's Bitcoin Script without introducing any language modification or additional security assumptions, such as trusted hardware, trusted parties, or committees with an honest majority. We present BitVM, a two-party protocol that realizes a generic virtual machine by combining cryptographic primitives and economic incentives. We conduct a formal analysis of BitVM, characterizing its functionality, system assumptions, and security properties. We further demonstrate the practicality of our approach by implementing a prototype and performing an experimental evaluation: in the optimistic case (i.e., when parties agree), our protocol requires just three on-chain transactions, whereas in the pessimistic case, the number of transactions grows logarithmically with the size of the virtual machine. We exemplify the deployment potential of BitVM by building a Bitcoin-sidechain bridge application. This work not only solves a long-standing theoretical problem, but it also promises a strong practical impact, enabling the development of complex applications in Bitcoin.
A Certified-Input Mixnet from Two-Party Mercurial Signatures on Randomizable Ciphertexts
Certified-input mixnets, introduced by Hébant et al. (PKC '20), employ homomorphically signed ciphertexts to reduce the complexity of shuffling arguments. However, the state-of-the-art construction relies on heavy Groth-Sahai proofs for key homomorphism, and only achieves honest-user security, limiting broader applicability.
This work proposes a novel certified-input mixnet that achieves stronger security guarantees alongside better efficiency. This is achieved by introducing a tailored signature scheme, two-party mercurial signatures on randomizable ciphertexts, which allows users and an authority to jointly sign ciphertexts while supporting key, ciphertext, and signature randomization without compromising integrity or privacy.
We compare our approach to previous works that employ structured ciphertexts, implement our protocols, and provide performance benchmarks. Our results show that verifying the mixing process for 50,000 ciphertexts takes just 135 seconds on a commodity laptop using ten mixers, underscoring the practicality and efficiency of our approach.
(Interleaved) Extended Gabidulin Codes, More Attacks on Rank Decoding Problem, and Their Applications to Cryptosystems
In this paper, we investigate the Extended Gabidulin (EG) codes and the Interleaved EG (IEG) codes, develop more powerful attacks on the variants of the Rank Decoding (RD) problem, and enhance rank-based cryptosystems such as RQC and ROLLO.
First, we develop a general decoding algorithm for the (I)EG codes by solving the Linear Reconstruction (LR) problem. We find that the (I)EG codes can be probabilistically decoded by a Welch-Berlekamp-like algorithm, can achieve an arbitrarily small decoding failure rate, and can decode up to the rank Gilbert-Varshamov bound (even close to the minimum distance). Our conclusion intrinsically shows that, when designing decodable codes from $q$-polynomials, it is not necessary to require that the generator be linearly independent, as Gabidulin codes do. An interesting and important byproduct is that we demonstrate that decoding interleaved Gabidulin codes can be achieved deterministically by solving the LR problem. It had long been believed that only probabilistic decoding algorithms exist for interleaved Gabidulin codes (IEEE TIT 2011, DCC 2014, DCC 2024).
Second, we develop the Blockwise Puncturing (BP) strategy for attacking the Blockwise RD (BRD) problem (Asiacrypt 2023, IEEE TIT 2025) and the Non-Homogeneous RD (NHRD) problem (NIST PQC 2020, IEEE TIT 2024). We find that the BP strategy can significantly speed up the overdetermined MM modeling and even the underdetermined MM modeling. When the proposed attacks are applied to existing rank-based cryptosystems based on the BRD and NHRD problems, such as RQC (IEEE TIT 2025, IEEE TIT 2024, PQC 2024) and ROLLO (IEEE TIT 2025, IEEE TIT 2022), the existing parameter sets fall short of the claimed security. This implies that these cryptosystems should enlarge their parameters to resist the MM attack with the BP strategy.
Third, we apply the EG codes to RQC based on the BRD problem. We find that the gain from using the EG codes in decoding capacity outweighs the complexity loss in solving the BRD problem with the BP strategy, which still makes it possible to design a more efficient RQC. As a result, RQC retains attractive sizes, with a bandwidth of about 2.3 KB for 128-bit security. Overall, RQC still outperforms the Hamming-metric NIST PQC Round 4 submissions, such as HQC, BIKE, and Classic McEliece, in terms of bandwidth, and is about 65% more compact than the NIST-selected HQC.
LegoLog: A configurable transparency log
Transparency logs are critical for a wide range of applications, from web certificates to end-to-end encrypted messaging. Today, many transparency log designs exist for various applications and workloads, and developers must fully understand the design space to find the best design for their needs. Worse, if a developer needs a transparency log for an application and workload without an existing transparency log, the developer (who might not be an expert) must design a new log. To address these challenges, we introduce the paradigm of a configurable transparency log, which takes as input a description of the application workload and constraints of different entities and automatically outputs a transparency log uniquely suited to the application. We present the first configurable transparency log design, LegoLog, which we implement and empirically evaluate end-to-end for three specialized transparency logs. We also show that LegoLog can express six different applications, and we compare the asymptotic complexity of LegoLog and existing transparency logs tailored to individual applications. We find that configurability does not come at the cost of performance: LegoLog can capture a variety of applications while performing comparably to existing, special-purpose transparency logs.
Improved Constant-Sized Polynomial Commitment Schemes Without Trusted Setup
Argument systems are a fundamental ingredient in many cryptographic constructions. The best-performing argument systems to date largely rely on a trusted setup, which is undesirable in trust-minimized applications. While transparent argument systems avoid this trust assumption, they have historically been inefficient, typically exhibiting polylogarithmic proof sizes compared to their trusted counterparts. Arun et al. (PKC 2023) constructed the first transparent constant-sized polynomial commitment scheme (PCS), leading to transparent constant-sized arguments. However, the evaluation proof still comprises 66 group elements in a group of unknown order (GUO), rendering it rather impractical. In this work, we address this challenge by presenting a set of novel batching and aggregation techniques tailored for proofs of knowledge of ranges in GUOs. These techniques may also be of independent interest and are readily applicable to enhance and shorten other existing schemes in GUOs. Consequently, by applying these techniques, we immediately achieve an improved PCS with an evaluation proof consisting of only 10 group elements, an 85% reduction. To our knowledge, this represents the shortest PCS in the transparent setting. Thus, compiling known information-theoretic proof systems using our improved PCS yields highly compact transparent argument systems when instantiated in a class group, which is more practical than prior constant-sized schemes.
How to Model Unitary Oracles
We make the case for modeling unitary oracles by allowing for controlled access to the oracle as well as its conjugate transpose (inverse), but also its conjugate and transpose. Controlled access and conjugate transposes are common, if not standard, whereas conjugates and transposes appear to be non-standard. In order to justify our modeling, we give several formal examples of what goes wrong or is missed when using a more restrictive model. We also argue that our model is the "right" level of granularity, and that other transformations likely do not correspond to efficient computation. We also discuss other modeling choices, such as ancillas and approximation error.
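A minimal NumPy illustration of the four access modes discussed above, for a sample single-qubit unitary $U$ (the particular gate choice is arbitrary):

```python
import numpy as np

# A sample single-qubit unitary (a Hadamard composed with a phase gate),
# standing in for the oracle U.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
U = H @ T

U_dag   = U.conj().T    # conjugate transpose (inverse)
U_conj  = U.conj()      # entrywise complex conjugate
U_trans = U.T           # transpose

# Controlled-U on two qubits: identity on the |0> control branch,
# U on the |1> branch.
cU = np.block([[np.eye(2), np.zeros((2, 2))],
               [np.zeros((2, 2)), U]])

assert np.allclose(U @ U_dag, np.eye(2))   # unitarity
assert np.allclose(U_trans, U_dag.conj())  # U^T = conjugate of U^dagger
```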
Through our exploration, we uncover interesting phenomena. Examples include an attack on the recent pseudorandom unitary construction of Ma and Huang (STOC'25) if used incorrectly as a publicly evaluatable unitary, and a quantum complexity-theoretic separation that follows from a purely classical separation.
OMIX: Offline Mixing for Scalable Self-Tallying Elections
In electronic voting systems, guaranteeing voter anonymity is essential. One primary method to ensure this is the use of a mix-net, in which a set of mix-servers sequentially shuffle a set of encrypted votes, and generate proofs that a correct permutation has been applied. Whilst mix-nets offer advantages over alternative approaches, their traditional use during the tallying phase introduces a significant robustness bottleneck: the process is inherently sequential and critically depends on trusted authorities to perform shuffling and decryption. Any disruption can prevent the final result from being revealed.
In this work, we propose OMIX, an offline-mixing framework and the first voting framework to support a mix-net-based system in which trustees never handle encrypted votes, while also ensuring that each voter's cost is independent of the total number of voters. In particular, the contributions of permutations by mix-servers and of decryption shares by trustees are completed and publicly verified before any vote is cast. This eliminates the need for their participation during tallying and enables the first scalable, mix-net-based, and self-tallying voting protocol in the sense of Kiayias and Yung (PKC'02).
At the core of OMIX is a distributed key-generation mechanism: each voter locally generates a private voting key and registers a constant-size set of basis public keys. These are permuted and partially decrypted in an offline phase, resulting in a final public decryption key that reveals votes in shuffled order. Our construction leverages the homomorphic and structure-preserving properties of function-hiding inner-product functional encryption, combined with standard primitives, to achieve self-tallying, client scalability, ballot privacy and other voting properties. To support the new mixing structure introduced by OMIX, we also develop a compact and verifiable offline mix-net, based on an enhanced linearly homomorphic signature scheme. This latter primitive may be of independent interest.
Compressing steganographic payloads with LLM assistance
Steganography is the practice of concealing messages or information within other non-secret text or media to avoid detection. A central challenge in steganography is balancing payload size with detectability and media constraints: larger payloads increase the risk of detection and require proportionally larger or higher-capacity carriers. In this paper, we introduce a novel approach that combines Huffman coding, suitable dictionary identification, and large language model (LLM) rephrasing techniques to significantly reduce payload size. This enables more efficient use of limited-capacity carriers, such as images, while minimizing the visual or statistical footprint. Our method allows for the embedding of larger payloads into fixed-size media, addressing a key bottleneck in traditional steganographic systems. By optimizing payload compression prior to encoding, we improve both the stealth and scalability of steganographic communication.
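As a sketch of the payload-compression step only (dictionary identification and LLM rephrasing are out of scope here), a minimal Huffman coder in Python:

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a prefix code from the symbol frequencies in `text`."""
    heap = [[w, [sym, ""]] for sym, w in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)       # two lowest-weight subtrees
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]    # extend codewords on merge
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])

msg = "meet me at the usual place at noon"
code = huffman_code(msg)
bits = "".join(code[c] for c in msg)
print(len(bits), "bits vs", 8 * len(msg), "bits raw")  # shorter payload
```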
Shared OT and Its Applications
We present unconditionally perfectly secure protocols in the semi-honest setting for several functionalities: (1) private elementwise equality; (2) private bitwise integer comparison; and (3) bit-decomposition. These protocols are built upon a new concept called Shared Oblivious Transfer (Shared OT). Shared OT extends the one-out-of-N String OT by replacing strings with integers modulo $M$ and allowing additive secret-sharing of all inputs and outputs. These extensions can be implemented by simple local computations without incurring additional OT invocations. We believe our Shared OT may be of independent interest.
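To fix ideas, the following toy sketch shows the input/output behavior of a Shared OT functionality as described above, from a trusted-party view: the choice index and the table are additively shared, and the parties receive fresh additive shares of the selected entry. This illustrates only the functionality, not a protocol realizing it:

```python
import random

M = 2**16  # payload modulus

def shared_ot(choice_shares, table_shares):
    """Ideal Shared OT functionality (trusted-party view, for clarity):
    choice shares are additive mod N (the table length), table shares are
    additive mod M; the output is a fresh additive sharing of
    table[choice]."""
    N = len(table_shares[0])
    choice = sum(choice_shares) % N
    table = [sum(col) % M for col in zip(*table_shares)]
    r = random.randrange(M)
    return r, (table[choice] - r) % M

# Two parties hold shares of choice = 2 and of the table [10, 20, 30, 40].
c0, c1 = 3, 3                                   # 3 + 3 mod 4 = 2
t0 = [random.randrange(M) for _ in range(4)]
t1 = [(v - s) % M for v, s in zip([10, 20, 30, 40], t0)]
s0, s1 = shared_ot([c0, c1], [t0, t1])
assert (s0 + s1) % M == 30
```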
Our protocols achieve the best round, communication, and computational complexities among protocols secure in a similar setting. Moreover, all of our protocols involve either 2 or 3 rounds.
ABE Cubed: Advanced Benchmarking Extensions for ABE Squared
Since attribute-based encryption (ABE) was proposed in 2005, it has established itself as a valuable tool in the enforcement of access control. For practical use, it is important that ABE satisfy desirable properties such as multi-authority and negation support. Nowadays, we can attain these properties simultaneously, but none of these schemes has been implemented. Furthermore, although simpler schemes have been optimized extensively on a structural level, there is still much room for improvement for these more advanced schemes. However, even if we had schemes with such structural improvements, we would not have a way to benchmark and compare them fairly to measure the effect of such improvements. The only framework that aims to achieve this goal, ABE Squared (TCHES '22), was designed with simpler schemes in mind.
In this work, we propose the ABE Cubed framework, which provides advanced benchmarking extensions for ABE Squared. To motivate our framework, we first apply structural improvements to the decentralized ciphertext-policy ABE scheme supporting negations presented by Riepel, Venema and Verma (ACM CCS '24), which results in five new schemes with the same properties. We use these schemes to uncover and bridge the gaps in the ABE Squared framework. In particular, we observe that advanced schemes depend on more "variables" that affect the schemes' efficiency in different dimensions. Whereas ABE Squared only considered one dimension (as was sufficient for the schemes considered there), we devise a benchmarking strategy that allows us to analyze the schemes in multiple dimensions. As a result, we obtain a more complete overview on the computational efficiency of the schemes, and ultimately, this allows us to make better-founded choices about which schemes provide the best efficiency trade-offs for practice.
Truncation Untangled: Scaling Fixed-Point Arithmetic for Privacy-Preserving Machine Learning to Large Models and Datasets
Fixed Point Arithmetic (FPA) is widely used in Privacy-Preserving Machine Learning (PPML) to efficiently handle decimal values. However, repeated multiplications in FPA can lead to overflow, as the fractional part doubles in size with each multiplication. To address this, truncation is applied post-multiplication to maintain precision. Various truncation schemes based on Secure Multiparty Computation (MPC) exist, but trade-offs between accuracy and efficiency in PPML models and datasets remain underexplored. In this work, we analyze and consolidate different truncation approaches from the MPC literature.
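For readers unfamiliar with the underlying issue, here is a plaintext (non-MPC) sketch of why truncation must follow every fixed-point multiplication: with $F$ fractional bits, a raw product carries $2F$ fractional bits and must be shifted back by $F$:

```python
F = 13             # fractional bits
SCALE = 1 << F

def encode(x):
    return round(x * SCALE)

def decode(v):
    return v / SCALE

def fp_mul(a, b):
    # The raw product has 2F fractional bits; truncating (arithmetic
    # shift) by F restores the encoding, at the cost of a small error.
    return (a * b) >> F

a, b = encode(3.25), encode(-1.5)
print(decode(fp_mul(a, b)))   # ~ -4.875, up to truncation error
```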
We conduct the first large-scale systematic evaluation of PPML inference accuracy across truncation schemes, ring sizes, neural network architectures, and datasets. Our study provides clear guidelines for selecting the optimal truncation scheme and parameters for PPML inference. All evaluations are implemented in the open-source HPMPC MPC framework, facilitating future research and adoption.
Beyond our large-scale evaluation, we also present improved constructions for each truncation scheme, achieving up to a fourfold reduction in communication and round complexity over existing schemes. Additionally, we introduce optimizations tailored for PPML, such as strategically fusing different neural network layers. This leads to a mixed-truncation scheme that balances truncation costs with accuracy, eliminating communication overhead in the online phase while matching the accuracy of plaintext floating-point PyTorch inference for VGG-16 on the ImageNet dataset.
NTRU with Hints: Recovering NTRU Secret Keys from Partial Leakage
NTRU-based structured lattices underpin several standardized post-quantum cryptographic schemes, most notably the Falcon signature algorithms. While offering compactness and efficiency, the algebraic structure of NTRU lattices introduces new vulnerabilities under physical attacks, where partial secret key leakage may occur.
This work addresses the problem of full key recovery in NTRU-based schemes when adversaries obtain partial information through side-channel or fault attacks. Existing leakage-aware frameworks, including the DDGR estimator and the approach of May and Nowakowski, either lack scalability or are limited to structured, single-source leakage on one secret vector. These constraints make them ineffective against practical leakage patterns in NTRU settings.
We propose a unified and scalable framework for recovering NTRU secret keys under partial leakage. Our method supports diverse hint types, such as perfect hints, modular hints, and low-bit leakage, and enables joint integration of leakage across both secret polynomials \( f \) and \( g \). At its core, the framework uses a dimension-reduction strategy to eliminate known coefficients and reduce the problem to a lower-dimensional NTRU instance suitable for lattice reduction. Additionally, we introduce a transformation that converts hints on \( g \) into modular constraints on \( f \), allowing unified hint embedding.
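As a worked illustration of the second transformation, assuming the standard NTRU relation used in Falcon-like schemes, \( h \equiv g \cdot f^{-1} \pmod q \) over \( \mathbb{Z}_q[x]/(x^n+1) \), a leaked coefficient of \( g \) translates into one modular linear constraint on \( f \):

```latex
% From h = g f^{-1} mod q we get g = h f mod q; reading off the i-th
% coefficient of the negacyclic product gives
\[
  g_i \equiv \sum_{j=0}^{n-1} \varepsilon_{i,j}\, h_{(i-j) \bmod n}\, f_j
  \pmod{q},
  \qquad
  \varepsilon_{i,j} =
  \begin{cases}
    +1 & j \le i,\\
    -1 & j > i,
  \end{cases}
\]
% so a perfect hint g_i = c embeds directly as a single modular hint
% on the coefficient vector of f.
```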
We demonstrate practical attacks on Falcon using NIST reference implementations. Leaking 400 coefficients of $f$ in Falcon-512 reduces the required BKZ block size from over 350 to 38, enabling full key recovery within 6 hours. Compared to MN23, our method achieves significant speedups: $5.83\times$ for Falcon-512 with 400 leaked coefficients, and over $15\times$ for Falcon-1024 with 910 leaked coefficients. These results highlight the efficiency and scalability of our framework and the importance of leakage-resilient design for structured NTRU lattices.
Quantum-Safe Hybrid Key Exchanges with KEM-Based Authentication
Authenticated Key Exchange (AKE) between any two entities is one of the most important security protocols available for securing our digital networks and infrastructures. In PQCrypto 2023, Bruckner, Ramacher and Striecks proposed a novel hybrid AKE (HAKE) protocol dubbed Muckle+ that is particularly useful in large quantum-safe networks consisting of a large number of nodes. Their protocol is hybrid in the sense that it allows key material from conventional, post-quantum, and quantum cryptography primitives to be incorporated into a single end-to-end authenticated shared key.
To achieve the desired authentication properties, Muckle+ utilizes post-quantum digital signatures. However, available instantiations of such signature schemes are not yet efficient enough compared to their post-quantum key-encapsulation mechanism (KEM) counterparts, particularly in large networks with potentially several connections in a short period of time.
To mitigate this gap, we propose Muckle#, which pushes the efficiency boundaries of currently known HAKE constructions. Muckle# uses post-quantum key-encapsulation mechanisms for implicit authentication, inspired by recent work in the area of Transport Layer Security (TLS) protocols, particularly KEMTLS (CCS'20).
We port those ideas to the HAKE framework and develop novel proof techniques along the way. Due to our KEM-based approach, the resulting protocol has a slightly different message flow than prior work; we carefully align it with the HAKE framework, which makes our changes to Muckle+ non-trivial. Lastly, we evaluate the approach with a prototypical implementation and a direct comparison with Muckle+ to highlight the efficiency gains.
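To sketch the implicit-authentication idea borrowed from KEMTLS (this is not the actual Muckle# message flow), here is a toy Python example with a classical DH-based KEM standing in for a post-quantum one: the initiator encapsulates against the responder's static public key, so only the holder of the static secret key can derive the session key.

```python
import hashlib, secrets

# Toy DH-based KEM; illustrative stand-in for a post-quantum KEM.
P = 2**255 - 19
G = 2

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    return pow(G, sk, P), sk

def encaps(pk):
    y = secrets.randbelow(P - 2) + 1
    ct = pow(G, y, P)
    ss = hashlib.sha256(pow(pk, y, P).to_bytes(32, "big")).digest()
    return ct, ss

def decaps(ct, sk):
    return hashlib.sha256(pow(ct, sk, P).to_bytes(32, "big")).digest()

# Implicit authentication: encapsulating to the responder's *static* key
# means only the true responder can recover ss_static; mixing it into the
# session key authenticates the responder without a signature.
static_pk, static_sk = keygen()
eph_pk, eph_sk = keygen()              # ephemeral pair, for freshness
ct_s, ss_static = encaps(static_pk)
ct_e, ss_eph = encaps(eph_pk)
session_key = hashlib.sha256(ss_static + ss_eph).digest()

recomputed = hashlib.sha256(decaps(ct_s, static_sk) + decaps(ct_e, eph_sk)).digest()
assert recomputed == session_key
```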
Return of the Kummer: a Toolbox for Genus-2 Cryptography
This work expands the machinery we have for isogeny-based cryptography in genus 2 by developing a toolbox of several essential algorithms for Kummer surfaces, the dimension-2 analogue of $x$-only arithmetic on elliptic curves. Kummer surfaces have been suggested for hyperelliptic curve cryptography since at least the 1980s, and recently these surfaces have reappeared to efficiently compute $(2,2)$-isogenies. We construct several essential analogues of techniques used in one-dimensional isogeny-based cryptography, such as pairings, deterministic point sampling and point compression, and give an overview of $(2,2)$-isogenies on Kummer surfaces. We furthermore show how Scholten's construction can be used to transform isogeny-based cryptography over elliptic curves over $\mathbb{F}_{p^2}$ into protocols over Kummer surfaces over $\mathbb{F}_{p}$.
As an example of this approach, we demonstrate that SQIsign verification can be performed completely on Kummer surfaces, and, therefore, that one-dimensional SQIsign verification can be viewed as a two-dimensional isogeny between products of elliptic curves, now defined over $\mathbb{F}_p$ rather than $\mathbb{F}_{p^2}$.
Improved Key-recovery Attacks on ARADI
ARADI is a low-latency block cipher introduced by the U.S. National Security Agency (NSA), targeting secure and efficient memory encryption. However, unlike most academic cipher proposals, the design rationale behind ARADI has not been made public, leaving its security to be assessed only through independent analysis. In this work, we present improved key-recovery attacks on up to 12 out of 16 rounds of ARADI in the single-key setting, advancing the best known attacks by two rounds. Our techniques build upon the ZeroSum distinguisher framework and leverage the Fast Hadamard Transform (FHT). A central insight in our attacks is that the linear layer of ARADI exhibits weak diffusion. This structural property allows partial decryption with only a subset of the round keys, significantly reducing the key-guessing space.
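For reference, the Fast Hadamard Transform used by such attacks admits a compact $O(n \log n)$ butterfly implementation; a minimal Python sketch:

```python
def fht(v):
    """Fast (Walsh-)Hadamard transform in O(n log n);
    the input length must be a power of two."""
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                x, y = v[j], v[j + h]
                v[j], v[j + h] = x + y, x - y   # butterfly
        h *= 2
    return v

print(fht([1, 0, 1, 0, 0, 1, 1, 0]))
```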
Rational Censorship Attack: Breaking Blockchain with a Blackboard
Censorship resilience is a fundamental assumption underlying the security of blockchain protocols. Additionally, the analysis of blockchain security from an economic and game theoretic perspective has been growing in popularity in recent years.
In this work, we present a surprising rational censorship attack on blockchain censorship resilience that arises when blockchain security is analyzed through a game-theoretic lens and all users are assumed to be rational.
In our attack, a colluding group with sufficient voting power censors the remaining nodes so that the group alone gains all the rewards from maintaining the blockchain.
We show that if nodes are rational, coordinating this attack requires only a public read-and-write blackboard, and we formally model the attack using a game-theoretic framework.
Furthermore, we note that to ensure the success of the attack, nodes need to know the total true voting power held by the colluding group.
We prove that the strategy of joining the rational censorship attack and honestly declaring one's voting power is a subgame perfect equilibrium in the corresponding extensive-form game induced by our attack.
Finally, we discuss the implications of the attack on blockchain users and protocol designers as well as some potential countermeasures.
Unmasking TRaccoon: A Lattice-Based Threshold Signature with An Efficient Identifiable Abort Protocol
TRaccoon is an efficient 3-round lattice-based T-out-of-N threshold signature, recently introduced by del Pino et al. (Eurocrypt 2024). While the design resembles the classical threshold Schnorr signature Sparkle (Crites et al., Crypto 2023), one shortfall is that it has no means to identify malicious behavior, a property highly desired in practice. This is because, to resist lattice-specific attacks, TRaccoon relies on a technique called masking, informally, blinding each partial signature with a one-time additive mask. del Pino et al. left adding a mechanism to identify malicious behavior to TRaccoon as an open problem.
In this work, we propose TRaccoon-IA, an extension of TRaccoon with an efficient identifiable abort protocol, allowing malicious signers to be identified when the signing protocol fails. The identifiable abort protocol is a simple add-on to TRaccoon, keeping the original design intact, and comes with an added communication cost of $60 + 6.4|T|$ KB, incurred only when signing fails. Along the way, we provide the first formal security analysis of a variant of LaBRADOR (Beullens et al., Crypto 2023) with zero-knowledge, encountering several hurdles when formalizing it in detail. Moreover, we give a new game-based definition for interactive identifiable abort protocols, extending the popular game-based definition used to prove the unforgeability of recent threshold signatures.
Efficient NIZK Arguments with Straight-Line Simulation and Extraction
Non-interactive zero-knowledge (NIZK) arguments allow a prover to convince a verifier of the truthfulness of an NP-statement by sending just one message, without disclosing any additional information. In several practical scenarios, the Fiat-Shamir transform is used to convert an efficient constant-round public-coin honest-verifier zero-knowledge proof system into an efficient NIZK argument system. This approach is provably secure in the random oracle model, but it crucially requires the programmability of the random oracle, and extraction works through rewinding. The works of Lindell [TCC 2015] and Ciampi et al. [TCC 2016] proposed efficient NIZK arguments with non-programmable random oracles along with a programmable common reference string. In this work we show an efficient NIZK argument with straight-line simulation and extraction that relies on features that alone are insufficient to construct NIZK arguments (regardless of efficiency). More specifically, we consider the notion of quasi-polynomial time simulation proposed by Pass [EUROCRYPT 2003] and combine it with simulation and extraction with non-programmable random oracles, thus obtaining a NIZK argument of knowledge where neither the zero-knowledge simulator nor the argument-of-knowledge extractor needs to program the random oracle. Still, both the simulator and the extractor are straight-line. Our construction uses as a building block a modification of Fischlin's transform [CRYPTO 2005] and combines it with the concept of dense puzzles introduced by Baldimtsi et al. [ASIACRYPT 2016]. We also argue that our NIZK argument system inherits the efficiency features of Fischlin's transform, which represents the main advantage of Fischlin's protocol over existing schemes.
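To give a flavor of the building block, here is a toy, runnable sketch of Fischlin-style grinding instantiated with Schnorr over a small group (parameters are illustrative and insecure, and the real transform bounds the challenge space and differs in details): per repetition, the prover grinds the challenge until a hash condition holds, which is what enables straight-line extraction from the prover's own random-oracle queries.

```python
import hashlib, itertools, secrets

# Toy Schnorr relation: prove knowledge of x with y = g^x mod p.
p = 2**61 - 1     # prime modulus (toy security only)
g = 3
x = secrets.randbelow(p - 1)
y = pow(g, x, p)

B = 9             # per-repetition grinding hardness (toy)
REPS = 8

def prove():
    """Per repetition, search for a challenge whose transcript hash has
    B zero low-order bits; two accepting transcripts with distinct
    challenges then appear among the prover's hash queries, so an
    extractor needs no rewinding."""
    proof = []
    for i in range(REPS):
        r = secrets.randbelow(p - 1)
        a = pow(g, r, p)                    # sigma-protocol commitment
        for ch in itertools.count():        # real Fischlin bounds this space
            z = (r + ch * x) % (p - 1)      # Schnorr response
            h = hashlib.sha256(repr((i, a, ch, z)).encode()).digest()
            if int.from_bytes(h, "big") % (1 << B) == 0:
                proof.append((a, ch, z))
                break
    return proof

def verify(proof):
    for i, (a, ch, z) in enumerate(proof):
        h = hashlib.sha256(repr((i, a, ch, z)).encode()).digest()
        ok_hash = int.from_bytes(h, "big") % (1 << B) == 0
        ok_sigma = pow(g, z, p) == a * pow(y, ch, p) % p
        if not (ok_hash and ok_sigma):
            return False
    return True

assert verify(prove())
```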
PICS: Private Intersection over Committed (and reusable) Sets
Private Set Intersection (PSI) enables two parties to compute the intersection of their private sets without revealing any additional information. While maliciously secure PSI protocols prevent many attacks, adversaries can still exploit them by using inconsistent inputs across multiple sessions. This limitation stems from the definition of malicious security in secure multiparty computation, but is particularly problematic in PSI because: (1) real-world applications---such as Apple’s PSI protocol for CSAM detection and private contact discovery in messaging apps---often require multiple PSI executions over consistent inputs, and (2) the PSI functionality makes it relatively easy for adversaries to infer additional information.
We propose {\em Private Intersection over Committed Sets (PICS)}, a new framework that enforces input consistency across multiple sessions via committed sets. Building on the state-of-the-art maliciously secure PSI framework (i.e., VOLE-PSI [EUROCRYPT 2021]), we present an efficient instantiation of PICS in the random oracle model using lightweight cryptographic tools. We implement our protocol to demonstrate concrete efficiency. Compared to VOLE-PSI, our communication overhead is a small constant between $1.57 - 2.04\times$ for set sizes between $2^{16}-2^{24}$, and the total end-to-end running time overhead is $1.22 - 1.98\times$ across various network settings.
$\textbf{Note:}$ The previous version of this paper had a soundness issue in the way we checked the consistency of the sender’s input. This revised draft presents a much simpler and cleaner approach to ensuring input consistency for the sender.
Optimizing Key Recovery in Classic McEliece: Advanced Error Correction for Noisy Side-Channel Measurements
Classic McEliece is one of the code-based key encapsulation mechanism finalists in the ongoing NIST post-quantum cryptography standardization process. Several key-recovery side-channel attacks on the decapsulation algorithm have already been published. However, none of them discusses the feasibility and/or efficiency of the attack in the case of noisy side-channel acquisitions. In this paper, we address this issue by proposing two improvements on the recent key-recovery attack published by Drăgoi et al. First, we introduce an error correction algorithm for the lists of Hamming weights obtained by side-channel measurements, based on the assumption, validated experimentally, that the error on a recovered Hamming weight is bounded to $\pm1$. We then offer a comparison between two decoding efficiency metrics: the theoretical minimal error correction capability and an empirical average correction probability. We show that the minimal error correction capability, widely used for linear codes, is not suitable for the (non-linear) code formed by the lists of Hamming weights. Conversely, experimental results show that out of 1 million random erroneous lists of $2t=128$ Hamming weights, only 2 could not be corrected by the proposed algorithm. This shows that the probability of successfully decoding a list of erroneous Hamming weights is very high, regardless of the error weight. In addition to this algorithm, we describe how the secret Goppa polynomial $g$, recovered during the first step of the attack, can be exploited to reduce both the time and space complexity of recovering the secret permuted support $\mathcal{L}$.
Coalition and Threshold Hash-Based Signatures
We introduce techniques to transform existing stateful hash-based signature (HBS) schemes, such as LMS or XMSS, into efficient threshold and distributed signature schemes. Our approach requires a trusted dealer for setup, and uses a large (typically up to a few GiB) common reference value (CRV) for each new public key. The dealer generates the keypair and distributes shares of the signing key to the trustees, while creating the CRV. Signing involves an untrusted aggregator communicating point-to-point with a set of trustees. Only the aggregator needs access to the CRV; the trustees need only a PRF key and enough space to remember which one-time keys they have helped to sign with so far. Signing requires two round trips between the aggregator and each participating trustee, and only a little more computation from the trustees and aggregator than is done when signing with the underlying HBS scheme. We reduce the security of our scheme to that of the underlying HBS scheme, assuming the availability of a secure PRF. A dishonest aggregator or tampered CRV can prevent valid signatures from being constructed, but does not allow forgeries. Our techniques offer a powerful practical defense against accidental reuse of a one-time key in stateful HBS schemes by requiring multiple trustees to fail in the same way in order for key reuse to occur.
On Deniable Authentication against Malicious Verifiers
Deniable authentication allows Alice to authenticate a message to Bob, while retaining deniability towards third parties. In particular, not even Bob can convince a third party that Alice authenticated that message. Clearly, in this setting Bob should not be considered trustworthy. Furthermore, deniable authentication is necessary for deniable key exchange, as explicitly desired by Signal and off-the-record (OTR) messaging.
In this work we focus on (publicly verifiable) designated verifier signatures (DVS), which are a widely used primitive to achieve deniable authentication. We propose a definition of deniability against malicious verifiers for DVS. We give a construction that achieves this notion in the random oracle (RO) model. Moreover, we show that our notion is not achievable in the standard model with a concrete attack; thereby giving a non-contrived example of the RO heuristic failing.
All previous protocols that claim to achieve deniable authentication against malicious verifiers (like Signal's initial handshake protocols X3DH and PQXDH) rely on the Extended Knowledge of Diffie–Hellman (EKDH) assumption. We show that this assumption is broken and that these protocols do not achieve deniability against malicious verifiers.
Refined TFHE Leveled Homomorphic Evaluation and Its Application
TFHE is a fully homomorphic encryption scheme over the torus that supports fast bootstrapping. Its primary evaluation mechanism is based on gate bootstrapping and programmable bootstrapping (PBS), which computes functions while simultaneously refreshing noise. PBS-based evaluation is user-friendly and efficient for small circuits; however, the number of bootstrapping operations increases exponentially with the circuit depth. To address the challenge of efficiently evaluating large-scale circuits, Chillotti et al. introduced a leveled homomorphic evaluation (LHE) mode at Asiacrypt 2017. This mode decouples circuit evaluation from bootstrapping, resulting in a speedup of hundreds of times over PBS-based methods. However, the remaining circuit bootstrapping (CBS) becomes a performance bottleneck, even though its frequency grows only linearly with the circuit depth.
In this paper, we refine the LHE mode by mitigating the high cost of CBS. First, we patch the NTT-based CBS algorithm proposed by Wang et al. [WWL+, Eurocrypt 2024], accelerating their algorithm by up to 2.6$\times$. Then, observing the suboptimal parallelism and high complexity of modular reduction in NTT under CBS parameters, we extend WWL+ to an FFT-based algorithm by redesigning the pre-processing method and introducing a split FFT technique. This achieves the fastest CBS implementation with the smallest key size, outperforming the open-source WWL+ implementation by up to 12.1$\times$ (resp. 5.12$\times$ compared to our patched algorithm), and surpassing TFHEpp [MBM+, USENIX 2021] by 3.42$\times$ with a key size reduction of 33.2$\times$. Furthermore, we propose an improved integer-input LHE mode by extending our CBS algorithm to support higher precision and combining it with additional optimizations such as multi-bit extraction. Compared to the previous integer-input LHE mode proposed by Bergerat et al. [BBB+, JoC 2023], our approach is up to 10.7$\times$ faster with a key size reduction of up to 4.4$\times$.
To demonstrate the practicality of our improved LHE mode, we apply it to AES transciphering and general homomorphic look-up table (LUT) evaluation. For AES evaluation, our method is 4.8$\times$ faster and reduces the key size by 31.3$\times$ compared to the state-of-the-art method, Thunderbird [WLW+, TCHES 2024]. For LUT evaluation, we compare our results with the recent work of Trama et al. [TCBS, ePrint 2024/1201], which constructs a general 8-bit processor over TFHE. Our method not only achieves faster 8-to-8 LUT evaluation but also improves the efficiency of the heaviest 8-bit bivariate instructions by up to 21$\times$ and the 16-bit sigmoid function by more than 26$\times$.
Lattice EPID with Efficient Revocation
Enhanced Privacy Identification (EPID) is one of the anonymous authentication mechanisms that have found their way into industry, being deployed in billions of chips and standardized at ISO. The linchpin of EPID lies in its decentralized revocation procedure, which allows a signer to be revoked by simply placing one of its signatures on a signature revocation list SRL. Each new signature must then include a proof that it has been generated with a key different from those used to produce the signatures on the SRL. This proof of non-revocation in current post-quantum schemes either relies on general-purpose NIZKs or on regular zero-knowledge proofs (ZKP) with a witness dimension linear in the size of the SRL, which leads to large sizes and/or high computational complexity.
In this paper, we rethink the standard approach to non-revocation so as to avoid its heavy reliance on ZKP. Our construction indeed combines features from different tools (such as Falcon signatures) that are unusual in this context to pull most elements out of the ZKP, leading to significant performance improvements. Providing all these elements unconcealed creates many security challenges for our construction, yet we manage to address all of them and prove security under well-understood lattice assumptions, in the strong model of Sanders-Traoré (CT-RSA'21) allowing malicious SRLs.
KLPT²: Algebraic Pathfinding in Dimension Two and Applications
Following Ibukiyama, Katsura and Oort, all principally polarized superspecial abelian surfaces over $\overline{\mathbb{F}}_p$ can be represented by a certain type of $2 \times 2$ matrix $g$, having entries in the quaternion algebra $B_{p,\infty}$. We present a heuristic polynomial-time algorithm which, upon input of two such matrices $g_1, g_2$, finds a "connecting matrix" representing a polarized isogeny of smooth degree between the corresponding surfaces. Our algorithm should be thought of as a two-dimensional analog of the KLPT algorithm from 2014 due to Kohel, Lauter, Petit and Tignol for finding a connecting ideal of smooth norm between two given maximal orders in $B_{p, \infty}$.
The KLPT algorithm has proven to be a versatile tool in isogeny-based cryptography, and our analog has similar applications; we discuss two of them in detail. First, we show that it yields a polynomial-time solution to a two-dimensional analog of the so-called constructive Deuring correspondence: given a matrix $g$ representing a superspecial principally polarized abelian surface, realize the latter as the Jacobian of a genus-$2$ curve (or, exceptionally, as the product of two elliptic curves if it concerns a product polarization). Second, we show that, modulo a plausible assumption, Charles-Goren-Lauter style hash functions from superspecial principally polarized abelian surfaces require a trusted set-up. Concretely, if the matrix $g$ associated with the starting surface is known then collisions can be produced in polynomial time. We deem it plausible that all currently known methods for generating a starting surface indeed reveal the corresponding matrix. As an auxiliary tool, we present an efficient method for converting polarized isogenies of powersmooth degree into the corresponding connecting matrix, a step for which a previous approach by Chu required super-polynomial (but sub-exponential) time.
An Update to ``Polynomial Hashing over Prime Order Fields''
New state-of-the-art assembly implementations show that BRWHash is consistently faster than polyHash, t-BRWHash, and d-2LHash for all message lengths and for both the primes $2^{127}-1$ and $2^{130}-5$.
Efficient Pairings Final Exponentiation Using Cyclotomic Cubing for Odd Embedding Degrees Curves
In pairings-based cryptographic applications, final exponentiation with a large fixed exponent ensures distinct outputs for the Tate pairing and its derivatives. Despite notable advancements in optimizing elliptic curves with even embedding degrees, improvements for those with odd embedding degrees, particularly those divisible by $3$, remain underexplored.
This paper introduces three methods for applying cyclotomic cubing in the final exponentiation to enhance computational efficiency. The first enables the execution of one cyclotomic cubing based on the structure of the final exponentiation. The second leverages the structure of certain existing seeds to enable the use of cyclotomic cubing, and extends this strategy to generate new seeds. The third generates seeds with sparse ternary representations, allowing cyclotomic cubing as an alternative to squaring. These optimizations improve performance by up to $19.3\%$ when computing the final exponentiation for the optimal Ate pairing on BLS15 and BLS27, the target elliptic curves of this study.
Accelerating Multiparty Noise Generation Using Lookups
There is rising interest in combining Differential Privacy (DP) and Secure Multiparty Computation (MPC) techniques to protect distributed database query evaluations from both adversaries taking part in the computation and those observing the outputs. This requires implementing both the query evaluation and noise generation parts of a DP mechanism directly in MPC. While query evaluation can be done using existing highly optimized MPC techniques for secure function evaluation, efficiently generating the correct noise distribution is a more novel challenge.
Due to the inherent nonlinearity of sampling algorithms for common noise distributions, this challenge is quite non-trivial, as is evident from the substantial number of works proposing protocols for multiparty noise sampling. In this work, we propose a new approach for joint noise sampling that leverages recent advances in multiparty lookup table (LUT) evaluations. The construction we propose is largely agnostic to the target noise distribution and builds on obliviously evaluating the LUT at an index drawn from a distribution that can be very cheaply generated in MPC, thus translating this cheap distribution into the much more complicated target noise distribution. In our instantiation, the index is a concatenation of cheaply biased bits, and we approximate a discrete Laplace distribution to within a negligible statistical distance. We demonstrate the concrete efficiency of the construction by implementing it using 3-party replicated secret sharing (RSS) in the honest-majority setting with both semi-honest and malicious security. In particular, we achieve sub-kilobyte communication complexity, improving over the state of the art by several orders of magnitude, with a computation time of a few milliseconds. Samples of a discrete Laplace distribution are generated with 362 bytes of communication (amortized over $1000$ samples) and under a millisecond of computation time per party in the semi-honest setting. Using recent results for batched multiplication checking, the overhead for malicious security amortizes, per sample, to below a byte of communication and 10 ms of runtime.
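As a plaintext (non-MPC) illustration of such an index distribution: independent bits with biases $\alpha^{2^i}/(1+\alpha^{2^i})$ concatenate to an index $k$ whose probability is proportional to $\alpha^k$, i.e., a truncated geometric; adding a sign bit then approximates a discrete Laplace distribution. The exact handling of the sign of zero, and the LUT mapping itself, are simplified away here:

```python
import math, random
from collections import Counter

b = 2.0                     # noise scale; alpha = exp(-1/b)
alpha = math.exp(-1 / b)
NBITS = 8                   # magnitude bits

def biased_index():
    """Bit i is 1 w.p. alpha^(2^i)/(1+alpha^(2^i)); the concatenated
    index k then has probability proportional to alpha^k."""
    k = 0
    for i in range(NBITS):
        w = alpha ** (2 ** i)
        if random.random() < w / (1 + w):
            k |= 1 << i
    return k

def sample():
    sign = 1 if random.random() < 0.5 else -1
    return sign * biased_index()   # caveat: zero is counted twice

counts = Counter(sample() for _ in range(100000))
print({k: counts[k] for k in range(-4, 5)})  # ~ proportional to alpha^|k|
```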
Finally, our open-source implementation extends the online-to-total communication trade-off for MAESTRO-style lookup tables which might be of independent interest.
AQQUA: Augmenting Quisquis with Auditability
We present AQQUA, a permissionless, private, and auditable payment system built on top of Quisquis. Unlike other auditable payment systems, AQQUA supports auditing without sacrificing privacy. It allows users to hold multiple accounts, perform concurrent transactions, and features a non-increasing state. AQQUA achieves auditability by introducing two authorities: one for registration and one for auditing. These authorities cannot censor transactions, thus preserving the decentralized nature of the system. Users create an initial account with the registration authority and then transact privately by using provably unlinkable updates of it. Audits can be voluntarily initiated by the users or requested by the audit authority at any time. Compliance is proved in zero-knowledge against a set of policies, which include a maximum limit on the amount sent/received during a time period or in a single transfer, non-participation in a specific transaction, or selective disclosure of the value exchanged. To analyze the security of AQQUA, we formally define a security model for private and auditable decentralized payment systems. Using this model, we prove that AQQUA satisfies anonymity towards both the public and the auditor, theft prevention, and audit soundness.
Efficient Pseudorandom Correlation Generators over $\mathbb{Z}/p^k\mathbb{Z}$
Modern efficient secure multi-party computation (MPC) protocols typically follow an offline-online design, where offline protocols produce a sufficient amount of correlated randomness to be consumed during the online phases. The past decades have witnessed the maturing of efficient online protocols for computing circuits over either arbitrary finite fields or rings $\mathbb{Z}_{p^k}$. In particular, protocols tailored for $\mathbb{Z}_{2^k}$ arithmetic have achieved better concrete efficiency in most real-life applications, as this ring naturally captures modern CPU architectures. On the other hand, the recent paradigm of {\em pseudorandom correlation generators} (PCG), initiated by Boyle et al. (CCS'18, Crypto'19), opens a door to efficient preprocessing with sublinear communication. Since then, PCGs have been extensively studied and developed to produce the various types of correlations required by online protocols. Although Li et al. (EuroCrypt'25) recently took a significant step forward and proposed efficient PCGs for arbitrary finite fields, the current state of PCGs for rings remains unsatisfactory.
Motivated by the great demand for efficiently generating correlations over rings, we investigate PCGs for general Galois rings, which simultaneously unify finite fields and integer rings modulo $p^k$. In summary, we establish the following results:
(i) We generalize the state-of-the-art PCG constructions for oblivious linear evaluations (OLE) over Galois fields to {\em arbitrary Galois rings}, based on Galois theory and the Hensel lift. Moreover, our PCGs for Galois rings are as efficient as PCGs for fields. Concretely, for $mN$ OLE correlations over $\mathbb{Z}_{2^k}$, we require $O(m\log{N})$ communication and $O(m^2N\log{N})$ computation, where $m$ is an arbitrary integer $\geq 2$. In comparison, to the best of our knowledge, previous approaches incur communication at least linear in $N$.
(ii) We extend the above OLE construction to provide various types of correlations over any Galois ring. One of the fascinating applications is an efficient PCG for two-party SPD$\mathbb{Z}_{2^k}$ authenticated multiplication triples (Crypto'18). For $mN$ SPD$\mathbb{Z}_{2^k}$ triples, our approach requires only $O(m\log{N})$ communication and $O(m^2N\log{N})$ computation. Concrete evaluations show that our method significantly outperforms existing schemes based on homomorphic encryption.
(iii) In addition, our PCGs for Galois rings also enable multi-party multiplication triple generation, yielding the first efficient MPC protocol for arithmetic circuits over $\mathbb{Z}_{2^k}$ with \emph{silent} and \emph{sublinear} preprocessing. Further applications, such as circuit-dependent preprocessing and matrix multiplication triples, are of independent interest.
Towards a White-Box Secure Fiat-Shamir Transformation
The Fiat–Shamir transformation is a fundamental cryptographic technique widely used to convert public-coin interactive protocols into non-interactive ones. This transformation is crucial in both theoretical and practical applications, particularly in the construction of succinct non-interactive arguments (SNARKs). While its security is well-established in the random oracle model, practical implementations replace the random oracle with a concrete hash function, where security is merely assumed to carry over.
A growing body of work has given theoretical examples of protocols that remain secure under the Fiat–Shamir transformation in the random oracle model but become insecure when instantiated with any white-box implementation of the hash function. Recent research has shown how these attacks can be applied to natural cryptographic schemes, including real-world systems. These attacks rely on a general diagonalization technique, where the protocol exploits its access to the white-box implementation of the hash function. These attacks cast serious doubt on the security of cryptographic systems deployed in practice today, leaving their soundness uncertain.
We propose a new Fiat–Shamir transformation (XFS) that aims to defend against a broad family of attacks. Our approach is designed to be practical, with minimal impact on the efficiency of the prover and verifier and on the proof length. At a high level, our transformation combines the standard Fiat–Shamir technique with a new type of proof-of-work that we construct.
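For context, here is a toy sketch of the standard Fiat-Shamir transformation that XFS augments, instantiated with Schnorr (insecure toy parameters); the hashing step below is the step that a proof-of-work would additionally harden, though the paper's actual construction is not reproduced here:

```python
import hashlib, secrets

# Standard Fiat-Shamir on Schnorr: prove knowledge of x with y = g^x mod p.
p, g = 2**61 - 1, 3
x = secrets.randbelow(p - 1)      # witness
y = pow(g, x, p)                  # statement

def challenge(a):
    # The interactive verifier's random challenge is replaced by a hash
    # of the statement and the first message.
    h = hashlib.sha256(repr((y, a)).encode()).digest()
    return int.from_bytes(h, "big") % (p - 1)

def prove():
    r = secrets.randbelow(p - 1)
    a = pow(g, r, p)
    z = (r + challenge(a) * x) % (p - 1)
    return a, z

def verify(a, z):
    return pow(g, z, p) == a * pow(y, challenge(a), p) % p

assert verify(*prove())
```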
We provide strong evidence for the security of our transformation by proving its security in a relativized random oracle model. Specifically, we show diagonalization attacks on the standard Fiat–Shamir transformation that can be mapped to analogous attacks within this model, meaning they do not rely on a concrete instantiation of the random oracle. In contrast, we prove unconditionally that our XFS variant of the Fiat–Shamir transformation remains secure within this model. Consequently, any successful attack on XFS must deviate from known techniques and exploit aspects not captured by our model.
We hope that our transformation will help preserve the security of systems relying on the Fiat–Shamir transformation.
Computing Asymptotic Bounds for Small Roots in Coppersmith's Method via Sumset Theory
Coppersmith's method is a well-known and practical method for solving polynomial modular equations involved in some cryptosystems such as RSA. An important and tedious task in this method is computing the asymptotic bounds. In this work, we address the challenge of computing such asymptotic bounds by introducing sumset theory from additive combinatorics as a new analytical tool, which significantly streamlines manual calculations. More precisely, we develop the first provable algorithm for determining these asymptotic bounds, whereas recent methods based on simple Lagrange interpolation are heuristic.
Moreover, experiments show that our method is much more efficient than the previous method in practice. We also employ our method to improve the cryptanalytic results for the Commutative Isogeny Hidden Number Problem. Our approach may deepen the understanding of Coppersmith's method and inspire further security analysis methodologies.
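For readers unfamiliar with the imported tool, the sumset of two sets is defined as follows, with a small example of the kind of exponent-set growth that arises when tracking the monomials of shift polynomials in Coppersmith's method:

```latex
\[
  A + B \;=\; \{\, a + b \;:\; a \in A,\ b \in B \,\}.
\]
% Example with exponent vectors of monomials: for
% A = {(0,0), (1,0), (0,1)} (the exponents of 1, x, y),
%   A + A = {(0,0), (1,0), (0,1), (2,0), (1,1), (0,2)},
% i.e. the exponents of all pairwise products of those monomials.
```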
Tricycle: Private Transformer Inference with Tricyclic Encodings
The growing adoption of Large Language Models in privacy-sensitive domains necessitates secure inference mechanisms that preserve data confidentiality. Homomorphic encryption offers a promising pathway by enabling computation on encrypted inputs, yet existing approaches struggle to scale efficiently to full transformer models due to limitations in packing schemes, which must efficiently support a wide range of operations, including matrix multiplications, row-wise nonlinear operations, and self-attention. In this work, we present Tricycle, a framework for private transformer inference built on our novel packing scheme, called tricyclic encodings, which are designed to efficiently support these core operations. Tricyclic encodings are a generalization of bicyclic encodings, enabling privacy-preserving batch matrix multiplications with optimal multiplicative depth in order to facilitate parallelized multi-head self-attention. We optimize our matrix multiplications by incorporating Baby-Step Giant-Step optimizations to reduce ciphertext rotations and presenting new ciphertext-plaintext matrix multiplication techniques that relax prior limitations. A further contribution of our work is a lightweight and effective approach for stabilizing the softmax function via statistical max estimation. Our end-to-end implementation on a BERT-Tiny model shows that Tricycle achieves a \(1.5 \times\) to \(3 \times\) speedup over previous approaches, marking a step toward practical and scalable private LLM inference without sacrificing model fidelity.
SoK: Reassessing Side-Channel Vulnerabilities and Countermeasures in PQC Implementations
Post-Quantum Cryptography (PQC) algorithms should remain secure even in the presence of quantum computers. Although the security of such schemes is guaranteed at the algorithmic level, real-world implementations often suffer from other vulnerabilities like Side-Channel Attacks (SCA). This Systematization of Knowledge (SoK) paper investigates side-channel attacks targeting implementations of PQC algorithms. This work categorizes attacks from an adversarial perspective to identify the most vulnerable components of the algorithms' implementations and highlights unexplored parts in current implementations. In addition, it reviews and analyzes the efficiency and efficacy of existing countermeasures to SCA in current hardware implementations. This approach helps identify countermeasures that provide broader protection and highlights characteristics needed for future secure implementations. Our findings offer guidance in strengthening existing systems and developing more efficient defenses against side-channel attacks.
EWEMrl: A White-Box Secure Cipher with Longevity
We propose the first updatable white-box secure cipher, EWEMrl, and its natural extension EWEMxl, both achieving longevity against non-adaptive read-only malware. The notion of longevity, introduced by Koike et al., addresses continuous code leakage and is stronger than incompressibility. Yoroi claimed longevity but was broken by Isobe and Todo. Given the prevalence of continuous leakage, developing such ciphers is crucial in white-box cryptography. Precisely, we have the following.
• We first introduce EWEMr (Extended WEM against non-adaptive read-only adversaries), a generalization of WEM (White-box Even-Mansour). WEM is the first (and possibly only) white-box cipher based on EM, replacing its key addition layer with a secret Sbox. EWEMr achieves a high space-hardness bound, with a new generic proof strategy, but does not provide longevity. Instead, it serves as the base for EWEMrl.
• We also present EWEMx, which uses EWEMr as subroutines and is secure in the stronger adaptive model. While EWEMx does not achieve longevity, it is the base design for EWEMxl.
• We next propose EWEMrl, the first cipher to achieve longevity against non-adaptive read-only adversaries. Existing ciphers, such as SPNbox and SPACE, are not designed for longevity. We show that EWEMrl ensures, against non-adaptive read-only adversaries, (1) longevity, (2) high space-hardness in both known-space and chosen-space settings, and (3) security against hybrid code-lifting attacks.
• Finally, we introduce EWEMxl, a natural extension of EWEMrl with a structure similar to EWEMx. EWEMxl achieves (2) and (3) in the stronger adaptive model while maintaining (1) in the same non-adaptive and read-only setting.
In summary, EWEMrl and EWEMxl are the first ciphers providing longevity against non-adaptive read-only malware while ensuring security confidence in the black-box setting.
RingSG: Optimal Secure Vertex-Centric Computation for Collaborative Graph Processing
Collaborative graph processing refers to the joint analysis of inter-connected graphs held by multiple graph owners. To honor data privacy and support various graph processing algorithms, existing approaches employ secure multi-party computation (MPC) protocols to express the vertex-centric abstraction. Yet, due to certain computation-intensive cryptography constructions, state-of-the-art (SOTA) approaches are asymptotically suboptimal, imposing significant overheads in terms of computation and communication. In this paper, we present RingSG, the first system to attain optimal communication/computation complexity within the MPC-based vertex-centric abstraction for collaborative graph processing. This optimal complexity is attributed to Ring-ScatterGather, a novel computation paradigm that can avoid exceedingly expensive cryptography operations (e.g., oblivious sort), and simultaneously ensure the overall workload can be optimally decomposed into parallelizable and mutually exclusive MPC tasks. Within Ring-ScatterGather, RingSG improves the concrete runtime efficiency by incorporating 3-party secure computation via share conversion, and optimizing the most cost-heavy part using a novel oblivious group aggregation protocol. Finally, unlike prior approaches, we instantiate RingSG into two end-to-end applications to effectively obtain application-specific results from the protocol outputs in a privacy-preserving manner. We developed a prototype of RingSG and extensively evaluated it across various graph collaboration settings, including different graph sizes, numbers of parties, and average vertex degrees. The results show RingSG reduces the system running time of SOTA approaches by up to 15.34× and per-party communication by up to 10.36×. Notably, RingSG excels in processing sparse global graphs collectively held by more parties, consistent with our theoretical cost analysis.
RoK and Roll – Verifier-Efficient Random Projection for $\tilde{O}(\lambda)$-size Lattice Arguments
Succinct non-interactive arguments of knowledge (SNARKs) based on lattice assumptions offer a promising post-quantum alternative to pairing-based systems, but have until now suffered from inherently quadratic proof sizes in the security parameter. We introduce RoK and Roll, the first lattice-based SNARK that breaks the quadratic barrier, achieving communication complexity of $\tilde{O}(\lambda)$ together with a succinct verification time. The protocol significantly improves upon the state of the art of fully-succinct argument systems established by ``RoK, Paper, SISsors'' (RPS) [ASIACRYPT'24] and hinges on two key innovations, presented as reductions of knowledge (RoKs):
- Structured random projections: We introduce a new technique for structured random projections that allows us to reduce the witness dimension while approximately preserving its $\ell_2$ norm and maintaining the desired tensor structure (a generic, unstructured analogue of norm-preserving projection is sketched after this list). In order to maintain succinct communication and verification, the projected image is further committed and adjoined to the original relation. This procedure is recursively repeated until the dimension of the intermediate witness becomes $\mathsf{poly}(\lambda)$, i.e., independent of the original witness length.
- Unstructured random projection: When the witness is sufficiently small, we let the unstructured projection (over coefficients $\mathbb{Z}_q$) be sent in the clear, as in LaBRADOR [CRYPTO'23]. We observe, however, that the strategy from prior works of immediately lifting the projection claim to $\mathcal{R}_q$, and into our relation, would impose a quadratic communication cost. Instead, we gradually batch-and-lift the projection along the tower of intermediate ring extensions. This reduces the communication cost to $\tilde{O}(\lambda)$ while maintaining a succinct verification time.
These two techniques, combined with existing RoKs from RPS, yield a succinct argument system with communication complexity $\tilde{O}(\lambda)$ and succinct verification for structured linear relations.
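For intuition about the first technique, the following minimal sketch (ours, with arbitrary toy dimensions, not the paper's structured construction) shows how a random sign projection compresses a long, low-norm witness while approximately preserving its $\ell_2$ norm:

```python
import numpy as np

# Toy Johnson-Lindenstrauss-style projection; dimensions and entry ranges
# are illustrative assumptions, not parameters from the paper.
rng = np.random.default_rng(0)
n, m = 1 << 14, 256                      # original and projected dimensions
w = rng.integers(-3, 4, size=n).astype(float)   # "short" (low-norm) witness
P = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # random sign matrix
print(np.linalg.norm(w), np.linalg.norm(P @ w))  # approximately equal
```

The structured variant in the paper additionally keeps the projection compatible with the tensor shape of the committed witness, which a plain sign matrix like the one above does not.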
Efficient SPA Countermeasures using Redundant Number Representation with Application to ML-KEM
Simple power analysis (SPA) attacks and their extensions, profiled and soft-analytical side-channel attacks (SASCA), represent a significant threat to the security of cryptographic devices and remain among the most powerful classes of passive side-channel attacks. In this work, we analyze how numeric representations of secrets can affect the amount of exploitable information leakage available to the adversary. We present an analysis of how mutual information changes as a result of the integer ring size relative to the machine word-size. Furthermore, we study the Redundant Number Representation (RNR) countermeasure and show that its application to ML-KEM can resist the most powerful SASCA attacks and provides a low-cost alternative to shuffling. We evaluate the performance of RNR-ML-KEM with both simulated and practical SASCA experiments on the ARM Cortex-M4 based on a worst-case attack methodology. We show that RNR-ML-KEM sufficiently renders these attacks ineffective. Finally, we evaluate the performance of the RNR-ML-KEM NTT and INTT and show that SPA security can be achieved with a 62.8% overhead for the NTT and 0% overhead for the INTT relative to the ARM Cortex-M4 reference implementation used.
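To illustrate the RNR idea concretely, here is a minimal plaintext sketch (ours, not the paper's implementation): the same residue modulo $q$ admits many word-level encodings, obtained by adding a random multiple of $q$, so the word the device actually processes varies from execution to execution.

```python
import random

Q = 3329        # the ML-KEM modulus
WORD = 1 << 16  # a 16-bit machine word, as on the Cortex-M4 (illustrative)

def rnr_encode(x):
    # Redundant Number Representation: pick one of the WORD // Q encodings
    # of x mod Q at random, decorrelating the word-level Hamming weight
    # (what SPA/SASCA observe) from the secret residue.
    return x + random.randrange(WORD // Q) * Q

def rnr_decode(y):
    return y % Q

x = 1234
assert all(rnr_decode(rnr_encode(x)) == x for _ in range(100))
```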
A search to distinguish reduction for the isomorphism problem on direct sum lattices
At Eurocrypt 2003, Szydlo presented a search to distinguish reduction for the Lattice Isomorphism Problem (LIP) on the integer lattice $\mathbb{Z}^n$. Here the search problem asks to find an isometry between $\mathbb{Z}^n$ and an isomorphic lattice, while the distinguish variant asks to distinguish between a list of auxiliary lattices related to $\mathbb{Z}^n$.
In this work we generalize Szydlo's search to distinguish reduction in two ways. Firstly, we generalize the reduction to any lattice isomorphic to $\Gamma^n$, where $\Gamma$ is a fixed base lattice. Secondly, we allow $\Gamma$ to be a module lattice over any number field. Assuming the base lattice $\Gamma$ and the number field $K$ are fixed, our reduction is polynomial in $n$.
As a special case we consider the module lattice $\mathcal{O}_K^2$ used in the module-LIP based signature scheme HAWK, and we show that one can solve the search problem, leading to a full key recovery, with less than $2d^2$ distinguishing calls on two lattices each, where $d$ is the degree of the power-of-two cyclotomic number field and $\mathcal{O}_K$ its ring of integers.
Brief Comments on Rijndael-256 and the Standard RISC-V Cryptography Extensions
We evaluate the implementation aspects of Rijndael-256 using the ratified RISC-V Vector Cryptography extension Zvkn. A positive finding is that Rijndael-256 can be implemented in constant time with the existing RISC-V ISA as the critical AES and fixed crossbar permutation instructions are in the DIEL (data-independent execution latency) set. Furthermore, simple tricks can be used to expand the functionality of key expansion instructions to cover the additional round constants required. However, due to the required additional byte shuffle in each round, Rijndael-256 will be significantly slower than AES-256 in terms of throughput. Without additional ISA modifications, the instruction count will be increased by the required switching of the EEW (``effective element width'') parameter in each round between 8 bits (byte shuffle) and 32 bits (AES round instructions). Instruction counts for 1-kilobyte encryption and decryption with Rijndael-256 are factor $2.66\times$ higher than with AES-256. The precise amount of throughput slowdown depends on the microarchitectural details of a particular RISC-V ISA hardware instantiation, but it may be substantial with some high-performance vector AES architectures due to the breakdown of AES pipelining and the relative slowness of crossbar permutation instructions.
Revisiting Module Lattice-based Homomorphic Encryption and Application to Secure-MPC
Homomorphic encryption (HE) schemes have gained significant popularity in modern privacy-preserving applications across various domains. While research on HE constructions based on learning with errors (LWE) and ring-LWE has received major attention from both cryptographers and software-hardware designers alike, their module-LWE-based counterpart has remained comparatively under-explored in the literature. A recent work provides a module-LWE-based instantiation (MLWE-HE) of the Cheon-Kim-Kim-Song (CKKS) scheme and showcases several of its advantages such as parameter flexibility and improved parallelism. However, a primary limitation of this construction is the quadratic growth in the size of the relinearization keys. Our contribution is two-pronged: first, we present a new relinearization key-generation technique that addresses the issue of quadratic key size expansion by reducing it to linear growth. Second, we extend the application of MLWE-HE in a multi-group homomorphic encryption (MGHE) framework, thereby generalizing the favorable properties of the single-keyed HE to a multi-keyed setting as well as investigating additional flexibility attributes of the MGHE framework.
Cymric: Short-tailed but Mighty
Authenticated encryption (AE) is a fundamental tool in today's secure communication. Numerous designs have been proposed, including well-known standards such as GCM. While their performance for long inputs is excellent, their performance for short inputs is often problematic due to high computational overhead, revealing a gap with the real needs of IoT-like protocols, where packets are often very short. Dedicated short-input AEs are very scarce: the classical Encode-then-Encipher (EtE) (Bellare and Rogaway, Asiacrypt 2000) and Manx (Adomnicăi et al., CT-RSA 2023), which use up to two block cipher calls. They have superior performance for (very) short inputs; however, their security is capped at $n/2$ bits, where $n$ is the block size of the underlying block cipher.
This paper proposes a new family of short-input AEs, dubbed Cymric, which ensures beyond-birthday-bound (BBB) security. It supports a wider range of inputs than EtE and Manx with the help of one additional block cipher call (thus three calls in total). In terms of the number of block cipher calls, Cymric matches the known minimum for BBB-secure AE, and we prove that this is indeed minimal by presenting an impossibility result for BBB-secure AE with two calls. Finally, we present a comprehensive benchmark on microcontrollers demonstrating Cymric's performance advantage over existing schemes.
Ring-LWR based Commitments and ZK-PoKs with Application to Verifiable Quantum-Safe Searchable Symmetric Encryption
Prior research on ensuring trust in delegated computation through lattice-based zero-knowledge proofs mostly relies on the Learning-With-Errors (LWE) assumption. In this work, we propose a zero-knowledge proof of knowledge under the Ring Learning with Rounding (RLWR) assumption for an interesting and useful class of statements: linear relations on polynomials. We begin by proposing, to the best of our knowledge, the first efficient commitment scheme in the literature based on the hardness of the RLWR assumption. We establish two properties of RLWR that aid in the construction of our commitments: (i) closure under addition with double rounding, and (ii) closure under multiplication with a short polynomial. Building upon our RLWR commitment scheme, we then design an RLWR-based $\Sigma_2$ protocol for proving knowledge of a single committed message under linear relations with public polynomials.
As a use case of our proposed $\Sigma_2$ protocol, we showcase a construction of a quantum-safe Searchable Symmetric Encryption (SSE) scheme by combining a prior LWR-based SSE scheme (EuroS&P 2023) with our $\Sigma_2$ protocol. Concretely, using our $\Sigma_2$ protocol for linear relations, we prove the correctness of an encrypted search result in a zero-knowledge manner. We implement our verifiable SSE framework and show that the overhead of the extra verification round is negligible ($0.0023$ seconds) and retains the asymptotic query-execution time complexity of the original SSE scheme.
Our work establishes results on zero-knowledge proof systems that can be of independent interest. By shifting the setting from RLWE to RLWR, we gain (i) significant efficiency improvements in terms of communication complexity by $O(M)$ (since some prior works on RLWE require rejection sampling by a factor of $M$), as well as (ii) a very short proof size ($8.4$ KB) and tighter parameters (since RLWR does not explicitly manipulate error polynomials as RLWE does).
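For intuition about the rounding arithmetic such commitments must control, the following toy check (our illustration with made-up parameters, not the paper's exact closure lemma) verifies that LWR-style rounding from $\mathbb{Z}_q$ to $\mathbb{Z}_p$ is additively homomorphic up to a $\pm 1$ error:

```python
import random
from fractions import Fraction

q, p = 12289, 1024  # toy parameters, assumed for illustration only

def rnd(x):
    # LWR-style rounding from Z_q down to Z_p: x -> round((p/q) * x) mod p
    return round(Fraction(p * (x % q), q)) % p

# Closure under addition holds only up to a small rounding error; bounding
# and absorbing such errors ("double rounding") is what the commitment
# scheme's correctness and security arguments have to handle.
for _ in range(1000):
    a, b = random.randrange(q), random.randrange(q)
    d = (rnd(a + b) - rnd(a) - rnd(b)) % p
    assert d in (0, 1, p - 1)  # difference is -1, 0, or +1 mod p
```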
Highly Scalable Searchable Symmetric Encryption for Boolean Queries from NTRU Lattice Trapdoors
Searchable symmetric encryption (SSE) enables query execution directly over symmetrically encrypted databases. To support realistic query executions over encrypted document collections, one needs SSE schemes capable of supporting both conjunctive and disjunctive keyword queries. Unfortunately, existing solutions are either practically inefficient (incur large storage overheads and/or high query processing latency) or are quantum-unsafe.
In this paper, we present the first practically efficient SSE scheme with fast conjunctive and disjunctive keyword searches, compact storage, and security based on the (plausible) quantum-hardness of well-studied lattice-based assumptions. We present NTRU-OQXT – a highly compact NTRU lattice-based conjunctive SSE scheme that outperforms all existing conjunctive SSE schemes in terms of search latency. We then present an extension of NTRU-OQXT that additionally supports disjunctive queries, which we call NTRU-TWINSSE. Technically, both schemes rely on a novel oblivious search protocol based on highly optimized Fast-Fourier trapdoor sampling algorithms over NTRU lattices. While such techniques have been used to design other cryptographic primitives (such as digital signatures), they have not been applied before in the context of SSE.
We present prototype implementations of both schemes, and experimentally validate their practical performance over a large real-world dataset. Our experiments demonstrate that NTRU-OQXT achieves 2× faster conjunctive keyword searches as compared to all other conjunctive SSE schemes (including the best quantum-unsafe conjunctive SSE schemes), and substantially outperforms many of these schemes in terms of storage requirements. These efficiency benefits also translate to NTRU-TWINSSE, which is practically competitive with the best quantum-unsafe SSE schemes capable of supporting both conjunctive and disjunctive queries.
Hobbit: Space-Efficient zkSNARK with Optimal Prover Time
Zero-knowledge succinct non-interactive arguments (zkSNARKs) are notorious for their large prover space requirements, which all but prohibit their use for very large instances. Space-efficient zkSNARKs aim to address this by limiting the prover's space usage without critical sacrifices to its runtime. In this work, we introduce Hobbit, the only existing space-efficient zkSNARK that achieves optimal prover time $O(|C|)$ for an arithmetic circuit $C$. At the same time, Hobbit is the first transparent and plausibly post-quantum secure construction of its kind. Moreover, our experimental evaluation shows that Hobbit outperforms all prior general-purpose space-efficient zkSNARKs in the literature across four different applications (arbitrary arithmetic circuits, inference of pruned Multi-Layer Perceptrons, batch AES128 evaluation, and select-and-aggregate SQL queries) by 8$\times$-56$\times$ in terms of prover time while requiring up to 23$\times$ less total space.
At a technical level, we introduce two new building blocks that may be of independent interest: (i) the first sumcheck protocol for products of polynomials with optimal prover time in the streaming setting, and (ii) a novel multi-linear plausibly post-quantum polynomial commitment that outperforms all prior works in prover time (and can be tuned to work in a space-efficient manner). We build Hobbit by combining the above with a modified version of HyperPlonk, providing an explicit routine for streaming access to the circuit evaluation.
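To make the first building block concrete, here is a minimal in-memory sumcheck for a product of two multilinear polynomials over the Boolean hypercube (a sketch under our own conventions; the paper's point is achieving linear prover time with streaming, low-space access, which this toy version does not model -- it keeps full tables in memory for brevity):

```python
import random

P = 2**61 - 1        # prime modulus for the sketch
INV2 = (P + 1) // 2  # inverse of 2 mod P

def sumcheck_product(f, g):
    """Prove/verify claim = sum over x in {0,1}^n of f(x) * g(x).

    f, g: evaluation tables (length 2^n) of two multilinear polynomials.
    Total prover work is O(2^n), i.e. linear in the table size.
    """
    claim = sum(a * b for a, b in zip(f, g)) % P
    while len(f) > 1:
        h = len(f) // 2
        # Round polynomial s(t) has degree 2; send evaluations at t = 0,1,2.
        s = []
        for t in range(3):
            tot = 0
            for i in range(h):
                ft = (f[i] + t * (f[h + i] - f[i])) % P
                gt = (g[i] + t * (g[h + i] - g[i])) % P
                tot = (tot + ft * gt) % P
            s.append(tot)
        assert (s[0] + s[1]) % P == claim  # verifier's consistency check
        r = random.randrange(P)            # verifier's random challenge
        # New claim: s(r), interpolated from s(0), s(1), s(2).
        claim = (s[0] * (r - 1) * (r - 2) * INV2
                 - s[1] * r * (r - 2)
                 + s[2] * r * (r - 1) * INV2) % P
        # Fold the first variable of both tables at r.
        f = [(f[i] + r * (f[h + i] - f[i])) % P for i in range(h)]
        g = [(g[i] + r * (g[h + i] - g[i])) % P for i in range(h)]
    assert f[0] * g[0] % P == claim  # final check at the random point

sumcheck_product([random.randrange(P) for _ in range(8)],
                 [random.randrange(P) for _ in range(8)])
```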
Tightly Secure Public-Key Encryption with Equality Test Supporting Flexible Authorization in the Standard Model
We introduce a novel Public Key Encryption with Equality Test supporting Flexible Authorization scheme offering User-Level, Ciphertext-Level, and User-Specific-Ciphertext-Level authorizations. Notably, our construction achieves security under the Decisional Diffie-Hellman (DDH) assumption with a tight reduction, whereas existing works are either not tightly secure or rely heavily on random oracles. By relying solely on the standard DDH assumption, our scheme admits a practical implementation without specialized cryptographic structures.
All Proof of Work But No Proof of Play
Speedrunning is a competition that emerged from communities of early video games such as Doom (1993). Speedrunners try to finish a game in minimal time. Provably verifying the authenticity of submitted speedruns is an open problem. Traditionally, best-effort speedrun verification is conducted by on-site human observers, forensic audio analysis, or a rigorous mathematical analysis of the game mechanics. Such methods are tedious, fallible, and, perhaps worst of all, not cryptographic. Motivated by naivety and the Dunning-Kruger effect, we attempt to build a system that cryptographically proves the authenticity of speedruns. This paper describes our attempted solutions and ways to circumvent them. Through a narration of our failures, we attempt to demonstrate the difficulty of authenticating live and interactive human input in untrusted environments, as well as the limits of signature schemes, game integrity, and provable play.
Revisiting SIOT protocol with new security assumptions
Oblivious Transfer (OT) is one of the most important building blocks in cryptography and is useful for constructing secure protocols. The advent of quantum computing has boosted research and development of cryptographic protocols resistant to quantum attacks. In 2018, the SIOT (Supersingular Isogeny Oblivious Transfer) protocol was presented as the first post-quantum OT scheme based on supersingular elliptic curve isogenies. The original scheme combined an OT construction with the cryptographic primitives of the SIDH (Supersingular Isogeny Diffie-Hellman) protocol, in its simplest configuration secure against semi-honest adversaries. Subsequent scrutiny revealed the need for new security proofs; almost in parallel, new and efficient cryptanalytic attacks on SIDH emerged, which compromised the security structure of SIOT. The revised security proofs relate certain parameters of SIOT's OT functionality to the security assumptions of computational isogeny problems. We then analyse the security countermeasures of the M-SIDH (Masked Supersingular Isogeny Diffie-Hellman) protocol and incorporate them into SIOT. The result is an OT protocol based on elliptic curve isogenies with resistance to quantum attacks.
May the Force $\textit{not}$ Be with you: Brute-Force Resistant Biometric Authentication and Key Reconstruction
The use of biometric-based security protocols is on the steep rise. As biometrics become more popular, we witness more attacks. For example, recent BrutePrint/InfinityGauntlet attacks showed how to brute-force fingerprints stored on an Android phone in about 40 minutes. The attacks are possible because biometrics, like passwords, do not have high entropy. But unlike passwords, brute-force attacks are much more damaging for biometrics, because one cannot easily change biometrics in case of compromise. In this work, we propose a novel provably secure Brute-Force Resistant Biometrics (BFRB) protocol for biometric-based authentication and key reconstruction that protects against brute-force attacks even when the server storing biometric-related data is compromised. Our protocol utilizes a verifiable partially oblivious pseudorandom function, an authenticated encryption scheme, a pseudorandom function, and a hash function. We formally define security for a BFRB protocol and reduce the security of our protocol to the security of the building blocks. We implement the protocol and study its performance for the ND-0405 iris dataset.
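The flow of such a protocol can be sketched in a few lines. The toy code below is purely our illustrative stand-in: HMAC plays the role of the server-side PRF, so the blinding that makes the real evaluation partially oblivious and verifiable is not modeled, and the XOR-plus-MAC envelope stands in for a proper authenticated encryption scheme.

```python
import hashlib, hmac, os

SERVER_OPRF_KEY = os.urandom(32)  # never leaves the server

def server_prf(user_id: bytes, biometric_sketch: bytes) -> bytes:
    # Stand-in for the verifiable partially oblivious PRF: in the real
    # protocol the server sees only a blinded biometric value and can
    # rate-limit evaluations, which is what defeats brute force.
    return hmac.new(SERVER_OPRF_KEY, user_id + biometric_sketch,
                    hashlib.sha256).digest()

def enroll(user_id, sketch, secret_key):
    k = server_prf(user_id, sketch)
    pad = hashlib.sha256(k + b"enc").digest()
    ct = bytes(a ^ b for a, b in zip(secret_key, pad))
    tag = hmac.new(k, ct, hashlib.sha256).digest()  # AE stand-in
    return ct, tag

def reconstruct(user_id, sketch, ct, tag):
    k = server_prf(user_id, sketch)
    if not hmac.compare_digest(tag, hmac.new(k, ct, hashlib.sha256).digest()):
        return None  # wrong biometric reading: reconstruction fails
    pad = hashlib.sha256(k + b"enc").digest()
    return bytes(a ^ b for a, b in zip(ct, pad))

ct, tag = enroll(b"alice", b"iris-template", os.urandom(32))
assert reconstruct(b"alice", b"iris-template", ct, tag) is not None
assert reconstruct(b"alice", b"wrong-reading", ct, tag) is None
```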
Arithmetic PCA for Encrypted Data
Reducing the size of large dimensional data is a critical task in machine learning (ML) that often involves using principal component analysis (PCA). In privacy-preserving ML, data confidentiality is of utmost importance, and reducing data size is a crucial way to cut overall costs.
This work focuses on minimizing the number of normalization steps in the PCA algorithm, a costly procedure in encrypted PCA. By modifying Krasulina's algorithm, we eliminate all non-polynomial operations except for a single delayed normalization at the end.
Our PCA algorithm demonstrated similar performance to conventional PCA algorithms in face recognition applications. We also implemented it using the CKKS (Cheon-Kim-Kim-Song) homomorphic encryption scheme and obtained the first 6 principal components of a 128$\times$128 real matrix in 7.85 minutes using 8 threads.
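For intuition, here is a plaintext sketch of the classical Krasulina iteration (our illustration with synthetic data; the modification described above reorganizes the update so that, under encryption, the division marked below disappears and only one final normalization remains):

```python
import numpy as np

# Synthetic data with a dominant direction along the first coordinate
# (all names and parameters here are illustrative assumptions).
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 16))
X[:, 0] *= 5.0
X -= X.mean(axis=0)

w = rng.normal(size=16)
eta = 0.01
for x in X:
    xw = np.dot(x, w)
    # The division by <w, w> below is the non-polynomial step that is
    # expensive to evaluate homomorphically; it is what the modified
    # algorithm avoids during the encrypted iterations.
    w = w + eta * (xw * x - (xw ** 2 / np.dot(w, w)) * w)
w /= np.linalg.norm(w)  # normalization, delayed to the very end

# Sanity check: w should align with the true top principal component.
_, V = np.linalg.eigh(X.T @ X)
print(abs(np.dot(w, V[:, -1])))  # close to 1 if the iteration converged
```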
End-to-End Encrypted Git Services
Git services such as GitHub have been widely used to manage projects and enable collaboration among multiple entities. Just as in messaging and cloud storage, where end-to-end security has been gaining increased attention, such a level of security is also demanded for Git services. Content in the repositories (and the data/code supply chain facilitated by Git services) can be highly valuable, whereas the threat of system breaches has become routine nowadays. However, existing studies of Git security to date (mostly open-source projects) suffer in two ways: they provide only very weak security, and they impose a large overhead.
In this paper, we initiate the needed study of efficient end-to-end encrypted Git services. Specifically, we formally define the syntax and critical security properties, and then propose two constructions that provably meet those properties. Moreover, our constructions have the important property of platform-compatibility: they are compatible with current Git servers and preserve all basic Git operations, and thus can be directly tested and deployed on top of existing platforms. Furthermore, the overhead we achieve is proportional only to the actual difference caused by each edit, instead of the whole file (or even the whole repository) as in existing works. We implemented both constructions and tested them directly on several public GitHub repositories. Our evaluations show (1) the effectiveness of platform-compatibility, and (2) the significant efficiency improvements we achieve (while provably providing much stronger security than prior ad-hoc treatments).
Stealth and Beyond: Attribute-Driven Accountability in Bitcoin Transactions
Bitcoin enables decentralized, pseudonymous transactions, but balancing privacy with accountability remains a challenge. This paper introduces a novel dual accountability mechanism that enforces both sender and recipient compliance in Bitcoin transactions. Senders are restricted to spending Unspent Transaction Outputs (UTXOs) that meet specific criteria, while recipients must satisfy legal and ethical requirements before receiving funds. We enhance stealth addresses by integrating compliance attributes, preserving privacy while ensuring policy adherence. Our solution introduces a new cryptographic primitive, Identity-Based Matchmaking Signatures (IB-MSS), which supports streamlined auditing. Our approach is fully compatible with existing Bitcoin infrastructure and does not require changes to the core protocol, preserving both privacy and decentralization while enabling transaction auditing and compliance.
Copy-Protection from UPO, Revisited
Quantum copy-protection is a foundational notion in quantum cryptography that leverages the governing principles of quantum mechanics to tackle the problem of software anti-piracy. Despite progress in recent years, precisely characterizing the class of functionalities that can be copy-protected remains an open problem.
Two recent works, by [Coladangelo and Gunn, STOC 2024] and [Ananth and Behera, CRYPTO 2024], showed that puncturable functionalities can be copy-protected. Both works have significant caveats with regard to the underlying cryptographic assumptions and additionally restrict the output length of the functionalities to be copy-protected. In this work, we make progress towards simultaneously addressing both caveats. We show the following:
- Revisiting Unclonable Puncturable Obfuscation (UPO): We revisit the notion of UPO introduced by [Ananth and Behera, CRYPTO 2024]. We present a new approach to construct UPO and a variant of UPO, called independent-secure UPO. Unlike UPO, we show how to base the latter notion on well-studied assumptions.
- Copy-Protection from Independent-secure UPO: Assuming independent-secure UPO, we show that any $m$-bit puncturable functionality, for $m \geq 2$, can be copy-protected.
- Copy-Protection from UPO: Assuming UPO, we show that any 1-bit puncturable functionality can be copy-protected. The security of copy-protection holds against identical challenge distributions.
New Upper and Lower Bounds for Perfectly Secure MPC
We consider perfectly secure MPC for $n$ players and $t$ malicious corruptions. We ask whether requiring only security with abort (rather than guaranteed output delivery, GOD) can help to achieve protocols with better resilience, communication complexity, or round complexity. We show that for resilience and communication complexity, abort security does not help: one still needs $3t < n$ for a synchronous network and $4t < n$ in the asynchronous case, and, in both cases, a communication overhead of $O(n)$ bits per gate is necessary.
Given that $O(n)$ overhead is inevitable, one can ask whether this overhead can be pushed to the preprocessing phase so that the online phase runs with $O(1)$ overhead. This was recently achieved in the synchronous setting, in fact with the GOD guarantee. We show the same result in the asynchronous setting. This was previously open, since the main standard approach to getting constant overhead in a synchronous online phase fails in the asynchronous setting. In particular, this shows that we do not need to settle for abort security to get an asynchronous perfectly secure protocol with $O(n)$ preprocessing overhead and $O(1)$ online overhead.
Lastly, in the synchronous setting, we show that perfectly secure MPC with abort requires only 2 rounds, in contrast to protocols with GOD, which require 4 rounds.
Generic Construction of Threshold Ring Signatures and Lattice-based Instantiations
A t-out-of-n threshold ring signature allows $t$ parties to jointly sign a message on behalf of $n$ parties without revealing the identities of the signers. In this paper, we introduce a new generic construction for threshold ring signatures, called GCTRS, which can be built on top of a selection of identification schemes, commitment schemes, and a new primitive called a t-out-of-n proof protocol, which is a special type of zero-knowledge proof. In general, our design enables a group of $t$ signers to first generate an aggregated signature by interacting with each other; they are then able to compute a t-out-of-n proof to convince the verifier that the aggregated signature is indeed produced by $t$ individuals among a particular set. The final signature is succinct, as it contains only one aggregated signature and one proof. We define all the properties required of the building blocks to capture the security of GCTRS and provide a detailed security proof. Furthermore, we propose two lattice-based instantiations of GCTRS, named LTRS and CTRS, respectively. Notably, the CTRS scheme is the first to achieve a signature size logarithmic in the ring size. Additionally, during the instantiation process, we construct two t-out-of-n proof protocols, which may be of independent interest.
Breaking The Authenticated Encryption scheme HiAE
HiAE is the fastest AEAD solution on ARM chips to date, utilizing AES round functions while also setting a new performance benchmark on the latest x86 processors. In this paper, we employ algebraic techniques to investigate the security of HiAE. Our findings reveal that HiAE is vulnerable. Firstly, we employ the meet-in-the-middle technique and guess-and-determine technique to recover the state and derive a key-related equation resulting from two layers of AES round functions. Secondly, by adopting an algebraic approach to study the properties of the round function, we decompose the equation into byte-level equations for divide-and-conquer. Finally, we utilize the guess-and-determine technique to recover the key. Collectively, these techniques enable us to present the first full key-recovery attack on HiAE. Our attack achieves a data complexity of $2^{130}$ and a time complexity of approximately $2^{209}$, leveraging both encryption and decryption oracles with a success probability of 1. In a single-key and nonce-respecting scenario, the attack fully recovers the 256-bit key, breaking the claimed 256-bit security against key-recovery attacks.
t-Probing (In-)Security - Pitfalls on Noise Assumptions
The ongoing transition to post-quantum cryptography has led to a surge of research in side-channel countermeasures tailored to these schemes. A prominent method to prove security in the context of side-channel analysis is the utilization of the well-established t-probing model. However, recent studies by Hermelink et al. at CCS 2024 demonstrate a simple and practical attack on a provably secure implementation of the Fujisaki-Okamoto transform that raises concerns regarding the practical security of t-probing secure schemes.
In this paper, we present an unsupervised single-trace side-channel attack on a tenth-order masked implementation of fixed-weight polynomial sampling, which has also been proven secure in the t-probing model. Both attacks reveal a mismatch between the correct, well-understood theory of the t-probing model and its practical application: the security proofs are valid, yet the attacks still succeed at high noise levels. Therefore, we take a closer look at the underlying causes and the assumptions that are made when transferring t-probing security to practice. In particular, we investigate the amount of noise required for this transfer. We find that, depending on the design decisions made, it can be very high and difficult to achieve.
Consequently, we examine the factors that impact the required amount of noise and that should be considered for practically secure implementations. In particular, non-uniformly distributed shares, a setting increasingly encountered in post-quantum cryptographic algorithms, can increase the noise requirement and thus reduce the security level of the masking scheme. Our analysis then allows us to provide practical guidelines for implementation designers, thereby facilitating the development of practically secure designs.
Securely Computing One-Sided Matching Markets
Top trading cycles (TTC) is a famous algorithm for trading indivisible goods between a set of agents such that all agents are as happy as possible about the outcome. In this paper, we present a protocol for executing TTC in a privacy-preserving way. To the best of our knowledge, it is the first of its kind. As a technical contribution of independent interest, we suggest a new algorithm for determining all nodes in a functional graph that lie on a cycle. The algorithm is particularly well suited for secure implementation in that it requires no branching and no random memory access. Finally, we report on a prototype implementation of the protocol based on somewhat homomorphic encryption.
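One way to see why cycle detection in a functional graph can be made data-independent: the nodes on cycles are exactly the image of the $n$-fold composition $f^n$, and $f^n$ can be computed with $O(\log n)$ compositions by repeated squaring, a fixed schedule with no data-dependent branches. The sketch below is our plaintext illustration of this folklore idea, not the paper's algorithm; the gather `g[g]` would be realized obliviously inside a secure protocol.

```python
import numpy as np

def cycle_nodes(f):
    # f is an array with f[i] = successor of node i (a functional graph).
    # After n steps every walk has entered a cycle, and f^n restricted to
    # cycle nodes is a permutation, so image(f^n) = nodes on cycles.
    n = len(f)
    g = f.copy()
    steps = 1
    while steps < n:   # maintain g = f^steps; square until steps >= n
        g = g[g]       # compose g with itself, as a single gather
        steps *= 2
    return np.unique(g)

f = np.array([1, 2, 0, 2, 3])   # cycle 0 -> 1 -> 2 -> 0, tail 4 -> 3 -> 2
print(cycle_nodes(f))           # [0 1 2]
```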
BitBatSPIR: Efficient Batch Symmetric Private Information Retrieval from PSI
Private Information Retrieval (PIR) allows a client to retrieve an entry from a database held by a server without leaking which entry is being requested. Symmetric PIR (SPIR) is a stronger variant of PIR with database privacy so that the client knows nothing about the database other than the retrieved entry.
This work studies SPIR in the batch setting (BatchSPIR), where the client wants to retrieve multiple entries. In particular, we focus on the case of bit entries, which has important real-world applications. We set up the connection between bit-entry information retrieval and set operations, and propose a black-box construction of BatchSPIR from Private Set Intersection (PSI). By applying an efficient PSI protocol with asymmetric set sizes, we obtain our BatchSPIR protocol, named $\mathsf{BitBatSPIR}$. We also introduce several optimizations for the underlying PSI; these improve the efficiency of our concrete BatchSPIR construction as well as the PSI protocol itself.
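The reduction behind this connection fits in a few lines; the snippet below is our plaintext illustration of the ideal functionality, with all cryptography stripped away:

```python
# A bit database is identified with the set of indices that hold a 1.
db = [1, 0, 1, 1, 0, 0, 1, 0]
S = {i for i, bit in enumerate(db) if bit}   # server's (large) set
Q = {1, 2, 5}                                # client's batch of indices

# An ideal PSI with asymmetric set sizes reveals exactly Q & S to the
# client: membership of a queried index in the intersection IS the
# database bit, and nothing else about db leaks (symmetric privacy).
answers = {i: int(i in (Q & S)) for i in Q}
assert answers == {1: 0, 2: 1, 5: 0}
```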
We implement $\mathsf{BitBatSPIR}$ and compare the performance with the state-of-the-art PIR protocol in the batch setting. Our experimental results show that $\mathsf{BitBatSPIR}$ not only achieves a stronger security guarantee (symmetric privacy) but also has a better performance for large databases, especially in the Wide Area Network (WAN) setting.
Extending Groth16 for Disjunctive Statements
The two most common ways to design non-interactive zero-knowledge (NIZK) proofs are based on Sigma ($\Sigma$)-protocols (an efficient way to prove algebraic statements) and zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs, an efficient way to prove arithmetic statements). However, in cryptocurrency applications such as privacy-preserving credentials, privacy-preserving audits, and blockchain-based voting systems, zk-SNARKs for general statements are usually implemented together with encryption, commitment, or other algebraic cryptographic schemes. Moreover, zk-SNARKs for many different arithmetic statements may also need to be implemented together. A typical solution is to extend the zk-SNARK circuit to include the code for the algebraic part. However, complex cryptographic operations in the algebraic algorithms significantly increase the circuit size, which leads to impractically large proving time and CRS size. Thus, we need a sufficiently flexible proof system for composite statements that include both algebraic and arithmetic components. Unfortunately, while the conjunction of zk-SNARKs is relatively natural and numerous effective solutions are available (e.g., by utilizing the commit-and-prove technique), the disjunction of zk-SNARKs is rarely discussed in detail.
In this paper, we focus on disjunctive statements for Groth16, and we propose a Groth16 variant---CompGroth16---which provides a framework for proving disjunctive statements that consist of a mix of algebraic and arithmetic components. Specifically, we can directly combine CompGroth16 with a $\Sigma$-protocol, or even CompGroth16 with CompGroth16, just like the logical composition of $\Sigma$-protocols. From this we gain several desirable properties, such as broader expressiveness, better prover efficiency, and a shorter CRS. In addition, for the combination of CompGroth16 and a $\Sigma$-protocol, we present two representative application scenarios to demonstrate the practicality of our construction.
HypSCA: A Hyperbolic Embedding Method for Enhanced Side-channel Attack
Deep learning-based side-channel attack (DLSCA) has become the dominant paradigm for extracting sensitive information from hardware implementations due to its ability to learn discriminative features directly from raw side-channel traces. A common design choice in DLSCA involves embedding traces in Euclidean space, where the underlying geometry supports conventional objectives such as classification or contrastive learning. However, Euclidean space is fundamentally limited in capturing the multi-level hierarchical structure of side-channel traces, which often exhibit both coarse-grained clustering patterns (e.g., Hamming weight similarities) and fine-grained distinctions (e.g., instruction-level variations). These limitations adversely affect the discriminability and generalization of learned representations, particularly across diverse datasets and leakage models. In this work, we propose HypSCA, a dual-space representation learning method that embeds traces in hyperbolic space to exploit its natural ability to model hierarchical relationships through exponential volume growth. In contrast to existing approaches, HypSCA jointly combines hyperbolic structure modeling with local discriminative learning in Euclidean space, enabling the preservation of global hierarchies while enhancing fine-grained feature separation. Extensive experiments on multiple public datasets demonstrate that HypSCA achieves up to 51.6% improvement in attack performance over state-of-the-art DLSCA methods, consistently enhancing generalization across diverse datasets and leakage models.
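As background for the geometry involved, the distance function of the Poincaré ball model is shown below (a standard formula, given as our illustration; the paper's training objective and dual-space coupling are not reproduced here). Balls in this metric have volume growing exponentially with radius, which is what lets hierarchies, such as coarse Hamming-weight clusters branching into fine-grained instruction-level patterns, embed with low distortion.

```python
import numpy as np

def poincare_dist(u, v):
    # Geodesic distance in the Poincare ball model (requires ||u||, ||v|| < 1):
    # d(u, v) = arccosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    duv = np.dot(u - v, u - v)
    denom = (1 - np.dot(u, u)) * (1 - np.dot(v, v))
    return np.arccosh(1 + 2 * duv / denom)

# Points near the boundary are far apart even when Euclidean-close:
u, v = np.array([0.90, 0.0]), np.array([0.95, 0.0])
print(poincare_dist(u, v))        # roughly 0.7 hyperbolic units
print(np.linalg.norm(u - v))      # only 0.05 in Euclidean distance
```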
Hydrangea: Optimistic Two-Round Partial Synchrony
We introduce Hydrangea, a partially synchronous Byzantine fault-tolerant state machine replication protocol that offers strong fault tolerance and achieves a fast, two-round commit in optimistic scenarios. Specifically, for a system of $n = 3f + 2c + k + 1$ parties, Hydrangea achieves an optimistic good-case latency of two rounds when the number of faulty parties (Byzantine or crash) is at most $p = \lfloor \frac{c+k}{2} \rfloor$ for a parameter $k \geq 0$. In more adversarial settings with up to $f$ Byzantine faults and $c$ crash faults, Hydrangea obtains a good-case latency of three rounds. Furthermore, we prove a matching lower bound: no protocol can achieve two-round optimistic commit under this fault model if $p > \lfloor \frac{c+k+2}{2} \rfloor$.