Papers updated in last 183 days (Page 2 of 1901 results)
LegoLog: A configurable transparency log
Transparency logs are critical for a wide range of applications, from web certificates to end-to-end encrypted messaging. Today, many transparency log designs exist for various applications and workloads, and developers must fully understand the design space to find the best design for their needs. Worse, if a developer needs a transparency log for an application and workload without an existing transparency log, the developer (who might not be an expert) must design a new log. To address these challenges, we introduce the paradigm of a configurable transparency log, which takes as input a description of the application workload and constraints of different entities and automatically outputs a transparency log uniquely suited to the application. We present the first configurable transparency log design, LegoLog, which we implement and empirically evaluate end-to-end for three specialized transparency logs. We also show that LegoLog can express six different applications, and we compare the asymptotic complexity of LegoLog and existing transparency logs tailored to individual applications. We find that configurability does not come at the cost of performance: LegoLog can capture a variety of applications while performing comparably to existing, special-purpose transparency logs.
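The abstract does not spell out LegoLog's internals, but every transparency log builds on an authenticated append-only structure. As background only, here is a minimal sketch of an RFC 6962-style Merkle log with inclusion proofs; all names are illustrative, and this is not LegoLog itself:

```python
# Background sketch: append-only Merkle log with inclusion proofs
# (RFC 6962 tree shape). Illustrative only; not LegoLog.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def split_point(n: int) -> int:
    # largest power of two strictly less than n
    k = 1
    while 2 * k < n:
        k *= 2
    return k

class MerkleLog:
    def __init__(self):
        self.leaves = []

    def append(self, entry: bytes) -> None:
        self.leaves.append(H(b"\x00" + entry))  # domain-separated leaf hash

    def root(self) -> bytes:
        return self._root(self.leaves)

    def _root(self, ls):
        if len(ls) == 1:
            return ls[0]
        k = split_point(len(ls))
        return H(b"\x01" + self._root(ls[:k]) + self._root(ls[k:]))

    def prove(self, i: int):
        # audit path for leaf i: (side, sibling-subtree-root) pairs, bottom-up
        lo, hi, path = 0, len(self.leaves), []
        while hi - lo > 1:
            k = split_point(hi - lo)
            if i < lo + k:
                path.append(("R", self._root(self.leaves[lo + k:hi])))
                hi = lo + k
            else:
                path.append(("L", self._root(self.leaves[lo:lo + k])))
                lo += k
        return list(reversed(path))

def verify(entry: bytes, path, root: bytes) -> bool:
    node = H(b"\x00" + entry)
    for side, sib in path:
        node = H(b"\x01" + sib + node) if side == "L" else H(b"\x01" + node + sib)
    return node == root

log = MerkleLog()
for e in [b"cert-a", b"cert-b", b"cert-c"]:
    log.append(e)
assert verify(b"cert-b", log.prove(1), log.root())
```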
Improved Constant-Sized Polynomial Commitment Schemes Without Trusted Setup
Argument systems are a fundamental ingredient in many cryptographic constructions. The best-performing argument systems to date largely rely on a trusted setup, which is undesirable in trust-minimized applications. While transparent argument systems avoid this trust assumption, they have historically been inefficient, typically exhibiting polylogarithmic proof sizes compared to their trusted counterparts. In 2023, Arun et al. (PKC 2023) constructed the first transparent constant-sized polynomial commitment scheme (PCS), leading to transparent constant-sized arguments. However, the evaluation proof still comprises 66 group elements in a group of unknown order (GUO), rendering it rather impractical. In this work, we address this challenge by presenting a set of novel batching and aggregation techniques tailored for proofs of knowledge of ranges in GUOs. These techniques may also be of independent interest and are readily applicable to enhance and shorten other existing schemes in GUOs. Consequently, by applying these techniques, we immediately achieve an improved PCS with an evaluation proof consisting of only 10 group elements---an impressive 85% reduction. To our knowledge, this represents the shortest PCS in the transparent setting. Thus compiling known information-theoretic proof systems using our improved PCS yields highly compact transparent argument systems when instantiated in a class group, which is more practical than prior constant-sized schemes.
How to Model Unitary Oracles
We make the case for modeling unitary oracles by allowing for controlled access to the oracle as well as to its conjugate transpose (inverse), its conjugate, and its transpose. Controlled access and conjugate transposes are common, if not standard, but conjugates and transposes appear to be non-standard. In order to justify our modeling, we give several formal examples of what goes wrong or is missed when using a more restrictive modeling. We also argue that our model is the "right" level of granularity, and that other transformations likely do not correspond to efficient computation. We also discuss other modeling choices, such as ancillas and approximation error.
Through our exploration, we uncover interesting phenomena. Examples include an attack on the recent pseudorandom unitary construction of Ma and Huang (STOC'25) if used incorrectly as a publicly evaluatable unitary, and a quantum complexity-theoretic separation that follows from a purely classical separation.
OMIX: Offline Mixing for Scalable Self-Tallying Elections
In electronic voting systems, guaranteeing voter anonymity is essential. One primary method to ensure this is the use of a mix-net, in which a set of mix-servers sequentially shuffle a set of encrypted votes, and generate proofs that a correct permutation has been applied. Whilst mix-nets offer advantages over alternative approaches, their traditional use during the tallying phase introduces a significant robustness bottleneck: the process is inherently sequential and critically depends on trusted authorities to perform shuffling and decryption. Any disruption can prevent the final result from being revealed.
In this work, we propose OMIX, an offline-mixing scheme and the first voting framework to support a mix-net-based system in which trustees never handle encrypted votes, while also ensuring that each voter's cost is independent of the total number of voters. In particular, the contributions of permutations by mix-servers and decryption shares by trustees are completed and publicly verified before any vote is cast. This eliminates the need for their participation during tallying and enables the first scalable, mix-net-based, and self-tallying voting protocol in the sense of Kiayias and Yung (PKC'02).
At the core of OMIX is a distributed key-generation mechanism: each voter locally generates a private voting key and registers a constant-size set of basis public keys. These are permuted and partially decrypted in an offline phase, resulting in a final public decryption key that reveals votes in shuffled order. Our construction leverages the homomorphic and structure-preserving properties of function-hiding inner-product functional encryption, combined with standard primitives, to achieve self-tallying, client scalability, ballot privacy and other voting properties. To support the new mixing structure introduced by OMIX, we also develop a compact and verifiable offline mix-net, based on an enhanced linearly homomorphic signature scheme. This latter primitive may be of independent interest.
Compressing steganographic payloads with LLM assistance
Steganography is the practice of concealing messages or information within other non-secret text or media to avoid detection. A central challenge in steganography is balancing payload size with detectability and media constraints: larger payloads increase the risk of detection and require proportionally larger or higher-capacity carriers. In this paper, we introduce a novel approach that combines Huffman coding, suitable dictionary identification, and large language model (LLM) rephrasing techniques to significantly reduce payload size. This enables more efficient use of limited-capacity carriers, such as images, while minimizing the visual or statistical footprint. Our method allows for the embedding of larger payloads into fixed-size media, addressing a key bottleneck in traditional steganographic systems. By optimizing payload compression prior to encoding, we improve both the stealth and scalability of steganographic communication.
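As a rough illustration of the compression step alone, a minimal Huffman coder over word tokens might look as follows; the message and frequencies are invented for the example, and the paper's dictionary-identification and LLM-rephrasing stages are not modeled:

```python
# Minimal Huffman coding sketch for the payload-compression step.
# Illustrative only; the paper's full pipeline is not reproduced.
import heapq
from collections import Counter

def huffman_code(freqs) -> dict:
    # heap of (weight, tiebreak, tree); a tree is a symbol or a (left, right) pair
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, i, (t1, t2)))
        i += 1
    codes = {}
    def walk(t, prefix):
        if isinstance(t, tuple):
            walk(t[0], prefix + "0")
            walk(t[1], prefix + "1")
        else:
            codes[t] = prefix or "0"   # single-symbol edge case
    walk(heap[0][2], "")
    return codes

msg = "meet at the usual place at noon".split()
codes = huffman_code(Counter(msg))
bits = "".join(codes[w] for w in msg)
print(len(bits), "bits vs", 8 * len(" ".join(msg)), "bits raw")
```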
Shared OT and Its Applications
We present unconditionally perfectly secure protocols in the semi-honest setting for several functionalities: (1) private elementwise equality; (2) private bitwise integer comparison; and (3) bit-decomposition. These protocols are built upon a new concept called Shared Oblivious Transfer (Shared OT). Shared OT extends the one-out-of-N String OT by replacing strings with integers modulo $M$ and allowing additive secret-sharing of all inputs and outputs. These extensions can be implemented by simple local computations without incurring additional OT invocations. We believe our Shared OT may be of independent interest.
Our protocols achieve the best round, communication, and computational complexities among all protocols secure in a similar setting. Moreover, all of our protocols involve either 2 or 3 rounds.
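To make the interface concrete, the following sketch simulates the ideal functionality of Shared OT with a trusted dealer: the message vector (integers mod $M$), the choice index, and the output are all additively shared. This illustrates only the input/output behavior described in the abstract; it is not the authors' protocol:

```python
# Ideal-functionality sketch of Shared OT (dealer simulation, not a protocol):
# messages are integers mod M; the message vector, the choice index, and the
# output are all additively secret-shared between two parties.
import secrets

M, N = 2**16, 4

def share(x: int, mod: int):
    r = secrets.randbelow(mod)
    return r, (x - r) % mod

msgs = [secrets.randbelow(M) for _ in range(N)]
choice = 2

# Inputs arrive additively shared:
msg_shares = [share(m, M) for m in msgs]   # each m = m0 + m1 mod M
c0, c1 = share(choice, N)                  # choice = c0 + c1 mod N

# The functionality reconstructs, selects, and re-shares the result:
c = (c0 + c1) % N
selected = (msg_shares[c][0] + msg_shares[c][1]) % M
y0, y1 = share(selected, M)                # output is again additively shared

assert (y0 + y1) % M == msgs[choice]
```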
ABE Cubed: Advanced Benchmarking Extensions for ABE Squared
Since attribute-based encryption (ABE) was proposed in 2005, it has established itself as a valuable tool in the enforcement of access control. For practice, it is important that ABE satisfies many desirable properties, such as support for multiple authorities and negations. Nowadays, we can attain these properties simultaneously, but none of the schemes achieving them have been implemented. Furthermore, although simpler schemes have been optimized extensively on a structural level, there is still much room for improvement for these more advanced schemes. However, even if we had schemes with such structural improvements, we would not have a way to benchmark and compare them fairly to measure the effect of such improvements. The only framework that aims to achieve this goal, ABE Squared (TCHES '22), was designed with simpler schemes in mind.
In this work, we propose the ABE Cubed framework, which provides advanced benchmarking extensions for ABE Squared. To motivate our framework, we first apply structural improvements to the decentralized ciphertext-policy ABE scheme supporting negations presented by Riepel, Venema and Verma (ACM CCS '24), which results in five new schemes with the same properties. We use these schemes to uncover and bridge the gaps in the ABE Squared framework. In particular, we observe that advanced schemes depend on more "variables" that affect the schemes' efficiency in different dimensions. Whereas ABE Squared only considered one dimension (as was sufficient for the schemes considered there), we devise a benchmarking strategy that allows us to analyze the schemes in multiple dimensions. As a result, we obtain a more complete overview on the computational efficiency of the schemes, and ultimately, this allows us to make better-founded choices about which schemes provide the best efficiency trade-offs for practice.
Truncation Untangled: Scaling Fixed-Point Arithmetic for Privacy-Preserving Machine Learning to Large Models and Datasets
Fixed Point Arithmetic (FPA) is widely used in Privacy-Preserving Machine Learning (PPML) to efficiently handle decimal values. However, repeated multiplications in FPA can lead to overflow, as the fractional part doubles in size with each multiplication. To address this, truncation is applied post-multiplication to maintain precision. Various truncation schemes based on Secure Multiparty Computation (MPC) exist, but trade-offs between accuracy and efficiency in PPML models and datasets remain underexplored. In this work, we analyze and consolidate different truncation approaches from the MPC literature.
We conduct the first large-scale systematic evaluation of PPML inference accuracy across truncation schemes, ring sizes, neural network architectures, and datasets. Our study provides clear guidelines for selecting the optimal truncation scheme and parameters for PPML inference. All evaluations are implemented in the open-source HPMPC MPC framework, facilitating future research and adoption.
Beyond our large-scale evaluation, we also present improved constructions for each truncation scheme, achieving up to a fourfold reduction in communication and round complexity over existing schemes. Additionally, we introduce optimizations tailored for PPML, such as strategically fusing different neural network layers. This leads to a mixed-truncation scheme that balances truncation costs with accuracy, eliminating communication overhead in the online phase while matching the accuracy of plaintext floating-point PyTorch inference for VGG-16 on the ImageNet dataset.
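For readers unfamiliar with the underlying issue, the following plaintext sketch shows why truncation is needed: a fixed-point product carries twice the fractional bits of its operands and must be shifted back down. The surveyed MPC schemes perform this operation on secret shares; here it is shown in the clear with illustrative parameters:

```python
# Plaintext illustration of fixed-point multiply-then-truncate.
F = 13                      # fractional bits (illustrative choice)
SCALE = 1 << F

def encode(x: float) -> int:
    return round(x * SCALE)

def decode(v: int) -> float:
    return v / SCALE

def fp_mul(a: int, b: int) -> int:
    # The raw product carries 2F fractional bits; truncate F of them.
    # Python's >> is an arithmetic shift, so negatives are handled too.
    return (a * b) >> F

a, b = encode(1.5), encode(-2.25)
print(decode(fp_mul(a, b)))   # ~ -3.375, up to 2^-F rounding error
```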
NTRU with Hints: Recovering NTRU Secret Keys from Partial Leakage
NTRU-based structured lattices underpin several standardized post-quantum cryptographic schemes, most notably the Falcon signature algorithms. While offering compactness and efficiency, the algebraic structure of NTRU lattices introduces new vulnerabilities under physical attacks, where partial secret key leakage may occur.
This work addresses the problem of full key recovery in NTRU-based schemes when adversaries obtain partial information through side-channel or fault attacks. Existing leakage-aware frameworks, including the DDGR estimator and the approach of May and Nowakowski, either lack scalability or are limited to structured, single-source leakage on one secret vector. These constraints make them ineffective against practical leakage patterns in NTRU settings.
We propose a unified and scalable framework for recovering NTRU secret keys under partial leakage. Our method supports diverse hint types, such as perfect hints, modular hints, and low-bit leakage, and enables joint integration of leakage across both secret polynomials \( f \) and \( g \). At its core, the framework uses a dimension-reduction strategy to eliminate known coefficients and reduce the problem to a lower-dimensional NTRU instance suitable for lattice reduction. Additionally, we introduce a transformation that converts hints on \( g \) into modular constraints on \( f \), allowing unified hint embedding.
We demonstrate practical attacks on Falcon using NIST reference implementations. Leaking 400 coefficients of $f$ in Falcon-512 reduces the required BKZ block size from over 350 to 38, enabling full key recovery within 6 hours. Compared to MN23, our method achieves significant speedups: $5.83\times$ for Falcon-512 with 400 leaked coefficients, and over $15\times$ for Falcon-1024 with 910 leaked coefficients. These results highlight the efficiency and scalability of our framework and the importance of leakage-resilient design for structured NTRU lattices.
Quantum-Safe Hybrid Key Exchanges with KEM-Based Authentication
Authenticated Key Exchange (AKE) between any two entities is one of the most important security protocols available for securing our digital networks and infrastructures. In PQCrypto 2023, Bruckner, Ramacher and Striecks proposed a novel hybrid AKE (HAKE) protocol dubbed Muckle+ that is particularly useful in large quantum-safe networks consisting of a large number of nodes. Their protocol is hybrid in the sense that it allows key material from conventional, post-quantum, and quantum cryptography primitives to be incorporated into a single end-to-end authenticated shared key.
To achieve the desired authentication properties, Muckle+ utilizes post-quantum digital signatures. However, available instantiations of such signature schemes are not yet efficient enough compared to their post-quantum key-encapsulation mechanism (KEM) counterparts, particularly in large networks with potentially several connections in a short period of time.
To mitigate this gap, we propose Muckle#, which pushes the efficiency boundaries of currently known HAKE constructions. Muckle# uses post-quantum key-encapsulation mechanisms for implicit authentication, inspired by recent work in the area of Transport Layer Security (TLS) protocols, particularly KEMTLS (CCS'20).
We port those ideas to the HAKE framework and develop novel proof techniques along the way. Due to our KEM-based approach, the resulting protocol has a slightly different message flow than prior work, which we carefully align with the HAKE framework and which makes our changes to Muckle+ non-trivial. Lastly, we evaluate the approach with a prototypical implementation and a direct comparison with Muckle+ to highlight the efficiency gains.
Return of the Kummer: a Toolbox for Genus-2 Cryptography
This work expands the machinery we have for isogeny-based cryptography in genus 2 by developing a toolbox of several essential algorithms for Kummer surfaces, the dimension-2 analogue of $x$-only arithmetic on elliptic curves. Kummer surfaces have been suggested in hyperelliptic curve cryptography since at least the 1980s, and recently these surfaces have reappeared to efficiently compute $(2,2)$-isogenies. We construct several essential analogues of techniques used in one-dimensional isogeny-based cryptography, such as pairings, deterministic point sampling and point compression, and give an overview of $(2,2)$-isogenies on Kummer surfaces. We furthermore show how Scholten's construction can be used to transform isogeny-based cryptography over elliptic curves over $\mathbb{F}_{p^2}$ into protocols over Kummer surfaces over $\mathbb{F}_{p}$.
As an example of this approach, we demonstrate that SQIsign verification can be performed completely on Kummer surfaces, and, therefore, that one-dimensional SQIsign verification can be viewed as a two-dimensional isogeny between products of elliptic curves, now defined over $\mathbb{F}_p$ rather than $\mathbb{F}_{p^2}$.
Improved Key-recovery Attacks on ARADI
ARADI is a low-latency block cipher introduced by the U.S. National Security Agency (NSA), targeting secure and efficient memory encryption. However, unlike most academic cipher proposals, the design rationale behind ARADI has not been made public, leaving its security to be assessed only through independent analysis. In this work, we present improved key-recovery attacks on up to 12 out of 16 rounds of ARADI in the single-key setting, advancing the best known attacks by two rounds. Our techniques build upon the ZeroSum distinguisher framework and leverage the Fast Hadamard Transform (FHT). A central insight in our attacks is that the linear layer of ARADI exhibits weak diffusion. This structural property allows partial decryption with only a subset of the round keys, significantly reducing the key-guessing space.
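As background, the Fast Hadamard Transform referenced above can be computed in $O(n \log n)$ with the classic in-place butterfly. The sketch below (illustrative, not the paper's attack code) computes the Walsh-Hadamard coefficients of a Boolean function's truth table, the kind of bulk evaluation that lets an attacker score many key guesses at once:

```python
# Minimal in-place Fast Hadamard Transform over the integers.
def fht(v: list) -> None:
    n = len(v)                 # n must be a power of two
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = v[j], v[j + h]
                v[j], v[j + h] = x + y, x - y
        h *= 2

v = [1, 0, 1, 1, 0, 1, 0, 0]   # e.g. a truth table of a 3-bit Boolean function
fht(v)                          # v now holds its Walsh-Hadamard coefficients
print(v)
```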
Rational Censorship Attack: Breaking Blockchain with a Blackboard
Censorship resilience is a fundamental assumption underlying the security of blockchain protocols. Additionally, the analysis of blockchain security from an economic and game theoretic perspective has been growing in popularity in recent years.
In this work, we present a surprising rational censorship attack that breaks blockchain censorship resilience once blockchain security is analyzed through a game-theoretic lens and all users are assumed to be rational.
In our attack, a colluding group with sufficient voting power censors the remaining nodes such that the group alone can gain all the rewards from maintaining the blockchain.
We show that if nodes are rational, coordinating this attack requires only a public read-write blackboard, and we formally model the attack using a game-theoretic framework.
Furthermore, we note that to ensure the success of the attack, nodes need to know the total true voting power held by the colluding group.
We prove that joining the rational censorship attack and honestly declaring one's power is a subgame perfect equilibrium in the corresponding extensive-form game induced by our attack.
Finally, we discuss the implications of the attack on blockchain users and protocol designers as well as some potential countermeasures.
Unmasking TRaccoon: A Lattice-Based Threshold Signature with An Efficient Identifiable Abort Protocol
TRaccoon is an efficient 3-round lattice-based T-out-of-N threshold signature, recently introduced by del Pino et al. (Eurocrypt 2024). While its design resembles the classical threshold Schnorr signature, Sparkle (Crites et al., Crypto 2023), one shortfall is that it has no means to identify malicious behavior, a property highly desired in practice. This is because, to resist lattice-specific attacks, TRaccoon relies on a technique called masking, informally, blinding each partial signature with a one-time additive mask. del Pino et al. left it as an open problem to add such an identification mechanism to TRaccoon.
In this work, we propose TRaccoon-IA, an extension of TRaccoon with an efficient identifiable abort protocol, allowing malicious signers to be identified when the signing protocol fails. The identifiable abort protocol is a simple add-on to TRaccoon, keeping the original design intact, and incurs an added communication cost of $60 + 6.4|T|$ KB only when signing fails. Along the way, we provide the first formal security analysis of a variant of LaBRADOR (Beullens et al., Crypto 2023) with zero-knowledge, encountering several hurdles when formalizing it in detail. Moreover, we give a new game-based definition for interactive identifiable abort protocols, extending the popular game-based definition used to prove unforgeability of recent threshold signatures.
Efficient NIZK Arguments with Straight-Line Simulation and Extraction
Non-interactive zero-knowledge (NIZK) arguments allow a prover to convince a verifier of the truthfulness of an NP statement by sending just one message, without disclosing any additional information. In several practical scenarios, the Fiat-Shamir transform is used to convert an efficient constant-round public-coin honest-verifier zero-knowledge proof system into an efficient NIZK argument system. This approach is provably secure in the random oracle model, but it crucially requires the programmability of the random oracle, and extraction works through rewinding. The works of Lindell [TCC 2015] and Ciampi et al. [TCC 2016] proposed efficient NIZK arguments with non-programmable random oracles along with a programmable common reference string. In this work we show an efficient NIZK argument with straight-line simulation and extraction that relies on features that alone are insufficient to construct NIZK arguments (regardless of efficiency). More specifically, we consider the notion of quasi-polynomial time simulation proposed by Pass [EUROCRYPT 2003] and combine it with simulation and extraction with non-programmable random oracles, thus obtaining a NIZK argument of knowledge where neither the zero-knowledge simulator nor the argument-of-knowledge extractor needs to program the random oracle. Still, both the simulator and the extractor are straight-line. Our construction uses as a building block a modification of Fischlin's transform [CRYPTO 2005] and combines it with the concept of dense puzzles introduced by Baldimtsi et al. [ASIACRYPT 2016]. We also argue that our NIZK argument system inherits the efficiency features of Fischlin's transform, which represents the main advantage of Fischlin's protocol over existing schemes.
PICS: Private Intersection over Committed (and reusable) Sets
Private Set Intersection (PSI) enables two parties to compute the intersection of their private sets without revealing any additional information. While maliciously secure PSI protocols prevent many attacks, adversaries can still exploit them by using inconsistent inputs across multiple sessions. This limitation stems from the definition of malicious security in secure multiparty computation, but is particularly problematic in PSI because: (1) real-world applications---such as Apple’s PSI protocol for CSAM detection and private contact discovery in messaging apps---often require multiple PSI executions over consistent inputs, and (2) the PSI functionality makes it relatively easy for adversaries to infer additional information.
We propose {\em Private Intersection over Committed Sets (PICS)}, a new framework that enforces input consistency across multiple sessions via committed sets. Building on the state-of-the-art maliciously secure PSI framework (i.e., VOLE-PSI [EUROCRYPT 2021]), we present an efficient instantiation of PICS in the random oracle model using lightweight cryptographic tools. We implement our protocol to demonstrate concrete efficiency. Compared to VOLE-PSI, our communication overhead is a small constant between $1.57\times$ and $2.04\times$ for set sizes between $2^{16}$ and $2^{24}$, and the total end-to-end running time overhead is $1.22\times$ to $1.98\times$ across various network settings.
$\textbf{Note:}$ The previous version of this paper had a soundness issue in the way we checked the consistency of the sender’s input. This revised draft presents a much simpler and cleaner approach to ensuring input consistency for the sender.
Optimizing Key Recovery in Classic McEliece: Advanced Error Correction for Noisy Side-Channel Measurements
Classic McEliece is one of the code-based Key Encapsulation Mechanism finalists in the ongoing NIST post-quantum cryptography standardization process. Several key-recovery side-channel attacks on the decapsulation algorithm have already been published. However, none of them discusses the feasibility and/or efficiency of the attack in the case of noisy side-channel acquisitions. In this paper, we address this issue by proposing two improvements on the recent key-recovery attack published by Drăgoi et al. First, we introduce an error correction algorithm for the lists of Hamming weights obtained by side-channel measurements, based on the assumption, validated experimentally, that the error on a recovered Hamming weight is bounded to $\pm1$. We then offer a comparison between two decoding efficiency metrics, the theoretical minimal error correction capability and an empirical average correction probability. We show that the minimal error correction capability, widely used for linear codes, is not suitable for the (non-linear) code formed by the lists of Hamming weights. Conversely, experimental results show that out of 1 million random erroneous lists of $2t=128$ Hamming weights, only 2 could not be corrected by the proposed algorithm. This shows that the probability of successfully decoding a list of erroneous Hamming weights is very high, regardless of the error weight. In addition to this algorithm, we describe how the secret Goppa polynomial $g$, recovered during the first step of the attack, can be exploited to reduce both the time and space complexity of recovering the secret permuted support $\mathcal{L}$.
Coalition and Threshold Hash-Based Signatures
We introduce techniques to transform existing stateful hash-based signature (HBS) schemes, such as LMS or XMSS, into efficient threshold and distributed signature schemes. Our approach requires a trusted dealer for setup and uses a large (typically up to a few GiB) common reference value (CRV) for each new public key. The dealer generates the keypair and distributes shares of the signing key to the trustees, while creating the CRV. Signing involves an untrusted aggregator communicating point-to-point with a set of trustees. Only the aggregator needs access to the CRV; the trustees need only a PRF key and enough space to remember which one-time keys they have helped to sign with so far. Signing requires two round trips between the aggregator and each participating trustee, and only a little more computation from the trustees and aggregator than is done when signing with the underlying HBS scheme. We reduce the security of our scheme to that of the underlying HBS scheme, assuming the availability of a secure PRF. A dishonest aggregator or tampered CRV can prevent valid signatures from being constructed, but does not allow forgeries. Our techniques offer a powerful practical defense against accidental reuse of a one-time key in stateful HBS schemes by requiring multiple trustees to fail in the same way in order for key reuse to occur.
On Deniable Authentication against Malicious Verifiers
Deniable authentication allows Alice to authenticate a message to Bob, while retaining deniability towards third parties. In particular, not even Bob can convince a third party that Alice authenticated that message. Clearly, in this setting Bob should not be considered trustworthy. Furthermore, deniable authentication is necessary for deniable key exchange, as explicitly desired by Signal and off-the-record (OTR) messaging.
In this work we focus on (publicly verifiable) designated verifier signatures (DVS), which are a widely used primitive to achieve deniable authentication. We propose a definition of deniability against malicious verifiers for DVS. We give a construction that achieves this notion in the random oracle (RO) model. Moreover, we show that our notion is not achievable in the standard model with a concrete attack; thereby giving a non-contrived example of the RO heuristic failing.
All previous protocols that claim to achieve deniable authentication against malicious verifiers (like Signal's initial handshake protocols X3DH and PQXDH) rely on the Extended Knowledge of Diffie–Hellman (EKDH) assumption. We show that this assumption is broken and that these protocols do not achieve deniability against malicious verifiers.
Refined TFHE Leveled Homomorphic Evaluation and Its Application
TFHE is a fully homomorphic encryption scheme over the torus that supports fast bootstrapping. Its primary evaluation mechanism is based on gate bootstrapping and programmable bootstrapping (PBS), which computes functions while simultaneously refreshing noise. PBS-based evaluation is user-friendly and efficient for small circuits; however, the number of bootstrapping operations increases exponentially with the circuit depth. To address the challenge of efficiently evaluating large-scale circuits, Chillotti et al. introduced a leveled homomorphic evaluation (LHE) mode at Asiacrypt 2017. This mode decouples circuit evaluation from bootstrapping, resulting in a speedup of hundreds of times over PBS-based methods. However, the remaining circuit bootstrapping (CBS) becomes a performance bottleneck, even though its frequency is linear with the circuit depth.
In this paper, we refine the LHE mode by mitigating the high cost of CBS. First, we patch the NTT-based CBS algorithm proposed by Wang et al. [WWL+, Eurocrypt 2024], accelerating their algorithm by up to 2.6$\times$. Then, observing the suboptimal parallelism and high complexity of modular reduction in NTT under CBS parameters, we extend WWL+ to an FFT-based algorithm by redesigning the pre-processing method and introducing a split FFT technique. This achieves the fastest CBS implementation with the smallest key size, outperforming the open-source WWL+ implementation by up to 12.1$\times$ (resp. 5.12$\times$ compared to our patched algorithm), and surpassing TFHEpp [MBM+, USENIX 2021] by 3.42$\times$ with a key size reduction of 33.2$\times$. Furthermore, we propose an improved integer-input LHE mode by extending our CBS algorithm to support higher precision and combining it with additional optimizations such as multi-bit extraction. Compared to the previous integer-input LHE mode proposed by Bergerat et al. [BBB+, JoC 2023], our approach is up to 10.7$\times$ faster with a key size reduction of up to 4.4$\times$.
To demonstrate the practicality of our improved LHE mode, we apply it to AES transciphering and general homomorphic look-up table (LUT) evaluation. For AES evaluation, our method is 4.8$\times$ faster and reduces the key size by 31.3$\times$ compared to the state-of-the-art method, Thunderbird [WLW+, TCHES 2024]. For LUT evaluation, we compare our results with the recent work of Trama et al. [TCBS, ePrint 2024/1201], which constructs a general 8-bit processor of TFHE. Our method not only achieves faster 8-to-8 LUT evaluation but also improves the efficiency of most heavy 8-bit bivariate instructions by up to 21$\times$ and the 16-bit sigmoid function by more than 26$\times$.
Lattice EPID with Efficient Revocation
Enhanced Privacy Identification (EPID) is one of the anonymous authentication mechanisms that have found their way into industry, being deployed in billions of chips and standardized at ISO. The linchpin of EPID lies in its decentralized revocation procedure, which allows a signer to be revoked by simply placing one of its signatures on a signature revocation list SRL. Each new signature must then include a proof that it has been generated with a key different from those used to produce the signatures on the SRL. In current post-quantum schemes, this proof of non-revocation either relies on general-purpose NIZKs or on regular zero-knowledge proofs (ZKP) with a witness dimension linear in the size of the SRL, which leads to large sizes and/or computational complexity.
In this paper, we rethink the standard approach to non-revocation so as to avoid its heavy reliance on ZKP. Our construction indeed combines features from different tools (such as Falcon signatures) that are unusual in this context to pull most elements out of the ZKP, leading to significant performance improvements. Providing all these elements unconcealed creates many security challenges for our construction, but we manage to address all of them and prove security under well-understood lattice assumptions, in the strong model of Sanders-Traoré (CT-RSA'21) allowing malicious SRLs.
KLPT²: Algebraic Pathfinding in Dimension Two and Applications
Following Ibukiyama, Katsura and Oort, all principally polarized superspecial abelian surfaces over $\overline{\mathbb{F}}_p$ can be represented by a certain type of $2 \times 2$ matrix $g$, having entries in the quaternion algebra $B_{p,\infty}$. We present a heuristic polynomial-time algorithm which, upon input of two such matrices $g_1, g_2$, finds a "connecting matrix" representing a polarized isogeny of smooth degree between the corresponding surfaces. Our algorithm should be thought of as a two-dimensional analog of the KLPT algorithm from 2014 due to Kohel, Lauter, Petit and Tignol for finding a connecting ideal of smooth norm between two given maximal orders in $B_{p, \infty}$.
The KLPT algorithm has proven to be a versatile tool in isogeny-based cryptography, and our analog has similar applications; we discuss two of them in detail. First, we show that it yields a polynomial-time solution to a two-dimensional analog of the so-called constructive Deuring correspondence: given a matrix $g$ representing a superspecial principally polarized abelian surface, realize the latter as the Jacobian of a genus-$2$ curve (or, exceptionally, as the product of two elliptic curves if it concerns a product polarization). Second, we show that, modulo a plausible assumption, Charles-Goren-Lauter style hash functions from superspecial principally polarized abelian surfaces require a trusted set-up. Concretely, if the matrix $g$ associated with the starting surface is known then collisions can be produced in polynomial time. We deem it plausible that all currently known methods for generating a starting surface indeed reveal the corresponding matrix. As an auxiliary tool, we present an efficient method for converting polarized isogenies of powersmooth degree into the corresponding connecting matrix, a step for which a previous approach by Chu required super-polynomial (but sub-exponential) time.
An Update to ``Polynomial Hashing over Prime Order Fields''
New state-of-the-art assembly implementations show that BRWHash is consistently faster than polyHash and both t-BRWHash and d-2LHash for all message lengths and for both the primes $2^{127}-1$ and $2^{130}-5$.
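For orientation, the polyHash baseline referred to above is plain Horner evaluation of the message blocks at a secret point modulo the prime; a schematic version (omitting block-encoding details) is shown below. BRW hashing reorders the evaluation to roughly halve the number of field multiplications, which is not shown here:

```python
# Schematic polyHash: Horner-rule polynomial evaluation over GF(2^130 - 5).
P = 2**130 - 5

def poly_hash(key: int, blocks: list) -> int:
    acc = 0
    for b in blocks:
        acc = (acc + b) * key % P   # computes sum of b_i * key^(n-i+1)
    return acc

print(poly_hash(0x12345, [1, 2, 3]))
```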
Efficient Pairings Final Exponentiation Using Cyclotomic Cubing for Odd Embedding Degrees Curves
In pairing-based cryptographic applications, final exponentiation with a large fixed exponent ensures distinct outputs for the Tate pairing and its derivatives. Despite notable advancements in optimizing elliptic curves with even embedding degrees, improvements for those with odd embedding degrees, particularly those divisible by $3$, remain underexplored.
This paper introduces three methods for applying cyclotomic cubing in the final exponentiation to enhance computational efficiency. The first allows for the execution of one cyclotomic cubing based on the final exponentiation structure. The second leverages the structure of some existing seeds to enable the use of cyclotomic cubing and extends this strategy to generate new seeds. The third generates seeds with sparse ternary representations so that cyclotomic cubing can be applied as an alternative to squaring. These optimizations improve performance by up to $19.3\%$ when computing the final exponentiation for the optimal Ate pairing on BLS15 and BLS27, the target elliptic curves of this study.
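To see why cubing can replace squaring, note that an exponent with a signed ternary expansion $e = \sum_i d_i 3^i$, $d_i \in \{-1, 0, 1\}$, can be processed with one cubing per digit. The sketch below demonstrates the idea with plain modular arithmetic standing in for the cyclotomic-subgroup operations of the paper (illustrative, not the paper's algorithm):

```python
# Base-3 exponentiation via a signed ternary expansion: one cubing per digit
# replaces the squarings of a binary ladder. Plain modular arithmetic stands
# in for cyclotomic-subgroup arithmetic, where cubing is especially cheap.
def signed_ternary(e: int) -> list:
    digits = []
    while e:
        r = e % 3
        if r == 2:
            r = -1                  # use digit -1 and carry
        digits.append(r)
        e = (e - r) // 3
    return digits                   # least-significant digit first

def pow_ternary(g: int, e: int, mod: int) -> int:
    ginv = pow(g, -1, mod)          # inversion is cheap in cyclotomic subgroups
    acc = 1
    for d in reversed(signed_ternary(e)):
        acc = acc * acc * acc % mod # the "cyclotomic cubing" step
        if d == 1:
            acc = acc * g % mod
        elif d == -1:
            acc = acc * ginv % mod
    return acc

p = 2**127 - 1
assert pow_ternary(5, 1000003, p) == pow(5, 1000003, p)
```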
Accelerating Multiparty Noise Generation Using Lookups
There is rising interest in combining Differential Privacy (DP) and Secure Multiparty Computation (MPC) techniques to protect distributed database query evaluations from both adversaries taking part in the computation and those observing the outputs. This requires implementing both the query evaluation and noise generation parts of a DP mechanism directly in MPC. While query evaluation can be done using existing highly optimized MPC techniques for secure function evaluation, efficiently generating the correct noise distribution is a more novel challenge.
Due to the inherent nonlinearity of sampling algorithms for common noise distributions, this challenge is quite non-trivial, as is evident from the substantial number of works proposing protocols for multiparty noise sampling. In this work, we propose a new approach for joint noise sampling that leverages recent advances in multiparty lookup table (LUT) evaluations. The construction we propose is largely agnostic to the target noise distribution and builds on obliviously evaluating the LUT at an index drawn from a distribution that can be very cheaply generated in MPC, thus translating this cheap distribution into the much more complicated target noise distribution. In our instantiation, the index is a concatenation of cheaply biased bits, and we approximate a discrete Laplace distribution to within negligible statistical distance. We demonstrate the concrete efficiency of the construction by implementing it using 3-party replicated secret sharing (RSS) in the honest-majority setting with both semi-honest and malicious security. In particular, we achieve sub-kilobyte communication complexity, an improvement of several orders of magnitude over the state of the art, with a computation time of a few milliseconds. Samples of a discrete Laplace distribution are generated with (amortized over $1000$ samples) 362 bytes of communication and under a millisecond of computation time per party in the semi-honest setting. Using recent results for batched multiplication checking, we have an overhead for malicious security that, per sample, amortizes to below a byte of communication and 10 ms of runtime.
Finally, our open-source implementation extends the online-to-total communication trade-off for MAESTRO-style lookup tables, which might be of independent interest.
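As a plaintext stand-in for the core idea, one can tabulate the inverse CDF of a truncated discrete Laplace distribution and sample it with a single table lookup; in the paper the lookup is performed obliviously in MPC and the index comes from cheaply generated biased bits, both of which are simplified away here:

```python
# Plaintext stand-in for the LUT idea: tabulate the inverse CDF of a
# truncated discrete Laplace distribution, then sample with one lookup.
# Parameters (eps, bound, index_bits) are illustrative choices.
import math, secrets

def dlap_table(eps: float, bound: int, index_bits: int) -> list:
    # P[X = k] proportional to exp(-eps * |k|), truncated to |k| <= bound
    ks = range(-bound, bound + 1)
    w = [math.exp(-eps * abs(k)) for k in ks]
    total = sum(w)
    table, acc, size = [], 0.0, 1 << index_bits
    for k, wk in zip(ks, w):
        acc += wk / total
        while len(table) < round(acc * size):
            table.append(k)
    while len(table) < size:        # fill any rounding slack at the tail
        table.append(bound)
    return table

table = dlap_table(eps=0.5, bound=20, index_bits=16)
sample = table[secrets.randbits(16)]  # one oblivious LUT lookup in the MPC version
print(sample)
```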
AQQUA: Augmenting Quisquis with Auditability
We present AQQUA, a permissionless, private, and auditable payment system built on top of Quisquis. Unlike other auditable payment systems, AQQUA maintains privacy while supporting auditing. It allows users to hold multiple accounts, perform concurrent transactions, and features a non-increasing state. AQQUA achieves auditability by introducing two authorities: one for registration and one for auditing. These authorities cannot censor transactions, thus preserving the decentralized nature of the system. Users create an initial account with the registration authority and then transact privately by using provably unlinkable updates of it. Audits can be voluntarily initiated by the users or requested by the audit authority at any time. Compliance is proved in zero-knowledge against a set of policies, which include a maximum limit on the amount sent or received during a time period or in a single transfer, non-participation in a specific transaction, or selective disclosure of the value exchanged. To analyze the security of AQQUA, we formally define a security model for private and auditable decentralized payment systems. Using this model, we prove that AQQUA satisfies anonymity towards both the public and the auditor, theft prevention, and audit soundness.
Efficient Pseudorandom Correlation Generators over $\mathbb{Z}/p^k\mathbb{Z}$
Modern efficient secure multi-party computation (MPC) protocols typically follow an offline-online design, where offline protocols produce a sufficient amount of correlated randomness to be consumed during the online phases. The past decades have witnessed the maturing of efficient online protocols for computing circuits over either arbitrary finite fields or rings $\mathbb{Z}_{p^k}$. In particular, protocols tailored for $\mathbb{Z}_{2^k}$ arithmetic have achieved better concrete efficiency in most real-life applications, as this setting naturally captures modern CPU architectures. On the other hand, the recent paradigm of {\em pseudorandom correlation generators} (PCGs), initiated by Boyle et al. (CCS'18, Crypto'19), opens a door to efficient preprocessing with sublinear communication. Since then, PCGs have been extensively studied and developed to produce the various types of correlations required by online protocols. Although Li et al. (EuroCrypt'25) recently took a significant step forward by proposing efficient PCGs for arbitrary finite fields, the current state of PCGs for rings remains unsatisfactory.
To address the demand for efficiently generating correlations over rings, we investigate PCGs for general Galois rings, which simultaneously unify finite fields and integer rings modulo $p^k$. In summary, we establish the following results:
(i) We generalize the state-of-the-art PCG constructions for oblivious linear evaluations (OLE) over Galois fields to {\em arbitrary Galois rings}, based on Galois theory and the Hensel lift. Moreover, our PCGs for Galois rings are as efficient as PCGs for fields. Concretely, for $mN$ OLE correlations over $\mathbb{Z}_{2^k}$, we require $O(m\log{N})$ communication and $O(m^2N\log{N})$ computation, where $m$ is an arbitrary integer $\geq 2$. In comparison, to the best of our knowledge, previous approaches incur communication at least linear in $N$.
(ii) We extend the above OLE construction to provide various types of correlations over any Galois ring. One of the fascinating applications is an efficient PCG for two-party SPD$\mathbb{Z}_{2^k}$ authenticated multiplication triples (Crypto'18). For $mN$ SPD$\mathbb{Z}_{2^k}$ triples, our approach requires only $O(m\log{N})$ communication and $O(m^2N\log{N})$ computation. Concrete evaluations show that our method significantly outperforms existing schemes based on homomorphic encryption.
(iii) In addition, our PCGs for Galois rings also enable multi-party multiplication triple generation, yielding the first efficient MPC protocol for arithmetic circuits over $\mathbb{Z}_{2^k}$ with \emph{silent} and \emph{sublinear} preprocessing. Additional applications, including circuit-dependent preprocessing and matrix multiplication triples, are of independent interest.
Towards a White-Box Secure Fiat-Shamir Transformation
The Fiat–Shamir transformation is a fundamental cryptographic technique widely used to convert public-coin interactive protocols into non-interactive ones. This transformation is crucial in both theoretical and practical applications, particularly in the construction of succinct non-interactive arguments (SNARKs). While its security is well-established in the random oracle model, practical implementations replace the random oracle with a concrete hash function, where security is merely assumed to carry over.
A growing body of work has given theoretical examples of protocols that remain secure under the Fiat–Shamir transformation in the random oracle model but become insecure when instantiated with any white-box implementation of the hash function. Recent research has shown how these attacks can be applied to natural cryptographic schemes, including real-world systems. These attacks rely on a general diagonalization technique, where the protocol exploits its access to the white-box implementation of the hash function. These attacks cast serious doubt on the security of cryptographic systems deployed in practice today, leaving their soundness uncertain.
We propose a new Fiat–Shamir transformation (XFS) that aims to defend against a broad family of attacks. Our approach is designed to be practical, with minimal impact on the efficiency of the prover and verifier and on the proof length. At a high level, our transformation combines the standard Fiat–Shamir technique with a new type of proof-of-work that we construct.
We provide strong evidence for the security of our transformation by proving its security in a relativized random oracle model. Specifically, we show diagonalization attacks on the standard Fiat–Shamir transformation that can be mapped to analogous attacks within this model, meaning they do not rely on a concrete instantiation of the random oracle. In contrast, we prove unconditionally that our XFS variant of the Fiat–Shamir transformation remains secure within this model. Consequently, any successful attack on XFS must deviate from known techniques and exploit aspects not captured by our model.
We hope that our transformation will help preserve the security of systems relying on the Fiat–Shamir transformation.
Computing Asymptotic Bounds for Small Roots in Coppersmith's Method via Sumset Theory
Coppersmith's method is a well-known and practical method for solving polynomial modular equations involved in some cryptosystems such as RSA. An important and tedious task in this method is computing the asymptotic bounds. In this work, we address the challenge of computing such asymptotic bounds by introducing sumset theory from additive combinatorics as a new analytical tool, which significantly streamlines manual calculations. More precisely, we develop the first provable algorithm for determining these asymptotic bounds, whereas the recent methods based on simple Lagrange interpolation are heuristic.
Moreover, experiments show that our method is much more efficient in practice than the previous one. We also employ our method to improve the cryptanalytic results for the Commutative Isogeny Hidden Number Problem. Our approach may deepen the understanding of Coppersmith's method and inspire more security analysis methodologies.
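The connection to monomial counting is easy to see in a toy example: the exponent vectors of a product of polynomials form the Minkowski sumset of the factors' exponent sets, so the monomial counts of the powers $f^t$ appearing in Coppersmith-style shift polynomials are sizes of iterated sumsets. The sketch below illustrates only this basic fact, not the paper's bound computation:

```python
# Toy Minkowski sumset of bivariate exponent sets: |exponents of f^t| grows
# as an iterated sumset of the exponent set of f.
def sumset(A: set, B: set) -> set:
    return {(a0 + b0, a1 + b1) for (a0, a1) in A for (b0, b1) in B}

# Exponent set of f(x, y) = x*y + x + 1  ->  {(1,1), (1,0), (0,0)}
A = {(1, 1), (1, 0), (0, 0)}
S = A
for t in range(2, 5):
    S = sumset(S, A)
    print(f"monomials of f^{t}: {len(S)}")   # grows polynomially in t
```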
Tricycle: Private Transformer Inference with Tricyclic Encodings
The growing adoption of Large Language Models in privacy-sensitive domains necessitates secure inference mechanisms that preserve data confidentiality. Homomorphic encryption offers a promising pathway by enabling computation on encrypted inputs, yet existing approaches struggle to scale efficiently to full transformer models due to limitations in packing schemes, which must efficiently support a wide range of operations, including matrix multiplications, row-wise nonlinear operations, and self-attention. In this work, we present Tricycle, a framework for private transformer inference built on our novel packing scheme, called tricyclic encodings, which are designed to efficiently support these core operations. Tricyclic encodings are a generalization of bicyclic encodings, enabling privacy-preserving batch matrix multiplications with optimal multiplicative depth in order to facilitate parallelized multi-head self-attention. We optimize our matrix multiplications by incorporating Baby-Step Giant-Step optimizations to reduce ciphertext rotations and presenting new ciphertext-plaintext matrix multiplication techniques that relax prior limitations. A further contribution of our work is a lightweight and effective approach for stabilizing the softmax function via statistical max estimation. Our end-to-end implementation on a BERT-Tiny model shows that Tricycle achieves a \(1.5 \times\) to \(3 \times\) speedup over previous approaches, marking a step toward practical and scalable private LLM inference without sacrificing model fidelity.
SoK: Reassessing Side-Channel Vulnerabilities and Countermeasures in PQC Implementations
Post-Quantum Cryptography (PQC) algorithms should remain secure even in the presence of quantum computers. Although the security of such schemes is guaranteed at the algorithmic level, real-world implementations often suffer from other vulnerabilities like Side-Channel Attacks (SCA). This Systematization of Knowledge (SoK) paper investigates side-channel attacks targeting implementations of PQC algorithms. This work categorizes attacks from an adversarial perspective to identify the most vulnerable components of the algorithms' implementations and highlights unexplored parts in current implementations. In addition, it reviews and analyzes the efficiency and efficacy of existing countermeasures to SCA in current hardware implementations. This approach helps identify countermeasures that provide broader protection and highlights characteristics needed for future secure implementations. Our findings offer guidance in strengthening existing systems and developing more efficient defenses against side-channel attacks.
EWEMrl: A White-Box Secure Cipher with Longevity
We propose the first updatable white-box secure cipher, EWEMrl, and its natural extension EWEMxl, both achieving longevity against non-adaptive read-only malware. The notion of longevity, introduced by Koike et al., addresses continuous code leakage and is stronger than incompressibility. Yoroi claimed longevity but was broken by Isobe and Todo. Given the prevalence of continuous leakage, developing such ciphers is crucial in white-box cryptography. Precisely, we have the following.
• We first introduce EWEMr (Extended WEM against non-adaptive read-only adversaries), a generalization of WEM (White-box Even-Mansour). WEM is the first (and possibly only) white-box cipher based on EM, replacing its key addition layer with a secret Sbox. EWEMr achieves a high space-hardness bound, with a new generic proof strategy, but does not provide longevity. Instead, it serves as the base for EWEMrl.
• We also present EWEMx, which uses EWEMr as subroutines and is secure in the stronger adaptive model. While EWEMx does not achieve longevity, it is the base design for EWEMxl.
• We next propose EWEMrl, which is the first cipher to achieve longevity against non-adaptive read-only adversaries. No existing ciphers, such as SPNbox and SPACE, are designed for longevity. We show that EWEMrl ensures (against non-adaptive read-only adversaries) (1) longevity, (2) high space-hardness in both known-space and chosen-space settings, and (3) security against hybrid code-lifting attacks.
• Finally, we introduce EWEMxl, a natural extension of EWEMrl with a structure similar to EWEMx. EWEMxl achieves (2) and (3) in the stronger adaptive model while maintaining (1) in the same non-adaptive and read-only setting.
In summary, EWEMrl and EWEMxl are the first ciphers providing longevity against non-adaptive read-only malware while ensuring security confidence in the black-box setting.
RingSG: Optimal Secure Vertex-Centric Computation for Collaborative Graph Processing
Collaborative graph processing refers to the joint analysis of inter-connected graphs held by multiple graph owners. To honor data privacy and support various graph processing algorithms, existing approaches employ secure multi-party computation (MPC) protocols to express the vertex-centric abstraction. Yet, due to certain computation-intensive cryptography constructions, state-of-the-art (SOTA) approaches are asymptotically suboptimal, imposing significant overheads in terms of computation and communication. In this paper, we present RingSG, the first system to attain optimal communication/computation complexity within the MPC-based vertex-centric abstraction for collaborative graph processing. This optimal complexity is attributed to Ring-ScatterGather, a novel computation paradigm that can avoid exceedingly expensive cryptography operations (e.g., oblivious sort), and simultaneously ensure the overall workload can be optimally decomposed into parallelizable and mutually exclusive MPC tasks. Within Ring-ScatterGather, RingSG improves the concrete runtime efficiency by incorporating 3-party secure computation via share conversion, and optimizing the most cost-heavy part using a novel oblivious group aggregation protocol. Finally, unlike prior approaches, we instantiate RingSG into two end-to-end applications to effectively obtain application-specific results from the protocol outputs in a privacy-preserving manner. We developed a prototype of RingSG and extensively evaluated it across various graph collaboration settings, including different graph sizes, numbers of parties, and average vertex degrees. The results show RingSG reduces the system running time of SOTA approaches by up to 15.34× and per-party communication by up to 10.36×. Notably, RingSG excels in processing sparse global graphs collectively held by more parties, consistent with our theoretical cost analysis.
RoK and Roll – Verifier-Efficient Random Projection for $\tilde{O}(\lambda)$-size Lattice Arguments
Succinct non-interactive arguments of knowledge (SNARKs) based on lattice assumptions offer a promising post-quantum alternative to pairing-based systems, but have until now suffered from inherently quadratic proof sizes in the security parameter. We introduce RoK and Roll, the first lattice-based SNARK that breaks the quadratic barrier, achieving communication complexity of $\tilde{O}(\lambda)$ together with a succinct verification time. The protocol significantly improves upon the state of the art of fully-succinct argument systems established by ``RoK, Paper, SISsors'' (RPS) [ASIACRYPT'24] and hinges on two key innovations, presented as reductions of knowledge (RoKs):
- Structured random projections: We introduce a new technique for structured random projections that allows us to reduce the witness dimension while approximately preserving its $\ell_2$ norm and maintaining the desired tensor structure. In order to maintain succinct communication and verification, the projected image is further committed and adjoined to the original relation. This procedure is recursively repeated until the dimension of the intermediate witness becomes $\mathsf{poly}(\lambda)$, i.e., independent of the original witness length.
- Unstructured random projection: When the witness is sufficiently small, we let the unstructured projection (over the coefficients $\mathbb{Z}_q$) be sent in the clear, as in LaBRADOR [CRYPTO'23]. We observe, however, that the strategy from prior works of immediately lifting the projection claim to $\mathcal{R}_q$, and into our relation, would impose a quadratic communication cost. Instead, we gradually batch-and-lift the projection along the tower of intermediate ring extensions. This reduces the communication cost to $\tilde{O}(\lambda)$ while maintaining a succinct verification time.
These two techniques, combined with existing RoKs from RPS, yield a succinct argument system with communication complexity $\tilde{O}(\lambda)$ and succinct verification for structured linear relations.
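The first reduction rests on the fact that a random projection to $\mathsf{poly}(\lambda)$ dimensions approximately preserves the $\ell_2$ norm of a long witness. Below is a quick numerical sanity check of this phenomenon in its plain, unstructured Johnson-Lindenstrauss form, with illustrative dimensions; the paper's projections are additionally structured to keep the tensor shape, which is not modeled here:

```python
# Numerical check: a scaled Gaussian projection approximately preserves
# the l2 norm of a long, short-coefficient (lattice-style) witness.
import numpy as np

rng = np.random.default_rng(1)
n, m = 1 << 16, 256                              # witness dim -> projected dim
w = rng.integers(-3, 4, size=n).astype(float)    # short witness vector

P = rng.standard_normal((m, n)) / np.sqrt(m)     # E[||P w||^2] = ||w||^2
ratio = np.linalg.norm(P @ w) / np.linalg.norm(w)
print(f"norm ratio after projection: {ratio:.3f}")   # close to 1 w.h.p.
```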
Efficient SPA Countermeasures using Redundant Number Representation with Application to ML-KEM
Simple power analysis (SPA) attacks and their extensions, profiled and soft-analytical side-channel attacks (SASCA), represent a significant threat to the security of cryptographic devices and remain among the most powerful classes of passive side-channel attacks. In this work, we analyze how numeric representations of secrets can affect the amount of exploitable information leakage available to the adversary. We present an analysis of how mutual information changes as a result of the integer ring size relative to the machine word-size. Furthermore, we study the Redundant Number Representation (RNR) countermeasure and show that its application to ML-KEM can resist the most powerful SASCA attacks and provides a low-cost alternative to shuffling. We evaluate the performance of RNR-ML-KEM with both simulated and practical SASCA experiments on the ARM Cortex-M4 based on a worst-case attack methodology. We show that RNR-ML-KEM sufficiently renders these attacks ineffective. Finally, we evaluate the performance of the RNR-ML-KEM NTT and INTT and show that SPA security can be achieved with a 62.8% overhead for the NTT and 0% overhead for the INTT relative to the ARM Cortex-M4 reference implementation used.
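As a toy illustration of the RNR idea (not the paper's implementation): a value $x \bmod q$ is stored as $x + rq$ for a fresh random $r$, so the word the device manipulates, and hence its Hamming-weight leakage, varies between executions while every result stays correct modulo $q$. The parameters below are ML-KEM-flavored, but the NTT itself is not shown:

```python
# Toy Redundant Number Representation (RNR) sketch: randomize the stored
# word without changing the value mod Q. Illustrative only.
import secrets

Q = 3329                      # ML-KEM modulus
WORD = 1 << 32                # 32-bit machine word

def redundant(x: int) -> int:
    r = secrets.randbelow(WORD // Q)
    return x + r * Q          # fits in one word since x < Q and r*Q < WORD - Q

def hamming(v: int) -> int:
    return bin(v).count("1")

x = 1234
for _ in range(3):
    rx = redundant(x)
    # Arithmetic on redundant values stays correct mod Q (a real
    # implementation must keep intermediate values below the word size):
    assert (rx * 5 + 7) % Q == (x * 5 + 7) % Q
    print(f"stored word {rx:#010x}, Hamming weight {hamming(rx)}")
```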
A search to distinguish reduction for the isomorphism problem on direct sum lattices
At Eurocrypt 2003, Szydlo presented a search to distinguish reduction for the Lattice Isomorphism Problem (LIP) on the integer lattice $\mathbb{Z}^n$. Here the search problem asks to find an isometry between $\mathbb{Z}^n$ and an isomorphic lattice, while the distinguish variant asks to distinguish between a list of auxiliary lattices related to $\mathbb{Z}^n$.
In this work we generalize Szydlo's search to distinguish reduction in two ways. Firstly, we generalize the reduction to any lattice isomorphic to $\Gamma^n$, where $\Gamma$ is a fixed base lattice. Secondly, we allow $\Gamma$ to be a module lattice over any number field. Assuming the base lattice $\Gamma$ and the number field $K$ are fixed, our reduction is polynomial in $n$.
As a special case we consider the module lattice $\mathcal{O}_K^2$ used in the module-LIP based signature scheme HAWK, and we show that one can solve the search problem, leading to a full key recovery, with less than $2d^2$ distinguishing calls on two lattices each, where $d$ is the degree of the power-of-two cyclotomic number field and $\mathcal{O}_K$ its ring of integers.
Brief Comments on Rijndael-256 and the Standard RISC-V Cryptography Extensions
We evaluate the implementation aspects of Rijndael-256 using the ratified RISC-V Vector Cryptography extension Zvkn. A positive finding is that Rijndael-256 can be implemented in constant time with the existing RISC-V ISA, as the critical AES and fixed crossbar permutation instructions are in the DIEL (data-independent execution latency) set. Furthermore, simple tricks can be used to expand the functionality of the key expansion instructions to cover the additional round constants required. However, due to the required additional byte shuffle in each round, Rijndael-256 will be significantly slower than AES-256 in terms of throughput. Without additional ISA modifications, the instruction count is increased by the required switching of the EEW (``effective element width'') parameter in each round between 8 bits (byte shuffle) and 32 bits (AES round instructions). Instruction counts for 1-kilobyte encryption and decryption with Rijndael-256 are a factor of $2.66\times$ higher than with AES-256. The precise amount of throughput slowdown depends on the microarchitectural details of a particular RISC-V ISA hardware instantiation, but it may be substantial with some high-performance vector AES architectures due to the breakdown of AES pipelining and the relative slowness of crossbar permutation instructions.
Revisiting Module Lattice-based Homomorphic Encryption and Application to Secure-MPC
Homomorphic encryption (HE) schemes have gained significant popularity in modern privacy-preserving applications across various domains. While research on HE constructions based on learning with errors (LWE) and ring-LWE has received major attention from both cryptographers and software-hardware designers alike, their module-LWE-based counterparts have remained comparatively under-explored in the literature. A recent work provides a module-LWE-based instantiation (MLWE-HE) of the Cheon-Kim-Kim-Song (CKKS) scheme and showcases several of its advantages, such as parameter flexibility and improved parallelism. However, a primary limitation of this construction is the quadratic growth in the size of the relinearization keys. Our contribution is two-pronged: first, we present a new relinearization key-generation technique that addresses the issue of quadratic key-size expansion by reducing it to linear growth. Second, we extend the application of MLWE-HE to a multi-group homomorphic encryption (MGHE) framework, thereby generalizing the favorable properties of the single-keyed HE to a multi-keyed setting and investigating additional flexibility attributes of the MGHE framework.
Cymric: Short-tailed but Mighty
Authenticated encryption (AE) is a fundamental tool in today's secure communication. Numerous designs have been proposed, including well-known standards such as GCM. While their performance for long inputs is excellent, their performance for short inputs is often problematic due to high computational overhead, leaving a gap with the real needs of IoT-like protocols, where packets are often very short. Dedicated short-input AEs are very scarce: the classical Encode-then-encipher (Bellare and Rogaway, Asiacrypt 2000) and Manx (Adomnicăi et al., CT-RSA 2023) use at most two block cipher calls. They offer superior performance for (very) short inputs; however, their security is capped at $n/2$ bits, where $n$ is the block size of the underlying block cipher.
This paper proposes a new family of short-input AEs, dubbed Cymric, which ensures beyond-birthday-bound (BBB) security. It supports a wider range of input space than EtE and Manx with the help of one additional block cipher call (thus three calls in total). In terms of the number of block cipher calls, Cymric is the minimal known construction of a BBB-secure AE, and we prove that it is indeed minimal by presenting an impossibility result for BBB-secure AE with two calls. Finally, we present a comprehensive benchmark on microcontrollers demonstrating its performance advantage over existing schemes.
Ring-LWR based Commitments and ZK-PoKs with Application to Verifiable Quantum-Safe Searchable Symmetric Encryption
Prior research on ensuring trust in delegated computation through lattice-based zero-knowledge proofs mostly relies on the Learning-With-Errors (LWE) assumption. In this work, we propose a zero-knowledge proof of knowledge under the Ring Learning with Rounding (RLWR) assumption for an interesting and useful class of statements: linear relations on polynomials. We begin by proposing, to the best of our knowledge, the first efficient commitment scheme in the literature based on the hardness of the RLWR assumption. We establish two properties of RLWR that aid in the construction of our commitments: (i) closure under addition with double rounding, and (ii) closure under multiplication with a short polynomial. Building upon our RLWR commitment scheme, we then design an RLWR-based $\Sigma_2$ protocol for proving knowledge of a single committed message under linear relations with public polynomials.
As a use case of our proposed $\Sigma_2$ protocol, we showcase a construction of a quantum-safe Searchable Symmetric Encryption (SSE) scheme by combining a prior LWR-based SSE scheme (EuroS&P 2023) with our $\Sigma_2$ protocol. Concretely, using our $\Sigma_2$ protocol for linear relations, we prove the correctness of an encrypted search result in a zero-knowledge manner. We implement our verifiable SSE framework and show that the overhead of an extra verification round is negligible ($0.0023$ seconds) and retains the asymptotic query execution time complexity of the original SSE scheme.
Our work establishes results on zero-knowledge proof systems that may be of independent interest. By shifting the setting from RLWE to RLWR, we gain (i) significant efficiency improvements in communication complexity by a factor of $O(M)$ (since some prior works on RLWE require rejection sampling with repetition factor $M$), as well as (ii) a very short proof size ($8.4$ KB) and tighter parameters (since RLWR does not explicitly manipulate error polynomials as RLWE does).
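For intuition on the rounding that replaces explicit error terms, here is a minimal integer-LWR toy in Python; the scheme itself works over polynomial rings, and the moduli and dimension below are illustrative only, not the paper's parameters:

    # Toy Learning-With-Rounding map: deterministic rounding from Z_q to Z_p
    # replaces the explicit error term of LWE.
    import random

    q, p, n = 2**12, 2**8, 8   # toy moduli and dimension

    def round_qp(x):
        """Deterministically scale x from Z_q down to Z_p (nearest rounding)."""
        return ((x * p + q // 2) // q) % p

    s = [random.randrange(q) for _ in range(n)]
    a = [random.randrange(q) for _ in range(n)]
    b = round_qp(sum(ai * si for ai, si in zip(a, s)) % q)
    # Unlike LWE, no error term is sampled or manipulated: recomputing the
    # sample from (a, s) is fully deterministic.
    assert b == round_qp(sum(ai * si for ai, si in zip(a, s)) % q)
    print(b)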
Highly Scalable Searchable Symmetric Encryption for Boolean Queries from NTRU Lattice Trapdoors
Searchable symmetric encryption (SSE) enables query execution directly over symmetrically encrypted databases. To support realistic query executions over encrypted document collections, one needs SSE schemes capable of supporting both conjunctive and disjunctive keyword queries. Unfortunately, existing solutions are either practically inefficient (incurring large storage overheads and/or high query processing latency) or quantum-unsafe.
In this paper, we present the first practically efficient SSE scheme with fast conjunctive and disjunctive keyword searches, compact storage, and security based on the (plausible) quantum-hardness of well-studied lattice-based assumptions. We present NTRU-OQXT, a highly compact NTRU lattice-based conjunctive SSE scheme that outperforms all existing conjunctive SSE schemes in terms of search latency. We then present an extension of NTRU-OQXT that additionally supports disjunctive queries, which we call NTRU-TWINSSE. Technically, both schemes rely on a novel oblivious search protocol based on highly optimized Fast-Fourier trapdoor sampling algorithms over NTRU lattices. While such techniques have been used to design other cryptographic primitives (such as digital signatures), they have not been applied before in the context of SSE.
We present prototype implementations of both schemes, and experimentally validate their practical performance over a large real-world dataset. Our experiments demonstrate that NTRU-OQXT achieves 2× faster conjunctive keyword searches compared to all other conjunctive SSE schemes (including the best quantum-unsafe conjunctive SSE schemes), and substantially outperforms many of these schemes in terms of storage requirements. These efficiency benefits also translate to NTRU-TWINSSE, which is practically competitive with the best quantum-unsafe SSE schemes capable of supporting both conjunctive and disjunctive queries.
Hobbit: Space-Efficient zkSNARK with Optimal Prover Time
Zero-knowledge succinct non-interactive arguments (zkSNARKs) are notorious for their large prover space requirements, which all but prohibit their use on very large instances. Space-efficient zkSNARKs aim to address this by limiting the prover's space usage without critical sacrifices to its runtime. In this work, we introduce Hobbit, the only existing space-efficient zkSNARK that achieves optimal prover time $O(|C|)$ for an arithmetic circuit $C$. At the same time, Hobbit is the first transparent and plausibly post-quantum secure construction of its kind. Moreover, our experimental evaluation shows that Hobbit outperforms all prior general-purpose space-efficient zkSNARKs in the literature across four different applications (arbitrary arithmetic circuits, inference of pruned Multi-Layer Perceptrons, batch AES128 evaluation, and select-and-aggregate SQL queries) by $8\times$-$56\times$ in terms of prover time while requiring up to $23\times$ less total space.
At a technical level, we introduce two new building blocks that may be of independent interest: (i) the first sumcheck protocol for products of polynomials with optimal prover time in the streaming setting, and (ii) a novel multilinear plausibly post-quantum polynomial commitment that outperforms all prior works in prover time (and can be tuned to work in a space-efficient manner). We build Hobbit by combining the above with a modified version of HyperPlonk, providing an explicit routine for streaming access to the circuit evaluation.
Tightly Secure Public-Key Encryption with Equality Test Supporting Flexible Authorization in the Standard Model
We introduce a novel Public Key Encryption with Equality Test supporting Flexible Authorization scheme offering User-Level, Ciphertext-Level, and User-Specific-Ciphertext-Level authorizations. Notably, our construction achieves security under the Decisional Diffie-Hellman assumption with a tight reduction, whereas existing works are either not tightly secure or rely heavily on random oracles. By relying solely on the standard DDH assumption, our scheme admits a practical implementation without specialized cryptographic structures.
All Proof of Work But No Proof of Play
Speedrunning is a competition that emerged from communities of early video games such as Doom (1993). Speedrunners try to finish a game in minimal time. Provably verifying the authenticity of submitted speedruns is an open problem. Traditionally, best-effort speedrun verification is conducted by on-site human observers, forensic audio analysis, or a rigorous mathematical analysis of the game mechanics. Such methods are tedious, fallible, and, perhaps worst of all, not cryptographic. Motivated by naivety and the Dunning-Kruger effect, we attempt to build a system that cryptographically proves the authenticity of speedruns. This paper describes our attempted solutions and ways to circumvent them. Through a narration of our failures, we attempt to demonstrate the difficulty of authenticating live and interactive human input in untrusted environments, as well as the limits of signature schemes, game integrity, and provable play.
Revisiting SIOT protocol with new security assumptions
Oblivious Transfer (OT) is one of the most important building blocks in cryptography and is useful for building secure protocols. The advent of quantum computing has boosted research and development of cryptographic protocols resistant to quantum computer processing. In 2018, the SIOT (Supersingular Isogeny Oblivious Transfer) protocol was presented as the first post-quantum OT scheme based on supersingular elliptic curve isogenies, combining an OT scheme with the cryptographic primitives of the SIDH (Supersingular Isogeny Diffie-Hellman) protocol. The SIOT protocol was built in its simplest configuration, against semi-honest adversaries. However, it was subjected to scrutiny that revealed the need for new security proofs. Almost in parallel, new and efficient cryptanalytic attacks on the SIDH protocol emerged, which compromised the security structure of SIOT. The new security proofs therefore had to relate parameters of SIOT's OT functionality to security assumptions on computational isogeny problems. We then analyse the security countermeasures of the M-SIDH (Masked Supersingular Isogeny Diffie-Hellman) protocol and implement them in the SIOT protocol. The result is an OT protocol based on isogenies of elliptic curves with resistance to quantum attacks.
May the Force $\textit{not}$ Be with you: Brute-Force Resistant Biometric Authentication and Key Reconstruction
The use of biometric-based security protocols is on the steep rise. As biometrics become more popular, we witness more attacks. For example, the recent BrutePrint/InfinityGauntlet attacks showed how to brute-force fingerprints stored on an Android phone in about 40 minutes. The attacks are possible because biometrics, like passwords, do not have high entropy. But unlike passwords, brute-force attacks are much more damaging for biometrics, because one cannot easily change biometrics in case of compromise. In this work, we propose a novel provably secure Brute-Force Resistant Biometrics (BFRB) protocol for biometric-based authentication and key reconstruction that protects against brute-force attacks even when the server storing biometric-related data is compromised. Our protocol utilizes a verifiable partially oblivious pseudorandom function, an authenticated encryption scheme, a pseudorandom function, and a hash function. We formally define security for a BFRB protocol and reduce the security of our protocol to the security of the building blocks. We implement the protocol and study its performance for the ND-0405 iris dataset.
Arithmetic PCA for Encrypted Data
Reducing the size of large dimensional data is a critical task in machine learning (ML) that often involves using principal component analysis (PCA). In privacy-preserving ML, data confidentiality is of utmost importance, and reducing data size is a crucial way to cut overall costs.
This work focuses on minimizing the number of normalization steps in the PCA algorithm, a costly procedure in encrypted PCA. By modifying Krasulina's algorithm, we eliminate all non-polynomial operations except for a single, delayed normalization at the end.
Our PCA algorithm demonstrated similar performance to conventional PCA algorithms in face recognition applications. We also implemented it using the CKKS (Cheon-Kim-Kim-Song) homomorphic encryption scheme and obtained the first 6 principal components of a 128$\times$128 real matrix in 7.85 minutes using 8 threads.
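As a rough plaintext illustration of the modification's flavor (a Python toy, not the paper's encrypted algorithm), the division by $\|w\|^2$ in the classical Krasulina update is the non-polynomial step that is expensive under HE; the paper's variant defers such normalization to a single final step:

    # Classical Krasulina iteration for the top principal component; the inline
    # division is the operation the paper's HE-friendly variant eliminates.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((1000, 16))
    X -= X.mean(axis=0)

    def krasulina(X, steps=20000, gamma=0.01):
        w = rng.standard_normal(X.shape[1])
        for _ in range(steps):
            x = X[rng.integers(len(X))]
            proj = x @ w
            # The division below is the costly non-polynomial operation under HE.
            w += gamma * (proj * x - (proj**2 / (w @ w)) * w)
        return w / np.linalg.norm(w)   # one final (delayed) normalization

    w = krasulina(X)
    top = np.linalg.eigh(X.T @ X)[1][:, -1]   # reference top eigenvector
    print(abs(w @ top))                        # close to 1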
End-to-End Encrypted Git Services
Git services, such as GitHub, have been widely used to manage projects and enable collaborations among multiple entities. Just as in messaging and cloud storage, where end-to-end security has been gaining increased attention, such a level of security is also demanded for Git services. Content in the repositories (and the data/code supply chain facilitated by Git services) can be highly valuable, whereas the threat of system breaches has become routine nowadays. However, existing studies of Git security to date (mostly open-source projects) suffer in two ways: they provide only very weak security, and they incur a large overhead.
In this paper, we initiate the needed study of efficient end-to-end encrypted Git services. Specifically, we formally define the syntax and critical security properties, and then propose two constructions that provably meet those properties. Moreover, our constructions have the important property of platform-compatibility: they are compatible with current Git servers and preserve all basic Git operations, and thus can be directly tested and deployed on top of existing platforms. Furthermore, the overhead we achieve is proportional only to the actual difference caused by each edit, instead of the whole file (or even the whole repository) as in existing works. We implemented both constructions and tested them directly on several public GitHub repositories. Our evaluations show (1) the effectiveness of platform-compatibility, and (2) the significant efficiency improvement we obtain (while provably providing much stronger security than prior ad-hoc treatments).
Stealth and Beyond: Attribute-Driven Accountability in Bitcoin Transactions
Bitcoin enables decentralized, pseudonymous transactions, but balancing privacy with accountability remains a challenge. This paper introduces a novel dual accountability mechanism that enforces both sender and recipient compliance in Bitcoin transactions. Senders are restricted to spending Unspent Transaction Outputs (UTXOs) that meet specific criteria, while recipients must satisfy legal and ethical requirements before receiving funds. We enhance stealth addresses by integrating compliance attributes, preserving privacy while ensuring policy adherence. Our solution introduces a new cryptographic primitive, Identity-Based Matchmaking Signatures (IB-MSS), which supports streamlined auditing. Our approach is fully compatible with existing Bitcoin infrastructure and does not require changes to the core protocol, preserving both privacy and decentralization while enabling transaction auditing and compliance.
Copy-Protection from UPO, Revisited
Quantum copy-protection is a foundational notion in quantum cryptography that leverages the governing principles of quantum mechanics to tackle the problem of software anti-piracy. Despite progress in recent years, the class of functionalities that can be copy-protected is still not well understood.
Two recent works, by [Coladangelo and Gunn, STOC 2024] and [Ananth and Behera, CRYPTO 2024], showed that puncturable functionalities can be copy-protected. Both works have significant caveats with regard to the underlying cryptographic assumptions and additionally restrict the output length of the functionalities to be copy-protected. In this work, we make progress towards simultaneously addressing both caveats. We show the following:
- Revisiting Unclonable Puncturable Obfuscation (UPO): We revisit the notion of UPO introduced by [Ananth and Behera, CRYPTO 2024]. We present a new approach to construct UPO and a variant of UPO, called independent-secure UPO. Unlike UPO, we show how to base the latter notion on well-studied assumptions.
- Copy-Protection from Independent-secure UPO: Assuming independent-secure UPO, we show that any m-bit, for m ≥ 2, puncturable functionality can be copy-protected.
- Copy-Protection from UPO: Assuming UPO, we show that any 1-bit puncturable functionality can be copy-protected. The security of copy-protection holds against identical challenge distributions.
New Upper and Lower Bounds for Perfectly Secure MPC
We consider perfectly secure MPC for $n$ players and $t$ malicious corruptions. We ask whether requiring only security with abort (rather than guaranteed output delivery, GOD) can help to achieve protocols with better resilience, communication complexity, or round complexity. We show that for resilience and communication complexity, abort security does not help: one still needs $3t < n$ for a synchronous network and $4t < n$ in the asynchronous case, and, in both cases, a communication overhead of $O(n)$ bits per gate is necessary.
Given that $O(n)$ overhead is inevitable, one can ask whether this overhead can be pushed to the preprocessing phase, so that the online phase runs with $O(1)$ overhead. This was recently achieved in the synchronous setting, in fact with the GOD guarantee. We show the same result in the asynchronous setting. This was previously open, since the main standard approach to getting constant overhead in a synchronous online phase fails in the asynchronous setting. In particular, this shows that we do not need to settle for abort security to get an asynchronous perfectly secure protocol with overheads $O(n)$ and $O(1)$.
Lastly, in the synchronous setting, we show that perfectly secure MPC with abort requires only 2 rounds, in contrast to protocols with GOD, which require 4 rounds.
Generic Construction of Threshold Ring Signatures and Lattice-based Instantiations
A $t$-out-of-$n$ threshold ring signature allows $t$ parties to jointly sign a message on behalf of $n$ parties without revealing the identities of the signers. In this paper, we introduce a new generic construction for threshold ring signatures, called GCTRS, which can be built on top of a selection of identification schemes, commitment schemes, and a new primitive called a $t$-out-of-$n$ proof protocol, which is a special type of zero-knowledge proof. In general, our design enables a group of $t$ signers to first generate an aggregated signature by interacting with each other; they are then able to compute a $t$-out-of-$n$ proof to convince the verifier that the aggregated signature was indeed produced by $t$ individuals from a particular set. The signature is succinct, as it contains only one aggregated signature and one proof. We define all the properties required of the building blocks to capture the security of GCTRS and provide a detailed security proof. Furthermore, we propose two lattice-based instantiations of GCTRS, named LTRS and CTRS, respectively. Notably, the CTRS scheme is the first scheme with a signature size logarithmic in the ring size. Additionally, during the instantiation process, we construct two $t$-out-of-$n$ proof protocols, which may be of independent interest.
Breaking The Authenticated Encryption scheme HiAE
HiAE is the fastest AEAD solution on ARM chips to date, utilizing AES round functions while also setting a new performance benchmark on the latest x86 processors. In this paper, we employ algebraic techniques to investigate the security of HiAE. Our findings reveal that HiAE is vulnerable. Firstly, we employ the meet-in-the-middle technique and guess-and-determine technique to recover the state and derive a key-related equation resulting from two layers of AES round functions. Secondly, by adopting an algebraic approach to study the properties of the round function, we decompose the equation into byte-level equations for divide-and-conquer. Finally, we utilize the guess-and-determine technique to recover the key. Collectively, these techniques enable us to present the first full key-recovery attack on HiAE. Our attack achieves a data complexity of $2^{130}$ and a time complexity of approximately $2^{209}$, leveraging both encryption and decryption oracles with a success probability of 1. In a single-key and nonce-respecting scenario, the attack fully recovers the 256-bit key, breaking the claimed 256-bit security against key-recovery attacks.
t-Probing (In-)Security - Pitfalls on Noise Assumptions
The ongoing transition to post-quantum cryptography has led to a surge of research in side-channel countermeasures tailored to these schemes. A prominent method to prove security in the context of side-channel analysis is the utilization of the well-established t-probing model. However, recent studies by Hermelink et al. at CCS 2024 demonstrate a simple and practical attack on a provably secure implementation of the Fujisaki-Okamoto transform that raises concerns regarding the practical security of t-probing secure schemes.
In this paper, we present an unsupervised single-trace side-channel attack on a tenth-order masked implementation of fixed-weight polynomial sampling, which has also been proven secure in the t-probing model. Both attacks reveal a mismatch between the correct, well-understood theory of the t-probing model and its practical application, since the security proofs are valid, yet the attacks still succeed at high noise levels. Therefore, we take a closer look at the underlying causes and the assumptions that are made when transferring t-probing security to practice. In particular, we investigate the amount of noise required for this transfer. We find that, depending on the design decisions made, this can be very high and difficult to achieve.
Consequently, we examine the factors that impact the required amount of noise and that should be considered for practically secure implementations. In particular, non-uniformly distributed shares (a setting that is increasingly encountered in post-quantum cryptographic algorithms) can lead to an increased noise requirement, and thus reduce the security level of the masking scheme. Our analysis then allows us to provide practical guidelines for implementation designers, thereby facilitating the development of practically secure designs.
Securely Computing One-Sided Matching Markets
Top trading cycles (TTC) is a famous algorithm for trading indivisible goods between a set of agents such that all agents are as happy as possible about the outcome. In this paper, we present a protocol for executing TTC in a privacy-preserving way; to the best of our knowledge, it is the first of its kind. As a technical contribution of independent interest, we suggest a new algorithm for determining all nodes in a functional graph that lie on a cycle. The algorithm is particularly well suited for secure implementation in that it requires no branching and no random memory access. Finally, we report on a prototype implementation of the protocol based on somewhat homomorphic encryption.
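The combinatorial core of such a cycle-detection subroutine can be sketched in the clear (a Python toy illustrating one standard way to do it; the paper's algorithm is designed for secure computation, is branch-free, and avoids the random memory access this sketch uses): composing $f$ with itself until the exponent reaches at least $n$ absorbs every tail node, so the image of the composed map is exactly the set of cycle nodes.

    # Find all cycle nodes of a functional graph f: V -> V by repeated doubling.
    import numpy as np

    f = np.array([1, 2, 0, 2, 3, 4])   # toy graph on 6 nodes; 0->1->2->0 is a cycle

    g = f.copy()
    for _ in range(int(np.ceil(np.log2(len(f))))):
        g = g[g]                       # doubling: g becomes g composed with itself

    on_cycle = np.zeros(len(f), dtype=bool)
    on_cycle[g] = True                 # image of f^m (m >= n) = the cycle nodes
    print(np.flatnonzero(on_cycle))    # -> [0 1 2]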
BitBatSPIR: Efficient Batch Symmetric Private Information Retrieval from PSI
Private Information Retrieval (PIR) allows a client to retrieve an entry from a database held by a server without leaking which entry is being requested. Symmetric PIR (SPIR) is a stronger variant of PIR with database privacy so that the client knows nothing about the database other than the retrieved entry.
This work studies SPIR in the batch setting (BatchSPIR), where the client wants to retrieve multiple entries. In particular, we focus on the case of bit entries, which has important real-world applications. We set up the connection between bit-entry information retrieval and set operations, and propose a black-box construction of BatchSPIR from Private Set Intersection (PSI). By applying an efficient PSI protocol with asymmetric set sizes, we obtain our BatchSPIR protocol, named $\mathsf{BitBatSPIR}$. We also introduce several optimizations for the underlying PSI. These optimizations improve the efficiency of our concrete BatchSPIR construction as well as the PSI protocol itself.
We implement $\mathsf{BitBatSPIR}$ and compare the performance with the state-of-the-art PIR protocol in the batch setting. Our experimental results show that $\mathsf{BitBatSPIR}$ not only achieves a stronger security guarantee (symmetric privacy) but also has a better performance for large databases, especially in the Wide Area Network (WAN) setting.
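The stated connection between bit-entry retrieval and set operations is easy to see in the clear. The following insecure plaintext toy (Python, not the protocol itself) shows the reduction: retrieving database bits at a batch of positions amounts to intersecting the query set with the set of positions holding a 1, which is what a PSI protocol with asymmetric set sizes computes securely.

    # Insecure illustration of the BatchSPIR <-> set-intersection connection.
    db = [0, 1, 1, 0, 1, 0, 0, 1]

    S = {i for i, bit in enumerate(db) if bit == 1}   # server's set
    Q = {1, 3, 7}                                     # client's batch of queries

    hits = Q & S                                      # what PSI computes securely
    answers = {i: int(i in hits) for i in Q}
    print(answers)                                    # {1: 1, 3: 0, 7: 1}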
Extending Groth16 for Disjunctive Statements
The two most common ways to design non-interactive zero-knowledge (NIZK) proofs are based on Sigma ($\Sigma$)-protocols (an efficient way to prove algebraic statements) and zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARK) protocols (an efficient way to prove arithmetic statements). However, in cryptocurrency applications such as privacy-preserving credentials, privacy-preserving audits, and blockchain-based voting systems, zk-SNARKs for general statements are usually implemented together with encryption, commitment, or other algebraic cryptographic schemes. Moreover, zk-SNARKs for many different arithmetic statements may also need to be implemented together. A typical solution is to extend the zk-SNARK circuit to include the code for the algebraic part. However, the complex cryptographic operations in the algebraic algorithms significantly increase the circuit size, which leads to impractically large proving time and CRS size. Thus, we need a sufficiently flexible proof system for composite statements that include both algebraic and arithmetic components. Unfortunately, while the conjunction of zk-SNARKs is relatively natural and numerous effective solutions are available (e.g., by utilizing the commit-and-prove technique), the disjunction of zk-SNARKs is rarely discussed in detail.
In this paper, we focus on disjunctive statements for Groth16 and propose a Groth16 variant---CompGroth16---which provides a framework for Groth16 to prove disjunctive statements that consist of a mix of algebraic and arithmetic components. Specifically, we can directly combine CompGroth16 with a $\Sigma$-protocol, or even CompGroth16 with CompGroth16, just like the logical composition of $\Sigma$-protocols. From this we gain many good properties, such as broader expressiveness, better prover efficiency, and a shorter CRS. In addition, for the combination of CompGroth16 and a $\Sigma$-protocol, we present two representative application scenarios to demonstrate the practicality of our construction.
HypSCA: A Hyperbolic Embedding Method for Enhanced Side-channel Attack
Deep learning-based side-channel attack (DLSCA) has become the dominant paradigm for extracting sensitive information from hardware implementations due to its ability to learn discriminative features directly from raw side-channel traces. A common design choice in DLSCA involves embedding traces in Euclidean space, where the underlying geometry supports conventional objectives such as classification or contrastive learning. However, Euclidean space is fundamentally limited in capturing the multi-level hierarchical structure of side-channel traces, which often exhibit both coarse-grained clustering patterns (e.g., Hamming weight similarities) and fine-grained distinctions (e.g., instruction-level variations). These limitations adversely affect the discriminability and generalization of learned representations, particularly across diverse datasets and leakage models. In this work, we propose HypSCA, a dual-space representation learning method that embeds traces in hyperbolic space to exploit its natural ability to model hierarchical relationships through exponential volume growth. In contrast to existing approaches, HypSCA jointly combines hyperbolic structure modeling with local discriminative learning in Euclidean space, enabling the preservation of global hierarchies while enhancing fine-grained feature separation. Extensive experiments on multiple public datasets demonstrate that HypSCA achieves up to 51.6% improvement in attack performance over state-of-the-art DLSCA methods, consistently enhancing generalization across diverse datasets and leakage models.
SubLogarithmic Linear Time SNARKs from Compressed Sum-Check
We leverage recently proposed multilinear polynomial commitment schemes with linear-time prover and constant proof size to reduce the communication complexity of the classical sum-check protocol for multivariate polynomials. Specifically, we consider degree-$d$ multivariate polynomials in $\mu$ variables that can be decomposed into $\ell$ multilinear polynomials. We exhibit a new multivariate sum-check protocol with $O(\ell + d\log \log n)$ communication for $n = 2^\mu$. Our protocol retains the $O(n)$ prover cost (where the precise constant depends on $\ell$, $d$, and the multivariate form). We thus improve on the $O(\log n)$ communication inherent in all applications of the existing multivariate sum-check protocol.
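For reference, the classical (uncompressed) sum-check loop that the above improves upon looks as follows in a plaintext Python toy, for a multilinear polynomial given by its table of values on $\{0,1\}^\mu$; the one univariate message per round is exactly the $O(\log n)$ communication being compressed:

    # Toy sum-check for a multilinear polynomial g over F_p, given as its table
    # of values on the Boolean cube. Each round ships a degree-1 univariate
    # polynomial, represented by its evaluations s0 = s(0) and s1 = s(1).
    p = 2**31 - 1   # a Mersenne prime field

    def sumcheck(table, rs):
        """Run sum-check with verifier challenges rs; True iff all checks pass."""
        claim = sum(table) % p
        for r in rs:
            half = len(table) // 2
            lo, hi = table[:half], table[half:]   # g with current variable = 0 / 1
            s0, s1 = sum(lo) % p, sum(hi) % p     # prover's round message
            if (s0 + s1) % p != claim:
                return False
            # fold: fix the variable to the challenge r, update the claim to s(r)
            table = [(a + r * (b - a)) % p for a, b in zip(lo, hi)]
            claim = (s0 + r * (s1 - s0)) % p
        return table[0] % p == claim              # final check: one evaluation of g

    table = [3, 1, 4, 1, 5, 9, 2, 6]              # g on {0,1}^3
    print(sumcheck(table, rs=[7, 11, 13]))        # -> True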
Multivariate sum-check is a key ingredient in the design of several prover-efficient SNARKs, such as HyperPlonk (EUROCRYPT 2023), Spartan (CRYPTO 2020), Hyrax (IEEE S&P 2018), Libra (CRYPTO 2019), Gemini (EUROCRYPT 2022), and Virgo (S&P 2020). All of these SNARKs incur at least $O(\log n)$ proof size, with the smallest concrete proof size being $\approx 7$ KB for circuits of size $2^{25}$. Our improved multivariate sum-check protocol improves the proof size of all of the above SNARKs while retaining the $O(n)$ prover cost. In particular, plugging our sum-check protocol into the HyperPlonk multilinear PIOP yields $\mathsf{HybridPlonk}$ -- the first SNARK that simultaneously achieves an $O(n)$ prover, sublogarithmic proof size of $O(\log\log n)$, and an $O(\log n)$ verifier. Concretely, the proof size of $\mathsf{HybridPlonk}$ is about 2 KB for circuit sizes up to $2^{30}$. We note that SNARKs with smaller proof sizes than $\mathsf{HybridPlonk}$ are based on univariate polynomials and are not prover-efficient, as they inherently incur $O(n\log n)$ prover cost due to polynomial multiplications. Moreover, $\mathsf{HybridPlonk}$ avoids proof recursion techniques and non-black-box usage of cryptographic primitives.
We believe that our improved multivariate sum-check protocol is of independent interest, and could have applications beyond SNARKs.
How to Copy-Protect All Puncturable Functionalities Without Conjectures: A Unified Solution to Quantum Protection
Quantum copy-protection (Aaronson, CCC'09) is the problem of encoding a functionality/key into a quantum state to achieve an anti-piracy security notion that guarantees that the key cannot be split into two keys that both still work. Most works so far have focused on constructing copy-protection for specific functionalities. The only exceptions are the works of Aaronson, Liu, Liu, Zhandry, Zhang (CRYPTO'21) and Ananth and Behera (CRYPTO'24). The former constructs copy-protection for all functionalities in the classical oracle model, and the latter constructs copy-protection for all circuits that can be punctured at a uniformly random point with negligible security, assuming a new unproven conjecture about simultaneous extraction from entangled quantum adversaries, on top of assuming subexponentially-secure indistinguishability obfuscation (iO) and hardness of Learning with Errors (LWE).
In this work, we show that the construction of Aaronson et al. (CRYPTO'21), when the oracles are instantiated with iO, satisfies copy-protection security in the plain model for all cryptographically puncturable functionalities (instead of only puncturable circuits) with arbitrary success threshold (e.g., we get CPA-style security rather than unpredictability for encryption schemes), without any unproven conjectures, assuming only subexponentially secure iO and one-way functions (we do not assume LWE). Thus, our work resolves the five-year-old open question of Aaronson et al., and further, our work encompasses, supersedes, and significantly improves upon all existing plain-model copy-protection results.
Since puncturability has a long history of being studied in cryptography, our result immediately allows us to obtain copy-protection schemes for a large set of advanced functionalities for which no previous copy-protection scheme existed. Further, even for any functionality F that has not already been considered, through our result, constructing copy-protection for F essentially becomes a classical cryptographer's problem.
Going further, we show that our scheme also satisfies secure leasing (Ananth and La Placa, EUROCRYPT'21), unbounded/LOCC leakage-resilience and intrusion-detection security (Cakan, Goyal, Liu-Zhang, Ribeiro, TCC'24), giving a unified solution to the problem of quantum protection.
Limits on the Power of Private Constrained PRFs
Private constrained PRFs (PCPRFs) are constrained PRFs where the constrained key hides information about the predicate circuit. Although there are many constructions and applications of PCPRFs, their relationship to basic cryptographic primitives, such as one-way functions and public-key encryption, has been unclear. For example, we do not know whether one-way functions imply PCPRFs for general predicates, nor do we know whether 1-key secure PCPRFs for all polynomial-sized predicates imply public-key primitives such as public-key encryption and secret-key agreement.
In this work, we prove a black-box separation between a 1-key secure PCPRF for any predicate and secret-key agreement, which is the first black-box separation result about PCPRFs. Specifically, we prove that there exists an oracle relative to which 1-key secure PCPRFs exist while secret-key agreement does not. Our proof is based on the simulation-based technique proposed by Impagliazzo and Rudich (STOC '89). The main technical challenge in generalizing the simulation-based technique to PCPRFs is the issue of \textit{unfaithfulness} of Eve's simulation to the real world, because our oracle is more complicated than a random oracle. We introduce a new technique, which we call the ``weighting'' technique, and show how to leverage it to circumvent the issue of unfaithfulness in the proof framework of Impagliazzo and Rudich.
A Theoretical Take on a Practical Consensus Protocol
The Asynchronous Common Subset (ACS) problem is a fundamental problem in distributed computing. Very recently, Das et al. (2024) developed a new ACS protocol with several desirable properties: (i) it provides optimal resilience, tolerating up to $t < n/3$ corrupt parties out of $n$ parties in total, (ii) it does not rely on a trusted setup, (iii) it utilizes only "lightweight" cryptography, which can be instantiated using just a hash function, and (iv) it has expected round complexity $O(1)$ and expected communication complexity $O(\kappa n^3)$, where $\kappa$ is the output length of the hash function. The purpose of this paper is to give a detailed, self-contained exposition and analysis of this protocol from the point of view of modern theoretical cryptography, fleshing out a number of details of the definitions and proofs, providing a complete security analysis based on concrete security assumptions on the hash function (i.e., without relying on random oracles), and developing all of the underlying theory in the universal composability framework.
Drifting Towards Better Error Probabilities in Fully Homomorphic Encryption Schemes
There are two security notions for FHE schemes: the traditional notion of IND-CPA and a more stringent notion of IND-CPA$^D$. The notions are equivalent if the FHE scheme is perfectly correct; however, for schemes with negligible failure probability, the FHE parameters needed to obtain IND-CPA$^D$ security can be much larger than those needed to obtain IND-CPA security. This paper uses the notion of ciphertext drift to understand the practical difference between IND-CPA and IND-CPA$^D$ security in schemes such as FHEW, TFHE, and FINAL. This notion allows us to define a modulus-switching operation (the main culprit for the difference in parameters) such that one does not need to enlarge the IND-CPA cryptographic parameters to meet the IND-CPA$^D$ security level. Further, the extra cost incurred by the new techniques has no noticeable performance impact in practical applications. The paper also formally defines a stronger version of IND-CPA$^D$ security, called sIND-CPA$^D$, which is proved to be strictly separated from the IND-CPA$^D$ notion. A criterion for turning an IND-CPA$^D$-secure public-key encryption scheme into an sIND-CPA$^D$-secure one is also provided.
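A toy view of the modulus-switching step at the heart of the discussion (Python; illustrative moduli rather than those of FHEW, TFHE, or FINAL): rescaling a ciphertext component from $q$ to $q'$ involves rounding, and the rounding error is the drift being tracked.

    # Modulus switching in miniature: round((q'/q) * c) mod q'.
    q, q_new = 2**32, 2**10

    def mod_switch(c):
        return (c * q_new + q // 2) // q % q_new

    c = 123_456_789
    c_switched = mod_switch(c)
    exact = c * q_new / q                 # exact (non-integer) rescaling
    print(c_switched, exact, abs(c_switched - exact))   # drift is at most 1/2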
Breaking Parallel ROS: Implication for Isogeny and Lattice-based Blind Signatures
Many of the three-round blind signatures based on identification protocols are only proven to be $\ell$-concurrently unforgeable for $\ell = \mathsf{polylog}(\lambda)$. It was only recently shown, in a seminal work by Benhamouda et al. (EUROCRYPT'21), that this is not just a limitation of the proof technique. They proposed an elegant polynomial-time attack against the $\ell$-concurrent unforgeability of the classical blind Schnorr protocol for $\ell = \mathsf{poly}(\lambda)$.
However, there are still many blind signatures following a similar recipe to blind Schnorr where the attack by Benhamouda et al. does not apply. This includes, for instance, the isogeny-based blind signature CSI-Otter by Katsumata et al. (CRYPTO'23), and the lattice-based blind signatures Blaze+ by Alkeilani et al. (ACISP'20) and BlindOR by Alkeilani et al. (CANS'20).
In this work, we provide a simple and novel attack on blind signatures based on identification protocols performing parallel repetition to reduce the soundness error. Our attack translates to a polynomial time break for the $\ell$-concurrent unforgeability of CSI-Otter, Blaze+, and BlindOR for $\ell = \mathsf{poly}(\lambda)$.
More formally, we define an intermediate problem called Parallel Random inhomogeneities in an Overdetermined Solvable system of linear equations (pROS) problem and show that an attack against pROS implies an attack to the above blind signatures.
One takeaway of our finding is that while parallel repetition exponentially reduces the soundness error of an identification protocol, it has minimal effect on the resulting blind signature. Our attack is concretely very efficient and, for instance, breaks the $4$-concurrent unforgeability of CSI-Otter in time roughly $2^{34}$ hash computations.
XBOOT: Free-XOR Gates for CKKS with Applications to Transciphering
The CKKS scheme is traditionally recognized for approximate homomorphic encryption of real numbers, but BLEACH (Drucker et al., JoC 2024) extends its capabilities to handle exact computations on binary or small integer numbers.
Despite this advancement, BLEACH's approach of simulating XOR gates via $(a-b)^2$ incurs one multiplication per gate, which is computationally expensive in homomorphic encryption. To this end, we introduce XBOOT, a new framework built upon BLEACH's blueprint that allows almost-free evaluation of XOR gates. The core concept of XBOOT is lazy reduction: XOR operations are simulated with the less costly addition operation, $a+b$, leaving the management of potential overflows to later stages. We carefully handle the modulus chain and scale factors to ensure that the overflows are conveniently rounded away during the CKKS bootstrapping phase without extra cost. We use AES-CKKS transciphering as a benchmark to test the capability of XBOOT and achieve a throughput exceeding one kilobyte per second, a $2.5\times$ improvement over the state of the art (Aharoni et al., HES 2023). Moreover, XBOOT enables the practical execution of tasks with extensive XOR operations that were previously challenging for CKKS. For example, we can do Rasta-CKKS transciphering at over two kilobytes per second, more than $10\times$ faster than the baseline without XBOOT.
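The gate-level identity is easy to check on plaintext bits (a Python toy; in the scheme these values live in CKKS slots and the final mod-2 reduction is absorbed into bootstrapping):

    # XOR of bits two ways: BLEACH's (a - b)^2 costs one squaring per gate,
    # while the lazy variant accumulates plain additions and reduces mod 2 once.
    from functools import reduce

    bits = [1, 0, 1, 1, 0, 1]

    xor_mul = reduce(lambda a, b: (a - b) ** 2, bits)   # one squaring per gate
    xor_lazy = sum(bits) % 2                            # additions, one reduction
    print(xor_mul, xor_lazy)                            # both equal 0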
On symbolic computations and Post Quantum Cryptography with Lie Geometries
Define the global density of a multivariate map over a commutative ring as the total number of its coefficients. For a finite commutative ring $K$ whose multiplicative group $K^*$ contains more than 2 elements, we suggest multivariate public keys in $n$ variables whose public rule has global density $O(n)$ and degree $O(1)$. Other public keys use a public rule of global density $O(n)$ and degree $O(n)$, together with the space of plaintexts $(K^*)^n$ and the space of ciphertexts $K^n$. We consider examples of protocols of Noncommutative Cryptography, implemented on a platform of endomorphisms, which allow the conversion of the multivariate public keys mentioned above into protocol-based cryptosystems of El Gamal type. The cryptosystems and protocols are designed in terms of analogues of the geometries of Chevalley groups over commutative rings and their temporal versions.
Private coins extension with verifiable encryption
This paper introduces a protocol for verifiable encryption of values committed using Pedersen commitments. It enables a recipient to decrypt the hidden amount while proving its consistency with the original commitment, without revealing the value publicly. The construction combines symmetric encryption with zero-knowledge proofs and is made non-interactive via the Fiat-Shamir heuristic. The protocol is particularly useful in blockchain settings where confidential but verifiable value transfers are required.
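For context, here is a minimal plaintext sketch of the Pedersen commitment the protocol builds on (Python; the toy group parameters below are hypothetical and far from secure): $C = g^v h^r$ hides the value $v$, and anyone holding the opening $(v, r)$ can prove consistency with $C$.

    # Toy Pedersen commitment over a tiny prime field (illustration only).
    p = 2**127 - 1    # toy prime; a real deployment uses a standard group
    g, h = 3, 5       # toy generators; unknown discrete-log relation is assumed

    def commit(v, r):
        return pow(g, v, p) * pow(h, r, p) % p

    def open_check(C, v, r):
        return C == commit(v, r)

    C = commit(42, 98765)
    print(open_check(C, 42, 98765))   # True
    print(open_check(C, 41, 98765))   # False: binding to the committed value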
Non-Homomorphic Key Blinding from Symmetric Primitives
Key blinding signature schemes allow deriving so-called blinded keys from public keys, which can be used to verify signatures created with the secret key. At the same time, neither the blinded keys nor their signatures disclose from which public key they were derived, effectively implementing pseudonyms for one's identity.
In search of conservative schemes, we deviate from the homomorphism-based re-randomization approach in favor of a novel proof-of-knowledge-based approach. To authenticate a message, a signer proves that they know an original keypair and a valid way to commit to the corresponding verification key to derive a given blinded key. We provide a framework for such constructions and indicate how MPC-friendly block ciphers and one-way functions may be used for efficient instantiations. While the general framework's security arguments are stated in the random oracle model, we show a natural instantiation approach whose security can be based on collision-resistance and pseudorandomness instead. The result is the first standard-model construction of key blinding.
Using our framework, we identify a shortcoming in the usual definition of unlinkability for key blinding signature schemes, which we rectify by considering an additional notion called targeted unlinkability.
PrivacyGo: Privacy-Preserving Ad Measurement with Multidimensional Intersection
In digital advertising, accurate measurement is essential for optimizing ad performance, requiring collaboration between advertisers and publishers to compute aggregate statistics (such as total conversions) while preserving user privacy. Traditional secure two-party computation methods allow joint computation on single-identifier data without revealing raw inputs, but they fall short when multidimensional matching is needed and leak the intersection size, exposing sensitive information to privacy attacks.
This paper tackles the challenging and practical problem of multi-identifier private user profile matching for privacy-preserving ad measurement, a cornerstone of modern advertising analytics. We introduce a comprehensive cryptographic framework leveraging reversed Oblivious Pseudorandom Functions (OPRF) and novel blind key rotation techniques to support secure matching across multiple identifiers. Our design prevents cross-identifier linkages and includes a differentially private mechanism to obfuscate intersection sizes, mitigating risks such as membership inference attacks.
We present a concrete construction of our protocol that achieves both strong privacy guarantees and high efficiency. It scales to large datasets, offering a practical and scalable solution for privacy-centric applications like secure ad conversion tracking. By combining rigorous cryptographic principles with differential privacy, our work addresses a critical need in the advertising industry, setting a new standard for privacy-preserving ad measurement frameworks.
Khatam: Reducing the Communication Complexity of Code-Based SNARKs
Two techniques have recently emerged in the construction of Succinct Non-Interactive Arguments of Knowledge (SNARKs) that yield extremely fast provers: the use of multilinear (instead of univariate) polynomial commitment schemes (PCS), and the construction of PCS from error-correcting codes. Recently, BaseFold (Crypto 2024) introduced a family of PCS that combine these two techniques, thereby achieving a better trade-off between prover time and verifier costs than the state of the art. Despite its impressive overall efficiency, BaseFold suffered from larger proof sizes than its univariate counterparts, due to unproven claims about linear codes that were not relevant in the univariate setting.
This work closes this gap by proving a new fact about linear codes -- that if $\pi_L, \pi_R$ are two vectors in $\mathbb{F}^{n}$ and if $\pi_L + r \pi_R$ is close to a codeword in $C$, then $\pi_L, \pi_R$ and $(\pi_L + r \pi_R)$ all agree with codewords at positions in the same set $S \subset [n]$, except with negligible probability over $r \leftarrow \mathbb{F}$. Our result holds as long as $|S| > (1 - \Delta_C + \epsilon)^{1/3} + \eta$, for $\epsilon, \eta \in [0,1]$ and with failure probability smaller than $\frac{1}{\epsilon\eta |\mathbb{F}|}$, where $\Delta_C$ is the minimum distance of the code. Furthermore, our results extend to any finite field and any linear code.
Rhombus: Fast Homomorphic Matrix-Vector Multiplication for Secure Two-Party Inference
We present $\textit{Rhombus}$, a new secure matrix-vector multiplication (MVM) protocol in the semi-honest two-party setting, which is able to be seamlessly integrated into existing privacy-preserving machine learning (PPML) frameworks and serve as the basis of secure computation in linear layers.
$\textit{Rhombus}$ adopts RLWE-based homomorphic encryption (HE) with coefficient encoding, which allows messages to be chosen not only from a field $\mathbb{F}_p$ but also from a ring $\mathbb{Z}_{2^\ell}$, where the latter supports faster computation in non-linear layers. To achieve better efficiency, we develop an input-output packing technique that reduces the communication cost incurred by HE with coefficient encoding by about $21\times$, and propose a split-point picking technique that reduces the number of rotations to sublinear in the matrix dimension. Compared to the recent protocol $\textit{HELiKs}$ by Balla and Koushanfar (CCS'23), our implementation demonstrates that $\textit{Rhombus}$ improves the overall performance of an MVM protocol by a factor of $7.4\times \sim 8\times$, and improves the end-to-end performance of secure two-party inference of ResNet50 by a factor of $4.6\times \sim 18\times$.
Rapidash: Atomic Swaps Secure under User-Miner Collusion
Cross-chain trading is fundamental to blockchains and Decentralized Finance (DeFi). A way to achieve such trading in a truly decentralized manner, i.e., without trusted third parties, is by using atomic swaps. However, recent works revealed that the Hashed Time-Lock Contract, a key building block of existing atomic swaps, is entirely insecure in the presence of user-miner collusion. Specifically, a user can bribe the miners of the blockchain to help it cheat.
In this work, we give the first rigorous formal treatment of fair trading on blockchains where users and miners may enter arbitrary binding contracts on the side. We propose Rapidash, a new atomic swap protocol, and prove its incentive compatibility in the presence of user-miner collusion. Specifically, we show that Rapidash satisfies a coalition-resistant Nash equilibrium absent external incentives. We give instantiations of Rapidash that are compatible with Bitcoin and Ethereum and incur only minimal overheads in terms of costs for the users.
A Polynomial Public-Key Cryptosystem Based on Jacobian-Preserving Composition
We propose a public-key cryptosystem based on Jacobian-preserving polynomial compositions: compositions of algebraically invertible polynomial maps that are hard to invert without the factors. The construction uses polynomial maps over $\mathbb{Z}_p$, where $p$ is a prime, with Jacobian determinant equal to 1 to ensure invertibility. The public-key function $H : \mathbb{Z}_p^n \to \mathbb{Z}_p^n$ is defined as the composition of invertible polynomial maps $f_1, f_2, \dots, f_k$, each with Jacobian determinant 1, while the private key consists of the individual components of the composition. Inverting the composition is easy given the factors but conjectured to be computationally infeasible without them. The system incorporates both triangular and affine polynomial maps. We discuss the construction, provide formal correctness proofs, analyze hardness assumptions, and present a Python-based prototype with benchmark results.
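A hedged toy of the named ingredients (Python; the constants and polynomials are illustrative, not the paper's parameters): upper- and lower-triangular polynomial maps each have Jacobian determinant 1 and are trivially invertible in isolation, and knowing the factors lets one invert their composition in reverse order.

    # Two triangular maps over Z_P, each with Jacobian determinant 1.
    P = 7919  # toy prime

    up   = lambda x, y: ((x + 3 * y * y) % P, y)    # (x,y) -> (x + 3y^2, y)
    up_i = lambda x, y: ((x - 3 * y * y) % P, y)
    lo   = lambda x, y: (x, (y + 5 * x**3) % P)     # (x,y) -> (x, y + 5x^3)
    lo_i = lambda x, y: (x, (y - 5 * x**3) % P)

    H = lambda x, y: lo(*up(x, y))                  # public map: composition
    H_inv = lambda x, y: up_i(*lo_i(x, y))          # private: invert in reverse

    x, y = 123, 456
    print(H_inv(*H(x, y)) == (123, 456))            # True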
Towards AI-driven Optimization of Robust Probing Model-compliant Masked Hardware Gadgets Using Evolutionary Algorithms
Side-channel analysis (SCA) is a persistent threat to security-critical systems, enabling attackers to exploit information leakage. To mitigate its harmful impact, masking serves as a provably secure countermeasure that computes on random shares of secret values. As masking complexity, required effort, and cost increase dramatically with design complexity, recent techniques rely on designing and implementing smaller building blocks, so-called “gadgets.” Existing work on optimizing gadgets has primarily focused on latency, area, and power as objectives. To the best of our knowledge, the most up-to-date ASIC-specific masking gadget optimization frameworks require significant manual effort. Inspired by previous work introducing open-source academic tools that leverage aspects of artificial intelligence (AI) in electronic design automation (EDA), this paper attempts to optimize and enhance existing gadgets and overall designs. We concentrate on evolutionary algorithms (EA), optimization techniques inspired by biological evolution and natural selection, to find optimal or near-optimal solutions. Our goal is to improve gadgets in terms of power and area metrics. The primary objective is to demonstrate the effectiveness of our methods by integrating compatible gates from a technology library to generate an optimized and functional design without compromising security. Our results show a significant reduction in power consumption and promising area improvements, with values reduced by 15% in some cases compared to the naïve synthesis of masked designs. We evaluate our results using industry-standard synthesis and pre-silicon side-channel verification tools.
Anamorphic Encryption, Revisited
An anamorphic encryption scheme allows two parties who share a so-called double key to embed covert messages in ciphertexts of an established PKE scheme. This protects against a dictator that can force the receiver to reveal the secret keys for the PKE scheme, but who is oblivious about the existence of the double key. We identify two limitations of the original model by Persiano, Phan, and Yung (EUROCRYPT 2022). First, in their definition a double key can only be generated once, together with a key-pair. This has the drawback that a receiver who wants to use the anamorphic mode after a dictator comes to power, needs to deploy a new key-pair, a potentially suspicious act. Second, a receiver cannot distinguish whether or not a ciphertext contains a covert message.
In this work we propose a new model that overcomes these limitations. First, we allow multiple double keys to be associated with a key-pair after its deployment. This also enables deniability in case the double key only depends on the public key. Second, we propose a natural robustness notion, which guarantees that anamorphically decrypting a regularly encrypted message results in a special symbol indicating that no covert message is contained, which also eliminates certain attacks.
Finally, to instantiate our new, stronger definition of anamorphic encryption, we provide generic and concrete constructions. Concretely, we show that ElGamal and Cramer-Shoup satisfy a new condition, selective randomness recoverability, which enables robust anamorphic extensions, and we also provide a robust anamorphic extension for RSA-OAEP.
Outsourced Cloud Data Privacy-Preserving Framework: An Efficient Broadcast Encrypted Search Realization
The development of cloud networks facilitates data outsourcing, sharing, and storage, but it has also raised several security concerns. Public key authenticated encryption with keyword search (PAEKS) enables encrypted search over cloud data while resisting insider keyword guessing attacks (IKGAs). However, existing PAEKS schemes are limited to a single receiver, restricting their application prospects in cloud networks. In addition, quantum computing attacks and key leakage issues further threaten data security, which has attracted extensive attention from researchers. Designing an encrypted search scheme that resists the above-mentioned attacks therefore remains an open challenge. In this paper, we first propose BroSearch, an outsourced cloud data privacy-preserving framework offering efficient broadcast encrypted search for cloud networks. It utilizes lattice sampling algorithms to authenticate the keyword and offers searchability over broadcast ciphertexts while enjoying resistance to IKGAs in a quantum setting. To get around key leakage issues, we then incorporate the minimal cover set technique and a lattice basis extension algorithm to construct FS-BroSearch as an enhanced version. Furthermore, we give a rigorous security analysis and a comprehensive performance evaluation of BroSearch and FS-BroSearch. Specifically, BroSearch consumes only 61.11%, 81.82%, and 83.33% of the execution time of prior art in terms of ciphertext calculation, trapdoor generation, and search procedures, respectively, which is practical and efficient in cloud networks.
Robust Non-Interactive Zero-Knowledge Combiners
A $t$-out-of-$n$ robust non-interactive zero-knowledge (NIZK) combiner is a construction that, given access to $n$ candidate instantiations of a NIZK for some language, itself implements a NIZK for the same language. Moreover, the combiner is secure, assuming at least $t$ of the given candidates are secure.
In this work, we provide the first definition of combiners for NIZK, and prove that no robust NIZK combiner exists assuming $t \le \lfloor n/2 \rfloor$ (unless the polynomial hierarchy collapses).
On the positive side, we provide different constructions of robust NIZK combiners for $t > \lfloor n/2 \rfloor$. In particular, we show how to obtain:
1) A black-box combiner working for a special class of {\em homomorphic} languages where $n,t$ are polynomial and $t > \lfloor n/2 \rfloor$.
2) A non-black-box combiner working for any language, where $n,t$ are constant and $t > \lfloor n/2 \rfloor$.
3) A non-black-box combiner working for any language, where $n,t$ are polynomial and $t > \lfloor 2n/3 \rfloor$.
Easy-ABE: An Easy Ciphertext-Policy Attribute-Based Encryption
Attribute-Based Encryption is widely recognized as a leap forward in the field of public key encryption. It allows enforcing access control on encrypted data. Decryption time in ABE schemes can be long, depending on the number of attributes and pairing operations. This drawback hinders their adoption on a broader scale.
In this paper, we propose a non-monotone CP-ABE scheme that has no restrictions on the size of attribute sets and policies, allows fast decryption, and is adaptively secure under the CBDH-3 assumption. To achieve this, we approached the problem from a new angle, namely using a set membership relation for the access structure. We have implemented our scheme using the Charm framework, and the source code is available on GitHub. Easy-ABE performs better than FAME and FABEO.
Carousel: Fully Homomorphic Encryption with Bootstrapping over Automorphism Group
Fully Homomorphic Encryption (FHE) enables the secure computation of functions on ciphertexts without requiring decryption. Specifically, AP-like HE schemes exploit an intrinsic bootstrapping method called blind rotation. In existing blind rotation methods, a look-up table is homomorphically evaluated on the input ciphertext through iterative multiplication of monomials. However, the algebraic structure of the multiplicative group of monomials imposes a limitation on the input plaintext space, as only a fraction of it can be bootstrapped.
In this work, we introduce a new FHE scheme, Carousel, that solves this problem. The key idea of our approach is to utilize the automorphism group instead of monomials. More specifically, the look-up table is encoded into a single polynomial that can be rotated via a series of homomorphic multiplications and automorphisms. We instantiate Carousel with the subring encoding proposed by Arita and Handa (ICISC '17) and provide a proof-of-concept implementation. Our benchmark results show that Carousel can bootstrap a 4-bit integer in under 30 ms.
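Schematically, standard blind rotation updates an accumulator by multiplying with monomials, whose group structure caps the usable plaintext space; Carousel's idea, per the abstract, is to rotate the encoded look-up table via ring automorphisms instead. The display below is our own schematic notation, assuming a power-of-two cyclotomic ring, and is not necessarily the paper's exact formulation.

```latex
% Monomial-based blind rotation in R = \mathbb{Z}[X]/(X^N+1):
\[
\mathrm{ACC} \;\leftarrow\; \mathrm{ACC}\cdot X^{a_i s_i},
\qquad \{\pm X^{j}\} \;\cong\; \mathbb{Z}/2N\mathbb{Z}.
\]
% Automorphism-based rotation of the encoded look-up table f:
\[
\mathrm{ACC} \;\leftarrow\; \sigma_t(\mathrm{ACC}),
\qquad \sigma_t \colon f(X) \longmapsto f(X^{t}),
\quad t \in (\mathbb{Z}/2N\mathbb{Z})^{\times}.
\]
```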
Performance and Privacy: A Low-Latency Secure Anonymous Authentication Protocol with OPRF
Performing privacy-preserving queries, particularly anonymous authentication, against large-scale datasets presents critical tradeoffs between security, latency, and scalability. Existing cryptographic solutions often impose linear computation or communication overheads. This paper introduces a novel, efficient protocol for secure anonymous authentication, uniquely combining matrix partitioning via hash prefixes with Oblivious Pseudorandom Functions (OPRFs) in a three-server semi-honest model. Crucially, compared to our previous work published at TrustCom 2024, this enhanced protocol eliminates the dependency on a designated fully trusted server, achieving security when any single server is corrupted. Furthermore, our protocol demonstrates significant performance improvements over current state-of-the-art methods, achieving sub-linear online communication complexity. Evaluations show that for datasets of size $m \approx 10^6$, our protocol reduces online communication by at least 30% compared to other sub-linear schemes, while maintaining competitive online computation times. Security is proven via simulation, and comprehensive experiments confirm practicality for datasets up to $m = 10^8$.
Depth-Optimized Quantum Implementation of CHAM
Security weaknesses in the symmetric-key components of a cipher can compromise its overall security assurances. With the rapid progress in quantum computing in recent years, there is a growing focus on assessing the resilience of symmetric-key cryptography against possible quantum attacks (e.g., Grover's algorithm).
This paper is dedicated to examining the quantum attack resistance of CHAM, a family of lightweight block ciphers developed by a Korean research group. We provide an optimized quantum circuit implementation of CHAM and evaluate its complexity metrics, such as the number of qubits, gate count, and circuit depth, within the context of Grover's search algorithm.
For Grover's key search, minimizing the quantum circuit depth is the key optimization goal, particularly when parallel search capabilities are taken into account. Our approach enhances parallelism to obtain a low-depth quantum circuit for the CHAM block cipher, significantly reducing the full circuit depth compared to previous works. For example, in the case of CHAM-128/128, our implementation achieves a full depth of 14,772, compared to a depth of 37,768 in the best known prior work. This highlights the substantial depth reduction enabled by our parallelism-oriented design, which facilitates more practical quantum attacks.
Ligerito: A Small and Concretely Fast Polynomial Commitment Scheme
In this note we present Ligerito, a small and practically fast polynomial commitment and inner product scheme. For the case of univariate and multilinear polynomial evaluations, the scheme has a proof size of $\sim \log(N)^2/\log\log(N)$ up to constants, for a large enough field, where $N$ is the size of the input. Ligerito is also fast on consumer hardware: when run on an M1 MacBook Pro for a polynomial with $2^{24}$ coefficients over a 32-bit binary field, our Julia prover implementation has a proving time of 1.3 seconds and a proof size of 255 KiB. Ligerito is also relatively flexible: any linear code for which the rows of the generator matrix can be efficiently evaluated can be used. Such codes include Reed–Solomon and Reed–Muller codes, among others. This, in turn, allows for a high degree of flexibility in the choice of field and can likely give further efficiency gains in specific applications.
Unconditional Individual Verifiability with Receipt Freeness via Post-Cast Isolation
We introduce a trapdoorless tracker construction for electronic voting that fundamentally reimagines verifiability through information flow control. Unlike existing E2E verifiable systems where receipt-freeness compromises individual verifiability, our approach achieves both simultaneously by requiring only temporary isolation of the voting calculator between ballot casting and verification—when voters enter unique challenges to compute trackers for locating their votes on the public tally board. Our construction leverages perfectly hiding Pedersen commitments and a unique tracker challenge mechanism to simultaneously achieve unconditional individual verifiability, practical everlasting privacy, and receipt-freeness while relying only on standard cryptographic assumptions. When verification failures occur, our system provides transparent accountability by precisely identifying whether the voting calculator or voting device is responsible. The system maintains security even with partial compliance with isolation procedures and offers robust protection against various adversaries while requiring minimal trust assumptions.
Analysis of REDOG: The Pad Thai Attack
This paper introduces the Pad Thai message recovery attack on REDOG, a rank-metric code-based encryption scheme selected for the second round of evaluation in the Korean Post-Quantum Cryptography (KPQC) competition. The attack exploits the low rank weight of a portion of the ciphertext to construct multiple systems of linear equations, one of which is noise-free and can be solved to recover the secret message. The Pad Thai attack significantly undermines the security of REDOG, revealing that its provided security is much lower than originally claimed.
Faster Hash-based Multi-valued Validated Asynchronous Byzantine Agreement
Multi-valued Validated Byzantine Agreement (MVBA) is vital for asynchronous distributed protocols like asynchronous BFT consensus and distributed key generation, making performance improvements a long-standing goal. Existing communication-optimal MVBA protocols rely on computationally intensive public-key cryptographic tools, such as non-interactive threshold signatures, which are also vulnerable to quantum attacks. While hash-based MVBA protocols have been proposed to address these challenges, their higher communication overhead has raised concerns about practical performance.
We present a novel MVBA protocol with adaptive security, relying exclusively on hash functions to achieve post-quantum security. Our protocol delivers near-optimal communication, constant round complexity, and significantly reduced latency compared to existing schemes, though it has sub-optimal resilience, tolerating up to 20% Byzantine corruptions instead of the typical 33%. For example, with $n=201$ and input size 1.75 MB, it reduces latency by 81% over previous hash-based approaches.
From Worst-Case Hardness of $\mathsf{NP}$ to Quantum Cryptography via Quantum Indistinguishability Obfuscation
Indistinguishability obfuscation (iO) has emerged as a powerful cryptographic primitive with many implications. While classical iO, combined with the infinitely-often worst-case hardness of $\mathsf{NP}$, is known to imply one-way functions (OWFs) and a range of advanced cryptographic primitives, the cryptographic implications of quantum iO remain poorly understood. In this work, we initiate a study of the power of quantum iO. We define several natural variants of quantum iO, distinguished by whether the obfuscation algorithm, evaluation algorithm, and description of the obfuscated program are classical or quantum. For each variant, we identify quantum cryptographic primitives that can be constructed under the assumption of quantum iO and the infinitely-often quantum worst-case hardness of $\mathsf{NP}$ (i.e., $\mathsf{NP}\not\subseteq \mathsf{i.o.BQP}$). In particular, we construct pseudorandom unitaries, QCCC quantum public-key encryption and (QCCC) quantum symmetric-key encryption, and several primitives implied by them, such as one-way state generators, (efficiently-verifiable) one-way puzzles, and EFI pairs. While our main focus is on quantum iO, even in the classical setting, our techniques yield a new and arguably simpler construction of OWFs from classical (imperfect) iO and the infinitely-often worst-case hardness of $\mathsf{NP}$.
Hash-Based Multi-Signatures for Post-Quantum Ethereum
With the threat posed by quantum computers on the horizon, systems like Ethereum must transition to cryptographic primitives resistant to quantum attacks. One of the most critical of these primitives is the non-interactive multi-signature scheme used in Ethereum's proof-of-stake consensus, currently implemented with BLS signatures. This primitive enables validators to independently sign blocks, with their signatures then publicly aggregated into a compact aggregate signature.
In this work, we introduce a family of hash-based signature schemes as post-quantum alternatives to BLS. We consider the folklore method of aggregating signatures via (hash-based) succinct arguments, and our work is focused on instantiating the underlying signature scheme. The proposed schemes are variants of the XMSS signature scheme, analyzed within a novel and unified framework. While being generic, this framework is designed to minimize security loss, facilitating efficient parameter selection. A key feature of our work is the avoidance of random oracles in the security proof. Instead, we define explicit standard model requirements for the underlying hash functions. This eliminates the paradox of simultaneously treating hash functions as random oracles and as explicit circuits for aggregation. Furthermore, this provides cryptanalysts with clearly defined targets for evaluating the security of hash functions. Finally, we provide recommendations for practical instantiations of hash functions and concrete parameter settings, supported by known and novel heuristic bounds on the standard model properties.
zkGPT: An Efficient Non-interactive Zero-knowledge Proof Framework for LLM Inference
Large Language Models (LLMs) are widely employed for their ability to generate human-like text. However, service providers may deploy smaller models to reduce costs, potentially deceiving users. Zero-Knowledge Proofs (ZKPs) offer a solution by allowing providers to prove LLM inference without compromising the privacy of model parameters. Existing solutions either do not support LLM architectures or suffer from significant inefficiency and tremendous overhead. To address this issue, this paper introduces several new techniques. We propose new methods to efficiently prove linear and non-linear layers in LLMs, reducing computation overhead by orders of magnitude. To further enhance efficiency, we propose constraint fusion to reduce the overhead of proving non-linear layers and circuit squeeze to improve parallelism. We implement our efficient protocol, specifically tailored for popular LLM architectures like GPT-2, and deploy optimizations to enhance performance. Experiments show that our scheme can prove GPT-2 inference in less than 25 seconds. Compared with state-of-the-art systems such as Hao et al. (USENIX Security'24) and ZKML (Eurosys'24), our work achieves nearly $279\times$ and $185\times$ speedup, respectively.
PA1 Security on Release of Unverified Plaintext in Encrypt-then-MAC AE Schemes
At ASIACRYPT 2014, Andreeva et al. put forward a definition for security of authenticated encryption under release of unverified plaintext. They introduced two notions of plaintext awareness (PA1 and its stronger sibling PA2), suggested to be used in conjunction with confidentiality in case of release of unverified plaintext, as well as the notion of integrity under release of unverified plaintext (INT-RUP). Various efforts have been made to develop a unified model (e.g., Ashur et al., CRYPTO 2017, Chang et al., ToSC 2019(4)). With respect to the analysis of existing and new modes under release of unverified plaintext, most research however has focused on INT-RUP security only. Plaintext awareness is less studied and understood.
In this work, we take a detailed look at the original definitions of PA1 and PA2 security. We observe that the definitions leave too much room for interpretation, and claimed results such as PA1 security of Encrypt-then-MAC are unjustified. The core of the issue lies in the fact that PA1 security is necessarily tied to the implementation of the scheme. To resolve this, we present refined definitions of PA1 and PA2 security. We argue that even for these refined definitions, there is no implementation of Encrypt-and-MAC that is PA1 (nor PA2) secure. For MAC-then-Encrypt, results depend on the actual scheme, as we demonstrate using a negative result and a positive result (from literature, on Romulus-M). Furthermore, we formally prove for Encrypt-then-MAC that (i) there exist implementations that are PA1 insecure and (ii) there exist implementations that are PA1 secure. In other words, Encrypt-then-MAC is insecure under the old definition but secure under the new definition, provided a proper implementation is used. We apply this observation to Isap v2, finalist in the NIST Lightweight Cryptography competition, where we additionally deal with the complication that the same key is used for encryption and authentication.
Efficient Constant-Size Linkable Ring Signatures for Ad-Hoc Rings via Pairing-Based Set Membership Arguments
Linkable Ring Signatures (LRS) allow users to anonymously sign messages on behalf of ad-hoc rings, while ensuring that multiple signatures from the same user can be linked. This feature makes LRS widely used in privacy-preserving applications like e-voting and e-cash. To scale to systems with large user groups, efficient schemes with short signatures and fast verification are essential. Recent works, such as DualDory (ESORICS’22) and LLRing (ESORICS’24), improve verification efficiency through offline precomputations but rely on static rings, limiting their applicability in ad-hoc ring scenarios. Similarly, constant-size ring signature schemes based on accumulators face the same limitation.
In this paper, we propose a framework for constructing constant-size LRS suitable for large ad-hoc rings. We introduce a novel pairing-based Set Membership Argument (SMA) with a proof size of only three group elements. By leveraging KZG polynomial commitments, we optimize the verification to require only constant group exponentiations and pairings, as well as linear field multiplications. Utilizing the SMA, our framework achieves constant-size signatures with verification dominated by linear field operations, outperforming existing schemes that require linear group exponentiations in ad-hoc ring settings. Moreover, it exhibits strong scalability: (i) compatibility with any PKI-based cryptosystem and (ii) scoped linkability, enabling flexible definitions of linking scope.
We instantiate our framework using a discrete logarithm public key structure. On the BN254 curve, our signature size is fixed at 687 bytes, which to the best of our knowledge is the shortest LRS for ring sizes larger than 32. For a ring size of 1024, our verification cost is only 10.4 ms, achieving 48.6×, 2.6×–467×, 7.9×–13.2×, and 2.2×–102.5× improvements over Omniring (CCS'19), and over DualDory, LLRing-DL, and LLRing-P (in each case with and without precomputation), respectively. Moreover, this performance gap continues to grow as the ring size increases.
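For intuition, one standard way to get a constant-size pairing-based set membership argument from KZG commitments, shown here as background rather than as the paper's exact SMA, encodes the set as the roots of a polynomial and opens that polynomial at the claimed member:

```latex
% Encode the set S as its vanishing polynomial; membership is a root check:
\[
P(X) \;=\; \prod_{s \in S}(X - s), \qquad
m \in S \;\Longleftrightarrow\; P(X) = (X - m)\,Q(X).
\]
% Constant-size commitment C and witness W over an SRS (g, g^{\tau}, \dots),
% verified with a single pairing equation:
\[
C = g^{P(\tau)}, \quad W = g^{Q(\tau)}, \qquad
\text{check: } e(C,\, g) \;=\; e\!\left(W,\, g^{\tau - m}\right).
\]
```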
Pseudorandom Correlation Generators for Multiparty Beaver Triples over $\mathbb{F}_2$
We construct an efficient pseudorandom correlation generator (PCG) (Boyle et al., Crypto'19) for two-party programmable oblivious linear evaluation (OLE) functionality over $\mathbb{F}_2$. Our construction (i) has an efficient seed expansion phase, and (ii) comes with a concretely efficient protocol for distributing the seeds that makes black-box use of cryptography and runs in a constant number of rounds.
PCGs for programmable OLE are known to imply PCGs for generating $n$-party Beaver triples over $\mathbb{F}_2$. The resultant PCG has a seed setup phase whose communication cost is $n(n-1)$ times that of the programmable OLE protocol. The per-party seed size and the seed expansion time have a multiplicative overhead of $2(n-1)$. Prior constructions for efficiently generating multiparty Beaver triples only worked for finite fields $\mathbb{F}_q$ with $q \geq 3$, or required one bit of per-party communication for each triple generated (and hence do not satisfy the PCG definition). Thus, ours is the first concretely efficient PCG for generating Beaver triples over $\mathbb{F}_2$ in the multiparty setting.
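As a quick worked example of these stated overhead factors (straightforward arithmetic, nothing beyond the abstract's formulas):

```latex
\[
n = 5:\qquad
\underbrace{n(n-1) \;=\; 5\cdot 4 \;=\; 20\times}_{\text{seed-setup communication}}
\qquad
\underbrace{2(n-1) \;=\; 8\times}_{\text{per-party seed size and expansion time}}
\]
```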
Our distributed seed generation protocol generates $N = 2^{30}$ two-party programmable OLEs in 3.5 minutes with 255 MB of communication over a LAN network. The PCG seed size is around 55 MB and the expansion phase requires 10 PRG calls and around 229 thousand XOR and AND operations per triple, producing roughly 31,000 triples per second.
Our PCG for generating multiparty Beaver triples has a lower concrete communication cost than the state of the art for a small number of parties. Compared to the FOLEAGE protocol (Bombar et al., Asiacrypt 2024), which requires one bit of per-party communication per generated triple, our communication cost is lower by $2.4\times$ when generating $N = 2^{36}$ triples between three parties, and by $1.2\times$ for five parties.
At a conceptual level, our protocol deviates from prior approaches, which relied on variants of the dual learning parity with noise (LPN) assumption. Instead, our construction combines the primal and dual versions of LPN to achieve the aforementioned efficiency.
UOV-Based Verifiable Timed Signature Scheme
Verifiable Timed Signatures (VTS) are cryptographic primitives that enable the creation of a signature that can only be retrieved after a specific time delay, while also providing verifiable evidence of its existence. This framework is particularly useful in blockchain applications. Current VTS schemes rely on signature algorithms such as BLS, Schnorr, and ECDSA, which are vulnerable to quantum attacks because the underlying discrete logarithm problem succumbs to Shor's algorithm. We introduce VT-UOV, a novel VTS scheme based on the Salt-Unbalanced Oil and Vinegar (Salt-UOV) digital signature algorithm. As a multivariate polynomial-based cryptographic primitive, Salt-UOV provides strong security against both classical and quantum adversaries. Adapting Salt-UOV to the VTS framework requires addressing challenges such as complex parameters in place of a single integer, the computational complexity of solving multivariate equations, and the integration of Time-Lock Puzzles (TLPs) for enforcing delayed signature generation. Our experimental results show that VT-UOV exhibits a unique performance profile among existing VTS constructions. This paper offers a detailed exploration of the VT-UOV scheme and its overall security and performance properties.
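Since the construction leans on Time-Lock Puzzles, here is a minimal sketch of the classic repeated-squaring TLP of Rivest, Shamir, and Wagner, the standard way to enforce a sequential delay. The parameters are toy values for demonstration, not VT-UOV's concrete instantiation; real deployments need large random primes.

```python
# Toy RSW time-lock puzzle: the solver must perform T *sequential*
# squarings mod n, while the puzzle creator shortcuts via phi(n).
p, q = 65537, 2147483647          # small known primes, demo only
n, phi = p * q, (p - 1) * (q - 1)
T = 100_000                        # number of squarings = the delay
x = 0xC0FFEE % n                   # puzzle base, coprime to n here

# Creator: x^(2^T) mod n in O(log T) multiplications using phi(n).
fast = pow(x, pow(2, T, phi), n)

# Solver: T sequential squarings; no known shortcut without phi(n).
y = x
for _ in range(T):
    y = y * y % n

assert y == fast
```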
FICS and FACS: Fast IOPPs and Accumulation via Code-Switching
Recent work on IOP-based succinct arguments has focused on developing IOPs that improve prover efficiency by relying on linear-time encodable codes. We present two new schemes for improving the efficiency of such succinct arguments:
$\quad \bullet$ $\mathsf{FICS}$, an IOP of proximity for multilinear polynomial evaluation that, like the prior work Blaze [EUROCRYPT 2025], achieves linear prover time, but additionally reduces the verifier oracle query complexity to $O(\lambda \log \log n + \log n)$ for codewords of length $n$.
$\quad \bullet$ $\mathsf{FACS}$, an accumulation scheme for NP that achieves linear prover time and $O(\lambda)$ oracle queries per step of the accumulation.
Both schemes support a large class of linear-time encodable codes, including systematic LDPC codes and tensor codes of linear-time encodable codes.
We obtain our results by extending and formalizing the framework of Interactive Oracle Reductions (IORs) introduced by Ben-Sasson et al. [TCC 2019]. In particular, we develop new IORs for "code-switching" tensor codes (Ron-Zewi and Rothblum [JACM 2024]), and we also develop a new notion of knowledge soundness for IORs that allows us to easily compose IORs and to prove the security of our schemes in the non-interactive setting, even if the underlying codes are not known to be decodable in polynomial time.
Cryptanalysis of HiAE
We describe key recovery attacks on the authenticated stream cipher HiAE, which was recently proposed by Huawei for future high-throughput communication networks such as 6G. HiAE uses a 2048-bit state, a 256-bit key, and produces 128-bit tags, targeting 256-bit security against key and state recovery. As a nonce-based AEAD scheme, it relies on the uniqueness of the nonce per key for these security claims. Our analysis indicates that a complete recovery of the 256-bit key of HiAE is possible with a complexity of $2^{128}$ data and at most $2^{129.585}$ time in the nonce-respecting attack setting, with various small tradeoffs between the data and time complexity. While infeasible in practice, this attack violates the 256-bit security claim for HiAE. We describe further complete key-recovery attacks in the nonce-misuse and release-of-unverified-plaintext (RUP) settings, which require only a small constant number of repeated nonces or unverified decryption queries, respectively.
Downlink (T)FHE ciphertexts compression
This paper focuses on reducing the bandwidth requirement for FHE ciphertext transmission. While this issue has been extensively studied from the uplink viewpoint (transmission of encrypted inputs towards an FHE calculation), where several approaches exist to essentially cancel FHE ciphertext expansion, the downlink case (transmission of encrypted results towards an end-user) has received much less attention. In this paper, we address this latter issue with a particular focus on the TFHE scheme, for which we revisit a number of folklore methods, including several approaches for switching to more compact linearly homomorphic schemes, reducing the precision of T(R)LWE coefficients (while maintaining acceptable decryption-error probabilities), and others. We also investigate how to use these methods in combination, depending on the number of encrypted results to transmit. We further perform extensive experiments demonstrating that the downlink TFHE ciphertext expansion factor can be practically reduced to values below 10, depending on the setup, with little additional computational burden.
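To illustrate the coefficient-precision-reduction method, here is a toy LWE-style experiment, with made-up parameters, showing that a ciphertext still decrypts correctly after each coefficient is rounded to its top bits, shrinking the transmitted size at the cost of extra rounding noise. This is a schematic sketch of the general idea, not the paper's procedure or parameter selection.

```python
import random

# Toy LWE parameters: modulus q = 2^32, plaintext modulus t = 16, dim 64.
Q_BITS, n, t = 32, 64, 16
q = 1 << Q_BITS
delta = q // t                                 # plaintext scaling factor

s = [random.randrange(2) for _ in range(n)]    # binary secret key

def encrypt(m):
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(-8, 9)                # small noise
    b = (sum(ai * si for ai, si in zip(a, s)) + delta * m + e) % q
    return a, b

def decrypt(a, b):
    phase = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return ((phase + delta // 2) // delta) % t # round to nearest multiple

def truncate(c, keep_bits):
    # Keep only the top `keep_bits` of each coefficient (with rounding):
    # the ciphertext shrinks by a factor Q_BITS / keep_bits, and the
    # rounding error must stay below delta/2 for correct decryption.
    shift = Q_BITS - keep_bits
    rnd = lambda x: (((x + (1 << (shift - 1))) >> shift) << shift) % q
    a, b = c
    return [rnd(ai) for ai in a], rnd(b)

m = 9
assert decrypt(*truncate(encrypt(m), keep_bits=12)) == m
```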
A Tale of Two Worlds, a Formal Story of WireGuard Hybridization
PQ-WireGuard is a post-quantum variant of the WireGuard Virtual Private Network (VPN), in which the Diffie-Hellman-based key exchange is replaced by a key exchange based on post-quantum Key Encapsulation Mechanisms. In this paper, we first conduct a thorough formal analysis of PQ-WireGuard's original design, in which we point out and fix a number of weaknesses, leading to an improved construction, PQ-WireGuard*. Secondly, we propose and formally analyze a new protocol, based on both WireGuard and PQ-WireGuard*, named Hybrid-WireGuard, compliant with current best practices for the post-quantum transition regarding hybridization techniques. For our analysis, we use the SAPIC+ framework, which enables the generation of three state-of-the-art protocol models for the verification tools ProVerif, DeepSec and Tamarin from a single specification, leveraging the strengths of each tool. We formally prove that Hybrid-WireGuard is secure. Finally, we propose a generic, efficient and usable Rust implementation of our new protocol.
The Pipes Model for Latency and Throughput Analysis
Protocols for State-Machine-Replication (sometimes called 'blockchain' protocols) generally make use of rotating leaders to drive consensus. In typical protocols (henceforth called 'single-sender' protocols), the leader is a single processor responsible for making and disseminating proposals to others. Since the leader acts as a bottleneck, apparently limiting throughput, a recent line of research has investigated the use of 'multi-sender' protocols in which many processors distribute proposals in parallel. Examples include DAG-based protocols such as DAG-Rider, Bullshark, Sailfish, Cordial Miners, Mysticeti, and variants such as Autobahn. However, existing models do not allow for a formal analysis to determine whether these protocols can actually handle higher throughputs than single-sender protocols such as PBFT, Tendermint, and HotStuff.
In this paper, we describe a very simple model that allows for such an analysis. For any given protocol, the model allows one to calculate latency as a function of network bandwidth, network delays, the number of processors $n$, and the incoming transaction rate. Each protocol has a latency bottleneck: an incoming transaction rate at which latency becomes unbounded over the protocol execution, i.e., a maximum throughput that the protocol can handle without unbounded latency.
With the aim of building to an analysis for state-of-the-art State-Machine-Replication (SMR) protocols, we begin by considering protocols for simpler primitives, such as Best-effort Broadcast and Reliable Broadcast. For Best-effort Broadcast, we establish a tight lower bound on latency for single-sender and multi-sender protocols when blocks are distributed without the use of techniques such as erasure coding. Perhaps unsurprisingly, a key difference between the single-sender and multi-sender approaches in this case is a factor $n$ in the point at which the latency bottleneck appears. However, for other primitives such as Reliable Broadcast, our results may be more surprising: the factor $n$ difference now disappears, and maximum throughput for the two approaches differs by a constant factor, while multi-sender approaches will generally have latency that grows more quickly with $n$. For state-of-the-art SMR protocols, the picture that emerges is one with seemingly inherent trade-offs. If one compares single-sender protocols that use pipelining and erasure coding, such as DispersedSimplex, with DAG-based protocols such as Sailfish or Bullshark, the former are seen to have lower latency for a wide range of throughputs, while the benefit of the latter protocols is that they have a latency bottleneck which is higher by a constant factor.
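As a back-of-the-envelope illustration of the factor-$n$ effect for Best-effort Broadcast without erasure coding (our simplification, not the paper's exact model): a single leader with upload bandwidth $B$ must push each block to all $n-1$ peers, while in the multi-sender setting all $n$ processors disseminate their own blocks in parallel, so the aggregate bottleneck differs by roughly a factor of $n$:

```latex
\[
\text{single-sender bottleneck} \;\approx\; \frac{B}{n-1},
\qquad
\text{multi-sender bottleneck} \;\approx\; n\cdot\frac{B}{n-1} \;\approx\; B.
\]
```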
Engel p-adic Supersingular Isogeny-based Cryptography over Laurent series
This paper builds the foundation for a cryptosystem based on p-adic representations of supersingular elliptic curve isogenies generated through Engel expansions of Laurent series. This mathematical framework manifests as a lightweight encryption scheme implemented on ESP32 microcontrollers for IoT applications. Efficient isogeny paths are constructed for quantum-resistant primitives secured against Shor's algorithm by decomposing elements into Engel sequences. Performance analysis confirms linear computational scaling with message size and speed gain at a higher clock rate, along with power trace signatures corroborating theoretical computational models. Consequently, we confirm the practical feasibility of our proposed p-adic isogeny cryptography on resource-constrained embedded systems while offering rigorous post-quantum security assurances.
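For background on the Engel sequences the abstract decomposes elements into: the classical Engel expansion of a rational $x \in (0,1]$ writes $x = 1/a_1 + 1/(a_1 a_2) + 1/(a_1 a_2 a_3) + \cdots$ via the recursion coded below. This toy rational-number version only hints at the Laurent-series analogue the paper uses, which follows the same recursion over a different ring.

```python
from fractions import Fraction
from math import ceil

def engel_expansion(x: Fraction, max_terms: int = 10) -> list[int]:
    """Engel expansion: x = 1/a1 + 1/(a1*a2) + ...; for rationals the
    recursion u -> a*u - 1 with a = ceil(1/u) terminates."""
    terms, u = [], x
    while u > 0 and len(terms) < max_terms:
        a = ceil(1 / u)          # exact: 1/u is a Fraction
        terms.append(a)
        u = a * u - 1
    return terms

def engel_value(terms: list[int]) -> Fraction:
    total, prod = Fraction(0), 1
    for a in terms:
        prod *= a
        total += Fraction(1, prod)
    return total

seq = engel_expansion(Fraction(3, 7))
print(seq)                                   # [3, 4, 7]
assert engel_value(seq) == Fraction(3, 7)    # 1/3 + 1/12 + 1/84 = 3/7
```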
MT-TMVP: Modular Tiled TMVP-based Polynomial Multiplication for Post-Quantum Cryptography on FPGAs
As quantum technology advances, developing cryptographic solutions resistant to quantum attacks is crucial. Post-Quantum Cryptography (PQC) provides a practical approach, as it runs on classical computers. PQC schemes rely on hard mathematical problems, with lattice-based schemes, among those recognized by the National Institute of Standards and Technology (NIST), known for their small key sizes. Hardware implementation of these schemes faces challenges due to the computational intensity of operations like polynomial multiplication, especially on resource-constrained devices. This paper proposes a novel Modular Tiled Toeplitz Matrix-Vector Polynomial Multiplication (MT-TMVP) for lattice-based PQC algorithms and presents a resource-optimized Field Programmable Gate Array (FPGA) architecture. The proposed implementation significantly reduces resource utilization and Area-Delay Product (ADP) compared to state-of-the-art polynomial multipliers: it uses 99.68% and 84.22% fewer Look-Up Tables (LUTs) on Artix-7 and Zynq UltraScale+ FPGAs, respectively, and achieves 99.94% and 80.02% ADP improvements on these FPGAs compared to the best results in the literature. By leveraging Block RAM (BRAM), the proposed architecture offers robustness against timing-based Side-Channel Attacks (SCAs), and the design is modular and scalable to any polynomial degree.
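For background, polynomial multiplication modulo $X^n + 1$ (negacyclic convolution, the core operation in many lattice-based schemes) can be written as a Toeplitz matrix-vector product, which is the kernel MT-TMVP tiles in hardware. A minimal reference version in plain Python, without the tiling, might look as follows; the cross-check against schoolbook multiplication is included to show the two agree.

```python
import random

def tmvp_negacyclic(a, b, q):
    # c = T * b where T is Toeplitz: entry T[i][j] depends only on i - j,
    # with wrapped-around coefficients of a negated (reduction mod X^n + 1).
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            coeff = a[i - j] if j <= i else -a[n + i - j]
            c[i] = (c[i] + coeff * b[j]) % q
    return c

def schoolbook_negacyclic(a, b, q):
    # Multiply then reduce: X^(n+k) == -X^k modulo X^n + 1.
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            k, sign = (i + j) % n, 1 if i + j < n else -1
            c[k] = (c[k] + sign * a[i] * b[j]) % q
    return c

n, q = 8, 3329  # toy parameters (Kyber-like modulus, tiny degree)
a = [random.randrange(q) for _ in range(n)]
b = [random.randrange(q) for _ in range(n)]
assert tmvp_negacyclic(a, b, q) == schoolbook_negacyclic(a, b, q)
```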
On the BUFF Security of ECDSA with Key Recovery
In the usual syntax of digital signatures, the verification algorithm takes a verification key in addition to a signature and a message, whereas in ECDSA with key recovery, which is used in Ethereum, no verification key is input to the verification algorithm. Instead, a verification key is recovered from a signature and a message. In this paper, we explore BUFF security of ECDSA with key recovery (KR-ECDSA), where BUFF stands for Beyond UnForgeability Features (Cremers et al., IEEE S&P 2021). As a result, we show that KR-ECDSA provides BUFF security, except for weak non-resignability (wNR). We highlight that the verification algorithm of KR-ECDSA takes an Ethereum address addr as input, defined as the rightmost 160 bits of the Keccak-256 hash of the corresponding ECDSA verification key, and checks that the hash of the recovered verification key equals addr. Our security analysis shows that this procedure is mandatory to provide BUFF security. We also discuss whether wNR is mandatory in Ethereum or not. To show that the above equality check is mandatory for BUFF security in KR-ECDSA, we demonstrate that the original ECDSA does not provide any BUFF security. As a by-product of the analysis, we show that one of our BUFF attacks also works against the ECDSA-based adaptor signature scheme of Aumayr et al. (ASIACRYPT 2021) and the blind adaptor signature scheme of Qin et al. (IEEE S&P 2023), which builds on the former. We emphasize that the attack is positioned outside of their security models.