Papers updated in last 183 days (1695 results)
Post-Quantum Privacy for Traceable Receipt-Free Encryption
Traceable Receipt-free Encryption (TREnc) was recently introduced (Asiacrypt'22) as a verifiable public-key encryption primitive that allows ciphertexts to be randomized in transit, removing any subliminal information up to a public trace that prevents malleability of the underlying plaintexts. This unique feature generically enables the construction of voting systems in which voters can encrypt their votes, trace whether a published ballot takes their choices into account, and are prevented from proving how they voted. While a very promising primitive, the few existing TREnc mechanisms rely solely on discrete-logarithm-related assumptions, making them vulnerable to the well-known record-now/decrypt-later attack in anticipation of quantum computers.
In this article, we address this limitation by building the first TREnc that can be safely used today and until the advent of quantum adversaries. More precisely, based on the observation that security must hold at the time the primitive is used while only privacy must persist into the post-quantum era, our solution relies on a mix of pre-quantum and post-quantum cryptography. As a first contribution, we generalize the original TREnc primitive, which is too restrictive to be easily compatible with built-in lattice-based semantically secure encryption. Our more flexible model keeps all the ingredients that generically imply receipt-free voting. Next, we design our construction with the following essential properties for trustworthy elections: (i) it is provably secure in the standard model; (ii) it relies on standard assumptions, namely Ring Learning With Errors (RLWE) coupled with pairing-based statistical zero-knowledge simulation-sound SXDH-based proofs; and (iii) it further enjoys a public-coin common reference string, removing the need for a trusted setup.
A New Generalized Attack on RSA-like Cryptosystems
Rivest, Shamir, and Adleman published the RSA cryptosystem in 1978, and it has been widely used over the last four decades. The security of RSA is based on the difficulty of factoring large integers $N = pq$, where $p$ and $q$ are prime numbers. The public exponent $e$ and the private exponent $d$ are related by the equation $ed - k(p-1)(q-1) = 1$. Recently, Cotan and Teseleanu (NordSec 2023) introduced a variant of RSA where the public exponent $e$ and the private exponent $d$ satisfy the equation $ed - k(p^n-1)(q^n-1) = 1$ for some positive integer $n$. In this paper, we study the general equation $eu - (p^n - 1)(q^n - 1)v = w$ with positive integers $u$ and $v$, and $w\in \mathbb{Z}$. We show that, given the public parameters $N$ and $e$, one can recover $u$ and $v$ and factor the modulus $N$ in polynomial time, under specific conditions on $u$, $v$, and $w$, by combining continued fractions with Coppersmith's lattice-reduction-based algorithm. Furthermore, we show that if the private exponent $d$ in an RSA-like cryptosystem is either too small or too large, then $N$ can be factored in polynomial time. This attack also applies to the standard RSA cryptosystem.
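The continued-fraction component of such attacks can be illustrated on a toy instance of the variant equation with $n = 2$. The sketch below is plain Python; the primes, the small private exponent $d = 17$, and the candidate-filtering checks are hypothetical toy choices, not parameters or algorithms from the paper. It recovers a factor of $N$ from the public pair $(N, e)$ alone, exploiting that $(p^2-1)(q^2-1) \approx N^2$.

```python
from math import isqrt

# Toy instance of the RSA-like key equation e*d - k*(p^2-1)*(q^2-1) = 1
p, q = 1009, 1013
N = p * q
phi = (p**2 - 1) * (q**2 - 1)
d = 17                          # deliberately small private exponent
e = pow(d, -1, phi)
assert (e * d - 1) % phi == 0   # key equation holds with k = (e*d - 1)//phi

def convergents(num, den):
    """Continued-fraction convergents of num/den, yielded as (numer, denom)."""
    h0, h1, k0, k1 = 0, 1, 1, 0
    while den:
        a, r = divmod(num, den)
        h0, h1 = h1, a * h1 + h0
        k0, k1 = k1, a * k1 + k0
        num, den = den, r
        yield h1, k1

# Attack using only (N, e): since (p^2-1)(q^2-1) ~ N^2, the fraction k/d
# appears among the convergents of e / N^2 when d is small enough.
recovered = None
for k_cand, d_cand in convergents(e, N * N):
    if k_cand == 0 or (e * d_cand - 1) % k_cand:
        continue
    phi_cand = (e * d_cand - 1) // k_cand
    # If phi_cand is right, p^2 and q^2 are the roots of
    # x^2 - S*x + N^2 with S = N^2 + 1 - phi_cand (note S >= 2N by AM-GM).
    S = N * N + 1 - phi_cand
    if S < 2 * N:
        continue
    disc = S * S - 4 * N * N
    if isqrt(disc) ** 2 != disc:
        continue
    p2 = (S + isqrt(disc)) // 2
    root = isqrt(p2)
    if root ** 2 == p2 and 1 < root < N and N % root == 0:
        recovered = root        # a nontrivial factor of N
        break

assert recovered in (p, q)
```

The final quadratic-root step stands in for the Coppersmith stage of the general attack, which handles the regimes where the plain continued-fraction test is insufficient.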
Packed Sumcheck over Fields of Small Characteristic with Application to Verifiable FHE
Verifiable computation over encrypted data is gaining increasing attention, and using SNARKs to provide proofs for FHE operations has emerged as a promising approach. However, the mismatch between FHE's typically small prime fields and SNARKs' larger prime fields creates challenges for verifiable FHE. In this work, we construct a packed sumcheck protocol specifically designed for small fields. This approach leverages folding and repetition techniques to maintain security without field expansion, with all operations performed over the base field. For a field $\mathbb{F}_p$ requiring $k$-fold expansion, our sumcheck protocol operates with $(\log k + l)$ variables, where the sumcheck statement is a product of $d$ multilinear polynomials. The prover can complete the proof in $O(kd^2 + k^{1.807}d) \cdot 2^l$ modular multiplications and $O(k^2d^2)\cdot 2^l$ modular additions over $\mathbb{F}_p$.
We further propose a proof system for bit-wise bootstrapping by integrating the packed sumcheck protocol with the Brakedown (CRYPTO 2023) and Binius (EUROCRYPT 2025) commitment schemes. Our construction exploits the highly repetitive structure of bit-wise bootstrapping by decomposing the procedure into a sequence of vector operations, enabling the design of an efficient proof protocol through the verification of these vector relations. The resulting system has linear prover time while performing all computations over the base field.
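As background, the classic unpacked sumcheck protocol that such constructions build on fits in a few lines. The Python toy below (the field modulus and the evaluation table are arbitrary illustrative choices; this is the textbook single-polynomial protocol, not the packed variant from the paper) simulates an honest prover and the verifier's round checks for a claimed hypercube sum of a multilinear polynomial:

```python
import random

P = 2**31 - 1                 # illustrative prime field modulus

def sumcheck(table, claimed_sum, p=P):
    """Textbook sumcheck for one multilinear polynomial f, given by its
    2^l evaluations over {0,1}^l (most significant index bit = first
    variable). Returns True iff the verifier accepts claimed_sum."""
    claim = claimed_sum % p
    rounds = len(table).bit_length() - 1
    for _ in range(rounds):
        half = len(table) // 2
        lo, hi = table[:half], table[half:]
        # Prover's message: the degree-1 round polynomial g, sent as
        # its evaluations g(0) and g(1).
        g0, g1 = sum(lo) % p, sum(hi) % p
        if (g0 + g1) % p != claim:      # verifier's consistency check
            return False
        r = random.randrange(p)         # verifier's random challenge
        # Fold: fix the current variable to r in the evaluation table.
        table = [((1 - r) * a + r * b) % p for a, b in zip(lo, hi)]
        claim = ((1 - r) * g0 + r * g1) % p
    # Final check: a single oracle evaluation of f at the random point.
    return table[0] == claim

f = [3, 1, 4, 1, 5, 9, 2, 6]          # a 3-variable multilinear polynomial
assert sumcheck(f, sum(f))            # honest claim accepted
assert not sumcheck(f, sum(f) + 1)    # wrong claim rejected in round one
```

The packed protocol of the paper keeps this round structure but works over the small base field directly, using folding and repetition in place of extension-field arithmetic.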
Context-Dependent Threshold Decryption and its Applications
We initiate the study of high-threshold public-key decryption, along with an enhanced security feature called context-dependent decryption.
Our study includes definitions, constructions, security proofs, and applications.
The notion of high-threshold decryption has received almost no attention in the literature. The enhanced security feature of context-dependent decryption is entirely new, and plays an important role in many natural applications of threshold decryption.
An Efficient Variant of F4 Algorithm for Solving MQ Problem
In this paper, we propose an enhanced variant of the F4 algorithm specifically designed for efficiently solving multivariate quadratic (MQ) problems, which are central to many post-quantum cryptographic schemes. Our approach overcomes a major inefficiency of conventional F4 by integrating a Hilbert-driven strategy that determines the optimal number of S-polynomials to generate at each degree, thereby reducing unnecessary zero reductions. We further introduce refined pair selection techniques that prioritize candidates yielding S-polynomials with smaller leading terms, which in turn minimizes the dimensions of intermediate matrices used during reduction. Experimental results show that our implementation outperforms state-of-the-art systems such as M4GB and Magma's F4 in both single-core and multi-core environments. Notably, our method sets new records in the Fukuoka MQ Challenge for Type VI problems over $\mathbb{F}_{31}$ with $m = 21$ and $m = 22$, demonstrating the robustness and practical impact of our approach in solving highly challenging MQ instances.
Somewhat Homomorphic Encryption from Linear Homomorphism and Sparse LPN
We construct somewhat homomorphic encryption from the sparse learning-parities-with-noise problem, along with an assumption that implies linearly homomorphic encryption (e.g., the decisional Diffie-Hellman or decisional composite residuosity assumptions). Our resulting schemes support an a-priori bounded number of homomorphic operations: $O(\log \lambda / \log \log \lambda)$ multiplications followed by poly($\lambda$) additions, where $\lambda$ is a security parameter. These schemes have compact ciphertexts: before and after homomorphic evaluation, the bit length of each ciphertext is a fixed polynomial in the security parameter $\lambda$, independent of the number of homomorphic operations that the scheme supports. This gives the first constructions of somewhat homomorphic encryption that can evaluate the class of bounded-degree polynomials without relying on lattice assumptions or bilinear maps.
Our new encryption schemes are conceptually simple: much as in Gentry, Sahai, and Waters’ fully homomorphic encryption scheme, ciphertexts in our scheme are matrices, homomorphic addition is matrix addition, and homomorphic multiplication is matrix multiplication. Moreover, when encrypting many messages at once and performing many homomorphic evaluations at once, the bit length of the ciphertexts in (some of) our schemes can be made arbitrarily close to the bit length of the plaintexts. The main limitation of our schemes is that they require a large evaluation key, whose size scales with the complexity of the homomorphic computation performed, though this key can be re-used across any polynomial number of encryptions and evaluations. Our construction builds on recent work of Dao, Ishai, Jain, and Lin, who construct a homomorphic secret-sharing scheme from the sparse-LPN assumption.
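The "ciphertexts are matrices" intuition can be illustrated with a noise-free (and therefore completely insecure) eigenvector toy in the spirit of GSW: ciphertexts share a secret eigenvector whose eigenvalue is the plaintext, so matrix addition and multiplication act directly on plaintexts. All parameters below are hypothetical toy choices; this is only the algebraic skeleton, not the paper's sparse-LPN construction.

```python
import random

q = 97                        # toy modulus (hypothetical parameters)
v = [1, 5, 7]                 # secret vector; v[0] = 1 simplifies decryption
w = [5, -1, 0]                # orthogonal to v: 5*1 - 1*5 + 0*7 = 0

def matvec(C, x):
    return [sum(c * xi for c, xi in zip(row, x)) % q for row in C]

def matadd(A, B):
    return [[(a + b) % q for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) % q
             for j in range(3)] for i in range(3)]

def encrypt(m):
    # C = m*I + R, with each row of R a random multiple of w, so that
    # R @ v = 0 and hence C @ v = m * v (mod q): v is an eigenvector
    # of every ciphertext, with the plaintext as its eigenvalue.
    C = [[m if i == j else 0 for j in range(3)] for i in range(3)]
    for i in range(3):
        c = random.randrange(q)
        for j in range(3):
            C[i][j] = (C[i][j] + c * w[j]) % q
    return C

def decrypt(C):
    return matvec(C, v)[0]    # (C @ v)[0] = m * v[0] = m

C1, C2 = encrypt(3), encrypt(4)
assert decrypt(matadd(C1, C2)) == 7    # matrix addition adds plaintexts
assert decrypt(matmul(C1, C2)) == 12   # matrix product multiplies plaintexts
```

Real schemes hide the eigenvector relation behind noise; managing that noise growth is exactly what bounds the number of supported multiplications.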
KZH-Fold: Accountable Voting from Sublinear Accumulation
Accumulation schemes are powerful primitives that enable distributed and incremental verifiable computation with less overhead than recursive SNARKs. However, most existing schemes with constant-size accumulation verifiers suffer from linear-sized accumulators and deciders, leading to linear-sized proofs that are unsuitable in distributed settings such as accountable voting protocols. Our contributions are as follows:
1) We introduce KZH, a novel multilinear polynomial commitment scheme (PCS) with sublinear opening, and KZH-fold, a polynomial accumulation (PA) scheme in which the verifier performs only $3$ group scalar multiplications while the accumulator size and decider time are $O(n^{1/2})$. Our scheme generalizes to accumulator and decider complexity of $O(k \cdot n^{1/k})$ with verifier complexity $O(k)$.
2) As an orthogonal contribution to KZH-fold, we build an IVC (PCD) scheme for R1CS via Spartan+PA, which, when instantiated with KZH-fold (i.e., Spartan+KZH-fold), yields a sublinear proof and decider. With the Spartan+PA recipe, we build non-uniform IVC and non-uniform PCD. Our non-uniform PCD is the first approach in which the prover's computation and communication at each step grow sublinearly with the combined circuit size of all instructions. This approach can be instantiated with any PA and does not depend on KZH-fold.
3) We demonstrate the power of Spartan+KZH-fold by implementing an accountable voting scheme using a novel signature aggregation protocol supporting millions of participants, significantly reducing communication overhead and verifier time compared to BLS-based aggregation. We implemented and benchmarked our protocols: Spartan+KZH-fold achieves a 2000x reduction in communication and a 50x improvement in decider time over Nova when proving 2000 Poseidon hashes, at the cost of 3x the prover time.
Shaking up authenticated encryption
Authenticated encryption (AE) is a cryptographic mechanism that allows communicating parties to protect the confidentiality and integrity of messages exchanged over a public channel, provided they share a secret key. In this work, we present new AE schemes leveraging the SHA-3 standard functions SHAKE128 and SHAKE256, offering 128 and 256 bits of security strength, respectively, and their “Turbo” counterparts. They support session-based communication, where a ciphertext authenticates the sequence of messages since the start of the session. The chaining in the session allows decryption in segments, avoiding the need to buffer the entire deciphered cryptogram between decryption and validation. Moreover, thanks to the collision resistance of (Turbo)SHAKE, they provide so-called CMT-4 committing security, meaning that they provide strong guarantees that a ciphertext uniquely binds to the key, plaintext, and associated data. The AE schemes we propose have a unique combination of advantages. The most important are that 1) their security is based on the security claim of SHAKE, which has received a large amount of public scrutiny, that 2) they make use of the standard Keccak-$p$ permutation, which not only receives increasing dedicated hardware support but also allows competitive software-only implementations thanks to the TurboSHAKE instances, and that 3) they do not suffer from a 64-bit birthday bound like most AES-based schemes. Of independent interest, we introduce the deck cipher as the stateful counterpart of the deck function and the duplex cipher generalizing the keyed duplex, and harmonize their security notions. Finally, we provide an elegant solution for multi-layer domain separation.
Asymptotically Optimal Early Termination for Dishonest Majority Broadcast
Deterministic broadcast protocols among $n$ parties tolerating $t$ corruptions require $\min\{f+2, t+1\}$ rounds, where $f \le t$ is the actual number of corruptions in an execution of the protocol. We provide the first protocol which is optimally resilient, adaptively secure, and asymptotically matches this lower bound for any $t<(1-\varepsilon)n$. By contrast, the best known algorithm in this regime by Loss and Nielsen (EUROCRYPT'24) always requires $O(\min\{f^2, t\})$ rounds. Our main technical tool is a generalization of the notion of polarizer introduced by Loss and Nielsen, which allows parties to obtain transferable cryptographic evidence of missing messages with fewer rounds of interaction.
Adversary Resilient Learned Bloom Filters
A learned Bloom filter (LBF) combines a classical Bloom filter (CBF) with a learning model to reduce the amount of memory needed to represent a given set while achieving a target false positive rate (FPR). Provable security against adaptive adversaries that deliberately attempt to increase the FPR has been studied for CBFs. However, achieving adaptive security for LBFs is an open problem. In this paper, we close this gap and show how to achieve adaptive security for LBFs. In particular, we define several adaptive security notions capturing varying degrees of adversarial control, including full and partial adaptivity, in addition to LBF extensions of existing adversarial models for CBFs, including the Always-Bet and Bet-or-Pass notions. We propose two secure LBF constructions, \bloomname{} and \betpassbloomname{}, and formally prove their security under these models, assuming the existence of one-way functions. Based on our analysis and use case evaluations, our constructions achieve strong security guarantees while maintaining competitive FPR and memory overhead.
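For readers unfamiliar with the baseline, the classical Bloom filter that an LBF combines with a learned model can be sketched in a few lines. The parameters $m$ and $k$ below are arbitrary illustrative choices, and the salted-SHA-256 hashing is just one convenient way to derive $k$ index functions:

```python
import hashlib

class BloomFilter:
    """Minimal classical Bloom filter: m bits, k hash functions derived
    by salting SHA-256. No false negatives; the false-positive rate is
    governed by m, k, and the number of inserted items."""
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m)            # one byte per bit, for clarity

    def _positions(self, item):
        for salt in range(self.k):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def query(self, item):
        # True for every inserted item; may be a false positive otherwise.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
for word in ["alice", "bob", "carol"]:
    bf.add(word)
assert all(bf.query(w) for w in ["alice", "bob", "carol"])
```

An adaptive adversary in the CBF setting probes `query` to discover false positives and then replays them to inflate the observed FPR; the notions defined in the paper formalize how much of that interaction the adversary controls.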
Related-Key Cryptanalysis of FUTURE
At Africacrypt 2022, Gupta et al. introduced FUTURE, a 64-bit lightweight block cipher based on an MDS matrix and designed in an SPN structure, with a focus on achieving single-cycle encryption and low implementation cost, especially in unrolled architectures. While the designers evaluated its security under various attack models, they did not consider related-key cryptanalysis. In this work, we address this gap by analyzing the security of FUTURE in the related-key setting using techniques based on Mixed Integer Linear Programming (MILP). We first propose a simplified and generalizable approach for applying MILP to model any MDS or near-MDS-based cipher that follows the substitution-permutation paradigm. Using our MILP framework, we construct an 8-round related-key distinguisher on FUTURE, requiring $2^{58}$ plaintexts, $2^{58}$ \xor operations, and negligible memory. We further identify a full-round (i.e., 10 rounds) boomerang distinguisher with a probability of $2^{-45.8}$, enabling a distinguishing attack with $2^{48}$ data and time complexity. In addition, we develop a full-round key recovery attack on FUTURE with data, time, and memory complexities of $2^{18}$, $2^{70}$, and $2^{64}$, respectively. Although all known single-key attacks remain impractical (with time complexities of at least $2^{112}$), our results demonstrate a full-round cryptanalysis of FUTURE in the related-key setting, thereby challenging its claimed security guarantees.
Efficient Permutation Correlations and Batched Random Access for Two-Party Computation
In this work we formalize the notion of a two-party permutation correlation $(A, B), (C, \pi)$ s.t. $\pi(A)=B+C$ for a random permutation $\pi$ of $n$ elements and vectors $A,B,C\in \mathbb{F}^n$. This correlation can be viewed as an abstraction and generalization of the Chase et al. (Asiacrypt 2020) share translation protocol. We give a systematization of knowledge for how such a permutation correlation can be derandomized to allow the parties to perform a wide range of oblivious permutations of secret-shared data. This systematization immediately enables the translation of various popular honest-majority protocols to be efficiently instantiated in the two-party setting, e.g. collaborative filtering, sorting, database joins, graph algorithms, and many more.
We give two novel protocols for efficiently generating a random permutation correlation. The first uses MPC-friendly PRFs to generate a correlation of $n$ elements, each of size $\ell=\log|\mathbb{F}|$ bits, with $O(n\ell)$ bit-OTs, time, communication, and only 3 rounds including setup. Similar asymptotics previously required relatively expensive public-key cryptography, e.g. Paillier or LWE. Our protocol implementation for $n=2^{20},\ell=128$ requires just 7 seconds and $\sim2\ell n$ OTs of communication, respective $40\times$ and $1.1\times$ improvements on the LWE solution of Juvekar et al. (CCS 2018). The second protocol is based on pseudo-random correlation generators and achieves an overhead that is sublinear in the string length $\ell$, i.e. the communication and number of OTs is $O(n\log \ell)$. The overhead of the latter protocol has larger hidden constants, and therefore is more efficient only when long strings are permuted, e.g. in graph algorithms.
Finally, we present a suite of highly efficient protocols based on permutations for performing various batched random access operations. These include the ability to extract a hidden subset of a secret-shared list. More generally, we give ORAM-like protocols for obliviously reading and writing from a list in a batched manner. We argue that this suite of batched random access protocols should be a first class primitive in the MPC practitioner's toolbox.
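The correlation itself, and its one-time use for obliviously permuting a vector, can be sketched in a few lines of Python. The field size and vector length are arbitrary toy choices, and the sketch shows only the information flow of consuming a correlation, not the secure generation protocols from the paper:

```python
import random

# Toy permutation correlation over F_q: party 1 holds (A, B), party 2
# holds (C, pi), with pi(A) = B + C elementwise, where pi(A)[i] = A[pi[i]].
q, n = 2**61 - 1, 8                               # illustrative sizes
pi = list(range(n))
random.shuffle(pi)                                # party 2's permutation
A = [random.randrange(q) for _ in range(n)]
B = [random.randrange(q) for _ in range(n)]
C = [(A[pi[i]] - B[i]) % q for i in range(n)]     # completes the correlation
assert all(A[pi[i]] == (B[i] + C[i]) % q for i in range(n))

# Consuming the correlation: party 1 wants its vector X permuted by pi
# without learning pi, and without party 2 learning X. Party 1 sends the
# one-time-masked D = X - A; then (pi(D) + C, B) is an additive sharing
# of pi(X) across the two parties.
X = [random.randrange(q) for _ in range(n)]
D = [(x - a) % q for x, a in zip(X, A)]           # sent to party 2
share2 = [(D[pi[i]] + C[i]) % q for i in range(n)]  # party 2's share
assert all((share2[i] + B[i]) % q == X[pi[i]] for i in range(n))
```

The correctness is one line of algebra: $\pi(D)[i] + C[i] = X[\pi[i]] - A[\pi[i]] + A[\pi[i]] - B[i] = X[\pi[i]] - B[i]$, so adding party 1's $B[i]$ recovers $\pi(X)$.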
Signatures with Tight Adaptive Corruptions from Search Assumptions
We construct the \emph{first} tightly secure signature schemes in the multi-user setting with adaptive corruptions from static search assumptions, such as classical discrete logarithm, RSA, factoring, or post-quantum group action discrete logarithm assumptions. In contrast to our scheme, the previous tightly secure schemes are based on decisional assumptions (e.g., (group action) DDH) or interactive search assumptions (e.g., one-more CDH). The security of our schemes is independent of the number of users, signing queries, and random oracle queries, and forging our signatures is as hard as solving the underlying static search problems. Our signature schemes are based on an identification scheme with multiple secret keys per public key and ``second-key recovery resistance,'' the difficulty of finding another secret key given a public and secret key pair (e.g., Okamoto identification (CRYPTO'92) and Parallel-OR identification (CRYPTO'94)). These properties allow a reduction to solve a search problem while answering signing and corruption queries for all users in the signature security game. To convert such an identification scheme into a signature scheme tightly, we employ the randomized Fischlin transformation introduced by Kondi and shelat (Asiacrypt 2022), which provides straight-line extraction. Klooss et al. (Asiacrypt 2024) showed that the randomized Fischlin transformation satisfies the zero-knowledge property in the programmable ROM if an exponential-size challenge space is used. This fact intuitively implies that the transformation with a large challenge space guarantees the tight security of our signature scheme in the programmable random oracle model, but we successfully prove its tight security in the \emph{non-programmable} random oracle model \emph{without enlarging the challenge size}.
Also, as a side contribution, we reconsider the zero-knowledge property of the randomized Fischlin transformation and show that the transformation with a polynomial-size challenge space is zero-knowledge if the underlying Sigma protocol satisfies certain properties.
FLUENT: A Tool for Efficient Mixed-Protocol Semi-Private Function Evaluation
In modern business-to-customer interactions, handling private or confidential data is essential. Private Function Evaluation (PFE) protocols ensure the privacy of both the customers' input data and the business's function evaluated on it, which is often sensitive intellectual property (IP). However, fully hiding the function in PFE results in high performance overhead. Semi-Private Function Evaluation (SPFE) is a generalization of PFE that only partially hides the function, while specific non-critical components remain public. Our paper introduces a novel framework designed to make SPFE accessible to non-experts and practical for real-world deployments.
To achieve this, we improve on previous SPFE solutions in two aspects.
First, we enhance the developer experience by leveraging High-Level Synthesis (HLS), making our tool more user-friendly than previous SPFE frameworks.
Second, we achieve a \(2 \times\) speedup compared to the previous state-of-the-art through more efficient underlying constructions and the usage of Lookup Tables (LUTs).
We evaluate the performance of our framework in terms of communication and runtime efficiency. Our final implementation is available as an open-source project, aiming to bridge the gap between advanced cryptographic protocols and their practical application in industry scenarios.
Improved Subfield Curve Search For Specific Field Characteristics
Isogeny-based cryptography bases its security on the hardness of the supersingular isogeny problem: finding an isogeny between two supersingular curves defined over a quadratic extension field.
The Delfs-Galbraith algorithm is the most efficient procedure for solving the supersingular isogeny problem with a time complexity of $\tilde{\mathcal{O}}(p^{1/2})$ operations. The bottleneck of the Delfs-Galbraith algorithm is the so-called subfield curve search (i.e., finding an isogenous supersingular elliptic curve defined over the prime field), which determines the time complexity of the aforementioned algorithm.
For efficiency, most recent isogeny-based constructions propose using finite fields of characteristic $p = 2^a \cdot f - 1$ for some positive integers $a$ and $f$.
This work focuses on primes of that particular form, and presents two new algorithms for finding subfield curves with a time complexity of $\mathcal{O}(p^{1/2})$ operations and a memory complexity polynomial in $\log_2{p}$. Such algorithms exploit the existence of large $2^a$-torsion points and extend the subfield root detection algorithm of Corte-Real Santos, Costello, and Shi (Crypto, 2022) to a projective scenario where one can work over the curve coefficients instead of the explicit j-invariants of the curves.
These algorithms easily extend to primes of the form $p =2^a \cdot f + 1$ and $p = \ell^a \cdot f - 1$ with $\ell$ being a small integer.
This study also examines the usage of radical $3$-isogenies with the proposed extended subfield root detection algorithm. In this context, the results indicate that the radical $3$-isogeny approach is competitive compared to the state-of-the-art algorithms.
How Fast Does the Inverse Walk Approximate a Random Permutation?
For a finite field $\mathbb{F}$ of size $n$, the (patched) inverse permutation $\operatorname{INV}: \mathbb{F} \to \mathbb{F}$ computes the inverse of $x$ over $\mathbb{F}$ when $x\neq 0$ and outputs $0$ when $x=0$, and the $\operatorname{ARK}_K$ (AddRoundKey) permutation adds a fixed constant $K$ to its input, i.e.,
$$\operatorname{INV}(x) = x^{n-2} \hspace{.1in} \mbox{and} \hspace{.1in}
\operatorname{ARK}_K(x) = x + K \;.$$
We study the process of alternately applying the $\operatorname{INV}$ permutation followed by a random key-addition permutation $\operatorname{ARK}_K$ (for a uniformly random $K$), which is a random walk over the alternating (or symmetric) group that we call the {\em inverse walk}.
We show both lower and upper bounds on the number of rounds it takes for this process to approximate a random permutation over $\mathbb{F}$. We show that the inverse walk over the field of size $n$ with $$r = \Theta\left(n\log n + n\log \frac{1}{\epsilon}\right)$$ rounds generates a permutation that is $\epsilon$-close (in variation distance) to a uniformly random even permutation (i.e., a permutation from the alternating group $A_{n}$). Our result comes with explicit constants and is tight up to logarithmic factors.
Our result answers an open question from the work of Liu, Pelecanos, Tessaro, and Vaikuntanathan (CRYPTO 2023) by proving the $t$-wise independence of (a variant of) AES for $t$ up to the square root of the field size, compared to the original result that only held for $t=2$. It also constitutes a significant improvement on a result of Carlitz (Proc. American Mathematical Society, 1953) who showed a {\em reachability result}: namely, that every even permutation can be generated {\em eventually} by composing $\operatorname{INV}$ and $\operatorname{ARK}$. We show a {\em tight convergence result}, namely a tight quantitative bound on the number of rounds to reach a random (even) permutation.
Our work brings to the forefront the view of block ciphers as random walks and uses an entirely new set of (functional analytic) tools to study their pseudorandomness, both of which we hope will prove useful in the study of block ciphers.
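The walk being studied is easy to simulate directly. The toy below, over the hypothetical small field $\mathbb{F}_{11}$, composes a few INV-then-ARK rounds applied to every field element and checks that the resulting map is indeed a permutation of the field:

```python
import random

p = 11                          # hypothetical small field size

def inv(x):
    # Patched inverse: INV(x) = x^(p-2), i.e. x^{-1} for x != 0, INV(0) = 0
    return pow(x, p - 2, p)

def inverse_walk(r, seed=0):
    """Apply r rounds of INV followed by ARK_K (addition of a random
    round constant K) to every element of F_p; return the induced map
    as a list, where position x holds the image of x."""
    rng = random.Random(seed)
    keys = [rng.randrange(p) for _ in range(r)]
    out = []
    for x in range(p):
        for K in keys:
            x = (inv(x) + K) % p
        out.append(x)
    return out

# Each round is a bijection on F_p, so the composition is a permutation.
perm = inverse_walk(r=5)
assert sorted(perm) == list(range(p))
```

The theorem concerns how many such rounds, with fresh random keys, are needed before the induced permutation is statistically close to a uniform even permutation; the simulation only illustrates the object, not the bound.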
Quantum Periodic Distinguisher Construction: Symbolization Method and Automated Tool
As one of the famous quantum algorithms, Simon's algorithm enables the efficient derivation of the period of periodic functions in polynomial time. However, the complexity of constructing periodic functions has hindered the widespread application of Simon's algorithm in symmetric-key cryptanalysis. Currently, aside from the exhaustive search-based testing method introduced by Canale et al. at CRYPTO 2022, there is no unified model for effectively searching for periodic distinguishers. Although Xiang et al. established a link between periodic functions and truncated differential theory at ToSC 2024, their approach lacks the ability to construct periods using unknown differentials and does not provide automated tools. This limitation underscores the inadequacy of existing methods in identifying periodic distinguishers for complex structures. In this paper, we address the challenge of advancing periodic distinguishers for symmetric-key ciphers. First, we propose a more generalized method for constructing periodic distinguishers, addressing the limitations of Xiang et al.'s theory in handling unknown differences. We further extend our method to probabilistic periodic distinguishers, thereby extending the separability property proposed by Hodžić et al. in 2020. As a result, our method can cover a wider range of periodic distinguishers. Second, we introduce a novel symbolic representation to simplify the search of periodic distinguishers. Based on this representation, we propose the first fully automated SMT-based search model, which efficiently addresses the challenges of manual searching in complex structures. Finally, we extend the model to SPN structures based on our new method. Our model has broad applicability through significant advancements in analyzing generalized Feistel structures (GFSs) and SPN-based ciphers. 
We have achieved new quantum distinguishers with the following round configurations: 10 rounds for GFS-4F, 10 rounds for LBlock, 10 rounds for TWINE, and 16 rounds for Skipjack-B, improving the previous best results by 1, 2, 2, and 3 rounds, respectively. In the domain of SPN-based ciphers, our model has enabled the identification of novel periodic distinguishers, including the first 9-round distinguisher for SKINNY and the first 12-round distinguisher for CRAFT. These achievements lay the foundation for quantum cryptanalysis of SPN-based ciphers using Simon’s algorithm.
DeepFold: Efficient Multilinear Polynomial Commitment from Reed-Solomon Code and Its Application to Zero-knowledge Proofs
This work presents Deepfold, a novel multilinear polynomial commitment scheme (PCS) based on Reed-Solomon code that offers optimal prover time and a more concise proof size. For the first time, Deepfold adapts the FRI-based multilinear PCS to the list decoding radius setting, requiring significantly fewer query repetitions and thereby achieving a 3$\times$ reduction in proof size compared to Basefold (Crypto'24), while preserving its advantages in prover time. Compared with PolyFRIM (USENIX Security'24), Deepfold achieves a 2$\times$ improvement in prover time, verifier time, and proof size. Another contribution of this work is a batch evaluation scheme, which enables the FRI-based multilinear PCS to handle polynomials whose size is not a power of two more efficiently.
Our scheme has broad applications in zk-SNARKs, since PCS is a key component in modern zk-SNARK constructions. For example, when replacing the PCS component of Virgo (S&P'20) with Deepfold, our scheme achieves a 2.5$\times$ faster prover time when proving the knowledge of a Merkle tree with 256 leaves, while maintaining a similar proof size. When replacing the PCS component of HyperPlonk (Eurocrypt'23) with Deepfold, our scheme has about 3.6$\times$ faster prover time. Additionally, when applying our arbitrary-length input commitment to verifiable matrix multiplications for matrices of size 1200$\times$768 and 768$\times$2304, which are actual use cases in the GPT-2 model, the performance showcases a 2.4$\times$ reduction in prover time compared to previous approaches.
Critical Rounds in Multi-Round Proofs: Proof of Partial Knowledge, Trapdoor Commitments, and Advanced Signatures
Zero-knowledge simulators and witness extractors, initially developed for proving the security of proof systems, turned out to be also useful in constructing advanced protocols from simple three-move interactive proofs. However, in the context of multi-round public-coin protocols, the interfaces of these auxiliary algorithms become more complex, introducing a range of technical challenges that hinder the generalization of these constructions.
We introduce a framework to enhance the usability of zero-knowledge simulators and witness extractors in multi-round argument systems for protocol designs. Critical-round zero-knowledge relies on the ability to perform complete zero-knowledge simulations by knowing the challenge of just one specific round in advance. Critical-round special soundness aims to address a stringent condition for witness extraction by formalizing it to operate with a smaller tree of transcripts than the one needed for extended extraction, which either outputs the intended witness or solves the underlying hard problem in an argument system. We show that these notions are satisfied by diverse protocols based on MPC-in-the-Head, interactive oracle proofs, and split-and-fold arguments.
We demonstrate the usefulness of the critical round framework by constructing proofs of partial knowledge (Cramer, Damgård, and Schoenmakers, CRYPTO'94) and trapdoor commitments (Damgård, CRYPTO'89) from critical-round multi-round proofs. Furthermore, our results imply advancements in post-quantum secure adaptor signatures and threshold ring signatures based on MPC-in-the-Head, eliminating the need for (costly) generic NP reductions.
EndGame: Field-Agnostic Succinct Blockchain with Arc
We present EndGame, a novel blockchain architecture that achieves succinctness through Reed-Solomon accumulation schemes. Our construction enables constant-time verification of blockchain state while maintaining strong security properties. We demonstrate how to efficiently encode blockchain state transitions using Reed-Solomon codes and accumulate proofs of state validity using the ARC framework. Our protocol achieves optimal light client verification costs and supports efficient state management without trusted setup.
Secure Rate-Distortion-Perception Trade-off Over Channels: A Randomized Distributed Function Computation (RDFC) Application
Secure rate-distortion-perception (RDP) trade-offs arise in critical applications, such as semantic compression and privacy-preserving generative coding, where preserving perceptual quality while minimizing distortion is vital. This paper studies a framework for secure RDP over noiseless and noisy broadcast channels under strong secrecy constraints. We first characterize the exact secure RDP region for noiseless transmission channels. We then develop an inner bound on the secure RDP region for a memoryless broadcast channel with correlated noise components at the receivers' observations and prove its tightness under a more capable broadcast channel assumption. Our results demonstrate how optimized binning schemes simultaneously achieve high perceptual quality, low distortion, and strong secrecy, illuminating fundamental information-theoretic limits for next-generation trustworthy computation systems.
Zeroed Out: Cryptanalysis of Weak PRFs in Alternating Moduli
The growing adoption of secure multi-party computation (MPC) has driven the development of efficient symmetric key primitives tailored for MPC. Recent advances, such as the alternating moduli paradigm, have shown promise but leave room for cryptographic and practical improvements. In this paper, we analyze a family of weak pseudorandom functions (wPRF) proposed at Crypto 2024, focusing on their One-to-One parameter sets. We demonstrate that these configurations fail to achieve their intended one-to-one mappings and exploit this observation to develop an efficient key recovery attack.
Our analysis reveals critical vulnerabilities, reducing the complexity of key recovery to $\mathcal{O}(2^{\lambda/2} \log_2 \lambda)$ for the Standard One-to-One wPRF and $\mathcal{O}(2^{0.84\lambda})$ for the Reversed Moduli variant -- both substantially below their claimed $\lambda$-bit security. We validate our findings through experimental evaluation, confirming alignment between predicted and observed attack complexities.
A Framework for Group Action-Based Multi-Signatures and Applications to LESS, MEDS, and ALTEQ
A multi-signature scheme allows a list of signers to sign a common message. Multi-signatures are widely used in scenarios where the same message must be signed and transmitted by $N$ users: instead of concatenating $N$ individual signatures, employing a multi-signature reduces the data to be sent.
In recent years there have been numerous practical proposals in the discrete logarithm setting, such as MuSig2 (CRYPTO'21) for the Schnorr signature. Recently, these attempts have been extended to post-quantum assumptions, with lattice-based proposals such as MuSig-L (CRYPTO'22).
Given the growth of group action-based signatures, a natural question is whether a multi-signature can be built in the same setting. In this work, we present the first construction of such a primitive relying on group action assumptions. We obtain a 3-round scheme achieving concurrent security in the random oracle model (ROM).
Moreover, we instantiate it using the three group action-based candidates in NIST's additional call for post-quantum signatures, namely LESS, MEDS, and ALTEQ, obtaining good compression rates for different parameter sets.
Scrutinizing the Security of AES-based Hashing and One-way Functions
AES has cemented its position as the primary symmetric-key primitive for a wide range of cryptographic applications, which motivates analysis of the concrete security of AES in practical instantiations: for instance, the collision resistance of AES-based hashing, the key commitment security of AES-based authenticated encryption schemes, and the one-wayness of AES-based one-way functions in ZK and MPC protocols. In this work, we introduce single-color initial structures (SCIS) into meet-in-the-middle (MITM) attacks to address neutral word generation, a critical bottleneck in MITM collision attacks. The SCIS technique leverages new structural insights to enable efficient neutral word generation and leads to multiple improved results on AES compared to the state-of-the-art. In particular, we present the first classical one-block collision attack on 7-round AES-MMO/MP, marking the first advancement in the number of attacked rounds in over a decade and matching the best-known results in the quantum setting, as well as the first one-block collision attack on 4-round AES-128-DM, bridging the gap highlighted by Taiyama \textit{et al.} at Asiacrypt 2024 from a non-differential-based approach. Additionally, we provide a comprehensive list of new results on the security margins of AES-192, AES-256, Rijndael-192, and Rijndael-256 in multiple attack settings.
CTng: Secure Certificate and Revocation Transparency
We present CTng, an evolutionary and practical PKI design that efficiently addresses multiple key challenges faced by deployed PKI systems. CTng ensures strong security properties, including guaranteed transparency of certificates and guaranteed, unequivocal revocation, achieved under NTTP-security, i.e., without requiring trust in any single CA, logger, or relying party. These guarantees hold even in the presence of arbitrary corruptions of these entities, assuming only a known bound (f) of corrupt monitors (e.g., f=8), with minimal performance impact.
CTng also enables offline certificate validation and preserves relying-party privacy, while providing scalable and efficient distribution of revocation updates. Furthermore, CTng is post-quantum ready, maintaining efficiency even with high-overhead quantum-secure signature schemes.
These properties significantly improve upon current PKI designs. In particular, while Certificate Transparency (CT) aims to eliminate single points of trust, the existing specification still assumes benign loggers. Addressing this through log redundancy is possible, but rather inefficient, limiting deployed configurations to f ≤ 2.
We present a security analysis and an evaluation of our open-source CTng prototype, showing that it is efficient and scalable under realistic deployment conditions.
Towards ML-KEM & ML-DSA on OpenTitan
This paper presents extensions to the OpenTitan hardware root of trust that aim at enabling high-performance lattice-based cryptography.
We start by carefully optimizing ML-KEM and ML-DSA --- the two algorithms primarily recommended and standardized by NIST --- in software targeting the OTBN accelerator.
Based on profiling results of these implementations, we propose tightly integrated extensions to OTBN, specifically an interface from OTBN to OpenTitan's Keccak accelerator (KMAC core) and extensions to the OTBN ISA to support operations on 256-bit vectors.
We implement these extensions in hardware and show that we achieve a speedup by a factor between 6 and 9 for different operations and parameter sets of ML-KEM and ML-DSA compared to our baseline implementation on unmodified OTBN.
This speedup is achieved with an increase in cell count of less than 17% in OTBN, which corresponds to an increase of less than 3% for the full Earl Grey OpenTitan core.
An Optimized Instantiation of Post-Quantum MQTT protocol on 8-bit AVR Sensor Nodes
Since the selection of the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography (PQC) standardization algorithms, research on integrating PQC into security protocols such as TLS/SSL, IPSec, and DNSSEC has been actively pursued. However, PQC migration for Internet of Things (IoT) communication protocols remains largely unexplored. Embedded devices in IoT environments have limited computational power and memory, making it crucial to optimize PQC algorithms for efficient computation and minimal memory usage when deploying them on low-spec IoT devices. In this paper, we introduce KEM-MQTT, a lightweight and efficient Key Encapsulation Mechanism (KEM) for the Message Queuing Telemetry Transport (MQTT) protocol, widely used in IoT environments. Our approach applies the NIST KEM algorithm CRYSTALS-Kyber (Kyber) while leveraging MQTT’s characteristics and sensor node constraints. To enhance efficiency, we address certificate verification issues and adopt KEMTLS to eliminate the need for post-quantum digital signature algorithms (PQC-DSA) in mutual authentication. As a result, KEM-MQTT retains its lightweight properties while maintaining the security guarantees of TLS 1.3. We identify inefficiencies in existing Kyber implementations on 8-bit AVR microcontrollers (MCUs), which are highly resource-constrained. To address this, we propose novel implementation techniques that optimize Kyber for AVR, focusing on high-speed execution, reduced memory consumption, and secure implementation, including Signed LookUp-Table (LUT) Reduction. Our optimized Kyber achieves performance gains of 81%, 75%, and 85% in the KeyGen, Encaps, and Decaps processes, respectively, compared to the reference implementation. With approximately 3 KB of stack usage, our Kyber implementation surpasses all state-of-the-art Elliptic Curve Diffie-Hellman (ECDH) implementations.
Finally, in KEM-MQTT using Kyber-512, an 8-bit AVR device completes the handshake preparation process in 4.32 seconds, excluding the physical transmission and reception times.
A Round-Optimal Near-Linear Third-Party Private Set Intersection Protocol
Third-party private set intersection (PSI) enables two parties, each holding a private set, to compute their intersection and reveal the result only to an inputless third party. In this paper, we present an efficient round-optimal third-party PSI protocol. Our work is motivated by real-world applications such as contact tracing, where speed is essential while concurrently preserving privacy. Our construction only requires $2$ communication rounds and attains a near-linear computational complexity of $O(n^{1+\varepsilon})$ for large dataset size $n$, where $\varepsilon>0$ is any fixed constant. Our improvements stem from algorithmic changes and the incorporation of new techniques along with precise parameter selections to achieve a tight asymptotic bound. Furthermore, we also present a third-party PSI cardinality protocol, which has not been explored in prior third-party PSI work. In the third-party PSI cardinality setting, only the third party obtains the size of the intersection and nothing else. Our construction achieving the cardinality functionality attains a quasilinear computational complexity for the third party.
RAMenPaSTA: Parallelizable Scalable Transparent Arguments of Knowledge for RAM Programs
Recent advances in Zero-Knowledge Proof and Argument (ZKP/ZKA) Systems allow efficiently verifiable computation in the circuit-based model (where programs can be represented as Boolean or arithmetic circuits). However, the circuit-based model is not widely popular due to its unsuitability for big programs (as the circuit size is linear in the program's size). Hence, the research community began looking into the Random-Access Machine model of programs, namely RAM programs, which is better suited for general-purpose programming. Consequently, a flurry of work has sought to construct ZKP protocols for the correct execution of RAM programs. In this paper, we also develop ZKP/ZKA for RAM programs, with a particular focus on two properties: (i) parallelizability, which significantly reduces prover (and verifier) computation, and (ii) genericity, which means the construction requires only minimal assumptions and can be instantiated with any existing ZKP protocol. To our knowledge, previous ZKP constructions for RAM programs either (i) are not known to support proving or verifying in parallel, (ii) require making non-black-box use of specific SNARK constructions, or (iii) even require proving the computation of a random oracle.
To this end, we propose the so-called Conditional Folding Scheme (CFS) as a building block to construct ZKP/ZKA for RAM programs. We provide a non-trivial practical construction for our CFS that is natively parallelizable, which significantly brings the runtime of proof generation down to $\mathcal{O}(W \log N)$ (compared to $\Omega(W N)$ in previous works), where $W$ is the witness size of the heaviest operation, and $N$ is the number of execution steps. We rigorously prove the security of our CFS (also for the case of folding in parallel). Our scheme can be made non-interactive using the Fiat-Shamir transform. Our CFS is generic and does not require a trusted setup (yielding a ``transparent'' argument). It can be adapted to construct Parallelizable Scalable Transparent Arguments of Knowledge for RAM Programs that we dub RAMenPaSTA.
USpt: Updatable Signature with Public Tokens
An updatable signature (US) scheme allows valid signatures to be updated by an update token without accessing the newly generated signing key. Cini et al. (PKC’21) formally defined this primitive and gave several constructions. However, their security model requires the secrecy of the update token, which is not applicable in many common application scenarios where existing signatures have already been distributed to many parties. In addition, the same token can be used to update both the signing key and signatures, and all signatures can be updated by a single token, so even an adversarially generated signature might be updated.
This work explores the (im)possibility of constructing an updatable signature with public tokens (USpt). Specifically, we first define the updatable signature with public tokens and present its security model. Then, by examining existing US schemes, we find that a secure USpt must properly handle a transform function from the signature-update token to the key-update token. We further formally prove the impossibility of constructing a secure USpt if (1) there is no transform function between key pairs and signatures, or (2) the signature-update token can be derived from the public keys of adjacent epochs. Finally, we present a concrete USpt scheme based on the BLS signature.
Evasive LWE Assumptions: Definitions, Classes, and Counterexamples
The evasive LWE assumption, proposed by Wee [Eurocrypt'22 Wee] for constructing a lattice-based optimal broadcast encryption, has proven to be a powerful assumption, adopted by subsequent works to construct advanced primitives ranging from ABE variants to obfuscation for null circuits. However, a closer look reveals significant differences among the precise assumption statements involved in different works, leading to the fundamental question of how these assumptions compare to each other. In this work, we initiate a more systematic study of evasive LWE assumptions:
(i) Based on the standard LWE assumption, we construct simple counterexamples against three private-coin evasive LWE variants, used in [Crypto'22 Tsabary, Asiacrypt'22 VWW, Crypto'23 ARYY] respectively, showing that these assumptions are unlikely to hold.
(ii) Based on existing evasive LWE variants and our counterexamples, we propose and define three classes of plausible evasive LWE assumptions, suitably capturing all existing variants for which we are not aware of non-obfuscation-based counterexamples.
(iii) We show that under our assumption formulations, the security proofs of [Asiacrypt'22 VWW] and [Crypto'23 ARYY] can be recovered, and we reason why the security proof of [Crypto'22 Tsabary] is also plausibly repairable using an appropriate evasive LWE assumption.
Cauchyproofs: Batch-Updatable Vector Commitment with Easy Aggregation and Application to Stateless Blockchains
Stateless blockchain designs have emerged to address the challenge of growing blockchain size using succinct global states. Previous works have developed vector commitments that support proof updates and aggregation to be used as such states. However, maintaining proofs for multiple users still demands significant computational resources, particularly to update proofs with every transaction. This paper introduces Cauchyproofs, a batch-updatable vector commitment that enables proof-serving nodes to efficiently update proofs in quasi-linear time relative to the number of users and transactions, utilizing an optimized KZG scheme to achieve complexity $O\left(\left(\left|\vec{\alpha}\right| + \left|\vec{\beta}\right|\right) \log^2 \left(\left|\vec{\alpha}\right| + \left|\vec{\beta}\right|\right)\right)$ for $\left|\vec{\alpha}\right|$ users and $\left|\vec{\beta}\right|$ transactions, compared to the previous $O\left(\left|\vec{\alpha}\right|\cdot\left|\vec{\beta}\right|\right)$ approaches. This advancement reduces the computational burden on proof-serving nodes, allowing efficient proof maintenance across large user groups. We demonstrate that our approach is approximately eight times faster than the naive approach at Ethereum-level transaction throughput if we perform batch updates every hour. Additionally, we present a novel matrix representation for KZG proofs utilizing Cauchy matrices, enabling faster all-proof computations with reduced elliptic curve operations. Finally, we propose an algorithm for history proof query, supporting retrospective proof generation with high efficiency. Our contributions substantially enhance the scalability and practicality of proof-serving nodes in stateless blockchain frameworks.
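A Cauchy matrix, the object behind the new proof representation, has entries $1/(x_i - y_j)$ for two disjoint sets of nodes and is invertible with a closed-form inverse. A minimal exact-arithmetic construction (over the rationals for illustration, rather than the pairing-friendly field a KZG instantiation would use; the function name is ours):

```python
from fractions import Fraction

def cauchy_matrix(xs, ys):
    """Cauchy matrix C[i][j] = 1 / (x_i - y_j).

    Requires x_i != y_j for every pair, which is guaranteed here by
    demanding that the two node sets be disjoint.
    """
    assert not set(xs) & set(ys), "node sets must be disjoint"
    return [[Fraction(1, x - y) for y in ys] for x in xs]
```

The displacement structure of such matrices is what enables fast multiplication routines, which is the kind of property the all-proof computation above exploits.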
The Sponge is Quantum Indifferentiable
The sponge is a cryptographic construction that turns a public permutation into a hash function. When instantiated with the Keccak permutation, the sponge forms the NIST SHA-3 standard. SHA-3 is a core component of most post-quantum public-key cryptography schemes slated for worldwide adoption.
While one can consider many security properties for the sponge, the ultimate one is \emph{indifferentiability from a random oracle}, or simply \emph{indifferentiability}. The sponge was proved indifferentiable against classical adversaries by Bertoni et al. in 2008. Despite significant efforts in the years since, little is known about sponge security against quantum adversaries, even for simple properties like preimage or collision resistance beyond a single round. This is primarily due to the lack of a satisfactory quantum analog of the lazy sampling technique for permutations.
In this work, we develop a specialized technique that overcomes this barrier in the case of the sponge. We prove that the sponge is in fact indifferentiable from a random oracle against quantum adversaries. Our result establishes that the domain extension technique behind SHA-3 is secure in the post-quantum setting. Our indifferentiability bound for the sponge is a loose $O(\mathsf{poly}(q) 2^{-\min(r, c)/4})$, but we also give bounds on preimage and collision resistance that are tighter.
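The absorb/squeeze structure of the sponge can be sketched as follows (a toy illustration only: the permutation, rate, and capacity here are stand-ins, whereas SHA-3 uses the 1600-bit Keccak-f permutation, and the SHA-256-based "permutation" below is not actually a bijection):

```python
import hashlib

RATE, CAPACITY = 8, 24  # toy parameters in bytes; state size = RATE + CAPACITY

def toy_permutation(state: bytes) -> bytes:
    # Stand-in for a public permutation (NOT Keccak-f, and not a true bijection).
    return hashlib.sha256(state).digest()  # 32 bytes = RATE + CAPACITY

def sponge_hash(msg: bytes, out_len: int) -> bytes:
    # Pad to a multiple of RATE (a 10*1-style padding).
    msg += b"\x01" + b"\x00" * ((-len(msg) - 2) % RATE) + b"\x80"
    state = bytes(RATE + CAPACITY)
    # Absorb: XOR each RATE-sized block into the outer part, then permute.
    for i in range(0, len(msg), RATE):
        outer = bytes(a ^ b for a, b in zip(state[:RATE], msg[i:i + RATE]))
        state = toy_permutation(outer + state[RATE:])
    # Squeeze: output the outer part, permuting between output blocks.
    out = b""
    while len(out) < out_len:
        out += state[:RATE]
        state = toy_permutation(state)
    return out[:out_len]
```

Note that the inner CAPACITY bytes are never directly output or overwritten by the message; the indifferentiability bound above degrades with $2^{-\min(r, c)/4}$, i.e. with the smaller of rate and capacity.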
Secret and Shared Keys Recovery on Hamming Quasi-Cyclic with SASCA
Soft Analytical Side Channel Attacks (SASCA) are a powerful family of Side Channel Attacks (SCA) that allows the recovery of secret values with only a small number of traces. Their effectiveness lies in the Belief Propagation (BP) algorithm, which enables efficient computation of the marginal distributions of intermediate values. Post-quantum schemes such as Kyber, and more recently, Hamming Quasi-Cyclic (HQC), have been targets of SASCA. Previous SASCA on HQC focused on Reed-Solomon (RS) codes and successfully retrieved the shared key with a high success rate for high noise levels using a single trace. In this work, we present new SASCA on HQC, where both the shared key and the secret key are targeted. Our attacks are realized in simulation. Unlike the previous SASCA, we take a closer look at the Reed-Muller (RM) code. The advantage of this choice is that the RM decoder is applied before the RS decoder, enabling attacks targeting both the secret key and shared key. We build a factor graph of the Fast Hadamard Transform (FHT) function from the HQC reference implementation of April 2023. The information recovered from BP allows us to retrieve the shared key with a single trace. In addition to the previous SASCA targeting HQC, we also manage to recover the secret key with two different chosen ciphertext attacks. One of them requires a single trace and is successful until high noise levels.
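The Fast Hadamard Transform whose factor graph is targeted above is a standard in-place butterfly algorithm; a minimal version (written independently of the HQC reference code) is:

```python
def fht(v):
    """Fast Hadamard (Walsh) transform of a length-2^k sequence.

    Each stage combines pairs (x, y) into (x + y, x - y), giving
    O(n log n) additions overall instead of the naive O(n^2).
    """
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), h * 2):
            for j in range(i, i + h):
                x, y = v[j], v[j + h]
                v[j], v[j + h] = x + y, x - y
        h *= 2
    return v
```

In first-order Reed-Muller decoding, this transform is applied to the $\pm 1$ version of the received word and the coefficient of largest magnitude identifies the most likely codeword, which is why the FHT's intermediate values are attractive targets for belief propagation.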
Practical Zero-Trust Threshold Signatures in Large-Scale Dynamic Asynchronous Networks
Threshold signatures have become a critical tool in cryptocurrency systems, offering enhanced security by distributing the signing process among multiple signers. In this work, we distribute this process between a client and a permissionless decentralized blockchain, and present novel protocols for ECDSA and EdDSA/Schnorr signatures in this setting. Typical threshold access architectures used by trusted custodians suffer from the honeypot problem, wherein the more assets the custodian holds, the greater the incentive of compromising it.
Implementing threshold signatures over permissionless blockchains poses a few challenges.
First, existing networks typically work over an asynchronous reliable broadcast communication channel. Accordingly, our protocol is implemented over such a channel. As a result, it also benefits from identifiable abort, public verifiability, and guaranteed output delivery, and the client benefits from censorship resistance of blockchain systems.
Second, upon signing each block, the participating quorum may dynamically change and is post-determined.
Therefore, we design a fluid protocol that supports a post-determined dynamic quorum in each communication round, thereby complying with existing broadcast channel implementations. Third, in permissionless networks, parties may join, leave, and change their stake. Therefore, we offer protocols for network reconfiguration, with complexity independent of the number of clients in the system, and our protocol efficiently supports a weighted threshold access structure for the network. Specifically, the complexity of distributed key generation and presigning depends only on the number of parties and not on the overall weight, and the amortized cost of signing depends only on the individual weight.
Furthermore, our protocol introduces key improvements, including the removal of zero-knowledge proofs towards the client, and presigns with a non-interactive client. For Schnorr, the presigns are client-independent, and can be collected by the blockchain in a common pool, available for all clients in the system. These optimizations reduce communication overhead and improve the system's ability to handle traffic spikes during high-demand periods.
Our protocol is UC-secure, and is therefore natively designed for multiple clients to use the system in parallel. Notably, we propose a novel assumption, Slightly-Enhanced ECDSA Unforgeability, offering concrete security for 256-bit elliptic curves for threshold ECDSA with support for parallel execution of presigns.
In addition to securing cryptocurrency wallets, we demonstrate how our protocol enables various cross-chain applications, such as decentralized bridges, future transactions, and wallet transfer. Our system is designed for interoperability across multiple blockchains, enhancing security, scalability, and flexibility for decentralized finance (DeFi) ecosystems.
Deterministic algorithms for class group actions
We present an algorithm for the CSIDH protocol that is fully deterministic and strictly constant time. It does not require dummy operations and can be implemented without conditional branches. Our proof-of-concept C implementation shows that a key exchange can be performed in a constant (i.e. fixed) number of finite field operations, independent of the secret keys. The algorithm relies on a technique reminiscent of the standard Montgomery ladder, and applies to the computation of isogenies that divide an endomorphism of smooth degree represented by its kernel. We describe our method in the general context of class group actions on oriented elliptic curves, giving rise to a large family of non-interactive key exchanges different from CSIDH.
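The branch-free structure "reminiscent of the standard Montgomery ladder" can be illustrated on plain modular exponentiation (the paper applies the idea to isogeny computation, not exponentiation; real constant-time code would replace the visible `if` with a constant-time conditional swap):

```python
def ladder_pow(base: int, exponent: int, modulus: int, bits: int) -> int:
    """Montgomery-ladder exponentiation with a fixed operation count.

    Every one of the `bits` iterations performs exactly two modular
    multiplications, regardless of the exponent; only which register
    plays which role depends on the current bit.
    """
    r0, r1 = 1, base  # invariant: r1 == r0 * base (mod modulus)
    for i in reversed(range(bits)):
        if (exponent >> i) & 1 == 0:
            r1 = (r0 * r1) % modulus
            r0 = (r0 * r0) % modulus
        else:
            r0 = (r0 * r1) % modulus
            r1 = (r1 * r1) % modulus
    return r0
```

Because the loop runs for a fixed `bits` iterations with an identical operation pattern per iteration, the total work is independent of the secret exponent, which is the property the deterministic CSIDH algorithm above achieves for isogeny chains.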
Faster Hash-based Multi-valued Validated Asynchronous Byzantine Agreement
Multi-valued Validated Byzantine Agreement (MVBA) is vital for asynchronous distributed protocols like asynchronous BFT consensus and distributed key generation, making performance improvements a long-standing goal. Existing communication-optimal MVBA protocols rely on computationally intensive public-key cryptographic tools, such as non-interactive threshold signatures, which are also vulnerable to quantum attacks. While hash-based MVBA protocols have been proposed to address these challenges, their higher communication overhead has raised concerns about practical performance.
We present a novel MVBA protocol with adaptive security, relying exclusively on hash functions to achieve post-quantum security. Our protocol delivers near-optimal communication, constant round complexity, and significantly reduced latency compared to existing schemes, though it has sub-optimal resilience, tolerating up to 20% Byzantine corruptions instead of the typical 33%. For example, with $n=201$ and input size 1.75 MB, it reduces latency by 81% over previous hash-based approaches.
Full-Authority Data Sharing Systems: Ciphertext-Dependent Proxy Re-Encryption with Dynamic Key Generation
Proxy re-encryption (PRE) is a powerful primitive for secure cloud storage sharing. Suppose Alice stores encrypted datasets (ciphertexts) in a cloud server (proxy). If Bob requests data sharing, Alice shares the ciphertexts by computing and sending a re-encryption key to the proxy, which will perform the re-encryption operation that generates the ciphertexts that are decryptable to Bob. Still, the proxy cannot access the plaintexts/datasets. Traditionally, the re-encryption key can convert all of Alice's ciphertexts, and the proxy is expected to perform re-encryption only on the ciphertexts selected by the users (Alice/Bob). There is a trust issue: Alice must grant full decryption rights (losing control) and rely on proxy-enforced access policies (vulnerable to collusion). Existing PRE schemes fail to reconcile fine-grained control with collusion resistance. If Alice uses different keys to encrypt each dataset, the re-encryption complexity is linear in the number of requested datasets. We propose full-authority data sharing, a novel paradigm combining ciphertext-dependent PRE (cdPRE) and dynamic key generation (dKG). Unlike traditional PRE, cdPRE binds re-encryption keys to specific ciphertexts, ensuring collusion resistance (i.e., proxy + Bob cannot access unauthorised data). dKG dynamically connects keys via key derivation functions; for example, the chain system reduces per-dataset delegation cost to $O(1)$ for sequential release in publication/subscription systems (vs. $O(k)$ in trivial solutions, where $k$ is the number of datasets). We instantiate this paradigm with Kyber (NIST-PQC standardised) and AES, prove its security, and experimentally verify the high efficiency of the scheme.
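The hash-chain flavour of the dKG idea can be sketched in a few lines (function names and the SHA-256 step are our illustrative assumptions; the paper's construction builds on Kyber and AES). Each dataset key is derived from the previous one, so delegating from dataset $i$ onward costs a single key, and each later key follows in $O(1)$:

```python
import hashlib

def next_key(key: bytes) -> bytes:
    """One-way step: derive the key for the next dataset in the chain."""
    return hashlib.sha256(b"chain-kdf" + key).digest()

def key_for(root: bytes, index: int) -> bytes:
    """Key for dataset `index`, obtained by walking the chain from the root."""
    k = root
    for _ in range(index):
        k = next_key(k)
    return k
```

Handing Bob `key_for(root, 5)` lets him derive the keys for datasets 5, 6, 7, ... on his own, while the one-wayness of the hash keeps datasets 0 through 4 out of reach, which matches the sequential-release publication/subscription setting described above.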
Walnut: A Generic Framework with Enhanced Scalability for BFT Protocols
The performance of traditional BFT protocols significantly decreases as $n$ grows (where $n$ is the number of replicas), and thus they support at most a few hundred replicas. Such scalability issues severely limit the application scenarios of BFT. Meanwhile, the committee sampling technique has the potential to significantly scale the replica size by selecting a small portion of replicas as the committee and then conveying the consensus results to the rest.
However, this technique is rarely used in BFT, and there is still a lack of methods to scale already-deployed traditional BFT protocols to support more replicas without the costly re-deployment of new protocols.
This paper introduces Walnut, a secure and generic committee-sampling-based modular consensus. Specifically, we use the verifiable random function for committee election and integrate committee rotation with the consensus. This resulting construction ensures that each selected committee is of a fixed size and acknowledged by all replicas, even in a partially synchronous network. For Walnut, we provide a rigorous definition and outline the necessary properties of each module to achieve safety and liveness.
To demonstrate Walnut's effectiveness, we apply this framework to HotStuff to obtain the Walnut-HS protocol, together with a proof of fit-in. We also implement Walnut-HS and compare its performance with HotStuff, using up to 100 Amazon EC2 instances in a WAN. The experiments show that Walnut-HS can easily scale to 1,000 replicas with only a slight performance degradation, while HotStuff performs poorly or even breaks when $n\!>\!200$. Besides, Walnut-HS compares well with HotStuff in small-scale experiments. For example, the peak throughput of Walnut-HS is up to 38.6% higher than that of HotStuff for $n\!=\!100$.
DUPLEX: Scalable Zero-Knowledge Lookup Arguments over RSA Group
Lookup arguments enable a prover to convince a verifier that a committed vector of lookup elements $\vec{f} \in \mathbb{F}^m$ is contained within a predefined table $T \in \mathbb{F}^N$. These arguments are particularly beneficial for enhancing the performance of SNARKs in handling non-arithmetic operations, such as batched range checks or bitwise operations. While existing works have achieved efficient and succinct lookup arguments, challenges remain, particularly when dealing with large vectors of lookup elements in privacy-sensitive applications.
In this paper, we introduce $\duplex$, a scalable zero-knowledge lookup argument scheme that offers significant improvements over previous approaches. Notably, we present the first lookup argument designed to operate over the RSA group. Our core technique allows for the transformation of elements into prime numbers to ensure compatibility with the RSA group, all without imposing substantial computational costs on the prover. Given $m$ lookup elements, $\duplex$ achieves an asymptotic proving time of $O(m \log m)$, with constant-sized proofs, constant-time verification, and a public parameter size independent of the table size $N$. Additionally, $\duplex$ ensures the privacy of lookup elements and is robust against dynamic table updates, making it highly suitable for scalable verifiable computation in real-world applications.
We implemented and empirically evaluated $\duplex$, comparing it with the state-of-the-art zero-knowledge lookup argument Caulk [CCS'22]. Our experimental results demonstrate that $\duplex$ significantly outperforms Caulk in proving time for both single and batched lookup arguments, while maintaining practical proof size and verification time.
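The transformation of lookup elements into primes mentioned above is commonly realized via a hash-to-prime mapping; a sketch under that assumption (names and parameters are ours, and the paper's exact mapping may differ) hashes the element with a counter until a probable prime of the desired size appears:

```python
import hashlib

SMALL_PRIMES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin with fixed bases -- fine for a demo, not adversarial input."""
    if n < 2:
        return False
    for p in SMALL_PRIMES:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in range(2, 2 + rounds):
        a %= n
        if a < 2:
            continue
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def hash_to_prime(element: bytes, bits: int = 128) -> int:
    """Deterministically map an element to a `bits`-bit probable prime."""
    ctr = 0
    while True:
        digest = hashlib.sha256(element + ctr.to_bytes(8, "big")).digest()
        cand = int.from_bytes(digest, "big") >> (256 - bits)
        cand |= (1 << (bits - 1)) | 1  # force the top bit and oddness
        if is_probable_prime(cand):
            return cand
        ctr += 1
```

Distinct elements map to distinct primes with overwhelming probability, which is what makes the resulting primes usable as exponents in an RSA-group accumulator-style argument.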
BPDTE: Batch Private Decision Tree Evaluation via Amortized Efficient Private Comparison
Machine learning as a service requires clients to entrust their information to the server, raising privacy concerns. Private Decision Tree Evaluation (PDTE) has been proposed to address these concerns for decision trees, which are fundamental models in machine learning. However, existing solutions perform poorly on massive datasets in real-world applications. We therefore focus on the batching variant, called BPDTE, which supports a single evaluation over multiple inputs and significantly improves performance.
Firstly, we propose three private comparison (PrivCMP) algorithms that privately determine which of two numbers is larger, utilizing thermometer encoding and a novel folklore-inspired dichotomy method. These algorithms are non-interactive, batchable, and high-precision, achieving an amortized cost of less than 1 ms at 32-bit precision.
Secondly, we propose six BPDTE schemes based on our PrivCMP and the Clear Rows Relation (CRR) algorithm, introduced to ensure batching security. Experimental results show that our schemes improve amortized efficiency, offer more flexible batch sizes, and achieve higher precision.
Finally, we provide a formal security analysis of these schemes.
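In plaintext form, the thermometer-encoding idea behind private comparison reduces `a > b` to a single position lookup (the schemes above perform the analogous step homomorphically on encrypted data; the function names here are ours):

```python
def thermometer(x: int, domain: int) -> list:
    """Unary 'thermometer' encoding over [0, domain): a 1 in slot i iff x > i."""
    return [1 if x > i else 0 for i in range(domain)]

def greater_than(a: int, b: int, domain: int) -> int:
    """Returns 1 iff a > b: the encoding of a is still 'hot' at position b."""
    return thermometer(a, domain)[b]
```

Because the comparison becomes a data-independent lookup into a fixed-length vector, many comparisons can be packed into the slots of a single homomorphic ciphertext, which is the source of the amortized batching gains reported above.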
Post-Quantum PKE from Unstructured Noisy Linear Algebraic Assumptions: Beyond LWE and Alekhnovich's LPN
Noisy linear algebraic assumptions with respect to random matrices, in particular Learning with Errors ($\mathsf{LWE}$) and Alekhnovich Learning Parity with Noise (Alekhnovich $\mathsf{LPN}$), are among the most investigated assumptions that imply post-quantum public-key encryption (PKE). They enjoy elegant mathematical structure. Indeed, efforts to build post-quantum PKE and advanced primitives such as homomorphic encryption and indistinguishability obfuscation have increasingly focused their attention on these two assumptions and their variants.
Unfortunately, this increasing reliance on these two assumptions for building post-quantum cryptography leaves us vulnerable to potential quantum (and classical) attacks on Alekhnovich $\mathsf{LPN}$ and $\mathsf{LWE}$. Quantum algorithmics is a rapidly advancing area, and we must stay prepared for unexpected cryptanalytic breakthroughs. Just three decades ago, a short time frame in the development of our field, Shor's algorithm rendered most then-popular number-theoretic and algebraic assumptions quantumly broken. Furthermore, within the last several years, we have witnessed major classical and quantum breaks of several assumptions previously introduced for post-quantum cryptography. Therefore, we ask the following question:
In a world where both $\mathsf{LWE}$ and Alekhnovich $\mathsf{LPN}$ are broken, can there still exist noisy linear assumptions that remain plausibly quantum hard and imply PKE?
To answer this question positively, we introduce two natural noisy linear algebraic assumptions that are both with respect to random matrices, exactly like $\mathsf{LWE}$ and Alekhnovich $\mathsf{LPN}$, but with different error distributions. Our error distributions combine aspects of both small-norm and sparse error distributions. We design a PKE from these assumptions and give evidence that they are likely to remain secure even in a world where both the $\mathsf{LWE}$ and Alekhnovich $\mathsf{LPN}$ assumptions are simultaneously broken. We also study basic properties of these assumptions, and show that in the parameter settings we employ to build PKE, neither of them is a ``lattice'' assumption, in the sense that we do not see a way to attack them using a lattice closest-vector-problem solver, except via $\mathsf{NP}$-completeness reductions.
Rerandomizable Garbling, Revisited
In key-and-message homomorphic encryption (KMHE), the key space is a subset of the message space, allowing encryption of secret keys such that the same homomorphism can be applied to both the key and the message of a given ciphertext.
KMHE with suitable security properties is the main building block for constructing rerandomizable garbling schemes (RGS, Gentry et al., CRYPTO 2010), which enable advanced cryptographic applications like multi-hop homomorphic encryption, the YOSO-like MPC protocol SCALES (Acharya et al., TCC 2022 and CRYPTO 2024), and more.
The BHHO scheme (Boneh et al., CRYPTO 2008) is currently the only known KMHE scheme suitable for constructing RGS. An encryption of a secret key consists of $\mathcal{O}(\lambda^{2})$ group elements, where $\lambda$ is the security parameter, which incurs a significant bandwidth and computational overhead that makes the scheme itself, and the RGS protocols building upon it, impractical.
We present a new, more efficient KMHE scheme with linear-size ciphertexts.
Despite using heavier cryptographic tools (pairings instead of plain DDH-hard groups), the concrete ciphertext size and computational costs are significantly reduced. We are able to shrink the ciphertext by 97.83 % (from 16.68 MB to 360 kB) and reduce the estimated computation for encryption by 99.996 % (from 4 minutes to 0.01 seconds) in comparison to BHHO.
Additionally, we introduce gate KMHE as a new tool to build more efficient RGS. Our RGS construction shrinks the size of a garbled gate by 98.99 % (from 133.43 MB to 1.35 MB) and decreases the estimated cost of garbling by 99.998 % (from 17 minutes to 0.02 seconds per gate) in comparison to Acharya et al.
In summary, our work shows for the first time that RGS and the SCALES protocol (and hence YOSO-like MPC) are practically feasible for simple circuits.
Fully Asymmetric Anamorphic Homomorphic Encryption from LWE
As introduced by Persiano {\it et al.} (Eurocrypt'22), anamorphic encryption (AE) is a primitive enabling private communications against a dictator that forces users to surrender their decryption keys. In its fully asymmetric flavor (defined by Catalano {\it et al.}, Eurocrypt'24), anamorphic channels can work as hidden public-key mechanisms in the sense that anamorphic encryptors are not necessarily able to decrypt anamorphic ciphertexts. Unfortunately, fully asymmetric AE is hard to come by and even impossible to obtain from ordinary public-key encryption via black-box constructions. So far, only three schemes are known to rely on well-established assumptions. In this paper, we exhibit constructions from the standard LWE assumption based on Regev's cryptosystem and its dual version. In both cases, we retain the additive homomorphism of the schemes. We additionally show that dual Regev is public-key anamorphic in the sense of Persiano {\it et al.} (Crypto'24). In the FHE setting, we show that the dual GSW system provides fully asymmetric AE (while preserving its leveled homomorphism) when instantiated with binary/ternary secret keys. Along the way, we discuss the extent to which our schemes satisfy a generalization of Banfi {\it et al.}'s notion of robustness (Eurocrypt'24) to the case of homomorphically evaluated ciphertexts.
Improvements on the schemes VOX and QR UOV: When minus is a plus
In this article, we present an improvement of QR UOV and a repair of VOX.
Thanks to the minus perturbation, we are able to fully exploit the parameters in order to reduce the public key. It also helps us repair the VOX scheme. With QR UOV- we are able to significantly reduce the size of the public key at the cost of a small increase in signature size, while with VOX- we can obtain a public key as short as 6 KB. VOX- also maintains a very short signature size.
We also show that the minus perturbation is very useful for schemes that use the QR (quotient ring) perturbation.
MPC with Publicly Identifiable Abort from Pseudorandomness and Homomorphic Encryption
Publicly identifiable abort is a critical feature for ensuring accountability in outsourced computations using secure multiparty computation (MPC). Despite its importance, no prior work has specifically addressed identifiable abort in the context of outsourced computations. In this paper, we present the first MPC protocol that supports publicly identifiable abort with minimal overhead for external clients. Our approach minimizes client-side computation by requiring only a few pseudorandom function evaluations per input. On the server side, the verification process involves lightweight linear function evaluations using homomorphic encryption. This results in verification times of a few nanoseconds per operation for servers, with client overhead being approximately two orders of magnitude lower. Additionally, the public verifiability of our protocol reduces client input/output costs compared to SPDZ-based protocols, on which we base our protocol. For example, in secure aggregation use cases, our protocol achieves over twice the efficiency during the offline phase and up to an 18 % speedup in the online phase, significantly outperforming SPDZ.
Verifiable E-Voting with a Trustless Bulletin Board
Voter privacy and end-to-end (E2E) verifiability are critical features of electronic voting (e-voting) systems to safeguard elections. To achieve these properties commonly a perfect bulletin board (BB) is assumed that provides consistent, reliable, and tamper-proof storage and transmission of voting data. However, in practice, BBs operate in asynchronous and unreliable networks, and hence, are susceptible to vulnerabilities such as equivocation attacks and dropped votes, which can compromise both verifiability and privacy. Although prior research has weakened the perfect BB assumption, it still depends on trusting certain BB components.
In this work, we present and initiate a formal exploration of designing e-voting systems based on fully untrusted BBs. For this purpose, we leverage the notion of accountability and in particular use accountable BBs. Accountability ensures that if a security breach occurs, then cryptographic evidence can identify malicious parties. Fully untrusted BBs running in asynchronous networks bring new challenges. Among others, we identify several types of attacks that a malicious but accountable BB might be able to perform and propose a new E2E verifiability notion for this setting. Based on this notion and as a proof of concept, we construct the first e-voting system that is provably E2E verifiable and provides vote privacy even when the underlying BB is fully malicious. This establishes an alternative to traditional e-voting architectures that rely on (threshold) trusted BB servers.
Convolution-Friendly Image Compression with FHE
During the past few decades, the field of image processing has grown to cradle hundreds of applications, many of which are outsourced to be computed on trusted remote servers. More recently, Fully Homomorphic Encryption (FHE) has grown in parallel as a powerful tool enabling computation on encrypted data, and transitively on untrusted servers. As a result, new FHE-supported applications have emerged, but not all have reached practicality due to hardware, bandwidth or mathematical constraints inherent to FHE. One example is processing encrypted images, where practicality is closely related to bandwidth availability.
In this paper, we propose and implement a novel technique for FHE-based image compression and decompression. Our technique is a stepping stone towards practicality of encrypted image processing and applications such as private inference, object recognition, satellite-image searching or video editing. Inspired by the JPEG standard, and with new FHE-friendly compression/decompression algorithms, our technique allows a client to compress and encrypt images before sending them to a server, greatly reducing the required bandwidth. The server homomorphically decompresses a ciphertext to obtain an encrypted image to which generic pixel-wise processing or convolutional filters can be applied. To reduce the round-trip bandwidth requirement, we also propose a method for server-side post-processing compression. Using our pipeline, we demonstrate that a high-definition grayscale image ($1024\times1024$) can be homomorphically decompressed, processed and re-compressed in $\sim 8.1$ s with a compression ratio of 100/34.4 on a standard personal computer without compromising on fidelity.
T-Spoon: Tightly Secure Two-Round Multi-Signatures with Key Aggregation
Multi-signatures over pairing-free cyclic groups have seen significant advancements in recent years, including two-round protocols and support for key aggregation. Key aggregation enables the combination of multiple public keys into a single succinct aggregate key for verification and has essentially evolved from an optional feature into a requirement. To enhance the concrete security of two-round schemes, Pan and Wagner (Eurocrypt 2023, 2024) introduced the first tightly secure constructions in this setting. However, their schemes do not support key aggregation, and their approach inherently precludes a single short aggregate public key. This leaves the open problem of achieving tight security and key aggregation simultaneously.
In this work, we solve this open problem by presenting the first tightly secure two-round multi-signature scheme in pairing-free groups supporting key aggregation. Like Pan and Wagner's schemes, our construction is based on the DDH assumption. In contrast to theirs, it also has truly compact signatures, with signature size asymptotically independent of the number of signers.
Correlation power analysis of LESS and CROSS
This paper presents a side-channel attack targeting the LESS and CROSS post-quantum digital signature schemes, resulting in full key recovery for both. These schemes have advanced to the second round of NIST’s call for additional signatures. By leveraging correlation power analysis and horizontal attacks, we are able to recover the secret key by observing the power consumption during the multiplication of an ephemeral secret vector with a public matrix. The attack path is enabled by the presence of a direct link between the secret key elements and the ephemeral secret, given correct responses. This attack targets version 1.2 of both schemes. In both settings we can recover the secret key in a single trace for NIST’s security level I parameter set. Additionally, we propose improvements to the existing horizontal attack on CROSS, reducing the number of rounds that need to be observed by an order of magnitude for the same parameter sets.
Post-Quantum Cryptography in eMRTDs: Evaluating PAKE and PKI for Travel Documents
Passports, identity cards and travel visas are examples of machine readable travel documents (MRTDs), or eMRTDs for their electronic variants. The data exchanged between these documents and a reader is secured with a standardized password authenticated key exchange (PAKE) protocol known as PACE.
A new world-wide protocol migration is expected with the arrival of post-quantum cryptography (PQC) standards. In this paper, we focus on the impact of this migration on constrained embedded devices as used in eMRTDs. We present a feasibility study of a candidate post-quantum secure PAKE scheme as the replacement for PACE on existing widely deployed resource-constrained chips. In a wider context, we study the size, performance and security impact of adding post-quantum cryptography with a focus on chip storage and certificate chains for existing eMRTDs.
We show that if the required post-quantum certificates for the eMRTD fit in memory, the migration of existing eMRTD protocols to their post-quantum secure equivalents is already feasible, albeit at a performance penalty. On a resource-constrained SmartMX3 P71D600 smart card, designed with classical cryptography in mind, a post-quantum secure PAKE algorithm using the recommended post-quantum parameters of the new PQC standard ML-KEM executes in under a second. This migration will be aided by the future inclusion of dedicated hardware accelerators and increased memory, allowing storage of larger keys and improving performance.
Chosen Ciphertext Security via BARGs
In this paper, we show a new set of cryptographic primitives that generically leads to chosen ciphertext secure (CCA secure) public-key encryption (PKE). Specifically, we show how a (non-interactive, publicly verifiable) batch argument (BARG) for NP can be combined with a chosen plaintext secure (CPA secure) PKE scheme to achieve a CCA secure one. The requirement of the succinctness of the proof size of a BARG used as a building block is arguably very mild: We require it to be only at most $(1 - \frac{1}{p(\lambda, n)}) \cdot k + q(\lambda, n) \cdot k^{\epsilon}$ for some non-negative constant $\epsilon < 1$ and polynomials $p, q$, where $\lambda$ denotes the security parameter, $n$ denotes the statement size, and $k$ denotes the batch size (i.e. the number of statements whose correctness is simultaneously proved), and thus it can even be (slightly) linear in $k$. A BARG with such succinctness is so weak that it cannot be used in the recent constructions of a non-interactive zero-knowledge proof system for NP based on a BARG (and a one-way function) by Bitansky et al. (STOC 2024) and Bradley, Waters, and Wu (TCC 2024). Therefore, our result gives a new building block that can upgrade CPA security into CCA security.
KeyJoin: Privacy-Focused CoinJoin Protocol for Bitcoin
Bitcoin is based on the blockchain, an open ledger containing information about each transaction in the Bitcoin network. The blockchain serves many purposes, but it also allows anyone to track all transactions and activities of each Bitcoin address. The privacy of the network is threatened by organizations that track transactions; tracking and subsequent filtering of coins lead to the loss of exchangeability (fungibility) of Bitcoin.
Despite Bitcoin’s transparency, it is possible to increase user privacy using a variety of existing methods. One of these methods, called CoinJoin, was proposed by Bitcoin developer Greg Maxwell in 2013. This technique combines several users’ transactions into a single transaction with multiple inputs and outputs, which complicates transaction analysis.
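As a toy illustration of the CoinJoin idea described above (merging transactions so that inputs and outputs can no longer be linked positionally), assuming a deliberately simplified transaction model with no cryptography:

```python
import random

def coinjoin(transactions):
    """Toy CoinJoin: merge several (inputs, outputs) pairs into one transaction.

    Shuffling breaks the positional link between each user's inputs and
    outputs; real protocols such as KeyJoin add cryptographic protections
    (here, anonymous credentials) on top of this basic combining step.
    """
    inputs, outputs = [], []
    for tx_in, tx_out in transactions:
        inputs.extend(tx_in)
        outputs.extend(tx_out)
    random.shuffle(inputs)
    random.shuffle(outputs)
    return {"inputs": inputs, "outputs": outputs}
```

An observer of the joined transaction sees all inputs and all outputs but not which input funded which output.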
This work describes KeyJoin, a privacy-focused CoinJoin protocol based on keyed-verification anonymous credentials (KVAC).
Towards Optimal Differential Attacks on FLY and PIPO
Lightweight block ciphers such as PIPO and FLY are designed to operate efficiently and securely in constrained environments. While the differential attack on PIPO-64-128 has already been studied by the designers, no concrete differential attack had been conducted for PIPO-64-256 and FLY. Motivated by this gap, we revisit the security of PIPO against differential attacks and generalize the analysis framework to make it applicable to structurally related ciphers. Based on this generalized framework, we search for key-recovery-attack-friendly distinguishers and apply clustering techniques to enhance their effectiveness in key-recovery attacks. As a result, we improve the previously proposed differential attack on PIPO-64-128, reducing the time complexity by a factor of $2^{31.7}$. Furthermore, we propose a 13-round differential attack on PIPO-64-256, which covers two more rounds than the previous result. We also apply the same methodology to FLY and present the first differential attack on 12-round FLY, reaching one round beyond the best-known distinguisher. We believe this work improves the understanding of the structures of FLY and PIPO, and provides a basis for future research on advanced key-recovery attacks for related cipher designs.
An Attack on TON’s ADNL Secure Channel Protocol
We present an attack on the Abstract Datagram Network Layer (ADNL) protocol used in The Open Network (TON), currently the tenth largest blockchain by market capitalization. In its TCP variant, ADNL secures communication between clients and specialized nodes called liteservers, which provide access to blockchain data. We identify two cryptographic design flaws in this protocol: a handshake that permits session-key replay and a non-standard integrity mechanism whose security critically depends on message confidentiality. We transform these vulnerabilities into an efficient plaintext-recovery attack by exploiting two ADNL communication patterns, allowing message reordering across replayed sessions. We then develop a plaintext model for this scenario and construct an efficient algorithm that recovers the keystream using a fraction of known plaintexts and a handful of replays. We implement our attack and show that an attacker intercepting the communication between a TON liteserver and a widely deployed ADNL client can recover the keystream used to encrypt server responses by performing eight connection replays to the server. This allows the decryption of sensitive data, such as account balances and user activity patterns. Additionally, the attacker can modify server responses to manipulate blockchain information displayed to the client, including account balances and asset prices.
Efficient CPA Attack on Hardware Implementation of ML-DSA in Post-Quantum Root of Trust
Side-channel attacks (SCA) pose a significant threat to cryptographic implementations, including those designed to withstand the computational power of quantum computers.
This paper introduces the first side-channel attack on an industry-grade post-quantum cryptography implementation.
Specifically, we present a Correlation Power Analysis (CPA) attack targeting the open-source hardware implementation of ML-DSA within a Silicon Root of Trust framework developed through a multi-party collaboration involving leading technology companies.
Our attack focuses on the modular reduction process that follows the Number Theoretic Transform-based polynomial pointwise multiplication.
By exploiting side-channel leakage from a distinctive reduction algorithm and leveraging the zeroization mechanism used to securely erase sensitive information by clearing internal registers, we significantly enhance the attack's efficacy.
Our findings reveal that an adversary can extract the secret keys using only 10,000 power traces.
With access to these keys, an attacker could forge signatures for certificate generation, thereby compromising the integrity of the root of trust.
This work highlights the vulnerabilities of industry-standard root-of-trust systems to side-channel attacks. It underscores the urgent need for robust countermeasures to secure commercially deployed systems against such threats.
Registered Functional Encryption for Attribute-Weighted Sums with Access Control
In this work, we present Functional Encryption (FE) schemes for Attribute-Weighted Sums (AWS), introduced by Abdalla, Gong and Wee (Crypto 2020) in the registration-based setting (RFE). In such a setting, users sample their own public/private key pairs $(\mathsf{pk}_i, \mathsf{sk}_i)$; a key curator registers user public keys along with their functions $h_i$; encryption takes as input $N$ attribute-value pairs $\{\vec x_\ell, \vec z_\ell\}_{\ell\in[N]}$ where $\vec x_\ell$ is public and $\vec z_\ell$ is private; and decryption recovers the weighted sum $\sum_{\ell\in[N]}h_i(\vec x_\ell)^\mathsf{T}\vec z_\ell$ while leaking no additional information about $\vec z_\ell$. Recently, Agrawal, Tomida and Yadav (Crypto 2023) studied the attribute-based case of AWS (AB-AWS) providing fine-grained access control, where the function is described by a tuple $(g_i, h_i)$, the input is extended to $(\vec y, \{\vec x_\ell, \vec z_\ell\}_{\ell \in [N]})$ and decryption recovers the weighted sum only if $g_i(\vec y) = 0$. Our main results are the following:
- We build the first RFE for (AB-)1AWS functionality, where $N=1$, that achieves adaptive indistinguishability-based security under the (bilateral) $k$-Lin assumption in prime-order pairing groups. Prior works achieve RFE for linear and quadratic functions without access control in the standard model, or for attribute-based linear functions in the generic group model.
- We develop the first RFE for AB-AWS functionality, where $N$ is unbounded, that achieves very selective simulation-based security under the bilateral $k$-Lin assumption. Here, “very selective” means that the adversary declares challenge attribute values, all registered functions and corrupted users upfront. Previously, SIM-secure RFEs were only constructed for linear and quadratic functions without access control in the same security model.
We devise a novel nested encoding mechanism that facilitates achieving attribute-based access control and unbounded inputs in the registration-based setting for AWS functionalities, proven secure in the standard model. In terms of efficiency, our constructions feature short public parameters, secret keys independent of $N$, and compact ciphertexts unaffected by the length of public inputs. Moreover, as required by RFE properties, all objective sizes and algorithm costs scale poly-logarithmically with the total number of registered users in the system.
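In the clear, the AB-AWS functionality realized by the constructions above can be sketched as follows; this is a plain-Python model of what decryption computes, not the encrypted scheme, and the policy `g`, weight function `h`, and data are illustrative:

```python
def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def ab_aws(g, h, y, pairs):
    """Cleartext model of AB-AWS decryption: release the weighted sum
    sum_l h(x_l)^T z_l only if the access policy g(y) == 0 holds.
    Here x_l is public, z_l is private, and y is the policy attribute."""
    if g(y) != 0:
        return None  # access denied: nothing about the private z_l is revealed
    return sum(dot(h(x), z) for x, z in pairs)
```

Note that when the policy check fails, the functionality outputs nothing at all, matching the requirement that decryption leaks no additional information about the $\vec z_\ell$.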
The Meta-Complexity of Secret Sharing
A secret-sharing scheme allows the distribution of a secret $s$ among $n$ parties, such that only certain predefined “authorized” sets of parties can reconstruct the secret, while all other “unauthorized” sets learn nothing about $s$. The collection of authorized/unauthorized sets is defined by a monotone function $f: \{0,1\}^n \rightarrow \{0,1\}$. It is known that any monotone function can be realized by a secret-sharing scheme; thus, the smallest achievable \emph{total share size}, $S(f)$, serves as a natural complexity measure.
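For intuition, the simplest access structure (only the full set of $n$ parties is authorized) is realized by additive sharing, where the total share size is exactly $n$ times the secret length. A minimal sketch of this baseline, not one of the paper's constructions:

```python
import secrets

MOD = 2**32  # share values live in Z_MOD

def share(secret, n):
    """n-out-of-n additive sharing: any n-1 shares are jointly uniform
    (revealing nothing), while all n shares sum to the secret mod MOD."""
    shares = [secrets.randbelow(MOD) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD
```

The meta-complexity question asks how hard it is, given a general monotone $f$, to decide whether such a cheap scheme (or anything close to it) exists for $f$.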
In this paper, we initiate the study of the following meta-complexity question: Given a monotone function $f$, is it possible to efficiently distinguish between cases where the secret-sharing complexity of $f$ is small versus large? We examine this question across several computational models, yielding the following main results.
(Hardness for formulas and circuits): Given a monotone formula $f$ of size $L$, it is coNP-hard to distinguish between ``cheap'' functions, where the maximum share size is 1 bit and the total share size is $O(L^{0.01})$, and ``expensive'' functions, where the maximum share size is $\Omega(\sqrt{L})$ and the total share size is $\Omega(L/\log L)$.
This latter bound nearly matches known secret-sharing constructions yielding a total share size of $L$ bits. For monotone circuits, we strengthen the bound on the expensive case to a maximum share size of $\Omega(L/\log L)$ and a total share size of $\Omega(L^2/\log L)$. These results rule out the existence of instance-optimal compilers that map a formula $f$ to a secret-sharing scheme with complexity polynomially related to $S(f)$.
(Hardness for truth tables): Under cryptographic assumptions, either (1) every $n$-bit slice function can be realized by a $\text{poly}(n)$-size secret-sharing scheme, or (2) given a truth-table representation of $f$ of size $N = 2^n$, it is computationally infeasible to distinguish in time $\text{poly}(N)$ between cases where $S(f) = \text{poly}(n)$ and $S(f) = n^{\omega(1)}$. Option (1) would be considered a breakthrough result, as the best-known construction for slices has a sub-exponential complexity of $2^{\tilde{O}(\sqrt{n})}$ (Liu, Vaikuntanathan, and Wee; Eurocrypt 2018). Our proof introduces a new worst-case-to-average-case reduction for slices, which may be of independent interest.
(Hardness for graphs): We examine the simple case where $f$ is given as a 2-DNF, represented by a graph $G$ whose edges correspond to 2-terms, and ask whether it is possible to distinguish between cases where the share size is constant and those where the share size is large, say $\Omega(\log n)$. We establish several connections between this question and questions in communication complexity. For instance, we show that graphs admitting constant-cost secret sharing form a subclass of graphs with constant randomized communication complexity and constant-size adjacency sketches (Harms, Wild, and Zamaraev; STOC 2022). We leverage these connections to establish new lower bounds for specific graph families, derive a combinatorial characterization of graphs with constant-size linear secret-sharing schemes, and show that a natural class of myopic algorithms fails to distinguish cheap graphs from expensive ones.
Engorgio: An Arbitrary-Precision Unbounded-Size Hybrid Encrypted Database via Quantized Fully Homomorphic Encryption
This work proposes an encrypted hybrid database framework that combines vectorized data search and relational data query over quantized fully homomorphic encryption (FHE). We observe that, due to the lack of efficient encrypted data ordering capabilities, most existing encrypted database (EDB) frameworks do not support hybrid queries involving both vectorized and relational data. To further enrich query expressiveness while retaining evaluation efficiency, we propose Engorgio, a hybrid EDB framework based on quantized data ordering techniques over FHE. Specifically, we design a new quantized data encoding scheme along with a set of novel comparison and permutation algorithms to accurately generate and apply orders between large-precision data items. Furthermore, we optimize specific query types, including full table scan, batched query, and Top-k query to enhance the practical performance of the proposed framework. In the experiment, we show that, compared to the state-of-the-art EDB frameworks, Engorgio is up to 28x--854x faster in homomorphic comparison, 65x--687x faster in homomorphic sorting and 15x--1,640x faster over a variety of end-to-end relational, vectorized, and hybrid SQL benchmarks. Using Engorgio, the amortized runtime for executing a relational and hybrid query on a 48-core processor is under 3 and 75 seconds, respectively, over a 10K-row hybrid database.
Generic Construction of Secure Sketches from Groups
Secure sketches are designed to facilitate the recovery of originally enrolled data from inputs that may vary slightly over time. This capability is important in applications where data consistency cannot be guaranteed due to natural variations, such as in biometric systems and hardware security. Traditionally, secure sketches are constructed using error-correcting codes to handle these variations effectively. Additionally, principles of information theory ensure the security of these sketches by managing the trade-off between data recoverability and confidentiality. In this paper, we show how to construct a new family of secure sketches generically from groups. The notion of groups with unique factorization property is first introduced, which is of independent interest and serves as a building block for our secure sketch construction. Next, an in-depth study of the underlying mathematical structures is provided, and some computational and decisional hardness assumptions are defined. As a result, it is argued that our secure sketches are efficient; can handle a linear fraction of errors with respect to the norm 1 distance; and that they are reusable and irreversible. To our knowledge, such generic group-based secure sketch construction is the first of its kind, and it offers a viable alternative to the currently known secure sketches.
A proof of P≠NP (New symmetric encryption algorithm against any linear attacks and differential attacks)
The P vs. NP problem is the most important unresolved problem in the field of computational complexity. Its impact has penetrated all aspects of algorithm design, especially in the field of cryptography. The security of cryptographic algorithms based on short keys depends on whether P equals NP. In fact, Shannon rigorously proved that the one-time-pad system achieves unconditional security, but because the one-time pad requires a key at least as long as the plaintext, transferring the key is a troublesome problem that restricts the use of the one-time pad in practice. Cryptographic algorithms used in practice are all based on short keys, and the security of the short-key mechanism is ultimately based on a one-wayness assumption. In fact, the existence of one-way functions directly leads to the important conclusion P≠NP.
In this paper, we construct a novel short-key block cipher algorithm. The core feature of this algorithm is that, when any plaintext-ciphertext pairs are known, the problem of cracking its key is equivalent to the problem of two independent encryption systems cracking their keys separately when only their respective ciphertexts are known. In addition, for these two independent encryption systems, verifying a candidate plaintext is also exponentially computationally complex when only the ciphertext is known.
Based on the above feature, we construct a problem and theoretically prove that the problem satisfies the properties of one-way functions, thereby solving the problem of the existence of one-way functions, that is, directly proving that P≠NP.
Universally Composable Interactive and Ordered Multi-Signatures
Multi-signatures allow a given set of parties to cooperate in order to create a digital signature whose size is independent of the number of signers. At the same time, no other set of parties can create such a signature. While non-interactive multi-signatures are known (e.g. BLS from pairings), many popular multi-signature schemes such as MuSig2 (which are constructed from pairing-free discrete logarithm-style assumptions) require interaction. Such interactive multi-signatures have recently found practical applications e.g. in the cryptocurrency space.
Motivated by classical and emerging use cases of such interactive multi-signatures, we introduce the first systematic treatment of interactive multi-signatures in the universal composability (UC) framework. Along the way, we revisit existing game-based security notions and prove that constructions secure in the game-based setting can easily be made UC secure and vice versa.
In addition, we consider interactive multi-signatures where the signers must interact in a fixed pattern (so-called ordered multi-signatures). Here, we provide the first construction of ordered multi-signatures based on the one-more discrete logarithm assumption, whereas the only other previously known construction required pairings. Our scheme achieves a stronger notion of unforgeability, guaranteeing that the adversary cannot obtain a signature altering the relative order of honest signers. We also present the first formalization of ordered multi-signatures in the UC framework and again show that our stronger game-based definitions are equivalent to UC security.
A Note on ``CABC: A Cross-Domain Authentication Method Combining Blockchain with Certificateless Signature for IIoT''
We show that the authentication method [Future Gener. Comput. Syst. 158: 516-529 (2024)] cannot be practically implemented, because the signature scheme is insecure against certificateless public key replacement forgery attack. The explicit dependency between the certificateless public key and secret key is not properly used to construct some intractable problems, such as Elliptic Curve Discrete Logarithm (ECDL). An adversary can find an efficient signing algorithm functionally equivalent to the valid signing algorithm. We also correct some typos in the original presentation.
Conditional disclosure of secrets with quantum resources
The conditional disclosure of secrets (CDS) primitive is among the simplest cryptographic settings in which to study the relationship between communication, randomness, and security. CDS involves two parties, Alice and Bob, who do not communicate but who wish to reveal a secret $z$ to a referee if and only if a Boolean function $f$ has $f(x,y)=1$. Alice knows $x,z$, Bob knows $y$, and the referee knows $x,y$. Recently, a quantum analogue of this primitive called CDQS was defined and related to f-routing, a task studied in the context of quantum position-verification. CDQS has the same inputs, outputs, and communication pattern as CDS but allows the use of shared entanglement and quantum messages. We initiate the systematic study of CDQS, with the aim of better understanding the relationship between privacy and quantum resources in the information theoretic setting. We begin by looking for quantum analogues of results already established in the classical CDS literature. Doing so we establish a number of basic properties of CDQS, including lower bounds on entanglement and communication stated in terms of measures of communication complexity. Because of the close relationship to the $f$-routing position-verification scheme, our results have relevance to the security of these schemes.
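As a concrete toy instance of the classical CDS setting described above, the equality predicate $f(x,y)=[x=y]$ admits a simple perfectly secure scheme over a prime field: the shared randomness acts as a one-time pad that cancels exactly when $x=y$. The following sketch (the parameter choices and names are mine, for illustration only) shows the scheme:

```python
import secrets

P = 2**61 - 1   # a Mersenne prime; all values live in the field Z_P

def cds_equality(x, y, z):
    """Classical CDS for the equality predicate f(x, y) = [x == y]:
    the referee recovers the secret z iff x == y."""
    r = secrets.randbelow(P)        # randomness shared by Alice and Bob only
    msg_alice = (r * x + z) % P     # Alice holds x and the secret z
    msg_bob = (r * y) % P           # Bob holds y; no communication with Alice
    # The referee, who knows x and y, combines the two messages:
    return (msg_alice - msg_bob) % P
```

When $x = y$ the masks cancel and the referee obtains $z$; when $x \neq y$ the output is $z + r(x-y) \bmod P$, uniform over the field, so $z$ stays perfectly hidden.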
A note on closed addition chains and complete numbers
We introduce a new class of addition chains and show that the numbers for which these chains are optimal satisfy the Scholz conjecture, namely the inequality $$\iota(2^n-1)\leq n-1+\iota(n).$$
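For small parameters, the conjectured inequality can be checked directly by brute-force search for shortest addition chains. The sketch below (function names and the depth-limited search strategy are mine, not the paper's) computes the shortest-chain length $\iota(n)$ by iterative-deepening DFS over strictly increasing chains:

```python
def iota(n, limit=12):
    """Length of a shortest addition chain for n (iterative deepening DFS).
    Feasible only for small n; iota(1) == 0."""
    if n == 1:
        return 0
    for depth in range(1, limit + 1):
        if _extend([1], n, depth):
            return depth
    raise ValueError("limit too small")

def _extend(chain, n, steps_left):
    last = chain[-1]
    if last == n:
        return True
    if steps_left == 0 or last << steps_left < n:   # doubling bound prune
        return False
    # Try all sums of two (not necessarily distinct) chain elements > last.
    candidates = sorted({a + b for i, a in enumerate(chain)
                         for b in chain[i:] if last < a + b <= n}, reverse=True)
    return any(_extend(chain + [c], n, steps_left - 1) for c in candidates)
```

For instance, $\iota(2^4-1)=\iota(15)=5$ and $n-1+\iota(n)=3+2=5$, so the inequality is tight there.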
Constant-time Integer Arithmetic for SQIsign
SQIsign, the only isogeny-based signature scheme submitted to NIST’s additional signature standardization call, achieves the smallest public key and signature sizes among all post-quantum signature schemes. However, its existing implementation, particularly in its quaternion arithmetic operations, relies on GMP’s big integer functions, which, while efficient, are often not designed for constant-time execution.
In this work, we take a step toward side-channel-protected SQIsign by implementing constant-time techniques for SQIsign's big integer arithmetic, which forms the computational backbone of its quaternion module. For low-level fundamental functions, including Euclidean division, exponentiation, and integer square root, we either extend or tailor existing solutions to SQIsign's requirements, such as handling signed integers or scaling to integers of up to $\sim$12,000 bits. Further, we propose a novel constant-time modular reduction technique designed to handle dynamically changing moduli. Our implementation is written in C without reliance on high-level libraries such as GMP, and we evaluate its constant-time properties using Timecop with Valgrind, which confirms the absence of timing-dependent execution paths. We provide experimental benchmarks across various SQIsign parameter sizes to demonstrate the performance of our constant-time implementation.
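To illustrate the flavor of such constant-time techniques, here is a toy branchless digit-by-digit integer square root, with a fixed iteration count and mask-based selection instead of secret-dependent branches. This is an illustrative Python sketch of the general pattern (Python integers are not actually constant-time), not the authors' C implementation:

```python
def ct_select(a, b, flag):
    """Return a if flag == 1 else b, without branching on flag."""
    mask = -flag                     # 0 -> ...000, 1 -> ...111
    return b ^ ((a ^ b) & mask)

def ct_isqrt(n, bits):
    """Digit-by-digit integer square root of n < 2**bits, using a fixed
    iteration count and branchless updates (toy constant-time style)."""
    res = 0
    bit = 1 << (bits + (bits & 1))   # a power of four >= n, fixed by bit length
    while bit:
        t = res + bit
        # ge = 1 iff n >= t, extracted from the sign of the difference
        ge = 1 - (((n - t) >> (bits + 2)) & 1)
        n = ct_select(n - t, n, ge)
        res = ct_select((res >> 1) + bit, res >> 1, ge)
        bit >>= 2
    return res
```

The loop always runs the same number of iterations for a given bit length, regardless of the input value, which is the property tools like Timecop check for.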
Worst-Case Time Analysis of Key Agreement Protocols in 10BASE-T1S Automotive Networks
With the rise of in-vehicle and car-to-x communication systems, ensuring robust security in automotive networks is becoming increasingly vital. As the industry shifts toward Ethernet-based architectures, the IEEE 802.1AE MACsec standard is gaining prominence as a critical security solution for future in-vehicle networks (IVNs). MACsec utilizes the MACsec Key Agreement Protocol (MKA), defined in the IEEE 802.1X standard, to establish secure encryption keys for data transmission. However, when applied to 10BASE-T1S Ethernet networks with multidrop topologies, MKA encounters a significant challenge known as the real-time paradox. This paradox arises from the competing demands of prioritizing key agreement messages and real-time control data, which conflict with each other. Infineon addresses this challenge with its innovative In-Line Key Agreement (IKA) protocol. By embedding key agreement information directly within a standard data frame, IKA effectively resolves the real-time paradox and enhances network performance. This paper establishes a theoretical worst-case delay bound for key agreement in multidrop 10BASE-T1S IVNs with more than two nodes, using Network Calculus techniques. The analysis compares the MKA and IKA protocols in terms of performance. For a startup scenario involving a 16-node network with a 50-byte MPDU size, the MKA protocol exhibits a worst-case delay that is 1080% higher than that of IKA. As the MPDU size increases to 1486 bytes, this performance gap narrows significantly, reducing the delay difference to just 6.6%.
Simple Power Analysis Attack on SQIsign
The isogeny-based post-quantum digital signature algorithm SQIsign offers the most compact key and signature sizes among all candidates in the ongoing NIST call for additional post-quantum signature algorithms. To the best of our knowledge, we present the first Simple Power Analysis (SPA) side-channel attack on SQIsign, demonstrating its feasibility for key recovery.
Our attack specifically targets secret-dependent computations within Cornacchia's algorithm, a fundamental component of SQIsign's quaternion module. At the core of this algorithm, a secret-derived yet ephemeral exponent is used in a modular exponentiation subroutine. By performing SPA on the modular exponentiation, we successfully recover this ephemeral exponent. We then develop a method to show how this leaked exponent can be exploited to ultimately reconstruct the secret signing key of SQIsign.
Our findings emphasize the critical need for side-channel-resistant implementations of SQIsign, highlighting previously unexplored vulnerabilities in its design.
Row Reduction Techniques for n-Party Garbling
Recent advancements in maliciously secure garbling have significantly improved the efficiency of constant-round multi-party computation. Research in the field has primarily focused on reducing communication complexity through row reduction techniques and improvements to the preprocessing phase with the use of simpler correlations.
In this work, we present two contributions to reduce the communication complexity of state-of-the-art multi-party garbling with an arbitrary number of corruptions. First, we show how to achieve full row reduction for $n$-party garbled circuits in HSS17-style protocols (Hazay et al., Asiacrypt'17 & JC'20) and authenticated garbling (Yang et al., CCS'20), reducing the size of the garbled circuit by 25% from $4n\kappa$ to $3n\kappa$ and from $(4n-6)\kappa$ to $3(n-1)\kappa$ bits per AND gate, respectively. Achieving row reduction in multi-party garbling has been an open problem which was partly addressed by the work of Yang et al. for authenticated garbling. In our work, we show a full row reduction for both garbling approaches, thus addressing this open problem completely.
Second, drawing inspiration from the work of Dittmer et al. (Crypto 2022), we propose a new preprocessing protocol to obtain the required materials for the garbling phase using large field triples that can be generated with sublinear communication. The new preprocessing significantly reduces the communication overhead of garbled circuits. Our optimizations result in up to a $6\times$ reduction in communication compared to HSS17 and a $2.2\times$ reduction over the state-of-the-art authenticated garbling of Yang et al. for 3 parties in a circuit with 10 million AND gates.
Bandwidth-Efficient Robust Threshold ECDSA in Three Rounds
Threshold ECDSA schemes distribute the capability of issuing signatures to multiple parties. They have been used in practical MPC wallets holding cryptocurrencies. However, most prior protocols are not robust: even one misbehaving or non-responsive party mandates an abort. Robust schemes have been proposed (Wong et al., NDSS ’23, ’24), but they do not match the state-of-the-art round count of three (Doerner et al., S&P ’24). In this work, we propose robust threshold ECDSA schemes RompSig-Q and RompSig-L that each take three rounds (two of which are broadcasts). Building on the works of Wong et al. and further optimized to save bandwidth, they respectively require each signer to broadcast (1.0𝑡 + 1.6) KiB and 3.0 KiB outbound, and thus exhibit bandwidth efficiency that is competitive in practical scenarios where broadcasts are natively handled. RompSig-Q preprocesses multiplications and features fast online signing; RompSig-L leverages threshold CL encryption for scalability and dynamic participation.
Buffalo: A Practical Secure Aggregation Protocol for Buffered Asynchronous Federated Learning
Federated Learning (FL) has become a crucial framework for collaboratively training Machine Learning (ML) models while ensuring data privacy. Traditional synchronous FL approaches, however, suffer from delays caused by slower clients (called stragglers), which hinder the overall training process.
Specifically, in a synchronous setting, model aggregation happens once all the intended clients have submitted their local updates to the server. To address these inefficiencies, Buffered Asynchronous FL (BAsyncFL) was introduced, allowing clients to update the global model as soon as they complete local training. In such a setting, the new global model is obtained once the buffer is full, thus removing synchronization bottlenecks. Despite these advantages, existing Secure Aggregation (SA) techniques—designed to protect client updates from inference attacks—rely on synchronized rounds, making them unsuitable for asynchronous settings.
In this paper, we present Buffalo, the first practical SA protocol tailored for BAsyncFL. Buffalo leverages lattice-based encryption to handle scalability challenges in large ML models and introduces a new role, the assistant, to support the server in securely aggregating client updates. To protect against an actively corrupted server, we enable clients to verify that their local updates have been correctly integrated into the global model. Our comprehensive evaluation—incorporating theoretical analysis and real-world experiments on benchmark datasets—demonstrates that Buffalo is an efficient and scalable privacy-preserving solution in BAsyncFL environments.
Fast Enhanced Private Set Union in the Balanced and Unbalanced Scenarios
Private set union (PSU) allows two parties to compute the union of their sets without revealing anything else. It can be categorized into balanced and unbalanced scenarios depending on the relative sizes of the two parties' sets. Recently, Jia et al. (USENIX Security 2024) highlighted that existing scalable PSU solutions suffer from during-execution leakage and proposed a PSU with enhanced security for the balanced setting. However, their protocol's complexity is superlinear in the set size. Thus, the problem of constructing a linear enhanced PSU remains open, and no unbalanced enhanced PSU exists. In this work, we address these two open problems:
-Balanced case: We propose the first linear enhanced PSU. Compared to the state-of-the-art enhanced PSU (Jia et al., USENIX Security 2024), our protocol achieves a $2.2 - 8.8\times$ reduction in communication cost and a $1.2 - 8.6\times$ speedup in running time, depending on set sizes and network environments.
-Unbalanced case: We present the first unbalanced enhanced PSU, which achieves sublinear communication complexity in the size of the large set. Experimental results demonstrate that the larger the difference between the two set sizes, the better our protocol performs. For unbalanced set sizes $(2^{10},2^{20})$ with single thread in $1$Mbps bandwidth, our protocol requires only $2.322$ MB of communication. Compared with the state-of-the-art enhanced PSU, there is $38.1\times$ shrink in communication and roughly $17.6\times$ speedup in the running time.
Reality Check on Side-Channels: Lessons learnt from breaking AES on an ARM Cortex A processor
Side-channel analysis (SCA) has posed a significant threat to systems for nearly three decades. Numerous practical demonstrations have targeted everyday devices, such as smartcards, cryptocurrency wallets, and smartphones. However, much of the research in the public domain has focused on low-end microcontrollers, limiting our understanding of the challenges involved in attacking more complex systems. In this work, we conduct a reality check on SCA by targeting a high-performance ARM Cortex-A72 out-of-order processor, commonly found in smartphones. We evaluate the practical effort required for key recovery attacks, considering various threat models, from basic to advanced. Our results show that while basic approaches fail, advanced approaches like deep learning-based SCA can successfully recover the secret key. This multi-tier evaluation approach is crucial for comprehensive risk assessment and informed decision-making regarding mitigation strategies, balancing security, performance, and area constraints.
Repeated Agreement is Cheap! On Weak Accountability and Multishot Byzantine Agreement
Byzantine Agreement (BA) allows $n$ processes to propose input values to reach consensus on a common, valid $L_o$-bit value, even in the presence of up to $t < n$ faulty processes that can deviate arbitrarily from the protocol. Although strategies like randomization, adaptiveness, and batching have been extensively explored to mitigate the inherent limitations of one-shot agreement tasks, there has been limited progress on achieving good amortized performance for multi-shot agreement, despite its obvious relevance to long-lived functionalities such as state machine replication.
Observing that a weak form of accountability suffices to identify and exclude malicious processes, we propose new efficient and deterministic multi-shot agreement protocols for multi-value validated Byzantine agreement (MVBA) with a strong unanimity validity property (SMVBA) and interactive consistency (IC). Specifically, let $\kappa$ represent the size of the cryptographic objects needed to solve Byzantine agreement when $n<3t$. We achieve both IC and SMVBA with $O(1)$ amortized latency, with a bounded number of slower instances. The SMVBA protocol has $O(nL_o +n\kappa)$ amortized communication and the IC has $O(nL_o + n^2\kappa)$ amortized communication. For input values larger than $\kappa$, our protocols are asymptotically optimal. These results mark a substantial improvement—up to a linear factor, depending on $L_o$—over prior results. To the best of our knowledge, the present paper is the first to achieve the long-term goal of implementing a state machine replication abstraction of a distributed service that is just as fast and efficient as its centralized version, but with greater robustness and availability.
Efficient Verifiable Mixnets from Lattices, Revisited
Mixnets are powerful building blocks for providing anonymity in applications like electronic voting and anonymous messaging. The encryption schemes upon which traditional mixnets are built, as well as the zero-knowledge proofs used to provide verifiability, will, however, soon become insecure once a cryptographically-relevant quantum computer is built. In this work, we construct the most compact verifiable mixnet that achieves privacy and verifiability through encryption and zero-knowledge proofs based on the hardness of lattice problems, which are believed to be quantum-safe.
A core component of verifiable mixnets is a proof of shuffle. The starting point for our construction is the proof of shuffle of Aranha et al. (CT-RSA 2021). We first identify an issue with the soundness proof in that work, which is also present in the adaptation of this proof in the mixnets of Aranha et al. (ACM CCS 2023) and Hough et al. (IACR CiC 2025). The issue is that one cannot directly adapt classical proofs of shuffle to the lattice setting due to the splitting structure of the rings used in lattice-based cryptography. This is not just an artifact of the proof, but a problem that manifests itself in practice, and we successfully mount an attack against the implementation of the first of the mixnets. We fix the problem and introduce a general approach for proving shuffles in splitting rings that can be of independent interest.
The efficiency improvement of our mixnet over prior work is achieved by switching from re-encryption mixnets (as in the works of Aranha et al. and Hough et al.) to decryption mixnets with very efficient layering based on the hardness of the LWE and LWR problems over polynomial rings. The ciphertexts in our scheme are smaller by approximately a factor of 12X and 2X over the aforementioned instantiations, while the linear-size zero-knowledge proofs are smaller by a factor of 4X and 2X.
High-Performance FPGA Implementations of Lightweight ASCON-128 and ASCON-128a with Enhanced Throughput-to-Area Efficiency
The ASCON algorithm was selected by NIST for lightweight cryptography standardization owing to its efficiency and suitability for resource-constrained environments such as IoT devices. In this paper, we present a high-performance FPGA implementation of ASCON-128 and ASCON-128a, optimized for the throughput-to-area ratio. By utilizing a 6-round permutation in one cycle for ASCON-128 and a 4-round permutation in one cycle for ASCON-128a, we have effectively maximized throughput while ensuring efficient resource utilization. Our implementation shows significant improvements over existing designs, achieving 34.16\% better throughput-to-area efficiency on Artix-7 and 137.58\% better throughput-to-area efficiency on Kintex-7 FPGAs. When comparing our results on the Spartan-7 FPGA with Spartan-6, we observed a 98.63\% improvement in throughput-to-area efficiency. However, it is important to note that this improvement may also be influenced by the advanced capabilities of the Spartan-7 platform compared to the older Spartan-6, in addition to the design optimizations implemented in this work.
A Specification of an Anonymous Credential System Using BBS+ Signatures with Privacy-Preserving Revocation and Device Binding
Recently, there has been a growing interest in anonymous credentials (ACs) as they can mitigate the risk of personal data being processed by untrusted actors without consent and beyond the user's control. Furthermore, due to the privacy-by-design paradigm of ACs, they can prove possession of personal attributes, such as an authenticated government document containing sensitive personal information, while preserving the privacy of the individual by not actually revealing the data. Typically, AC specifications consider the privacy of individuals during the presentation of an AC, but often neglect privacy-preserving approaches for enhanced security features such as AC non-duplication or AC revocation. To achieve more privacy-friendly enhanced security features of non-duplication and privacy-preserving revocation, an AC can be partially stored on secure, trusted hardware and linked to a status credential that reflects its revocation status.
In this paper, we specify an AC system that satisfies the requirements of minimality of information, unlinkability, non-duplication, and privacy-preserving revocation.
This is achieved by adapting the hardware binding method of the Direct Anonymous Attestation protocol with the BBS+ short group signatures of Camenisch et al. and combining it with status credentials.
Sampling Arbitrary Discrete Distributions for RV Commitment Schemes Using the Trimmed-Tree Knuth-Yao Algorithm
Sampling from non-uniform randomness according to an algorithm which keeps the internal randomness used by the sampler hidden is increasingly important for cryptographic applications, such as timing-attack-resistant lattice-based cryptography or certified differential privacy. In this paper we present a provably efficient sampler that maintains random sample privacy, or random sample hiding, and is applicable to arbitrary discrete random variables. Namely, we present a constant-time version of the classic Knuth-Yao algorithm that we name "trimmed-tree" Knuth-Yao. We establish distribution-tailored Boolean circuit complexity bounds for this algorithm, in contrast to the previous naive distribution-agnostic bounds. For a $\sigma^2$-sub-Gaussian discrete distribution where $b_t$ is the number of bits for representing the domain, and $b_p$ is the bits for precision of the PDF values, we prove the Boolean circuit complexity of the trimmed-tree Knuth-Yao algorithm has upper bound $O(\sigma b_p^{3/2} b_t)$, an exponential improvement over the naive bounds, and in certain parameter regimes establish the lower bound $\widetilde{\Omega}( ( \sigma + b_p ) b_t )$. Moreover, by proving the subtrees in the trimmed-tree Knuth-Yao circuit are small, we prove it can be computed by running $b_p$ circuits of size $O(\sigma b_p^{1/2} b_t)$ in parallel and then running $O(b_p b_t )$ sequential operations on the output. We apply these trimmed-tree Knuth-Yao circuits to construct random variable commitment schemes for arbitrary discrete distributions, giving exponential improvements in the number of random bits and circuit complexity used for certified differentially private means and counting queries over large datasets and domains.
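For reference, the classic (untrimmed) Knuth-Yao sampler that the trimmed-tree variant builds on walks a discrete distribution generating (DDG) tree, consuming one uniform random bit per level. The sketch below is a textbook version, not the paper's circuit construction:

```python
def knuth_yao(prob_bits, rand_bits):
    """Sample from a discrete distribution via the Knuth-Yao DDG tree walk.
    prob_bits[i][j] is the j-th binary digit of P(X = i); rand_bits is an
    iterator of uniform random bits. Returns the sampled outcome index."""
    d = 0                                   # position within the current level
    for col in range(len(prob_bits[0])):
        d = 2 * d + next(rand_bits)         # descend one level, one bit consumed
        for i in range(len(prob_bits) - 1, -1, -1):
            d -= prob_bits[i][col]          # pass the terminal nodes of the level
            if d < 0:
                return i
    raise ValueError("probabilities do not sum to 1")

# Distribution P(0)=1/2, P(1)=1/4, P(2)=1/4, written to 2 bits of precision.
pmat = [[1, 0],   # 0.10_2 = 1/2
        [0, 1],   # 0.01_2 = 1/4
        [0, 1]]   # 0.01_2 = 1/4
```

The expected number of consumed bits is within 2 of the distribution's entropy, which is what makes Knuth-Yao attractive as a starting point for constant-time lattice samplers.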
One-Step Schnorr Threshold Identification
Threshold zero-knowledge protocols have not been widely adopted, presumably due to the relevant network overhead, complicated certification processes and thus limited interoperability chances. In this work, we propose $\mathsf{OSST}$, a Schnorr-based threshold identification scheme that is both non-interactive and non-reliant on the public shares. Given a $(n, t)$-shared secret $x$, the proposed protocol allows any $t^* \ge t$ (but no less) shareholders to collectively prove that their secret keys combine to $x$ in the sense of interpolation without revealing any information beyond that. On the one side, the provers do not need to engage in multi-party computations, sending their packets to the verifier asynchronously. On the other side, the verification operation involves the combined public key $y \equiv g ^ x$ alone, meaning that the verifier does not need to have validated and registered the individual member identities. The protocol can be cheaply adopted in permissionless and dynamic environments, where no certification processes or infrastructure support are available, and be easily integrated with existing verifiers by means of pure software extension.
No publicly trusted setup is required beyond the assumption that $x$ has been distributed by means of Shamir's secret sharing (or equivalent distribution scheme) and that the public counterpart has been advertised correctly; in particular, the protocol is intended to be secure against impersonation attacks in the plain public-key model. We provide evidence that this has good chances to hold true by giving a formal security proof in the random oracle model under the one-more discrete-logarithm hardness ($\mathsf{OMDL}$) assumption.
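The underlying algebra can be illustrated with a toy sketch: Shamir shares of $x$ combine via Lagrange coefficients inside otherwise ordinary Schnorr responses, so verification touches only $y = g^x$. The parameters below are tiny and insecure, the challenge is drawn directly rather than via Fiat-Shamir, and none of the protocol's asynchrony or security mechanisms are modeled; this is my illustration of the interpolation idea, not the $\mathsf{OSST}$ protocol itself:

```python
import random

# Toy subgroup: g = 2 has prime order q = 11 in Z_23^* (insecure, demo only).
p, q, g = 23, 11, 2

def lagrange_at_zero(idxs, i):
    """Lagrange coefficient lambda_i for interpolating f(0) from points idxs."""
    num = den = 1
    for j in idxs:
        if j != i:
            num = num * (-j) % q
            den = den * (i - j) % q
    return num * pow(den, -1, q) % q

random.seed(1)
x = random.randrange(1, q)                    # the shared secret
f1 = random.randrange(q)                      # degree-1 polynomial f(X) = x + f1*X
shares = {i: (x + f1 * i) % q for i in (1, 2, 3)}
y = pow(g, x, p)                              # the only value the verifier needs

idxs = (1, 3)                                 # any t* >= t = 2 shareholders
ks = {i: random.randrange(q) for i in idxs}   # per-prover Schnorr nonces
R = 1
for i in idxs:
    R = R * pow(g, ks[i], p) % p              # combined commitment
c = random.randrange(q)                       # challenge (Fiat-Shamir in the paper)
s = sum(ks[i] + c * lagrange_at_zero(idxs, i) * shares[i] for i in idxs) % q
```

Verification checks $g^s = R \cdot y^c$, which holds because $\sum_i \lambda_i x_i = x$; the individual shares and identities never appear.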
Generalization of semi-regular sequences: Maximal Gröbner basis degree, variants of genericness, and related conjectures
Nowadays, the notion of semi-regular sequences, originally proposed by Fröberg, has become very important not only in mathematics but also in information science, in particular cryptology. For example, it is highly expected that randomly generated polynomials form a semi-regular sequence, and based on this observation, secure cryptosystems based on polynomial systems can be devised. In this paper, we deal with semi-regular sequences and a variant, named generalized cryptographic semi-regular sequences, and give a precise analysis of the complexity of computing a Gröbner basis of the ideal generated by such a sequence, with the help of several regularities of the ideal related to Lazard's bound on the maximal Gröbner basis degree and other bounds. We also study the genericness of the property that a sequence is semi-regular, and its variants related to Fröberg's conjecture. Moreover, we discuss the genericness of another important property, that the initial ideal is weakly reverse lexicographic, related to Moreno-Socías' conjecture, and show some criteria for examining whether both Fröberg's conjecture and Moreno-Socías' conjecture hold at the same time.
Multi-Client Attribute-Based and Predicate Encryption, Revisited
Multi-client Attribute-Based Encryption (MC-ABE) is a generalization of key-policy ABE where attributes can be independently encrypted across several ciphertexts w.r.t. labels, and a joint decryption of these ciphertexts is possible if and only if (1) all ciphertexts share the same label, and (2) the combination of attributes satisfies the policy of the decryption key. All encryptors have their own secret key, and security is preserved even if some of them are known to the adversary.
Very recently, Pointcheval et al. (TCC 2024) presented a semi-generic construction of MC-ABE for restricted function classes, e.g., NC0 and constant-threshold policies. We identify an abstract criterion common to all their policy classes which suffices to present the construction in a fully black-box way and allows for a slight strengthening of the supported policy classes. The construction of Pointcheval et al. is based on pairings. We additionally provide a new lattice-based instantiation from (public-coin) evasive LWE.
Furthermore, we revisit existing constructions for policies that can be viewed as a conjunction of local policies (one per encryptor). Existing constructions from MDDH (Agrawal et al., CRYPTO 2023) and LWE (Francati et al., EUROCRYPT 2023) do not support encryption w.r.t. different labels. We show how this feature can be included. Notably, the security model of Francati et al. additionally guarantees attribute-hiding but does not capture collusions. Our new construction is also attribute-hiding and provides resilience against any polynomially bounded number of collusions which must be fixed at the time of setup.
One Bit to Rule Them All – Imperfect Randomness Harms Lattice Signatures
The Fiat-Shamir transform is one of the most widely applied methods for secure signature construction. Fiat-Shamir starts with an interactive zero-knowledge identification protocol and transforms this via a hash function into a non-interactive signature. The protocol's zero-knowledge property ensures that a signature does not leak information on its secret key $\mathbf s$, which is achieved by blinding $\mathbf s$ via proper randomness $\mathbf y$.
Most prominent Fiat-Shamir examples are DSA signatures and the new post-quantum standard Dilithium.
In practice, DSA signatures have experienced fatal attacks via leakage of a few bits of the randomness $\mathbf y$ per signature.
Similar attacks now emerge for lattice-based signatures, such as Dilithium.
We build on, improve and generalize the pioneering leakage attack on Dilithium by Liu, Zhou, Sun, Wang, Zhang, and Ming.
In theory, their original attack can recover a 256-dimensional subkey of Dilithium-II (aka ML-DSA-44) from leakage in a single bit of $\mathbf{y}$ per signature, in any bit position $j \geq 6$.
However, the memory requirement of their attack grows exponentially in the bit position $j$ of the leak.
As a consequence, if the bit leak is in a high-order position, then their attack is infeasible.
In our improved attack, we introduce a novel transformation that allows us to eliminate the exponential memory requirement.
Thereby, we make the attack feasible for all bit positions $j \geq 6$.
Furthermore, our novel transformation significantly reduces the number of required signatures in the attack.
The attack applies more generally to all Fiat-Shamir-type lattice-based signatures.
For a signature scheme based on module LWE over an $\ell$-dimensional module, the attack uses a 1-bit leak per signature to efficiently recover a $\frac{1}{\ell}$-fraction of the secret key.
In the ring LWE setting, which can be seen as module LWE with $\ell = 1$, the attack thus recovers the whole key.
For Dilithium-II, which uses $\ell = 4$, knowledge of a $\frac{1}{4}$-fraction of the 1024-dimensional secret key lets its security estimate drop significantly from $128$ to $84$ bits.
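Why a single leaked randomness bit per signature is so damaging can be seen in a deliberately tiny toy model (not the attack of Liu et al., which uses lattice techniques at real parameter sizes): each leaked bit of $\mathbf{y}$ rules out roughly half of the remaining key candidates, so a modest number of signatures pins down the key. The following sketch brute-forces a toy Schnorr-like key space:

```python
import random

q = 12289                  # toy modulus; real schemes use far larger key spaces
random.seed(42)            # fixed seed for reproducibility
s = random.randrange(q)    # the signer's secret key

def toy_sign():
    """Toy Schnorr-like response z = y + c*s mod q that leaks one bit of y."""
    y = random.randrange(q)    # blinding randomness; must stay secret
    c = random.randrange(q)    # public challenge
    z = (y + c * s) % q        # public response
    return c, z, y & 1         # the leak: a single low-order bit of y

sigs = [toy_sign() for _ in range(64)]

# Each leaked bit rules out about half of the key space: a candidate s'
# survives only if the implied randomness y' = z - c*s' matches every leak.
survivors = [cand for cand in range(q)
             if all(((z - c * cand) % q) & 1 == bit for c, z, bit in sigs)]
```

With 64 signatures the expected number of false survivors is about $q/2^{64}$, so the true key is isolated; the attack described in the abstract achieves the analogous effect at real parameter sizes with lattice techniques rather than brute force.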
SoK: Dlog-based Distributed Key Generation
Distributed Key Generation (DKG) protocols are fundamental components of threshold cryptography, enabling key generation in a trustless manner for a range of cryptographic operations such as threshold encryption and signing. Of particular widespread use are DKG protocols for discrete-logarithm based cryptosystems. In this Systematization of Knowledge (SoK), we present a comprehensive analysis of existing DKG protocols in the discrete-logarithm setting, with the goal of identifying cryptographic techniques and design principles that facilitate the development of secure and resilient protocols. To offer a structured overview of the literature, we adopt a modular approach and classify DKG protocols based on their underlying network assumption and cryptographic tools. These two factors determine how DKG protocols manage secret sharing and reach consensus as their essential building blocks. We also highlight various insights and suggest future research directions that could drive further advancements in this area.
Compact Lattice Signatures via Iterative Rejection Sampling
One of the primary approaches for constructing lattice-based signature schemes is the “Fiat-Shamir with aborts” methodology. Schemes constructed using this approach may abort and restart during signing, which corresponds to rejection sampling of the produced signatures in order to ensure that they follow a distribution independent of the secret key. This rejection sampling is only feasible when the output distribution is sufficiently wide, limiting how compact this type of signature scheme can be.
In this work, we develop a new method to construct lattice signatures with the “Fiat-Shamir with aborts” approach. By constructing signatures in a way that is influenced by the rejection condition, we can significantly lower the rejection probability. This allows our scheme to use an iterative rejection sampling to target narrower output distributions than previous methods, resulting in much more compact signatures.
In the most compact variant of our new signature scheme, the combined size of a signature and a verification key is less than half of that for ML-DSA and comparable to that of compact hash-and-sign lattice signature schemes, such as Falcon. Alternatively, by targeting a somewhat wider distribution, the rejection condition of the scheme can be securely ignored. This non-aborting variant of our scheme still retains a notable size advantage over previous lattice-based Fiat-Shamir schemes.
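The role of rejection sampling in Fiat-Shamir with aborts can be illustrated in one dimension: masking a secret-dependent shift with a uniform value and rejecting outputs outside a window yields an output distribution independent of the secret, at an acceptance rate that drops as the window narrows. The parameters and names below are mine, for illustration only:

```python
def accepted_outputs(secret_shift, B, T):
    """All responses z = y + secret_shift over masks y in [-B, B] that survive
    the rejection step |z| <= B - T (assumes |secret_shift| <= T)."""
    return [y + secret_shift for y in range(-B, B + 1)
            if abs(y + secret_shift) <= B - T]

B, T = 1000, 50        # toy mask range and bound on the secret-dependent shift

out1 = accepted_outputs(+37, B, T)
out2 = accepted_outputs(-50, B, T)
assert out1 == out2    # the accepted distribution reveals nothing about the shift
rate = len(out1) / (2 * B + 1)   # acceptance rate; narrower window => more aborts
```

Narrowing the window $B - T$ makes signatures smaller but increases the abort rate, which is exactly the trade-off the iterative rejection sampling in the abstract targets.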
BOIL: Proof-Carrying Data from Accumulation of Correlated Holographic IOPs
In this paper, we present a batching technique for oracles corresponding to codewords of a Reed–Solomon code. This protocol is inspired by the round function of the STIR protocol (CRYPTO 2024). Using this oracle batching protocol, we propose a construction of a practically efficient accumulation scheme, which we call BOIL. Our accumulation scheme can be initiated with an arbitrary correlated holographic IOP, leading to a new class of PCD constructions. The results of this paper were originally given as a presentation at zkSummit12.
Relating Definitions of Computational Differential Privacy in Wider Parameter Regimes
The literature on computational differential privacy (CDP) has focused almost exclusively on definitions that are computational analogs of `pure' $(\epsilon,0)$-DP. We initiate the formal study of computational versions of approximate DP, i.e. $(\epsilon, \delta)$-DP with non-negligible $\delta$. We focus on IND-CDP and SIM$_{\forall\exists}$-CDP and show that the hierarchy between them when $\delta > 0$ potentially differs substantially from when $\delta = 0$. In one direction, we show that for $\delta < 1$, any mechanism which is $(\epsilon,\delta)$-SIM$_{\forall\exists}$-CDP also is $(\epsilon,\delta)$-IND-CDP, but only if $\epsilon$ is logarithmic in the security parameter. As a special case, this proves that the existing implication from $(\epsilon,0)$-SIM$_{\forall\exists}$-CDP to $(\epsilon,0)$-IND-CDP does not hold for arbitrary $\epsilon$, as previously claimed. Furthermore, we prove that when the parameters are the same in IND-CDP and SIM$_{\forall\exists}$-CDP and $\epsilon$ is superlogarithmic, there exists a natural task that can be solved whilst satisfying SIM$_{\forall\exists}$-CDP but which no IND-CDP mechanism can solve. This is the first separation in the CDP literature which is not due to using a task contrived specifically in order to give rise to the separation.
In the other direction, we show that the techniques for establishing an implication from $(\epsilon,0)$-IND-CDP to $(\epsilon,0)$-SIM$_{\forall\exists}$-CDP extend only to show that a mechanism being $(\epsilon,\delta)$-IND-CDP implies it is also $(\epsilon,\delta')$-SIM$_{\forall\exists}$-CDP with $\delta' > \delta$. Finally, we show that the Groce-Katz-Yerukhimovich barrier results against separations between CDP and statistical DP also hold in the setting of non-negligible $\delta$.
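The $(\epsilon,\delta)$-DP condition underlying these definitions, $\Pr[M(D)\in S]\le e^{\epsilon}\Pr[M(D')\in S]+\delta$, can be checked exhaustively for a tiny mechanism. The sketch below uses binary randomized response on a single bit; the mechanism and parameters are illustrative and unrelated to the tasks studied in the paper.

```python
import math

def dp_holds(mech_eps, eps, delta):
    """Check P[M(b)=o] <= e^eps * P[M(b')=o] + delta for all neighboring
    inputs b, b' and outputs o of randomized response built for mech_eps."""
    p = math.exp(mech_eps) / (1 + math.exp(mech_eps))  # prob. of truthful report
    dists = [{0: p, 1: 1 - p}, {0: 1 - p, 1: p}]       # output dists for b=0, b=1
    return all(dists[b][o] <= math.exp(eps) * dists[b2][o] + delta
               for b in (0, 1) for b2 in (0, 1) for o in (0, 1))

assert dp_holds(1.0, 1.0, 1e-9)     # mechanism meets its own epsilon
assert not dp_holds(1.0, 0.5, 0.0)  # ...but not a smaller epsilon with delta = 0
assert dp_holds(1.0, 0.5, 0.3)      # a non-negligible delta restores the bound
```

The third check mirrors the paper's theme: relaxing to $\delta > 0$ changes which $(\epsilon,\delta)$ pairs a fixed mechanism satisfies.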
Randomized vs. Deterministic? Practical Randomized Synchronous BFT in Expected Constant Time
Most practical synchronous Byzantine fault-tolerant (BFT) protocols, such as Sync HotStuff (S&P 2020), follow the convention of partially synchronous BFT and adopt a deterministic design. Indeed, despite their O(n) time complexity, these protocols exhibit impressive performance in failure-free scenarios.
This paper challenges this conventional wisdom, showing that a randomized paradigm terminating in expected O(1) time may well outperform prior ones even in failure-free scenarios. Our framework reduces synchronous BFT to a new primitive called multi-valued Byzantine agreement with strong external validity (MBA-SEV). Inspired by the external validity property of multi-valued validated Byzantine agreement (MVBA), the additional validity properties allow us to build a BFT protocol in which replicas agree on the hashes of the blocks. Our instantiation of the paradigm, Sonic, achieves O(n) amortized message complexity per block proposal, expected O(1) time, and enables a fast path of only two communication steps.
Our evaluation results using up to 91 instances on Amazon EC2 show that the peak throughput of Sonic and P-Sonic (a pipelining variant of Sonic) is 2.24x-14.52x and 3.08x-24.25x that of Sync HotStuff, respectively.
FHECAP: An Encrypted Control System with Piecewise Continuous Actuation
We propose an encrypted controller framework for linear time-invariant systems with actuator non-linearity based on fully homomorphic encryption (FHE). While some existing works explore the use of partially homomorphic encryption (PHE) in implementing linear control systems, the impact of the actuators' non-linear behaviors on the systems is often left unaddressed. In particular, when the inputs to the controller become too small or too large, actuators may burn out due to unstable system state oscillations. To solve this dilemma, we design and implement FHECAP, an FHE-based controller framework that can homomorphically apply non-linear functions to the actuators to rectify the system inputs. In FHECAP, we first design a novel data encoding scheme tailored for efficient gain matrix evaluation. Then, we propose a high-precision homomorphic algorithm to apply a non-arithmetic piecewise function to realize actuator normalization. In the experiments, compared with the existing state-of-the-art encrypted controllers, FHECAP achieves $4\times$--$1000\times$ reduction in computational latency. We evaluate the effectiveness of FHECAP in the real-world application of encrypted control for spacecraft rendezvous. The simulation results show that FHECAP achieves real-time spacecraft rendezvous with negligible accuracy loss.
Constructing More Super-optimal Pairings via Small Degree Endomorphisms
Bilinear pairings play a crucial role in public-key cryptography. Accelerating the pairing computation, and especially shortening the Miller loop, is of great significance. Several lines of research have focused on constructing pairings with shorter Miller loops: variants of the Tate pairing, such as ate pairings and optimal ate pairings, have been successively proposed. Moreover, for several specific pairing-friendly curves with efficiently-computable automorphisms, the Miller loop can be further shortened to $\log_2(r)/2\varphi(k)$, where $\varphi$ (resp. $k$) denotes Euler's function (resp. the embedding degree). Such pairings are named super-optimal (ate) pairings, and the lengths of their Miller loops achieve the theoretical lower bound. Nevertheless, not all pairing-friendly curves are compatible with the construction of super-optimal pairings. This paper provides a framework for constructing super-optimal pairings on more pairing-friendly curves by employing degree-$n$ endomorphisms with $n > 1$. We mainly target enhancing the performance of the pairing computation on family GG22D7 with CM-discriminant \(D = 7\) and embedding degree \(k = 22\) (resp. family GG28D11 with CM-discriminant \(D = 11\) and embedding degree \(k = 28\)) by exploiting degree-$2$ endomorphisms (resp. degree-$3$ endomorphisms), and we provide explicit formulas for the corresponding super-optimal ate pairings. For implementation, taking the pairing-friendly curve GG22D7-457 at the 192-bit security level as an example, we present the detailed implementation procedure, computational cost analysis, and experimental results. The results show that, compared with the previous method, our new super-optimal pairing formula saves about $24.4\%$ of the \(\mathbb{F}_{p}\)-multiplications in the Miller loop on GG22D7-457. In terms of CPU clock cycles, leveraging our super-optimal pairing formula is $26.0\%$ faster.
This work extends the application of super-optimal pairings, and has the potential to offer a wider range of choices of curves for pairing-based cryptography.
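The loop-length bound $\log_2(r)/(2\varphi(k))$ quoted above is easy to evaluate for the families mentioned. The sketch below computes it directly; the group-order size used is illustrative, not a parameter from the paper.

```python
from math import gcd

def euler_phi(k):
    """Euler's totient function by direct count (fine for small k)."""
    return sum(1 for i in range(1, k + 1) if gcd(i, k) == 1)

def miller_loop_bits(r_bits, k, super_optimal=False):
    """Miller loop length bound in bits: log2(r)/phi(k) for optimal ate
    pairings, log2(r)/(2*phi(k)) for super-optimal ones."""
    denom = 2 * euler_phi(k) if super_optimal else euler_phi(k)
    return r_bits / denom

# Embedding degrees from the families discussed: k = 22 and k = 28.
assert euler_phi(22) == 10
assert euler_phi(28) == 12

r_bits = 384  # illustrative group-order size
# Super-optimal pairings halve the optimal-ate loop length:
assert miller_loop_bits(r_bits, 22, super_optimal=True) == r_bits / 20
```

For $k = 22$ the super-optimal denominator is $2\varphi(22) = 20$, i.e., roughly a 20-fold reduction of the loop length relative to $\log_2(r)$.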
Perfect 2-Party Computation from Additive Somewhat Homomorphic Encryption
Two-party computation has been an active area of research since Yao's breakthrough results on garbled circuits. We present secret-key somewhat additively homomorphic schemes in which the client has perfect privacy (the server can be computationally unbounded). Our basic scheme is somewhat additively homomorphic, and we extend it to support multiplication. The server handles circuit multiplication gates by returning the multiplicands to the client, which updates the decryption key so that the original ciphertext vector includes the encrypted multiplication gate outputs. We give a 2-party computation (2PC) protocol, also incorporating server inputs, in which the client has perfect privacy. Server privacy holds against a computationally bounded adversary, since it depends on the hardness of the subset sum problem. The 2PC protocol maintains circuit privacy except for leaking the number of multiplication gates to the client. Scaling the 2PC protocol via separate encryption parameters for smaller subcircuits allows the ciphertext size to remain constant as circuit size grows.
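For intuition, additive homomorphism with perfect (information-theoretic) client privacy can be illustrated by a toy one-time-pad scheme. This is not the paper's construction, only a minimal demonstration that a sum of ciphertexts can decrypt to the sum of plaintexts:

```python
import random

class ToyAdditiveHE:
    """Toy secret-key additively homomorphic encryption (illustrative only).

    Each ciphertext is m + pad (mod p) under a fresh uniform pad, so a
    single ciphertext is a perfect one-time pad; the key holder tracks
    the pads and can decrypt any sum of ciphertexts."""
    def __init__(self, p=2**61 - 1):
        self.p = p
        self.pads = {}

    def encrypt(self, ct_id, m):
        pad = random.randrange(self.p)
        self.pads[ct_id] = pad
        return (m + pad) % self.p

    def decrypt_sum(self, cts, ids):
        # Homomorphic addition: the sum of ciphertexts decrypts
        # with the sum of the corresponding pads.
        total_pad = sum(self.pads[i] for i in ids) % self.p
        return (sum(cts) - total_pad) % self.p

he = ToyAdditiveHE()
c1, c2 = he.encrypt("a", 17), he.encrypt("b", 25)
assert he.decrypt_sum([c1, c2], ["a", "b"]) == 42
```

The paper's key-update trick for multiplication gates plays an analogous role: the client adjusts its secret decryption material so that existing ciphertexts remain decryptable after the server's processing.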
Asynchronous Byzantine Agreement with Subquadratic Communication
Understanding the communication complexity of Byzantine agreement (BA) is a fundamental problem in distributed computing. In particular, as protocols are run with a large number of parties (as, e.g., in the context of blockchain protocols), it is important to understand the dependence of the communication on the number of parties $n$. Although adaptively secure BA protocols with $o(n^2)$ communication are known in the synchronous and partially synchronous settings, no such protocols are known in the fully asynchronous case.
We show here an asynchronous BA protocol with subquadratic communication tolerating an adaptive adversary who can corrupt $f<(1-\epsilon)n/3$ of the parties (for any $\epsilon>0$).
One variant of our protocol assumes initial setup done by a trusted dealer, after which an unbounded number of BA executions can be run; alternately, we can achieve subquadratic amortized communication with no prior setup. We also show that some form of setup is needed for (non-amortized) subquadratic BA tolerating $\Theta(n)$ corrupted parties.
As a contribution of independent interest, we show a secure-computation protocol in the same threat model that has $o(n^2)$ communication when computing no-input functionalities with short output (e.g., coin tossing).
Security Analysis of NIST Key Derivation Using Pseudorandom Functions
Key derivation functions can be used to derive variable-length random strings that serve as cryptographic keys. They are integral to many widely-used communication protocols such as TLS, IPsec and Signal. NIST SP 800-108 specifies several key derivation functions based on pseudorandom functions such as CMAC and HMAC that can be used to derive additional keys from an existing cryptographic key. The standard either explicitly or implicitly requires its KDFs to be variable-output-length pseudorandom functions, collision resistant, and preimage resistant. Yet, since the standard's publication in 2008, there has been no formal analysis justifying these security properties.
In this work, we give a formal security analysis of the key derivation functions in NIST SP 800-108. We show both positive and negative results regarding these key derivation functions. For KCTR-CMAC, KFB-CMAC, and KDPL-CMAC (the CMAC-based key derivation functions in counter mode, feedback mode, and double-pipeline mode, respectively), we prove that all of them are secure variable-output-length pseudorandom functions and preimage resistant. We show that KFB-CMAC and KDPL-CMAC are collision resistant, while against KCTR-CMAC we mount a collision attack that requires only six block cipher queries and succeeds with probability 1/4. For KCTR-HMAC, KFB-HMAC, and KDPL-HMAC (the HMAC-based key derivation functions in the same three modes), we show that all of them behave like variable-output-length pseudorandom functions. When the key of these key derivation functions is of variable length, they suffer from collision attacks. When the key is of fixed length and less than $d-1$ bits, where $d$ is the input block size of the underlying compression function, we prove that they are collision resistant and preimage resistant.
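The counter-mode construction analyzed here has a simple shape: each output block is the PRF applied to a counter plus fixed input data. The sketch below is a simplified HMAC-based illustration in the spirit of SP 800-108; the field widths and ordering are assumptions for illustration, and the standard's exact encoding should be consulted for real use.

```python
import hmac, hashlib

def kdf_ctr_hmac(key: bytes, label: bytes, context: bytes, out_len: int) -> bytes:
    """Simplified counter-mode KDF sketch in the style of NIST SP 800-108:
    K(i) = HMAC(key, [i]_4 || label || 0x00 || context || [L]_4),
    concatenated until out_len bytes are produced.
    (Encoding details here are illustrative, not the normative format.)"""
    L = out_len * 8                       # requested output length in bits
    fixed = label + b"\x00" + context + L.to_bytes(4, "big")
    out = b""
    i = 1
    while len(out) < out_len:
        out += hmac.new(key, i.to_bytes(4, "big") + fixed, hashlib.sha256).digest()
        i += 1
    return out[:out_len]

# Distinct labels separate derived keys for different purposes.
k_enc = kdf_ctr_hmac(b"master-key", b"enc", b"sessionA", 32)
k_mac = kdf_ctr_hmac(b"master-key", b"mac", b"sessionA", 32)
assert k_enc != k_mac and len(k_enc) == 32
```

Because the counter is the only varying PRF input across blocks, collisions between invocations reduce to collisions in the fixed input data, which is the structural point the paper's KCTR analysis exploits.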
Groebner Basis Cryptanalysis of Anemoi
Arithmetization-Oriented (AO) symmetric primitives play an important role in the efficiency and security of zero-knowledge (ZK) proof systems. The design and cryptanalysis of AO symmetric-key primitives is a new topic particularly focusing on algebraic aspects. An efficient AO hash function aims at lowering the multiplicative complexity in the arithmetic circuit of the hash function over a suitable finite field. The AO hash function Anemoi was proposed in CRYPTO 2023.
In this work we present an in-depth Groebner basis (GB) cryptanalysis of Anemoi over GF(p). The main aim of any GB cryptanalysis is to obtain a well-structured set of polynomials representing the target primitive, and finally solve this system of polynomials using an efficient algorithm.
We propose a new polynomial modelling for Anemoi that we call ACICO. We show that using ACICO one can obtain a GB defined by a well-structured set of polynomials. Moreover, by utilising ACICO we can prove the exact complexity of the Groebner basis computation (w.r.t Buchberger's algorithm) in the cryptanalysis of Anemoi. The structured GB further allows us to prove the dimension of the quotient space which was conjectured in a recently published work.
Afterwards, we provide the complexity analysis for computing the variety (or the solutions) of the GB polynomial system (corresponding to Anemoi) which is the final step in GB cryptanalysis, by using known approaches. In particular, we show that GB polynomial structure allows us to use the Wiedemann algorithm and improve the efficiency of cryptanalysis compared to previous works.
Our GB cryptanalysis is applicable to more than two branches (a parameter in Anemoi), while previously published results showed cryptanalysis only for two branches. Our complexity analysis implies that the security of Anemoi should not rest on the hardness of the GB computation.
We also positively address an important mathematical question in the GB cryptanalysis of Anemoi, namely: does the Anemoi polynomial system have a shape form? By proving this, we guarantee that applying a basis conversion method such as FGLM yields a convenient system of polynomials that is easy to solve.
Security Analysis of Signal's PQXDH Handshake
Signal recently deployed a new handshake protocol named PQXDH to protect against "harvest-now-decrypt-later" attacks of a future quantum computer. To this end, PQXDH adds a post-quantum KEM to the Diffie-Hellman combinations of the prior X3DH handshake.
In this work, we give a reductionist security analysis of Signal's PQXDH handshake in a game-based security model that captures the targeted "maximum-exposure" security against both classical and quantum adversaries, allowing fine-grained compromise of users' long-term, semi-static, and ephemeral key material. We augment prior such models to capture not only the added KEM component but also the signing of public keys, which prior analyses did not capture but which adds an additional flavor of post-quantum security in PQXDH. We then establish fully parameterized, concrete security bounds for the classical and post-quantum session key security of PQXDH, and discuss how design choices in PQXDH make a KEM binding property necessary and how a lack of domain separation reduces the achievable security.
Our discussion of KEM binding and domain separation complements the concurrent tool-based analysis of PQXDH by Bhargavan, Jacomme, Kiefer, and Schmidt (USENIX Security 2024), which pointed out a potential re-encapsulation attack if the KEM shared secret does not bind the public key. In contrast to the tool-based analysis, we analyze all protocol modes of PQXDH and its "maximum-exposure" security. We further show that both Kyber (used in PQXDH) and the NIST standard ML-KEM (expected to replace Kyber) satisfy a novel binding notion we introduce and rely on for our PQXDH analysis, which may be of independent interest.
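The hybrid structure discussed above, classical DH secrets combined with a KEM secret, can be sketched as a simple key combiner. This is an illustrative sketch, not Signal's exact construction: the label, hash choice, and transcript layout are assumptions, but it shows how hashing the KEM ciphertext and public key into the session key provides the kind of binding the re-encapsulation discussion concerns.

```python
import hashlib, os

def hybrid_combine(dh_secrets, kem_secret, kem_ct, kem_pk):
    """Sketch of a hybrid key combiner (illustrative, not PQXDH itself):
    the session key hashes both the classical DH shared secrets and the
    KEM shared secret. Binding the KEM ciphertext and public key into the
    hash prevents a mauled (ct, pk) pair from yielding the same session key.
    """
    transcript = b"".join(dh_secrets) + kem_secret + kem_ct + kem_pk
    return hashlib.sha256(b"hybrid-kdf-v1" + transcript).digest()

# Stand-in random values; a real run would use X25519 and ML-KEM outputs.
dh = [os.urandom(32) for _ in range(3)]
ss, ct, pk = os.urandom(32), os.urandom(1088), os.urandom(1184)
k1 = hybrid_combine(dh, ss, ct, pk)
k2 = hybrid_combine(dh, ss, os.urandom(1088), pk)  # different ciphertext
assert len(k1) == 32 and k1 != k2
```

A combiner that omitted `kem_ct` and `kem_pk` from the hash would be the situation where a non-binding KEM admits the re-encapsulation attack mentioned above.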
HydraProofs: Optimally Computing All Proofs in a Vector Commitment (with applications to efficient zkSNARKs over data from multiple users)
In this work, we introduce HydraProofs, the first vector commitment (VC) scheme that achieves the following two properties. (i) The prover can produce all the opening proofs for different elements (or consecutive sub-arrays) of a vector of size N in optimal time O(N). (ii) It is directly compatible with a family of zkSNARKs that encode their input as a multi-linear polynomial, i.e., our VC can be used directly when running the zkSNARK on its pre-image, without the need to "open" the entire vector pre-image inside the zkSNARK. To the best of our knowledge, all prior VC schemes either achieve (i) but are not efficiently "pluggable" into zkSNARKs (e.g., a Merkle tree commitment requires re-computing the entire hash tree inside the circuit), or achieve (ii) but take O(N log N) time. We then combine HydraProofs with the seminal GKR protocol and apply the resulting zkSNARK in a setting where multiple users participate in a computation executed by an untrusted server and each user wants to ensure the correctness of the result and that her data was included. Our experimental evaluation shows our approach outperforms prior ones by 4-16x in prover time on general circuits. Finally, we consider two concrete application use cases: verifiable secret sharing and verifiable robust aggregation. For the former, our construction achieves the first scheme for Shamir's secret sharing with a linear-time prover (lower than the time needed for the dealer's computation). For the latter, we propose a scheme that works against misbehaving aggregators, and our experiments show it can be reasonably deployed in existing schemes with minimal slow-downs.
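For context on the baseline being improved: a Merkle tree already lets a prover emit all N opening proofs, but each proof has log N siblings, so the total work and proof material is O(N log N). The sketch below shows that baseline (illustrative only; it is not HydraProofs, which achieves O(N) total and direct zkSNARK compatibility). It assumes a power-of-two number of leaves.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_all_proofs(leaves):
    """Build a Merkle tree over the leaves (power-of-two count assumed)
    and return the root plus every leaf's opening proof (sibling path).
    Total size of all proofs is N * log2(N) hashes: the O(N log N) baseline."""
    layer = [h(x) for x in leaves]
    layers = [layer]
    while len(layer) > 1:
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        layers.append(layer)
    root = layer[0]
    proofs = []
    for idx in range(len(leaves)):
        path, i = [], idx
        for lay in layers[:-1]:
            path.append(lay[i ^ 1])  # sibling node at this level
            i //= 2
        proofs.append(path)
    return root, proofs

root, proofs = merkle_all_proofs([b"a", b"b", b"c", b"d"])
assert len(proofs) == 4 and all(len(p) == 2 for p in proofs)
```

Verifying any single proof hashes up the sibling path and compares with the root; the zkSNARK-compatibility problem the abstract mentions is that doing this inside an arithmetic circuit requires re-computing those hashes in-circuit.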
Ciphertext-Policy ABE from Inner-Product FE
The enormous potential of Attribute-Based Encryption (ABE) in the context of IoT has driven researchers to propose pairing-free ABE schemes that are suitable for resource-constrained devices. Unfortunately, many of these schemes have turned out to be insecure. This fact seems to reinforce the view of some authors that instantiating an Identity-Based Encryption (IBE) scheme in plain Decisional Diffie-Hellman (DDH) groups is impossible. In this paper, we provide a generic Ciphertext-Policy ABE (CP-ABE) scheme for AND-gate access structures with hidden access policies, built from Inner-Product Functional Encryption (IPFE). We also propose an instantiation of this generic CP-ABE scheme from the DDH assumption. From our generic CP-ABE scheme we derive an IBE scheme by introducing the concept of Clustered Identity-Based Encryption (CIBE). Our schemes show that it is indeed possible to construct practical and secure IBE and ABE schemes based on the classical DDH assumption. An implementation of our CIBE in Python using the Charm framework is available on GitHub [21].
Candidate Matchmaking Encryption from Attribute-Based Encryption Schemes
We were deeply impressed by the paper by Ateniese et al., published at Crypto 2019, in which they presented a black-box construction of matchmaking encryption (ME) based on functional encryption. In our work, we propose an ME scheme based on standard assumptions in the standard model. This scheme is proven secure under the learning with errors (LWE) assumption. Our ME scheme is achieved through a novel framework of bilateral-policy attribute-based encryption (BP-ABE) and a new intermediate primitive termed a perturbed pseudorandom generator (PPRG), which facilitates the authentication functionality by replacing non-interactive zero-knowledge proofs.
In the scheme presented in this paper, the user's "public key" is generated using Hamming correlation robustness and user attributes. Note that the "public key" is not public. To preserve the privacy of the two parties involved in matchmaking encryption, our BP-ABE scheme does not use the "public key" directly to encrypt the plaintext. Instead, the message sender selects matching attributes and uses Hamming correlation robustness and a homomorphic pseudorandom function (HPRF) to generate temporary public keys that hide the public key and user attributes.
When these temporary public keys satisfy the access policy, the receiver can decrypt the data using their private key. Regarding the authentication function of matchmaking encryption, this paper proposes a non-interactive private set intersection (PSI) scheme based on HPRF and PPRG. The message sender encrypts their "public key" using the proposed PSI scheme as part of the ciphertext. The receiver likewise encrypts their "public key" using the proposed PSI scheme and matches the attributes, thereby completing the message authentication function. We consider our approach a significant departure from existing constructions, despite its simplicity.
Computationally Efficient Asynchronous MPC with Linear Communication and Low Additive Overhead
We explore the setting of asynchronous multi-party computation (AMPC) with optimal resilience $n=3t+1$, and develop an efficient protocol that optimizes both communication and computation.
The recent work by Goyal, Liu-Zhang, and Song [Crypto' 24] was the first to achieve AMPC with amortized linear communication cost without using computationally heavy public-key cryptography. However, its $\mathcal{O}(n^{14})$ additive communication overhead renders it impractical for most real-world applications.
It is possible to reduce the communication overhead significantly by leveraging cryptographic tools such as homomorphic commitments, public-key cryptography, or zero-knowledge proofs; however, the corresponding AMPC protocols introduce a computation overhead of $\Omega(nC)$ public-key cryptographic operations that becomes a bottleneck as $n$ grows. Overall, achieving AMPC with linear communication complexity, low additive communication overhead, and low computation overhead remains an open challenge.
In this work, we resolve this efficiency challenge by utilizing the random oracle model. By relying solely on computationally efficient primitives such as random oracle hashes and symmetric-key cryptography, our protocol is not only efficient in terms of computation and communication overhead but also post-quantum secure. For a circuit with $C$ multiplication gates, our protocol achieves $\mathcal{O}(Cn)$ total communication, i.e., $\mathcal{O}(n)$ per multiplication gate, with an additive overhead of $\mathcal{O}(n^4)$ communication. In terms of computation, our protocol introduces only an additive overhead of $\mathcal{O}(n^5)$ hash computations, independent of the circuit size.
Generic Construction of Dual-Server Public Key Authenticated Encryption with Keyword Search
Chen et al. (IEEE Transactions on Cloud Computing 2022) introduced dual-server public key authenticated encryption with keyword search (DS-PAEKS) and proposed a DS-PAEKS scheme under the decisional Diffie-Hellman assumption. In this paper, we propose a generic construction of DS-PAEKS from PAEKS, public key encryption, and signatures. By providing a concrete attack, we show that the DS-PAEKS scheme of Chen et al. is vulnerable; thus, the proposed generic construction yields the first secure DS-PAEKS schemes. Our attack, with a slight modification, also works against the Chen et al. dual-server public key encryption with keyword search (DS-PEKS) scheme (IEEE Transactions on Information Forensics and Security 2016). Moreover, we demonstrate that the Tso et al. generic construction of DS-PEKS from public key encryption (IEEE Access 2020) is also vulnerable.
Behemoth: transparent polynomial commitment scheme with constant opening proof size and verifier time
Polynomial commitment schemes are fundamental building blocks in numerous cryptographic protocols such as verifiable secret sharing, zero-knowledge succinct non-interactive arguments, and many more. The most efficient polynomial commitment schemes rely on a trusted setup, which is undesirable in trust-minimized applications, e.g., cryptocurrencies. However, transparent polynomial commitment schemes are inefficient (polylogarithmic opening proofs and/or verification time) compared to their trusted counterparts. It has been an open problem to devise a transparent, succinct polynomial commitment scheme or to prove an impossibility result in the transparent setting. In this work, for the first time, we create a transparent, constant-size polynomial commitment scheme called Behemoth with constant-size opening proofs and a constant-time verifier. The downside of Behemoth is that it employs a cubic-time prover in the degree of the committed polynomial. We prove the security of our scheme in the generic group model and discuss parameter settings in which it remains practical even for the prover.