Papers updated in last 7 days (60 results)
Packed Sumcheck over Fields of Small Characteristic with Application to Verifiable FHE
Verifiable computation over encrypted data is gaining increasing attention, and using SNARKs to provide proofs for FHE operations has emerged as a promising approach. However, the mismatch between FHE's typically small prime fields and SNARKs' larger prime fields creates challenges for verifiable FHE. In this work, we construct a packed sumcheck protocol specifically designed for small fields. This approach leverages folding and repetition techniques to maintain security without field expansion, with all operations performed on the base field. For a field $\mathbb{F}_p$ requiring $k$-fold expansion, our sumcheck protocol operates with $(\log k + l)$ variables, where the sumcheck statement is a product of $d$ multilinear polynomials. The prover can complete the proof in $O(kd^2 + k^{1.807}d) \cdot 2^l$ modular multiplications and $O(k^2d^2)\cdot 2^l$ modular additions over $\mathbb{F}_p$.
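For intuition, here is a minimal sketch of the plain (unpacked) sumcheck interaction that such protocols build on, written over a toy prime field; the field size, the dense evaluation table, and the direct final evaluation (standing in for an oracle query) are illustrative assumptions, not the paper's packed construction:

```python
# Toy sumcheck for sum_{x in {0,1}^v} f(x), with multilinear f given by its
# 2^v evaluations over a small prime field (illustrative only).
import random

P = 251  # toy field F_p

def fold(table, r):
    """Fix the first variable of the multilinear table to the challenge r."""
    half = len(table) // 2
    return [(table[i] * (1 - r) + table[half + i] * r) % P for i in range(half)]

def sumcheck(table):
    claim = sum(table) % P                                     # claimed sum
    while len(table) > 1:
        half = len(table) // 2
        g0, g1 = sum(table[:half]) % P, sum(table[half:]) % P  # g_i(0), g_i(1)
        assert (g0 + g1) % P == claim                          # verifier's round check
        r = random.randrange(P)                                # verifier's challenge
        claim = (g0 * (1 - r) + g1 * r) % P                    # next claim g_i(r)
        table = fold(table, r)
    assert table[0] == claim                                   # final oracle query
    return True

assert sumcheck([random.randrange(P) for _ in range(2 ** 4)])
```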
We further propose a proof system for bit-wise bootstrapping by integrating the packed sumcheck protocol with the Brakedown (CRYPTO 2023) and Binius (EUROCRYPT 2025) commitment schemes. Our construction exploits the highly repetitive structure of bit-wise bootstrapping by decomposing the procedure into a sequence of vector operations, enabling the design of an efficient proof protocol through the verification of these vector relations. The resulting system has linear prover time while performing all computations over the base field.
Context-Dependent Threshold Decryption and its Applications
We initiate the study of high-threshold public-key decryption, along with an enhanced security feature called context-dependent decryption.
Our study includes definitions, constructions, security proofs, and applications.
The notion of high-threshold decryption has received almost no attention in the literature. The enhanced security feature of context-dependent decryption is entirely new, and plays an important role in many natural applications of threshold decryption.
An Efficient Variant of F4 Algorithm for Solving MQ Problem
In this paper, we propose an enhanced variant of the F4 algorithm specifically designed for efficiently solving multivariate quadratic (MQ) problems, which are central to many post-quantum cryptographic schemes. Our approach overcomes a major inefficiency of conventional F4 by integrating a Hilbert-driven strategy that determines the optimal number of S-polynomials to generate at each degree, thereby reducing unnecessary zero reductions. We further introduce refined pair selection techniques that prioritize candidates yielding S-polynomials with smaller leading terms, which in turn minimizes the dimensions of intermediate matrices used during reduction. Experimental results show that our implementation outperforms state-of-the-art systems such as M4GB and Magma's F4 in both single-core and multi-core environments. Notably, our method sets new records in the Fukuoka MQ Challenge for Type VI problems over $\mathbb{F}_{31}$ with $m = 21$ and $m = 22$, demonstrating the robustness and practical impact of our approach in solving highly challenging MQ instances.
Somewhat Homomorphic Encryption from Linear Homomorphism and Sparse LPN
We construct somewhat homomorphic encryption from the sparse learning-parities-with-noise problem, along with an assumption that implies linearly homomorphic encryption (e.g., the decisional Diffie-Hellman or decisional composite residuosity assumptions). Our resulting schemes support an a-priori bounded number of homomorphic operations: $O(\log \lambda / \log \log \lambda)$ multiplications followed by poly($\lambda$) additions, where $\lambda$ is a security parameter. These schemes have compact ciphertexts: before and after homomorphic evaluation, the bit length of each ciphertext is a fixed polynomial in the security parameter $\lambda$, independent of the number of homomorphic operations that the scheme supports. This gives the first constructions of somewhat homomorphic encryption that can evaluate the class of bounded-degree polynomials without relying on lattice assumptions or bilinear maps.
Our new encryption schemes are conceptually simple: much as in Gentry, Sahai, and Waters’ fully homomorphic encryption scheme, ciphertexts in our scheme are matrices, homomorphic addition is matrix addition, and homomorphic multiplication is matrix multiplication. Moreover, when encrypting many messages at once and performing many homomorphic evaluations at once, the bit length of the ciphertexts in (some of) our schemes can be made arbitrarily close to the bit length of the plaintexts. The main limitation of our schemes is that they require a large evaluation key, whose size scales with the complexity of the homomorphic computation performed, though this key can be re-used across any polynomial number of encryptions and evaluations. Our construction builds on recent work of Dao, Ishai, Jain, and Lin, who construct a homomorphic secret-sharing scheme from the sparse-LPN assumption.
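The matrix view can be made concrete with a deliberately insecure, noise-free toy (our illustration, not the scheme itself: the real construction adds sparse-LPN noise and the large evaluation key). The secret key is an eigenvector of every ciphertext matrix, with the message as eigenvalue, so matrix addition and multiplication add and multiply messages:

```python
# INSECURE toy (no noise) of the GSW-style matrix view: C encrypts m iff
# C . s = m . s for the secret vector s. All names here are illustrative.
import random

P, N = 257, 4
s = [random.randrange(P) for _ in range(N - 1)] + [1]  # secret key, last entry 1

def encrypt(m):
    C = []
    for i in range(N):
        row = [random.randrange(P) for _ in range(N - 1)]
        row.append((-sum(r * si for r, si in zip(row, s[:-1]))) % P)  # row . s = 0
        row[i] = (row[i] + m) % P                                     # add m * I
        C.append(row)
    return C

def decrypt(C):
    # (C s) at the last index equals m * s[-1] = m, since s[-1] = 1.
    return sum(C[-1][j] * s[j] for j in range(N)) % P

def add(C1, C2):   # homomorphic addition = matrix addition
    return [[(a + b) % P for a, b in zip(r1, r2)] for r1, r2 in zip(C1, C2)]

def mul(C1, C2):   # homomorphic multiplication = matrix multiplication
    return [[sum(C1[i][k] * C2[k][j] for k in range(N)) % P for j in range(N)]
            for i in range(N)]

c1, c2 = encrypt(3), encrypt(5)
assert decrypt(add(c1, c2)) == 8 and decrypt(mul(c1, c2)) == 15
```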
A New Generalized Attack on RSA-like Cryptosystems
Rivest, Shamir, and Adleman published the RSA cryptosystem in 1978, which has been widely used over the last four decades. The security of RSA is based on the difficulty of factoring large integers $N = pq$, where $p$ and $q$ are prime numbers. The public exponent $e$ and the private exponent $d$ are related by the equation $ed - k(p-1)(q-1) = 1$. Recently, Cotan and Teşeleanu (NordSec 2023) introduced a variant of RSA, where the public exponent $e$ and the private exponent $d$ satisfy the equation $ed - k(p^n-1)(q^n-1) = 1$ for some positive integer $n$. In this paper, we study the general equation $eu - (p^n - 1)(q^n - 1)v = w$ with positive integers $u$ and $v$, and $w\in \mathbb{Z}$. We show that, given the public parameters $N$ and $e$, one can recover $u$ and $v$ and factor the modulus $N$ in polynomial time by combining continued fractions with Coppersmith's algorithm, which relies on lattice reduction techniques, under specific conditions on $u$, $v$, and $w$. Furthermore, we show that if the private exponent $d$ in an RSA-like cryptosystem is either too small or too large, then $N$ can be factored in polynomial time. This attack applies to the standard RSA cryptosystem.
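For reference, the continued-fraction stage in the classical setting ($n = 1$, small $d$, i.e. Wiener's attack) can be sketched as follows; the Coppersmith/lattice stage of the generalized attack is not reproduced, and the helper names are ours:

```python
# Continued-fraction key recovery for standard RSA with small d (Wiener's
# attack). Candidates k/d are read off the convergents of e/N.
from math import isqrt

def convergents(a, b):
    """Yield the continued-fraction convergents h/k of a/b."""
    h0, h1, k0, k1 = 0, 1, 1, 0
    while b:
        q, (a, b) = a // b, (b, a % b)
        h0, h1 = h1, q * h1 + h0
        k0, k1 = k1, q * k1 + k0
        yield h1, k1

def wiener(e, N):
    for k, d in convergents(e, N):
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k
        s = N - phi + 1                       # p + q if the guess is right
        disc = s * s - 4 * N
        if disc >= 0 and isqrt(disc) ** 2 == disc:
            return d                          # recovered private exponent
    return None

assert wiener(17993, 90581) == 5              # textbook example: N = 379 * 239
```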
KZH-Fold: Accountable Voting from Sublinear Accumulation
Accumulation schemes are powerful primitives that enable distributed and incremental verifiable computation with less overhead than recursive SNARKs. However, most existing schemes with constant-size accumulation verifiers suffer from linear-sized accumulators and deciders, leading to linear-sized proofs that are unsuitable in distributed settings such as accountable voting protocols. Our contributions are as follows:
1) We introduce KZH, a novel multilinear polynomial commitment scheme (PCS) with sublinear opening, and KZH-fold, a polynomial accumulation (PA) scheme in which the verifier performs only $3$ group scalar multiplications while the accumulator size and decider time are $O(n^{1/2})$. Our scheme generalizes to achieve accumulator and decider complexity of $O(k \cdot n^{1/k})$ with verifier complexity $O(k)$.
2) As an orthogonal contribution to KZH-fold, we build an IVC (PCD) scheme for R1CS via Spartan+PA; instantiated with KZH-fold, i.e. Spartan+KZH-fold, it results in a sublinear proof and decider. With the recipe of Spartan+PA, we build non-uniform IVC and non-uniform PCD. Our non-uniform PCD is the first approach in which the prover's computation and communication at each step grow sublinearly with the combined circuit size of all instructions. This approach can be instantiated with any PA and doesn't depend on KZH-fold.
3) We demonstrate the power of Spartan+KZH-fold by implementing an accountable voting scheme using a novel signature aggregation protocol supporting millions of participants, significantly reducing communication overhead and verifier time compared to BLS-based aggregation. We implemented and benchmarked our protocols: Spartan+KZH-fold achieves a 2000x reduction in communication and a 50x improvement in decider time over Nova when proving 2000 Poseidon hashes, at the cost of 3x the prover time.
Shaking up authenticated encryption
Authenticated encryption (AE) is a cryptographic mechanism that allows communicating parties to protect the confidentiality and integrity of messages exchanged over a public channel, provided they share a secret key. In this work, we present new AE schemes leveraging the SHA-3 standard functions SHAKE128 and SHAKE256, offering 128 and 256 bits of security strength, respectively, and their “Turbo” counterparts. They support session-based communication, where a ciphertext authenticates the sequence of messages since the start of the session. The chaining in the session allows decryption in segments, avoiding the need to buffer the entire deciphered cryptogram between decryption and validation. And, thanks to the collision resistance of (Turbo)SHAKE, they provide so-called CMT-4 committing security, meaning that they provide strong guarantees that a ciphertext uniquely binds to the key, plaintext and associated data. The AE schemes we propose have a unique combination of advantages. The most important are that 1) their security is based on the security claim of SHAKE, which has received a large amount of public scrutiny, that 2) they make use of the standard Keccak-$p$ permutation that not only receives more and more dedicated hardware support, but also allows competitive software-only implementations thanks to the TurboSHAKE instances, and that 3) they do not suffer from a 64-bit birthday bound like most AES-based schemes. Of independent interest, we introduce the deck cipher as the stateful counterpart of the deck function and the duplex cipher generalizing keyed duplex and harmonize their security notions. Finally, we provide an elegant solution for multi-layer domain separation.
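To make the session idea concrete, here is a deliberately simplified toy, not the proposed schemes (which are built from duplex/deck constructions): it shows how SHAKE128 from Python's hashlib can derive each segment's keystream and tag from the whole session transcript, so that every ciphertext authenticates the sequence of messages so far.

```python
# Toy session-AE sketch (illustrative only, no security claim): keystream and
# tag for segment i depend on the key and the transcript of segments 1..i-1.
import hashlib

def wrap_session(key: bytes, segments):
    transcript, out = b"", []
    for ad, msg in segments:
        ks = hashlib.shake_128(key + transcript + ad).digest(len(msg))
        ct = bytes(m ^ k for m, k in zip(msg, ks))
        tag = hashlib.shake_128(key + transcript + ad + ct).digest(16)
        transcript += ad + ct + tag          # chain the session state forward
        out.append((ct, tag))
    return out

out = wrap_session(b"k" * 16, [(b"hdr1", b"first message"), (b"hdr2", b"second")])
```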
Asymptotically Optimal Early Termination for Dishonest Majority Broadcast
Deterministic broadcast protocols among $n$ parties tolerating $t$ corruptions require $\min\{f+2, t+1\}$ rounds, where $f \le t$ is the actual number of corruptions in an execution of the protocol. We provide the first protocol which is optimally resilient, adaptively secure, and asymptotically matches this lower bound for any $t<(1-\varepsilon)n$. By contrast, the best known algorithm in this regime by Loss and Nielsen (EUROCRYPT'24) always requires $O(\min\{f^2, t\})$ rounds. Our main technical tool is a generalization of the notion of polarizer introduced by Loss and Nielsen, which allows parties to obtain transferable cryptographic evidence of missing messages with fewer rounds of interaction.
Adversary Resilient Learned Bloom Filters
A learned Bloom filter (LBF) combines a classical Bloom filter (CBF) with a learning model to reduce the amount of memory needed to represent a given set while achieving a target false positive rate (FPR). Provable security against adaptive adversaries that deliberately attempt to increase FPR has been studied for CBFs. However, achieving adaptive security for LBFs is an open problem. In this paper, we close this gap and show how to achieve adaptive security for LBFs. In particular, we define several adaptive security notions capturing varying degrees of adversarial control, including full and partial adaptivity, in addition to LBF extensions of existing adversarial models for CBFs, including the Always-Bet and Bet-or-Pass notions. We propose two secure LBF constructions, \bloomname{} and \betpassbloomname{}, and formally prove their security under these models, assuming the existence of one-way functions. Based on our analysis and use case evaluations, our constructions achieve strong security guarantees while maintaining competitive FPR and memory overhead.
Related-Key Cryptanalysis of FUTURE
At Africacrypt 2022, Gupta et al. introduced FUTURE, a 64-bit lightweight block cipher based on an MDS matrix and designed in an SPN structure, with a focus on achieving single-cycle encryption and low implementation cost, especially in unrolled architectures. While the designers evaluated its security under various attack models, they did not consider related-key cryptanalysis. In this work, we address this gap by analyzing the security of FUTURE in the related-key setting using techniques based on Mixed Integer Linear Programming (MILP). We first propose a simplified and generalizable approach for applying MILP to model any MDS or near-MDS-based cipher that follows the substitution-permutation paradigm. Using our MILP framework, we construct an 8-round related-key distinguisher on FUTURE, requiring $2^{58}$ plaintexts, $2^{58}$ XOR operations, and negligible memory. We further identify a full-round (i.e., 10 rounds) boomerang distinguisher with a probability of $2^{-45.8}$, enabling a distinguishing attack with $2^{48}$ data and time complexity. In addition, we develop a full-round key recovery attack on FUTURE with data, time, and memory complexities of $2^{18}$, $2^{70}$, and $2^{64}$, respectively. Although all known single-key attacks remain impractical (with time complexities of at least $2^{112}$), our results demonstrate a full-round cryptanalysis of FUTURE in the related-key setting, thereby challenging its claimed security guarantees.
Efficient Permutation Correlations and Batched Random Access for Two-Party Computation
In this work we formalize the notion of a two-party permutation correlation $(A, B), (C, \pi)$ s.t. $\pi(A)=B+C$ for a random permutation $\pi$ of $n$ elements and vectors $A,B,C\in \mathbb{F}^n$. This correlation can be viewed as an abstraction and generalization of the Chase et al. (Asiacrypt 2020) share translation protocol. We give a systematization of knowledge for how such a permutation correlation can be derandomized to allow the parties to perform a wide range of oblivious permutations of secret-shared data. This systematization immediately enables the translation of various popular honest-majority protocols to be efficiently instantiated in the two-party setting, e.g. collaborative filtering, sorting, database joins, graph algorithms, and many more.
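As a concrete illustration of the derandomization idea (a minimal sketch with our own notation and an insecure RNG, not the paper's protocol), a random correlation $\pi(A) = B + C$ lets the parties apply $\pi$ to an additively shared vector with a single masked message:

```python
# Using a permutation correlation pi(A) = B + C, with pi(V)[i] := V[pi[i]],
# to obliviously apply party 2's permutation pi to shares of x = x1 + x2 mod P.
import random

P, n = 257, 8
pi = list(range(n)); random.shuffle(pi)                  # known only to party 2
A = [random.randrange(P) for _ in range(n)]              # party 1
B = [random.randrange(P) for _ in range(n)]              # party 1
C = [(A[pi[i]] - B[i]) % P for i in range(n)]            # party 2: pi(A) = B + C

x  = [random.randrange(P) for _ in range(n)]             # logical shared vector
x1 = [random.randrange(P) for _ in range(n)]             # party 1's share
x2 = [(x[i] - x1[i]) % P for i in range(n)]              # party 2's share

d  = [(x1[i] - A[i]) % P for i in range(n)]              # single message 1 -> 2
y2 = [(d[pi[i]] + C[i] + x2[pi[i]]) % P for i in range(n)]
y1 = B                                                   # party 1's output share

assert all((y1[i] + y2[i]) % P == x[pi[i]] for i in range(n))
```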
We give two novel protocols for efficiently generating a random permutation correlation. The first uses MPC-friendly PRFs to generate a correlation of $n$ elements, each of size $\ell=\log|\mathbb{F}|$ bits, with $O(n\ell)$ bit-OTs, time, communication, and only 3 rounds including setup. Similar asymptotics previously required relatively expensive public-key cryptography, e.g. Paillier or LWE. Our protocol implementation for $n=2^{20},\ell=128$ requires just 7 seconds & $\sim2\ell n$ OTs for communication, respective $40\times$ & $1.1\times$ improvements on the LWE solution of Juvekar et al. (CCS 2018). The second protocol is based on pseudo-random correlation generators and achieves an overhead that is sublinear in the string length $\ell$, i.e. the communication and number of OTs is $O(n\log \ell)$. The overhead of the latter protocol has larger hidden constants, and it is therefore more efficient only when long strings are permuted, e.g. in graph algorithms.
Finally, we present a suite of highly efficient protocols based on permutations for performing various batched random access operations. These include the ability to extract a hidden subset of a secret-shared list. More generally, we give ORAM-like protocols for obliviously reading and writing from a list in a batched manner. We argue that this suite of batched random access protocols should be a first class primitive in the MPC practitioner's toolbox.
Signatures with Tight Adaptive Corruptions from Search Assumptions
We construct the \emph{first} tightly secure signature schemes in the multi-user setting with adaptive corruptions from static search assumptions, such as classical discrete logarithm, RSA, factoring, or post-quantum group action discrete logarithm assumptions. In contrast to our scheme, the previous tightly secure schemes are based on decisional assumptions (e.g., (group action) DDH) or interactive search assumptions (e.g., one-more CDH). The security of our schemes is independent of the number of users, signing queries, and random oracle queries, and forging our signatures is as hard as solving the underlying static search problems. Our signature schemes are based on an identification scheme with multiple secret keys per public key and ``second-key recovery resistance,'' i.e., the difficulty of finding another secret key for a given public and secret key pair (e.g., Okamoto identification (CRYPTO'92) and Parallel-OR identification (CRYPTO'94)). These properties allow a reduction to solve a search problem while answering signing and corruption queries for all users in the signature security game. To convert such an identification scheme into a signature scheme tightly, we employ the randomized Fischlin transformation introduced by Kondi and shelat (Asiacrypt 2022), which provides straight-line extraction. Klooss et al. (Asiacrypt 2024) showed that the randomized Fischlin transformation satisfies the zero-knowledge property in the programmable ROM if an exponential-size challenge space is used. This fact intuitively implies that the transformation with a large challenge space guarantees the tight security of our signature scheme in the programmable random oracle model, but we successfully prove its tight security in the \emph{non-programmable} random oracle model \emph{without enlarging the challenge size}. As a side contribution, we also reconsider the zero-knowledge property of the randomized Fischlin transformation, and show that the transformation with a polynomial-size challenge space is zero-knowledge if the underlying Sigma protocol satisfies certain properties.
FLUENT: A Tool for Efficient Mixed-Protocol Semi-Private Function Evaluation
In modern business-to-customer interactions, handling private or confidential data is essential. Private Function Evaluation (PFE) protocols ensure the privacy of both the customers' input data and the business's function evaluated on it, which is often sensitive intellectual property (IP). However, fully hiding the function in PFE results in high performance overhead. Semi-Private Function Evaluation (SPFE) is a generalization of PFE that only partially hides the function, while specific non-critical components remain public. Our paper introduces a novel framework designed to make SPFE accessible to non-experts and practical for real-world deployments.
To achieve this, we improve on previous SPFE solutions in two aspects.
First, we enhance the developer experience by leveraging High-Level Synthesis (HLS), making our tool more user-friendly than previous SPFE frameworks.
Second, we achieve a \(2 \times\) speedup compared to the previous state-of-the-art through more efficient underlying constructions and the usage of Lookup Tables (LUTs).
We evaluate the performance of our framework in terms of communication and runtime efficiency. Our final implementation is available as an open-source project, aiming to bridge the gap between advanced cryptographic protocols and their practical application in industry scenarios.
Improved Subfield Curve Search For Specific Field Characteristics
Isogeny-based cryptography bases its security on the hardness of the supersingular isogeny problem: finding an isogeny between two supersingular curves defined over a quadratic extension of a prime field.
The Delfs-Galbraith algorithm is the most efficient procedure for solving the supersingular isogeny problem with a time complexity of $\tilde{\mathcal{O}}(p^{1/2})$ operations. The bottleneck of the Delfs-Galbraith algorithm is the so-called subfield curve search (i.e., finding an isogenous supersingular elliptic curve defined over the prime field), which determines the time complexity of the aforementioned algorithm.
For efficiency, most recent isogeny-based constructions propose using finite fields with field characteristic equal to $p = 2^a \cdot f - 1$ for some positive integers $a$ and $f$.
This work focuses on primes of that particular form, and presents two new algorithms for finding subfield curves with a time complexity of $\mathcal{O}(p^{1/2})$ operations and a memory complexity polynomial in $\log_2{p}$. Such algorithms exploit the existence of large $2^a$-torsion points and extend the subfield root detection algorithm of Corte-Real Santos, Costello, and Shi (Crypto, 2022) to a projective scenario where one can work over the curve coefficients instead of the explicit j-invariants of the curves.
These algorithms easily extend to primes of the form $p =2^a \cdot f + 1$ and $p = \ell^a \cdot f - 1$ with $\ell$ being a small integer.
This study also examines the usage of radical $3$-isogenies with the proposed extended subfield root detection algorithm. In this context, the results indicate that the radical $3$-isogeny approach is competitive compared to the state-of-the-art algorithms.
How Fast Does the Inverse Walk Approximate a Random Permutation?
For a finite field $\mathbb{F}$ of size $n$, the (patched) inverse permutation $\operatorname{INV}: \mathbb{F} \to \mathbb{F}$ computes the inverse of $x$ over $\mathbb{F}$ when $x\neq 0$ and outputs $0$ when $x=0$, and the $\operatorname{ARK}_K$ (AddRoundKey) permutation adds a fixed constant $K$ to its input, i.e.,
$$\operatorname{INV}(x) = x^{n-2} \hspace{.1in} \mbox{and} \hspace{.1in}
\operatorname{ARK}_K(x) = x + K \;.$$
We study the process of alternately applying the $\operatorname{INV}$ permutation and the $\operatorname{ARK}_K$ permutation with a uniformly random key $K$, which is a random walk over the alternating (or symmetric) group that we call the {\em inverse walk}.
We show both lower and upper bounds on the number of rounds it takes for this process to approximate a random permutation over $\mathbb{F}$. We show that the inverse walk over the field of size $n$ with $$r = \Theta\left(n\log n + n\log \frac{1}{\epsilon}\right)$$ rounds generates a permutation that is $\epsilon$-close (in variation distance) to a uniformly random even permutation (i.e. a permutation from the alternating group $A_{n}$). Our result comes with explicit constants, and is tight up to logarithmic factors.
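A toy implementation of the walk itself (our sketch; a small prime field stands in for the generic field of size $n$):

```python
# r rounds of the inverse walk over F_p: the patched inverse followed by
# addition of a fresh uniformly random round key.
import random

p = 101  # toy field size (plays the role of n in the abstract)

def inverse_walk(x, r):
    for _ in range(r):
        x = pow(x, p - 2, p)               # INV: x^(p-2) = x^(-1), with 0 -> 0
        x = (x + random.randrange(p)) % p  # ARK_K with a random key K
    return x
```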
Our result answers an open question from the work of Liu, Pelecanos, Tessaro, and Vaikuntanathan (CRYPTO 2023) by proving the $t$-wise independence of (a variant of) AES for $t$ up to the square root of the field size, compared to the original result that only held for $t=2$. It also constitutes a significant improvement on a result of Carlitz (Proc. American Mathematical Society, 1953) who showed a {\em reachability result}: namely, that every even permutation can be generated {\em eventually} by composing $\operatorname{INV}$ and $\operatorname{ARK}$. We show a {\em tight convergence result}, namely a tight quantitative bound on the number of rounds to reach a random (even) permutation.
Our work brings to the forefront the view of block ciphers as random walks and uses an entirely new set of (functional analytic) tools to study their pseudorandomness, both of which we hope will prove useful in the study of block ciphers.
Quantum Periodic Distinguisher Construction: Symbolization Method and Automated Tool
As one of the famous quantum algorithms, Simon's algorithm enables the efficient derivation of the period of periodic functions in polynomial time. However, the complexity of constructing periodic functions has hindered the widespread application of Simon's algorithm in symmetric-key cryptanalysis. Currently, aside from the exhaustive search-based testing method introduced by Canale et al. at CRYPTO 2022, there is no unified model for effectively searching for periodic distinguishers. Although Xiang et al. established a link between periodic functions and truncated differential theory at ToSC 2024, their approach lacks the ability to construct periods using unknown differentials and does not provide automated tools. This limitation underscores the inadequacy of existing methods in identifying periodic distinguishers for complex structures. In this paper, we address the challenge of advancing periodic distinguishers for symmetric-key ciphers. First, we propose a more generalized method for constructing periodic distinguishers, addressing the limitations of Xiang et al.'s theory in handling unknown differences. We further extend our method to probabilistic periodic distinguishers, thereby extending the separability property proposed by Hodžić et al. in 2020. As a result, our method can cover a wider range of periodic distinguishers. Second, we introduce a novel symbolic representation to simplify the search of periodic distinguishers. Based on this representation, we propose the first fully automated SMT-based search model, which efficiently addresses the challenges of manual searching in complex structures. Finally, we extend the model to SPN structures based on our new method. Our model has broad applicability through significant advancements in analyzing generalized Feistel structures (GFSs) and SPN-based ciphers. We have achieved new quantum distinguishers with the following round configurations: 10 rounds for GFS-4F, 10 rounds for LBlock, 10 rounds for TWINE, and 16 rounds for Skipjack-B, improving the previous best results by 1, 2, 2, and 3 rounds, respectively. In the domain of SPN-based ciphers, our model has enabled the identification of novel periodic distinguishers, including the first 9-round distinguisher for SKINNY and the first 12-round distinguisher for CRAFT. These achievements lay the foundation for quantum cryptanalysis of SPN-based ciphers using Simon’s algorithm.
DeepFold: Efficient Multilinear Polynomial Commitment from Reed-Solomon Code and Its Application to Zero-knowledge Proofs
This work presents Deepfold, a novel multilinear polynomial commitment scheme (PCS) based on Reed-Solomon code that offers optimal prover time and a more concise proof size. For the first time, Deepfold adapts the FRI-based multilinear PCS to the list decoding radius setting, requiring significantly fewer query repetitions and thereby achieving a 3$\times$ reduction in proof size compared to Basefold (Crypto'24), while preserving its advantages in prover time. Compared with PolyFRIM (USENIX Security'24), Deepfold achieves a 2$\times$ improvement in prover time, verifier time, and proof size. Another contribution of this work is a batch evaluation scheme, which enables the FRI-based multilinear PCS to handle polynomials whose size is not a power of two more efficiently.
Our scheme has broad applications in zk-SNARKs, since PCS is a key component in modern zk-SNARK constructions. For example, when replacing the PCS component of Virgo (S&P'20) with Deepfold, our scheme achieves a 2.5$\times$ faster prover time when proving the knowledge of a Merkle tree with 256 leaves, while maintaining a similar proof size. When replacing the PCS component of HyperPlonk (Eurocrypt'23) with Deepfold, our scheme has about 3.6$\times$ faster prover time. Additionally, when applying our arbitrary-length input commitment to verifiable matrix multiplications for matrices of size 1200$\times$768 and 768$\times$2304, which are actual use cases in the GPT-2 model, the performance showcases a 2.4$\times$ reduction in prover time compared to previous approaches.
Critical Rounds in Multi-Round Proofs: Proof of Partial Knowledge, Trapdoor Commitments, and Advanced Signatures
Zero-knowledge simulators and witness extractors, initially developed for proving the security of proof systems, turned out to be also useful in constructing advanced protocols from simple three-move interactive proofs. However, in the context of multi-round public-coin protocols, the interfaces of these auxiliary algorithms become more complex, introducing a range of technical challenges that hinder the generalization of these constructions.
We introduce a framework to enhance the usability of zero-knowledge simulators and witness extractors in multi-round argument systems for protocol design. Critical-round zero-knowledge relies on the ability to perform complete zero-knowledge simulations by knowing the challenge of just one specific round in advance. Critical-round special soundness addresses a stringent condition for witness extraction by formalizing it to operate with a smaller tree of transcripts than the one needed for extended extraction, which either outputs the intended witness or solves the underlying hard problem in an argument system. We show that these notions are satisfied by diverse protocols based on MPC-in-the-Head, interactive oracle proofs, and split-and-fold arguments.
We demonstrate the usefulness of the critical round framework by constructing proofs of partial knowledge (Cramer, Damgård, and Schoenmakers, CRYPTO'94) and trapdoor commitments (Damgård, CRYPTO'89) from critical-round multi-round proofs. Furthermore, our results imply advancements in post-quantum secure adaptor signatures and threshold ring signatures based on MPC-in-the-Head, eliminating the need for (costly) generic NP reductions.
EndGame: Field-Agnostic Succinct Blockchain with Arc
We present EndGame, a novel blockchain architecture that achieves succinctness through Reed-Solomon accumulation schemes. Our construction enables constant-time verification of blockchain state while maintaining strong security properties. We demonstrate how to efficiently encode blockchain state transitions using Reed-Solomon codes and accumulate proofs of state validity using the ARC framework. Our protocol achieves optimal light client verification costs and supports efficient state management without trusted setup.
Secure Rate-Distortion-Perception Trade-off Over Channels: A Randomized Distributed Function Computation (RDFC) Application
Secure rate-distortion-perception (RDP) trade-offs arise in critical applications, such as semantic compression and privacy-preserving generative coding, where preserving perceptual quality while minimizing distortion is vital. This paper studies a framework for secure RDP over noiseless and noisy broadcast channels under strong secrecy constraints. We first characterize the exact secure RDP region for noiseless transmission channels. We then develop an inner bound on the secure RDP region for a memoryless broadcast channel with correlated noise components at the receivers' observations and prove its tightness under a more capable broadcast channel assumption. Our results demonstrate how optimized binning schemes simultaneously achieve high perceptual quality, low distortion, and strong secrecy, illuminating fundamental information-theoretic limits for next-generation trustworthy computation systems.
Zeroed Out: Cryptanalysis of Weak PRFs in Alternating Moduli
The growing adoption of secure multi-party computation (MPC) has driven the development of efficient symmetric key primitives tailored for MPC. Recent advances, such as the alternating moduli paradigm, have shown promise but leave room for cryptographic and practical improvements. In this paper, we analyze a family of weak pseudorandom functions (wPRF) proposed at Crypto 2024, focusing on their One-to-One parameter sets. We demonstrate that these configurations fail to achieve their intended one-to-one mappings and exploit this observation to develop an efficient key recovery attack.
Our analysis reveals critical vulnerabilities, reducing the complexity of key recovery to $\mathcal{O}(2^{\lambda/2} \log_2 \lambda)$ for the Standard One-to-One wPRF and $\mathcal{O}(2^{0.84\lambda})$ for the Reversed Moduli variant -- both substantially below their claimed $\lambda$-bit security. We validate our findings through experimental evaluation, confirming alignment between predicted and observed attack complexities.
A Framework for Group Action-Based Multi-Signatures and Applications to LESS, MEDS, and ALTEQ
A multi-signature scheme allows a list of signers to sign a common message. They are widely used in scenarios where the same message must be signed and transmitted by $N$ users, and, instead of concatenating $N$ individual signatures, employing a multi-signature can reduce the data to be sent.
In recent years there have been numerous practical proposals in the discrete logarithm setting, such as MuSig2 (CRYPTO'21) for the Schnorr signature. Recently, these attempts have been extended to post-quantum assumptions, with lattice-based proposals such as MuSig-L (CRYPTO'22).
Given the growth of group action-based signatures, a natural question is whether a multi-signature can be built on the same models. In this work, we present the first construction of such a primitive relying on group action assumptions. We obtain a 3-round scheme achieving concurrent security in the ROM.
Moreover, we instantiate it using the three candidates in NIST's additional post-quantum call, namely LESS, MEDS, and ALTEQ, obtaining good compression rates for different parameter sets.
Scrutinizing the Security of AES-based Hashing and One-way Functions
AES has cemented its position as the primary symmetric-key primitive for a wide range of cryptographic applications, which motivates the analysis of the concrete security of AES in practical instantiations, for instance, the collision resistance of AES-based hashing, the key commitment security of AES-based authenticated encryption schemes, and the one-wayness of AES-based one-way functions in ZK and MPC protocols. In this work, we introduce single-color initial structures (SCIS) into meet-in-the-middle (MITM) attacks to address neutral word generation, a critical bottleneck in MITM collision attacks. The SCIS technique leverages new structural insights to enable efficient neutral word generation and leads to multiple improved results on AES compared to the state-of-the-art. In particular, we present the first classical one-block collision attack on 7-round AES-MMO/MP, marking the first advancement in the number of attacked rounds in over a decade and matching the best-known results in the quantum setting, as well as the first one-block collision attack on 4-round AES-128-DM, bridging the gap highlighted by Taiyama et al. at Asiacrypt 2024 from a non-differential-based approach. Additionally, we provide a comprehensive list of new results on the security margins of AES-192, AES-256, Rijndael-192, and Rijndael-256 in multiple attack settings.
CTng: Secure Certificate and Revocation Transparency
We present CTng, an evolutionary and practical PKI design that efficiently addresses multiple key challenges faced by deployed PKI systems. CTng ensures strong security properties, including guaranteed transparency of certificates and guaranteed, unequivocal revocation, achieved under NTTP-security, i.e., without requiring trust in any single CA, logger, or relying party. These guarantees hold even in the presence of arbitrary corruptions of these entities, assuming only a known bound (f) of corrupt monitors (e.g., f=8), with minimal performance impact.
CTng also enables offline certificate validation and preserves relying-party privacy, while providing scalable and efficient distribution of revocation updates. Furthermore, CTng is post-quantum ready, maintaining efficiency even with high-overhead quantum-secure signature schemes.
These properties significantly improve upon current PKI designs. In particular, while Certificate Transparency (CT) aims to eliminate single points of trust, the existing specification still assumes benign loggers. Addressing this through log redundancy is possible, but rather inefficient, limiting deployed configurations to f ≤ 2.
We present a security analysis and an evaluation of our open-source CTng prototype, showing that it is efficient and scalable under realistic deployment conditions.
Towards ML-KEM & ML-DSA on OpenTitan
This paper presents extensions to the OpenTitan hardware root of trust that aim at enabling high-performance lattice-based cryptography.
We start by carefully optimizing ML-KEM and ML-DSA --- the two algorithms primarily recommended and standardized by NIST --- in software targeting the OTBN accelerator.
Based on profiling results of these implementations, we propose tightly integrated extensions to OTBN, specifically an interface from OTBN to OpenTitan's Keccak accelerator (KMAC core) and extensions to the OTBN ISA to support operations on 256-bit vectors.
We implement these extensions in hardware and show that we achieve a speedup by a factor between 6 and 9 for different operations and parameter sets of ML-KEM and ML-DSA compared to our baseline implementation on unmodified OTBN.
This speedup is achieved with an increase in cell count of less than 17% in OTBN, which corresponds to an increase of less than 3% for the full Earl Grey OpenTitan core.
An Optimized Instantiation of Post-Quantum MQTT protocol on 8-bit AVR Sensor Nodes
Since the selection of the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography (PQC) standardization algorithms, research on integrating PQC into security protocols such as TLS/SSL, IPSec, and DNSSEC has been actively pursued. However, PQC migration for Internet of Things (IoT) communication protocols remains largely unexplored. Embedded devices in IoT environments have limited computational power and memory, making it crucial to optimize PQC algorithms for efficient computation and minimal memory usage when deploying them on low-spec IoT devices. In this paper, we introduce KEM-MQTT, a lightweight and efficient Key Encapsulation Mechanism (KEM) for the Message Queuing Telemetry Transport (MQTT) protocol, widely used in IoT environments. Our approach applies the NIST KEM algorithm CRYSTALS-Kyber (Kyber) while leveraging MQTT's characteristics and sensor node constraints. To enhance efficiency, we address certificate verification issues and adopt KEMTLS to eliminate the need for post-quantum digital signature algorithms (PQC-DSA) in mutual authentication. As a result, KEM-MQTT retains its lightweight properties while maintaining the security guarantees of TLS 1.3. We identify inefficiencies in existing Kyber implementations on 8-bit AVR microcontrollers (MCUs), which are highly resource-constrained. To address this, we propose novel implementation techniques that optimize Kyber for AVR, focusing on high-speed execution, reduced memory consumption, and secure implementation, including Signed LookUp-Table (LUT) Reduction. Our optimized Kyber achieves performance gains of 81%, 75%, and 85% in the KeyGen, Encaps, and Decaps processes, respectively, compared to the reference implementation. With approximately 3 KB of stack usage, our Kyber implementation surpasses all state-of-the-art Elliptic Curve Diffie-Hellman (ECDH) implementations. Finally, in KEM-MQTT using Kyber-512, an 8-bit AVR device completes the handshake preparation process in 4.32 seconds, excluding the physical transmission and reception times.
A Round-Optimal Near-Linear Third-Party Private Set Intersection Protocol
Third-party private set intersection (PSI) enables two parties, each holding a private set, to compute their intersection and reveal the result only to an inputless third party. In this paper, we present an efficient round-optimal third-party PSI protocol. Our work is motivated by real-world applications such as contact tracing, where speed is essential while concurrently preserving privacy. Our construction only requires $2$ communication rounds and attains a near-linear computational complexity of $O(n^{1+\varepsilon})$ for large dataset size $n$, where $\varepsilon>0$ is any fixed constant. Our improvements stem from algorithmic changes and the incorporation of new techniques along with precise parameter selections to achieve a tight asymptotic bound. Furthermore, we also present a third-party PSI cardinality protocol, which has not been explored in prior third-party PSI work. In the third-party PSI cardinality setting, only the third party obtains the size of the intersection and nothing else. Our construction for the cardinality functionality attains quasilinear computational complexity for the third party.
RAMenPaSTA: Parallelizable Scalable Transparent Arguments of Knowledge for RAM Programs
Recent advances in Zero-Knowledge Proof and Argument (ZKP/ZKA) Systems allow efficiently verifiable computation in the circuit-based model (where programs can be represented as Boolean or arithmetic circuits). However, the circuit-based model is not widely popular due to its unsuitability for big programs (as the circuit size is linear in the program's size). Hence, the research community began looking into the Random-Access Machine model of programs, namely RAM programs, which is better suited for general-purpose programming. Consequently, a flurry of work has sought to construct ZKP protocols for the correct execution of RAM programs. In this paper, we also develop ZKP/ZKA for RAM programs, with a particular focus on two properties: (i) parallelizability, which significantly reduces prover (and verifier) computation, and (ii) genericity, which lets the construction require minimal assumptions and be instantiated with any existing ZKP protocol. To our knowledge, previous ZKP constructions for RAM programs either (i) are not known to support proving or verifying in parallel, (ii) require making non-black-box use of specific SNARK constructions, or (iii) even require proving the computation of a random oracle.
To this end, we propose the so-called Conditional Folding Scheme (CFS) as a building block to construct ZKP/ZKA for RAM programs. We provide a non-trivial practical construction for our CFS that is natively parallelizable, which significantly brings the runtime of proof generation down to $\mathcal{O}(W \log N)$ (compared to $\Omega(W N)$ in previous works), where $W$ is the witness size of the heaviest operation, and $N$ is the number of execution steps. We rigorously prove the security of our CFS (also for the case of folding in parallel). Our scheme can be made non-interactive using the Fiat-Shamir transform. Our CFS is generic and does not require a trusted setup (yielding a ``transparent'' argument). It can be adapted to construct Parallelizable Scalable Transparent Arguments of Knowledge for RAM Programs that we dub RAMenPaSTA.
USpt: Updatable Signature with Public Tokens
The Updatable Signature (US) allows valid signatures to be updated by an update token without accessing the newly generated signing key. Cini et al. (PKC'21) formally defined this signature and gave several constructions. However, their security model requires the secrecy of the update token, which is not applicable in many common application scenarios where existing signatures have been distributed to many parties. In addition, the same token can be used to update both the signing key and signatures, and all signatures can be updated by a single token, which means that an adversarially generated signature might also be updated.
This work explores the (im)possibility of constructing an Updatable Signature with public tokens (USpt). Specifically, we first define the updatable signature with public tokens and present its security model. Then, by examining existing US schemes, we find that a secure USpt must properly handle a transform function from the signature-update token to the key-update token. We further formally prove the impossibility of constructing a secure USpt if (1) there is no transform function between key pairs and signatures, or (2) the signature-update token can be derived from the public keys of adjacent epochs. Finally, we present a concrete USpt scheme based on the BLS signature.
Evasive LWE Assumptions: Definitions, Classes, and Counterexamples
The evasive LWE assumption, proposed by Wee [Eurocrypt'22 Wee] for constructing a lattice-based optimal broadcast encryption, has been shown to be a powerful assumption, adopted by subsequent works to construct advanced primitives ranging from ABE variants to obfuscation for null circuits. However, a closer look reveals significant differences among the precise assumption statements involved in different works, leading to the fundamental question of how these assumptions compare to each other. In this work, we initiate a more systematic study on evasive LWE assumptions:
(i) Based on the standard LWE assumption, we construct simple counterexamples against three private-coin evasive LWE variants, used in [Crypto'22 Tsabary, Asiacrypt'22 VWW, Crypto'23 ARYY] respectively, showing that these assumptions are unlikely to hold.
(ii) Based on existing evasive LWE variants and our counterexamples, we propose and define three classes of plausible evasive LWE assumptions, suitably capturing all existing variants for which we are not aware of non-obfuscation-based counterexamples.
(iii) We show that under our assumption formulations, the security proofs of [Asiacrypt'22 VWW] and [Crypto'23 ARYY] can be recovered, and we reason why the security proof of [Crypto'22 Tsabary] is also plausibly repairable using an appropriate evasive LWE assumption.
Cauchyproofs: Batch-Updatable Vector Commitment with Easy Aggregation and Application to Stateless Blockchains
Stateless blockchain designs have emerged to address the challenge of growing blockchain size using succinct global states. Previous works have developed vector commitments that support proof updates and aggregation to be used as such states. However, maintaining proofs for multiple users still demands significant computational resources, particularly to update proofs with every transaction. This paper introduces Cauchyproofs, a batch-updatable vector commitment that enables proof-serving nodes to efficiently update proofs in quasi-linear time relative to the number of users and transactions, utilizing an optimized KZG scheme to achieve complexity $O\left(\left(\left|\vec{\alpha}\right| + \left|\vec{\beta}\right|\right) \log^2 \left(\left|\vec{\alpha}\right| + \left|\vec{\beta}\right|\right)\right)$ for $\left|\vec{\alpha}\right|$ users and $\left|\vec{\beta}\right|$ transactions, compared to the previous $O\left(\left|\vec{\alpha}\right|\cdot\left|\vec{\beta}\right|\right)$ approaches. This advancement reduces the computational burden on proof-serving nodes, allowing efficient proof maintenance across large user groups. We demonstrate that our approach is approximately eight times faster than the naive approach at the Ethereum-level transaction throughput if we perform a batch update every hour. Additionally, we present a novel matrix representation for KZG proofs utilizing Cauchy matrices, enabling faster all-proof computations with reduced elliptic curve operations. Finally, we propose an algorithm for history proof queries, supporting retrospective proof generation with high efficiency. Our contributions substantially enhance the scalability and practicality of proof-serving nodes in stateless blockchain frameworks.
The Sponge is Quantum Indifferentiable
The sponge is a cryptographic construction that turns a public permutation into a hash function. When instantiated with the Keccak permutation, the sponge forms the NIST SHA-3 standard. SHA-3 is a core component of most post-quantum public-key cryptography schemes slated for worldwide adoption.
While one can consider many security properties for the sponge, the ultimate one is \emph{indifferentiability from a random oracle}, or simply \emph{indifferentiability}. The sponge was proved indifferentiable against classical adversaries by Bertoni et al. in 2008. Despite significant efforts in the years since, little is known about sponge security against quantum adversaries, even for simple properties like preimage or collision resistance beyond a single round. This is primarily due to the lack of a satisfactory quantum analog of the lazy sampling technique for permutations.
In this work, we develop a specialized technique that overcomes this barrier in the case of the sponge. We prove that the sponge is in fact indifferentiable from a random oracle against quantum adversaries. Our result establishes that the domain extension technique behind SHA-3 is secure in the post-quantum setting. Our indifferentiability bound for the sponge is a loose $O(\mathsf{poly}(q) 2^{-\min(r, c)/4})$, but we also give bounds on preimage and collision resistance that are tighter.
Secret and Shared Keys Recovery on Hamming Quasi-Cyclic with SASCA
Soft Analytical Side-Channel Attacks (SASCA) are a powerful family of Side-Channel Attacks (SCA) that allow the recovery of secret values with only a small number of traces. Their effectiveness lies in the Belief Propagation (BP) algorithm, which enables efficient computation of the marginal distributions of intermediate values. Post-quantum schemes such as Kyber, and more recently, Hamming Quasi-Cyclic (HQC), have been targets of SASCA. Previous SASCA on HQC focused on Reed-Solomon (RS) codes and successfully retrieved the shared key with a high success rate at high noise levels using a single trace. In this work, we present new SASCA on HQC, targeting both the shared key and the secret key. Our attacks are evaluated in simulation. Unlike the previous SASCA, we take a closer look at the Reed-Muller (RM) code. The advantage of this choice is that the RM decoder is applied before the RS decoder, enabling attacks targeting both the secret key and the shared key. We build a factor graph of the Fast Hadamard Transform (FHT) function from the HQC reference implementation of April 2023. The information recovered from BP allows us to retrieve the shared key with a single trace. In addition to the previous SASCA targeting HQC, we also manage to recover the secret key with two different chosen-ciphertext attacks. One of them requires a single trace and remains successful at high noise levels.
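For concreteness, the Fast Hadamard Transform whose factor graph the attack models is only a few lines of code (a plain-integer sketch; the HQC reference implementation works on fixed-width arrays, which this toy elides):

```python
# In-place Fast (Walsh-)Hadamard Transform over the integers; the input
# length must be a power of two.
def fht(v):
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v

assert fht([1, 0, 1, 0]) == [2, 2, 0, 0]
```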
Practical Zero-Trust Threshold Signatures in Large-Scale Dynamic Asynchronous Networks
Threshold signatures have become a critical tool in cryptocurrency systems, offering enhanced security by distributing the signing process among multiple signers. In this work, we distribute this process between a client and a permissionless decentralized blockchain, and present novel protocols for ECDSA and EdDSA/Schnorr signatures in this setting. Typical threshold access architectures used by trusted custodians suffer from the honeypot problem, wherein the more assets the custodian holds, the greater the incentive of compromising it.
Implementing threshold signatures over permissionless blockchains poses a few challenges.
First, existing networks typically work over an asynchronous reliable broadcast communication channel. Accordingly, our protocol is implemented over such a channel. As a result, it also benefits from identifiable abort, public verifiability, and guaranteed output delivery, and the client benefits from censorship resistance of blockchain systems.
Second, upon signing each block, the participating quorum may dynamically change and is post-determined.
Therefore, we design a fluid protocol that supports a post-determined dynamic quorum in each communication round, thereby complying with existing broadcast channel implementations. Third, in permissionless networks, parties may join, leave, and change their stake. Therefore, we offer protocols for network reconfiguration, with complexity independent of the number of clients in the system, and our protocol efficiently supports a weighted threshold access structure for the network. Specifically, the complexity of distributed key generation and presign depends only on the number of parties and not on the overall weight, and the amortized cost of sign depends only on the individual weight.
Furthermore, our protocol introduces key improvements, including the removal of zero-knowledge proofs towards the client, and presigns with a non-interactive client. For Schnorr, the presigns are client-independent, and can be collected by the blockchain in a common pool, available for all clients in the system. These optimizations reduce communication overhead and improve the system's ability to handle traffic spikes during high-demand periods.
Our protocol is UC-secure, and is therefore natively designed for multiple clients to use the system in parallel. Notably, we propose a novel assumption, Slightly-Enhanced ECDSA Unforgeability, offering concrete security for 256-bit elliptic curves for threshold ECDSA with support for parallel execution of presigns.
In addition to securing cryptocurrency wallets, we demonstrate how our protocol enables various cross-chain applications, such as decentralized bridges, future transactions, and wallet transfer. Our system is designed for interoperability across multiple blockchains, enhancing security, scalability, and flexibility for decentralized finance (DeFi) ecosystems.
Deterministic algorithms for class group actions
We present an algorithm for the CSIDH protocol that is fully deterministic and strictly constant time. It does not require dummy operations and can be implemented without conditional branches. Our proof-of-concept C implementation shows that a key exchange can be performed in a constant (i.e. fixed) number of finite field operations, independent of the secret keys. The algorithm relies on a technique reminiscent of the standard Montgomery ladder, and applies to the computation of isogenies that divide an endomorphism of smooth degree represented by its kernel. We describe our method in the general context of class group actions on oriented elliptic curves, giving rise to a large family of non-interactive key exchanges different from CSIDH.
Faster Hash-based Multi-valued Validated Asynchronous Byzantine Agreement
Multi-valued Validated Byzantine Agreement (MVBA) is vital for asynchronous distributed protocols like asynchronous BFT consensus and distributed key generation, making performance improvements a long-standing goal. Existing communication-optimal MVBA protocols rely on computationally intensive public-key cryptographic tools, such as non-interactive threshold signatures, which are also vulnerable to quantum attacks. While hash-based MVBA protocols have been proposed to address these challenges, their higher communication overhead has raised concerns about practical performance.
We present a novel MVBA protocol with adaptive security, relying exclusively on hash functions to achieve post-quantum security. Our protocol delivers near-optimal communication, constant round complexity, and significantly reduced latency compared to existing schemes, though it has sub-optimal resilience, tolerating up to 20% Byzantine corruptions instead of the typical 33%. For example, with $n=201$ and input size 1.75 MB, it reduces latency by 81% over previous hash-based approaches.
Full-Authority Data Sharing Systems: Ciphertext-Dependent Proxy Re-Encryption with Dynamic Key Generation
Proxy re-encryption (PRE) is a powerful primitive for secure cloud storage sharing. Suppose Alice stores encrypted datasets (ciphertexts) in a cloud server (proxy). If Bob requests data sharing, Alice shares the ciphertexts by computing and sending a re-encryption key to the proxy, which will perform the re-encryption operation that generates ciphertexts decryptable by Bob. Still, the proxy cannot access the plaintexts/datasets. Traditionally, the re-encryption key can convert all of Alice's ciphertexts, and the proxy is expected to re-encrypt only the ciphertexts selected by the users (Alice/Bob). This creates a trust issue: Alice must either grant full decryption rights (losing control) or rely on proxy-enforced access policies (vulnerable to collusion). Existing PRE schemes fail to reconcile fine-grained control with collusion resistance. If Alice uses different keys to encrypt each dataset, the re-encryption complexity is linear in the number of requested datasets. We propose full-authority data sharing, a novel paradigm combining ciphertext-dependent PRE (cdPRE) and dynamic key generation (dKG). Unlike traditional PRE, cdPRE binds re-encryption keys to specific ciphertexts, ensuring collusion resistance (i.e., the proxy and Bob together cannot access unauthorised data). dKG dynamically connects keys via key derivation functions; for example, the chain system reduces per-dataset delegation cost to $O(1)$ for sequential release in publication/subscription systems (vs. $O(k)$ in trivial solutions, where $k$ is the number of datasets). We instantiate this paradigm with Kyber (NIST-PQC standardised) and AES, prove its security, and experimentally verify the high efficiency of the scheme.
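A minimal sketch of the chain flavor of dKG as we read it (the hash-based KDF and function names are our illustrative choices, not the paper's): each dataset key is derived from the previous one, so handing over the key of epoch $i$ delegates every later dataset at $O(1)$ cost, but none of the earlier ones.

```python
# Forward key chain: k_{i+1} = KDF(k_i). Delegating k_i grants datasets
# i, i+1, ... but nothing before i.
import hashlib

def kdf(key: bytes) -> bytes:
    return hashlib.sha256(b"chain" + key).digest()

def dataset_keys(k0: bytes, count: int):
    keys, k = [], k0
    for _ in range(count):
        keys.append(k)
        k = kdf(k)
    return keys

keys = dataset_keys(b"\x00" * 32, 5)
assert kdf(keys[2]) == keys[3]   # holder of keys[2] can derive keys[3], [4], ...
```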
Walnut: A Generic Framework with Enhanced Scalability for BFT Protocols
The performance of traditional BFT protocols significantly decreases as $n$ grows (where $n$ is the number of replicas), and thus they support only up to a few hundred replicas. Such scalability issues severely limit the application scenarios of BFT. Meanwhile, the committee sampling technique has the potential to scale the replica size significantly by selecting a small portion of replicas as the committee and then conveying the consensus results to the rest.
However, this technique is rarely used in BFT, and there is still a lack of methods to scale the traditional BFT protocol being deployed to support more replicas rather than the costly re-deployment of new protocols.
This paper introduces Walnut, a secure and generic committee-sampling-based modular consensus framework. Specifically, we use a verifiable random function for committee election and integrate committee rotation with the consensus. The resulting construction ensures that each selected committee is of a fixed size and acknowledged by all replicas, even in a partially synchronous network. For Walnut, we provide a rigorous definition and outline the properties each module must satisfy to achieve safety and liveness.
To demonstrate Walnut's effectiveness, we apply the framework to HotStuff to obtain the Walnut-HS protocol, together with a proof of fit-in. We also implement Walnut-HS and compare its performance with HotStuff, using up to 100 Amazon EC2 instances in a WAN. The experiments show that Walnut-HS easily scales to 1,000 replicas with only slight performance degradation, while HotStuff performs poorly or even breaks down when $n\!>\!200$. Walnut-HS also compares well with HotStuff in small-scale experiments; for example, its peak throughput is up to 38.6% higher than HotStuff's for $n\!=\!100$.
DUPLEX: Scalable Zero-Knowledge Lookup Arguments over RSA Group
Lookup arguments enable a prover to convince a verifier that a committed vector of lookup elements $\vec{f} \in \mathbb{F}^m$ is contained within a predefined table $T \in \mathbb{F}^N$. These arguments are particularly beneficial for enhancing the performance of SNARKs in handling non-arithmetic operations, such as batched range checks or bitwise operations. While existing works have achieved efficient and succinct lookup arguments, challenges remain, particularly when dealing with large vectors of lookup elements in privacy-sensitive applications.
In this paper, we introduce DUPLEX, a scalable zero-knowledge lookup argument scheme that offers significant improvements over previous approaches. Notably, we present the first lookup argument designed to operate over the RSA group. Our core technique allows for the transformation of elements into prime numbers to ensure compatibility with the RSA group, all without imposing substantial computational costs on the prover. Given $m$ lookup elements, DUPLEX achieves an asymptotic proving time of $O(m \log m)$, with constant-sized proofs, constant-time verification, and a public parameter size independent of the table size $N$. Additionally, DUPLEX ensures the privacy of lookup elements and is robust against dynamic table updates, making it highly suitable for scalable verifiable computation in real-world applications.
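The paper's prime-mapping technique is its own contribution; as a generic point of reference, a textbook hash-to-prime map (hash with an incrementing counter, then a Miller-Rabin test) looks like the following sketch, which is not DUPLEX's actual transformation.

```python
import hashlib, random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    # Miller-Rabin primality test with random bases.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def element_to_prime(element: bytes, bits: int = 128) -> int:
    # Hash the element with a counter until the truncated digest is
    # (probably) prime; the map is deterministic per element.
    ctr = 0
    while True:
        digest = hashlib.sha256(element + ctr.to_bytes(8, "big")).digest()
        candidate = (int.from_bytes(digest, "big") >> (256 - bits)) | 1
        if is_probable_prime(candidate):
            return candidate
        ctr += 1

print(element_to_prime(b"lookup-element-42"))
```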
We implemented and empirically evaluated DUPLEX, comparing it with the state-of-the-art zero-knowledge lookup argument Caulk [CCS'22]. Our experimental results demonstrate that DUPLEX significantly outperforms Caulk in proving time for both single and batched lookup arguments, while maintaining practical proof size and verification time.
BPDTE: Batch Private Decision Tree Evaluation via Amortized Efficient Private Comparison
Machine learning as a service requires clients to entrust their information to the server, raising privacy concerns. Private Decision Tree Evaluation (PDTE) has been proposed to address these concerns for decision trees, which are fundamental models in machine learning. However, existing solutions perform poorly on massive datasets in real-world applications; we therefore focus on the batching variant, called BPDTE, which supports a single evaluation over multiple inputs and significantly improves performance.
Firstly, we propose three private comparison (PrivCMP) algorithms that privately compare two numbers to determine which one is larger, by utilizing thermometer encoding and a novel folklore-inspired dichotomy method. These algorithms are non-interactive, batchable, and high-precision, achieving an amortized cost of less than 1 ms at 32-bit precision.
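A plaintext illustration of why thermometer encoding makes comparison cheap (in the paper this is evaluated homomorphically over encrypted inputs; the sketch below ignores encryption entirely): bit $i$ of the code of $x$ records whether $x > i$, so $x > y$ can be read off at a single position.

```python
def thermometer(x: int, width: int) -> list[int]:
    # Thermometer (unary) encoding: bit i is 1 iff x > i.
    return [1 if x > i else 0 for i in range(width)]

def greater_than(x: int, y: int, width: int) -> int:
    # x > y iff the thermometer code of x is still "hot" at index y.
    return thermometer(x, width)[y]

assert greater_than(5, 3, 8) == 1
assert greater_than(3, 5, 8) == 0
assert greater_than(3, 3, 8) == 0
```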
Secondly, we propose six BPDTE schemes based on our PrivCMP and the Clear Rows Relation (CRR) algorithm, introduced to ensure batching security. Experimental results show that our schemes improve amortized efficiency, offer more flexible batch sizes, and achieve higher precision.
Finally, we provide a formal security analysis of these schemes.
Post-Quantum PKE from Unstructured Noisy Linear Algebraic Assumptions: Beyond LWE and Alekhnovich's LPN
Noisy linear algebraic assumptions with respect to random matrices, in particular Learning with Errors ($\mathsf{LWE}$) and Alekhnovich Learning Parity with Noise (Alekhnovich $\mathsf{LPN}$), are among the most investigated assumptions that imply post-quantum public-key encryption (PKE). They enjoy elegant mathematical structure. Indeed, efforts to build post-quantum PKE and advanced primitives such as homomorphic encryption and indistinguishability obfuscation have increasingly focused their attention on these two assumptions and their variants.
Unfortunately, this increasing reliance on these two assumptions for building post-quantum cryptography leaves us vulnerable to potential quantum (and classical) attacks on Alekhnovich $\mathsf{LPN}$ and $\mathsf{LWE}$. Quantum algorithm design is a rapidly advancing area, and we must stay prepared for unexpected cryptanalytic breakthroughs. Just three decades ago, a short time frame in the development of our field, Shor's algorithm rendered most then-popular number-theoretic and algebraic assumptions quantumly broken. Furthermore, within the last several years, we have witnessed major classical and quantum breaks of several assumptions previously introduced for post-quantum cryptography. Therefore, we ask the following question:
In a world where both $\mathsf{LWE}$ and Alekhnovich $\mathsf{LPN}$ are broken, can there still exist noisy linear assumptions that remain plausibly quantum hard and imply PKE?
To answer this question positively, we introduce two natural noisy linear algebraic assumptions that are both with respect to random matrices, exactly like $\mathsf{LWE}$ and Alekhnovich $\mathsf{LPN}$, but with different error distributions. Our error distributions combine aspects of both small-norm and sparse error distributions. We design a PKE scheme from these assumptions and give evidence that they are likely to remain secure even in a world where both the $\mathsf{LWE}$ and Alekhnovich $\mathsf{LPN}$ assumptions are simultaneously broken. We also study basic properties of these assumptions and show that, in the parameter settings we employ to build PKE, neither of them is a ``lattice'' assumption, in the sense that we do not see a way to attack them using a lattice closest-vector-problem solver, except via $\mathsf{NP}$-completeness reductions.
Rerandomizable Garbling, Revisited
In key-and-message homomorphic encryption (KMHE), the key space is a subset of the message space, allowing encryption of secret keys such that the same homomorphism can be applied to both the key and the message of a given ciphertext.
KMHE with suitable security properties is the main building block for constructing rerandomizable garbling schemes (RGS, Gentry et al., CRYPTO 2010), which enable advanced cryptographic applications like multi-hop homomorphic encryption, the YOSO-like MPC protocol SCALES (Acharya et al., TCC 2022 and CRYPTO 2024), and more.
The BHHO scheme (Boneh et al., CRYPTO 2008) is currently the only known KMHE scheme suitable for constructing RGS. An encryption of a secret key consists of $\mathcal{O}(\lambda^{2})$ group elements, where $\lambda$ is the security parameter, which incurs a significant bandwidth and computational overhead that makes the scheme itself, and the RGS protocols building upon it, impractical.
We present a new, more efficient KMHE scheme with linear-size ciphertexts.
Despite using heavier cryptographic tools (pairings instead of plain DDH-hard groups), the concrete ciphertext size and computational costs are significantly reduced. We shrink the ciphertext by 97.83 % (from 16.68 MB to 360 kB) and reduce the estimated cost of encryption by 99.996 % (from 4 minutes to 0.01 seconds) in comparison to BHHO.
Additionally, we introduce gate KMHE as a new tool to build more efficient RGS. Our RGS construction shrinks the size of a garbled gate by 98.99 % (from 133.43 MB to 1.35 MB) and decreases the estimated cost of garbling by 99.998 % (from 17 minutes to 0.02 seconds per gate) in comparison to Acharya et al.
In summary, our work shows for the first time that RGS and the SCALES protocol (and hence YOSO-like MPC) are practically feasible for simple circuits.
Fully Asymmetric Anamorphic Homomorphic Encryption from LWE
As introduced by Persiano {\it et al.} (Eurocrypt'22), anamorphic encryption (AE) is a primitive enabling private communications against a dictator that forces users to surrender their decryption keys. In its fully asymmetric flavor (defined by Catalano {\it et al.}, Eurocrypt'24), anamorphic channels can work as hidden public-key mechanisms in the sense that anamorphic encryptors are not necessarily able to decrypt anamorphic ciphertexts. Unfortunately, fully asymmetric AE is hard to come by and even impossible to obtain from ordinary public-key encryption via black-box constructions. So far, only three schemes are known to rely on well-established assumptions. In this paper, we exhibit constructions from the standard LWE assumption based on Regev's cryptosystem and its dual version. In both cases, we retain the additive homomorphism of the schemes. We additionally show that dual Regev is public-key anamorphic in the sense of Persiano {\it et al.} (Crypto'24). In the FHE setting, we show that the dual GSW system provides fully asymmetric AE (while preserving its leveled homomorphism) when instantiated with binary/ternary secret keys. Along the way, we discuss the extent to which our schemes satisfy a generalization of Banfi {\it et al.}'s notion of robustness (Eurocrypt'24) to the case of homomorphically evaluated ciphertexts.
Improvements on the schemes VOX and QR UOV: When minus is a plus
In this article, we present an improvement of QR UOV and a repair of VOX.
Thanks to the use of the minus perturbation, we are able to fully exploit the parameters in order to reduce the public key; it also helps us repair the VOX scheme. With QR UOV- we significantly reduce the size of the public key at the cost of a small increase in signature size, while with VOX- we obtain a public key as short as $6$ KB. VOX- also maintains a very short signature size.
We also show that the minus perturbation is very useful for schemes that use the quotient ring (QR) perturbation.
MPC with Publicly Identifiable Abort from Pseudorandomness and Homomorphic Encryption
Publicly identifiable abort is a critical feature for ensuring accountability in outsourced computations using secure multiparty computation (MPC). Despite its importance, no prior work has specifically addressed identifiable abort in the context of outsourced computations. In this paper, we present the first MPC protocol that supports publicly identifiable abort with minimal overhead for external clients. Our approach minimizes client-side computation by requiring only a few pseudorandom function evaluations per input. On the server side, the verification process involves lightweight linear function evaluations using homomorphic encryption. This results in verification times of a few nanoseconds per operation for servers, with client overhead being approximately two orders of magnitude lower. Additionally, the public verifiability of our protocol reduces client input/output costs compared to SPDZ-based protocols, on which we base our protocol. For example, in secure aggregation use cases, our protocol achieves over twice the efficiency during the offline phase and up to an 18 % speedup in the online phase, significantly outperforming SPDZ.
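Since the protocol builds on SPDZ, the flavor of "lightweight linear verification" can be sketched with SPDZ's standard information-theoretic MACs (this illustrates the general SPDZ idea, not this paper's exact verification procedure): every value $x$ carries a tag $\alpha x$ under a global MAC key $\alpha$, and linear operations commute with tagging, so one linear check authenticates a whole linear computation.

```python
P = 2**61 - 1          # illustrative prime field

def mac(alpha: int, x: int) -> int:
    # SPDZ-style tag: gamma(x) = alpha * x mod P.
    return alpha * x % P

def check_linear(alpha, values, tags, coeffs) -> bool:
    # A linear combination of values must carry the matching linear
    # combination of tags: sum(c_i * gamma_i) == alpha * sum(c_i * x_i).
    y = sum(c * x for c, x in zip(coeffs, values)) % P
    t = sum(c * g for c, g in zip(coeffs, tags)) % P
    return t == alpha * y % P

alpha = 123456789
xs = [7, 11, 13]
gs = [mac(alpha, x) for x in xs]
assert check_linear(alpha, xs, gs, [2, 3, 5])
```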
Verifiable E-Voting with a Trustless Bulletin Board
Voter privacy and end-to-end (E2E) verifiability are critical features of electronic voting (e-voting) systems to safeguard elections. To achieve these properties, a perfect bulletin board (BB) is commonly assumed that provides consistent, reliable, and tamper-proof storage and transmission of voting data. However, in practice, BBs operate in asynchronous and unreliable networks, and hence are susceptible to vulnerabilities such as equivocation attacks and dropped votes, which can compromise both verifiability and privacy. Although prior research has weakened the perfect-BB assumption, it still depends on trusting certain BB components.
In this work, we present and initiate a formal exploration of designing e-voting systems based on fully untrusted BBs. For this purpose, we leverage the notion of accountability and in particular use accountable BBs. Accountability ensures that if a security breach occurs, then cryptographic evidence can identify malicious parties. Fully untrusted BBs running in asynchronous networks bring new challenges. Among others, we identify several types of attacks that a malicious but accountable BB might be able to perform and propose a new E2E verifiability notion for this setting. Based on this notion and as a proof of concept, we construct the first e-voting system that is provably E2E verifiable and provides vote privacy even when the underlying BB is fully malicious. This establishes an alternative to traditional e-voting architectures that rely on (threshold) trusted BB servers.
Convolution-Friendly Image Compression with FHE
During the past few decades, the field of image processing has grown to cradle hundreds of applications, many of which are outsourced to be computed on trusted remote servers. More recently, Fully Homomorphic Encryption (FHE) has grown in parallel as a powerful tool enabling computation on encrypted data, and thus on untrusted servers. As a result, new FHE-supported applications have emerged, but not all have reached practicality due to hardware, bandwidth, or mathematical constraints inherent to FHE. One example is processing encrypted images, where practicality is closely tied to bandwidth availability.
In this paper, we propose and implement a novel technique for FHE-based image compression and decompression. Our technique is a stepping stone towards practical encrypted image processing and applications such as private inference, object recognition, satellite-image search, or video editing. Inspired by the JPEG standard, and with new FHE-friendly compression/decompression algorithms, our technique allows a client to compress and encrypt images before sending them to a server, greatly reducing the required bandwidth. The server homomorphically decompresses a ciphertext to obtain an encrypted image to which generic pixel-wise processing or convolutional filters can be applied. To reduce the round-trip bandwidth requirement, we also propose a method for server-side post-processing compression. Using our pipeline, we demonstrate that a high-definition grayscale image ($1024\times1024$) can be homomorphically decompressed, processed, and re-compressed in $\sim 8.1$ s with a compression ratio of 100/34.4 on a standard personal computer, without compromising fidelity.
T-Spoon: Tightly Secure Two-Round Multi-Signatures with Key Aggregation
Multi-signatures over pairing-free cyclic groups have seen significant advancements in recent years, including two-round protocols and support for key aggregation. Key aggregation enables the combination of multiple public keys into a single succinct aggregate key for verification and has essentially evolved from an optional feature into a requirement. To enhance the concrete security of two-round schemes, Pan and Wagner (Eurocrypt 2023, 2024) introduced the first tightly secure constructions in this setting. However, their schemes do not support key aggregation, and their approach inherently precludes a single short aggregate public key. This leaves the open problem of achieving tight security and key aggregation simultaneously.
In this work, we solve this open problem by presenting the first tightly secure two-round multi-signature scheme in pairing-free groups that supports key aggregation. Like Pan and Wagner's schemes, our construction is based on the DDH assumption. In contrast to theirs, it also has truly compact signatures, with signature size asymptotically independent of the number of signers.
Correlation power analysis of LESS and CROSS
This paper presents a side-channel attack targeting the LESS and CROSS post-quantum digital signature schemes, resulting in full key recovery for both. These schemes have advanced to the second round of NIST's call for additional signatures. By leveraging correlation power analysis and horizontal attacks, we recover the secret key by observing the power consumption during the multiplication of an ephemeral secret vector with a public matrix. The attack path is enabled by a direct link between the secret key elements and the ephemeral secret, given correct responses. The attack targets version 1.2 of both schemes. In both settings we can recover the secret key from a single trace for NIST's security level I parameter set. Additionally, we propose improvements to the existing horizontal attack on CROSS, reducing the number of rounds that must be observed by an order of magnitude for the same parameter sets.
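For readers unfamiliar with correlation power analysis, the generic recovery loop looks as follows: a textbook CPA sketch with a Hamming-weight leakage model and synthetic traces. The actual attack targets the vector-matrix multiplication in LESS/CROSS and is considerably more elaborate; everything below is illustrative.

```python
import numpy as np

HW = np.array([bin(v).count("1") for v in range(256)])  # Hamming weights

def cpa_recover_byte(traces: np.ndarray, known: np.ndarray) -> int:
    # Rank key-byte guesses by the Pearson correlation between a
    # Hamming-weight leakage model and the measured traces.
    best_guess, best_peak = 0, -1.0
    centered = traces - traces.mean(axis=0)
    for guess in range(256):
        model = HW[known ^ guess].astype(float)
        m = model - model.mean()
        denom = np.sqrt((m @ m) * (centered ** 2).sum(axis=0))
        corr = np.abs(m @ centered) / denom          # one value per sample
        if corr.max() > best_peak:
            best_guess, best_peak = guess, corr.max()
    return best_guess

# Synthetic sanity check: traces leak HW(known ^ 0xA7) plus Gaussian noise.
rng = np.random.default_rng(0)
known = rng.integers(0, 256, size=2000)
leak = HW[known ^ 0xA7].astype(float)
traces = leak[:, None] + rng.normal(0, 1.0, size=(2000, 5))
assert cpa_recover_byte(traces, known) == 0xA7
```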
Post-Quantum Cryptography in eMRTDs: Evaluating PAKE and PKI for Travel Documents
Passports, identity cards and travel visas are examples of machine readable travel documents (MRTDs), or eMRTDs for their electronic variants. The data exchanged between these documents and a reader is secured with a standardized password authenticated key exchange (PAKE) protocol known as PACE.
A new world-wide protocol migration is expected with the arrival of post-quantum cryptography (PQC) standards. In this paper, we focus on the impact of this migration on constrained embedded devices as used in eMRTDs. We present a feasibility study of a candidate post-quantum secure PAKE scheme as the replacement for PACE on existing widely deployed resource-constrained chips. In a wider context, we study the size, performance and security impact of adding post-quantum cryptography with a focus on chip storage and certificate chains for existing eMRTDs.
We show that if the required post-quantum certificates for the eMRTD fit in memory, the migration of existing eMRTD protocols to their post-quantum secure equivalents is already feasible, but a performance penalty has to be paid. On a resource-constrained SmartMX3 P71D600 smart card, designed with classical cryptography in mind, a post-quantum secure PAKE algorithm using the recommended parameter set of the new PQC standard ML-KEM can be executed in under a second. This migration will be aided by the future inclusion of dedicated hardware accelerators and increased memory, allowing storage of larger keys and improving performance.
Chosen Ciphertext Security via BARGs
In this paper, we show a new set of cryptographic primitives that generically leads to chosen ciphertext secure (CCA secure) public-key encryption (PKE). Specifically, we show how a (non-interactive, publicly verifiable) batch argument (BARG) for NP can be combined with a chosen plaintext secure (CPA secure) PKE scheme to achieve a CCA secure one. The succinctness required of the proof size of the BARG used as a building block is arguably very mild: we require it to be only at most $(1 - \frac{1}{p(\lambda, n)}) \cdot k + q(\lambda, n) \cdot k^{\epsilon}$ for some non-negative constant $\epsilon < 1$ and polynomials $p, q$, where $\lambda$ denotes the security parameter, $n$ denotes the statement size, and $k$ denotes the batch size (i.e., the number of statements whose correctness is simultaneously proved); thus it can even be (almost) linear in $k$. A BARG with such succinctness is so weak that it cannot be used in the recent constructions of a non-interactive zero-knowledge proof system for NP based on a BARG (and a one-way function) by Bitansky et al. (STOC 2024) and Bradley, Waters, and Wu (TCC 2024). Therefore, our result gives a new building block that can upgrade CPA security into CCA security.
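To make the mildness of this requirement concrete, consider the (hypothetical) instantiation $p(\lambda, n) = q(\lambda, n) = \lambda$ and $\epsilon = 1/2$; the bound then only asks that

```latex
\left(1 - \tfrac{1}{\lambda}\right)\cdot k + \lambda\cdot\sqrt{k} \;<\; k
\quad\Longleftrightarrow\quad
\lambda\cdot\sqrt{k} \;<\; \tfrac{k}{\lambda}
\quad\Longleftrightarrow\quad
k \;>\; \lambda^{4},
```

i.e., for large enough batches the proof merely has to shave a $1/\lambda$ fraction off the trivial proof size $k$.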
KeyJoin: Privacy-Focused CoinJoin Protocol for Bitcoin
Bitcoin is based on the blockchain, an open ledger containing information about every transaction in the Bitcoin network. The blockchain serves many purposes, but it allows anyone to track all transactions and activities of each Bitcoin address. The privacy of the network is threatened by organizations that track transactions, and tracking and subsequent filtering of coins lead to a loss of the exchangeability of Bitcoin.
Despite Bitcoin's transparency, it is possible to increase user privacy using a variety of existing methods. One of these methods, CoinJoin, was proposed by Bitcoin developer Greg Maxwell in 2013. It combines several users' transactions into a single transaction with multiple inputs and outputs, which makes transaction analysis more complicated.
This work describes KeyJoin, a privacy-focused CoinJoin protocol based on keyed-verification anonymous credentials (KVAC).
Towards Optimal Differential Attacks on FLY and PIPO
Lightweight block ciphers such as PIPO and FLY are designed to operate efficiently and securely in constrained environments. While the differential attack on PIPO-64-128 has already been studied by the designers, no concrete differential attack has been conducted on PIPO-64-256 or FLY. Motivated by this gap, we revisit the security of PIPO against differential attacks and generalize the analysis framework to make it applicable to structurally related ciphers. Based on this generalized framework, we search for key-recovery-attack-friendly distinguishers and apply clustering techniques to enhance their effectiveness in key-recovery attacks. As a result, we improve the previously proposed differential attack on PIPO-64-128, reducing the time complexity by a factor of $2^{31.7}$. Furthermore, we propose a 13-round differential attack on PIPO-64-256, which covers two more rounds than the previous result. We also apply the same methodology to FLY and present the first differential attack on 12-round FLY, reaching one round beyond the best-known distinguisher. We believe this work improves the understanding of the structures of FLY and PIPO and provides a basis for future research on advanced key-recovery attacks against related cipher designs.
An Attack on TON’s ADNL Secure Channel Protocol
We present an attack on the Abstract Datagram Network Layer (ADNL) protocol used in The Open Network (TON), currently the tenth largest blockchain by market capitalization. In its TCP variant, ADNL secures communication between clients and specialized nodes called liteservers, which provide access to blockchain data. We identify two cryptographic design flaws in this protocol: a handshake that permits session-key replay and a non-standard integrity mechanism whose security critically depends on message confidentiality. We transform these vulnerabilities into an efficient plaintext-recovery attack by exploiting two ADNL communication patterns, allowing message reordering across replayed sessions. We then develop a plaintext model for this scenario and construct an efficient algorithm that recovers the keystream using a fraction of known plaintexts and a handful of replays. We implement our attack and show that an attacker intercepting the communication between a TON liteserver and a widely deployed ADNL client can recover the keystream used to encrypt server responses by performing eight connection replays to the server. This allows the decryption of sensitive data, such as account balances and user activity patterns. Additionally, the attacker can modify server responses to manipulate blockchain information displayed to the client, including account balances and asset prices.
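The core of the plaintext-recovery step is classic keystream reuse: once a session key can be replayed, two ciphertexts share a keystream, and any known plaintext peels it off. A toy model follows (the real protocol uses a stream mode with a non-standard integrity mechanism and requires handling message reordering; the messages and sizes below are invented):

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Toy stream cipher: the same keystream encrypts two server responses
# because the session (and hence the keystream) was replayed.
keystream = secrets.token_bytes(32)
ct1 = xor(b"known server response header....", keystream)   # plaintext known
ct2 = xor(b"secret account balance: 42 TON..", keystream)   # plaintext secret

recovered_stream = xor(ct1, b"known server response header....")
print(xor(ct2, recovered_stream))  # -> b"secret account balance: 42 TON.."
```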
Efficient CPA Attack on Hardware Implementation of ML-DSA in Post-Quantum Root of Trust
Side-channel attacks (SCA) pose a significant threat to cryptographic implementations, including those designed to withstand the computational power of quantum computers. This paper introduces the first side-channel attack on an industry-grade post-quantum cryptography implementation. Specifically, we present a Correlation Power Analysis (CPA) attack targeting the open-source hardware implementation of ML-DSA within a Silicon Root of Trust framework developed through a multi-party collaboration involving leading technology companies. Our attack focuses on the modular reduction process that follows the Number Theoretic Transform-based polynomial pointwise multiplication. By exploiting side-channel leakage from a distinctive reduction algorithm and leveraging the zeroization mechanism used to securely erase sensitive information by clearing internal registers, we significantly enhance the attack's efficacy. Our findings reveal that an adversary can extract the secret keys using only 10,000 power traces.
With access to these keys, an attacker could forge signatures for certificate generation, thereby compromising the integrity of the root of trust. This work highlights the vulnerability of industry-standard root-of-trust systems to side-channel attacks and underscores the urgent need for robust countermeasures to secure commercially deployed systems against such threats.
Registered Functional Encryption for Attribute-Weighted Sums with Access Control
In this work, we present Functional Encryption (FE) schemes for Attribute-Weighted Sums (AWS), introduced by Abdalla, Gong and Wee (Crypto 2020) in the registration-based setting (RFE). In such a setting, users sample their own public/private key pairs $(\mathsf{pk}_i, \mathsf{sk}_i)$; a key curator registers user public keys along with their functions $h_i$; encryption takes as input $N$ attribute-value pairs $\{\vec x_\ell, \vec z_\ell\}_{\ell\in[N]}$ where $\vec x_\ell$ is public and $\vec z_\ell$ is private; and decryption recovers the weighted sum $\sum_{\ell\in[N]}h_i(\vec x_\ell)^\mathsf{T}\vec z_\ell$ while leaking no additional information about $\vec z_\ell$. Recently, Agrawal, Tomida and Yadav (Crypto 2023) studied the attribute-based case of AWS (AB-AWS) providing fine-grained access control, where the function is described by a tuple $(g_i, h_i)$, the input is extended to $(\vec y, \{\vec x_\ell, \vec z_\ell\}_{\ell \in [N]})$ and decryption recovers the weighted sum only if $g_i(\vec y) = 0$. Our main results are the following:
- We build the first RFE for (AB-)1AWS functionality, where $N=1$, that achieves adaptive indistinguishability-based security under the (bilateral) $k$-Lin assumption in prime-order pairing groups. Prior works achieve RFE for linear and quadratic functions without access control in the standard model, or for attribute-based linear functions in the generic group model.
- We develop the first RFE for AB-AWS functionality, where $N$ is unbounded, that achieves very selective simulation-based security under the bilateral $k$-Lin assumption. Here, “very selective” means that the adversary declares challenge attribute values, all registered functions and corrupted users upfront. Previously, SIM-secure RFEs were only constructed for linear and quadratic functions without access control in the same security model.
We devise a novel nested encoding mechanism that facilitates achieving attribute-based access control and unbounded inputs in the registration-based setting for AWS functionalities, proven secure in the standard model. In terms of efficiency, our constructions feature short public parameters, secret keys independent of $N$, and compact ciphertexts unaffected by the length of public inputs. Moreover, as required by RFE properties, all object sizes and algorithm costs scale poly-logarithmically with the total number of registered users in the system.
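To fix ideas, the AWS functionality itself, stripped of encryption and access control, is just the following map; this plaintext evaluator is purely illustrative and is not part of the scheme.

```python
def aws(h, pairs):
    # Attribute-weighted sum: sum over l of <h(x_l), z_l>,
    # where each x_l is public and each z_l is private.
    return sum(sum(hw * zw for hw, zw in zip(h(x), z)) for x, z in pairs)

# Example with a hypothetical weight function h.
h = lambda x: [x, 1]
pairs = [(2, [10, 1]), (3, [5, 2])]
assert aws(h, pairs) == (2 * 10 + 1 * 1) + (3 * 5 + 1 * 2)  # = 38
```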
The Meta-Complexity of Secret Sharing
A secret-sharing scheme allows the distribution of a secret $s$ among $n$ parties, such that only certain predefined “authorized” sets of parties can reconstruct the secret, while all other “unauthorized” sets learn nothing about $s$. The collection of authorized/unauthorized sets is defined by a monotone function $f: \{0,1\}^n \rightarrow \{0,1\}$. It is known that any monotone function can be realized by a secret-sharing scheme; thus, the smallest achievable \emph{total share size}, $S(f)$, serves as a natural complexity measure.
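As a minimal concrete instance of the total-share-size measure, $n$-out-of-$n$ XOR sharing realizes $f(x) = x_1 \wedge \cdots \wedge x_n$ with $n$ shares of one bit each for a one-bit secret; this is the standard construction, sketched here only to ground the definition.

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def share(secret: bytes, n: int) -> list[bytes]:
    # n-out-of-n XOR sharing: only the full set of parties reconstructs;
    # any proper subset sees uniformly random shares.
    # Total share size: n * |secret|.
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def reconstruct(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

s = b"\x01"                      # a 1-bit secret, padded to one byte
assert reconstruct(share(s, 5)) == s
```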
In this paper, we initiate the study of the following meta-complexity question: Given a monotone function $f$, is it possible to efficiently distinguish between cases where the secret-sharing complexity of $f$ is small versus large? We examine this question across several computational models, yielding the following main results.
(Hardness for formulas and circuits): Given a monotone formula $f$ of size $L$, it is coNP-hard to distinguish between ``cheap'' functions, where the maximum share size is 1 bit and the total share size is $O(L^{0.01})$, and ``expensive'' functions, where the maximum share size is $\Omega(\sqrt{L})$ and the total share size is $\Omega(L/\log L)$.
This latter bound nearly matches known secret-sharing constructions yielding a total share size of $L$ bits. For monotone circuits, we strengthen the bound on the expensive case to a maximum share size of $\Omega(L/\log L)$ and a total share size of $\Omega(L^2/\log L)$. These results rule out the existence of instance-optimal compilers that map a formula $f$ to a secret-sharing scheme with complexity polynomially related to $S(f)$.
(Hardness for truth tables): Under cryptographic assumptions, either (1) every $n$-bit slice function can be realized by a $\text{poly}(n)$-size secret-sharing scheme, or (2) given a truth-table representation of $f$ of size $N = 2^n$, it is computationally infeasible to distinguish in time $\text{poly}(N)$ between cases where $S(f) = \text{poly}(n)$ and $S(f) = n^{\omega(1)}$. Option (1) would be considered a breakthrough result, as the best-known construction for slices has a sub-exponential complexity of $2^{\tilde{O}(\sqrt{n})}$ (Liu, Vaikuntanathan, and Wee; Eurocrypt 2018). Our proof introduces a new worst-case-to-average-case reduction for slices, which may be of independent interest.
(Hardness for graphs): We examine the simple case where $f$ is given as a 2-DNF, represented by a graph $G$ whose edges correspond to 2-terms, and ask whether it is possible to distinguish between cases where the share size is constant and those where the share size is large, say $\Omega(\log n)$. We establish several connections between this question and questions in communication complexity. For instance, we show that graphs admitting constant-cost secret sharing form a subclass of graphs with constant randomized communication complexity and constant-size adjacency sketches (Harms, Wild, and Zamaraev; STOC 2022). We leverage these connections to establish new lower bounds for specific graph families, derive a combinatorial characterization of graphs with constant-size linear secret-sharing schemes, and show that a natural class of myopic algorithms fails to distinguish cheap graphs from expensive ones.
Engorgio: An Arbitrary-Precision Unbounded-Size Hybrid Encrypted Database via Quantized Fully Homomorphic Encryption
This work proposes an encrypted hybrid database framework that combines vectorized data search and relational data queries over quantized fully homomorphic encryption (FHE). We observe that, due to the lack of efficient encrypted data ordering capabilities, most existing encrypted database (EDB) frameworks do not support hybrid queries involving both vectorized and relational data. To enrich query expressiveness while retaining evaluation efficiency, we propose Engorgio, a hybrid EDB framework based on quantized data ordering techniques over FHE. Specifically, we design a new quantized data encoding scheme along with a set of novel comparison and permutation algorithms to accurately generate and apply orders between large-precision data items. Furthermore, we optimize specific query types, including full table scans, batched queries, and Top-k queries, to enhance the practical performance of the proposed framework. In our experiments, we show that, compared to state-of-the-art EDB frameworks, Engorgio is up to 28x--854x faster in homomorphic comparison, 65x--687x faster in homomorphic sorting, and 15x--1,640x faster over a variety of end-to-end relational, vectorized, and hybrid SQL benchmarks. Using Engorgio, the amortized runtime for executing a relational or hybrid query on a 48-core processor is under 3 and 75 seconds, respectively, over a 10K-row hybrid database.
Generic Construction of Secure Sketches from Groups
Secure sketches are designed to facilitate the recovery of originally enrolled data from inputs that may vary slightly over time. This capability is important in applications where data consistency cannot be guaranteed due to natural variations, such as in biometric systems and hardware security. Traditionally, secure sketches are constructed using error-correcting codes to handle these variations effectively, while principles of information theory ensure their security by managing the trade-off between data recoverability and confidentiality. In this paper, we show how to construct a new family of secure sketches generically from groups. The notion of groups with the unique factorization property is first introduced; it is of independent interest and serves as a building block for our secure sketch construction. Next, an in-depth study of the underlying mathematical structures is provided, and some computational and decisional hardness assumptions are defined. As a result, we argue that our secure sketches are efficient, can handle a linear fraction of errors with respect to the $\ell_1$ distance, and are reusable and irreversible. To our knowledge, this generic group-based secure sketch construction is the first of its kind, and it offers a viable alternative to currently known secure sketches.
A proof of P≠NP (a new symmetric encryption algorithm secure against any linear and differential attacks)
The P vs. NP problem is the most important unresolved problem in the field of computational complexity. Its impact permeates all aspects of algorithm design, especially in the field of cryptography. The security of cryptographic algorithms based on short keys depends on whether P equals NP. Shannon rigorously proved that the one-time-pad system achieves unconditional security, but because the one-time pad requires a key at least as long as the plaintext, key distribution is a troublesome problem that restricts its use in practice. Cryptographic algorithms used in practice are all based on short keys, and the security of the short-key mechanism ultimately rests on a one-wayness assumption. In fact, the existence of one-way functions directly implies the important conclusion that P≠NP.
In this paper, we construct a new short-key block cipher algorithm. Its core feature is that, even when arbitrarily many plaintext-ciphertext pairs are known, the problem of recovering its key is equivalent to the problem of two independent encryption systems each recovering their keys given only their respective ciphertexts. In addition, for these two independent encryption systems, verifying a candidate plaintext is also exponentially computationally complex when only the ciphertext is known.
Based on the above feature, we construct a problem and theoretically prove that it satisfies the properties of a one-way function, thereby settling the question of the existence of one-way functions and directly proving that P≠NP.