Papers updated in last 31 days (236 results)
Doubly Efficient Fuzzy Private Set Intersection for High-dimensional Data with Cosine Similarity
Fuzzy private set intersection (fuzzy PSI) is a cryptographic protocol for privacy-preserving similarity matching, one of the essential operations in real-world applications such as facial authentication, information retrieval, and recommendation systems. Despite recent advances in fuzzy PSI protocols, a major barrier to deploying them in these applications remains: the high dimensionality of the data, e.g., from 128 to 512. Many existing methods, such as those of Garimella et al. (CRYPTO’23, CRYPTO’24) and van Baarsen et al. (EUROCRYPT’24), suffer from exponential overhead in communication and/or computation. In addition, the dominant similarity metric in these applications is cosine similarity, which rules out several optimization tricks that rely on assumptions about the distribution of the data, e.g., the techniques of Gao et al. (ASIACRYPT’24). In this paper, we propose a novel fuzzy PSI protocol for cosine similarity, called FPHE, that overcomes both limitations simultaneously. FPHE features computation and communication complexity linear in the dimension of set elements, while requiring a much weaker assumption than prior works. Our basic strategy is to homomorphically compute the cosine similarity and run an approximated comparison function, using a clever packing method for efficiency. In addition, we introduce a novel proof technique that harmonizes the approximation error of the sign function with noise flooding, proving the security of FPHE in the semi-honest model. Moreover, we show that our construction can be extended to support further functionalities, such as labeled or circuit fuzzy PSI. Through experiments, we show that FPHE can perform fuzzy PSI over 512-dimensional data in a few minutes, which was computationally infeasible for all previous proposals under the same assumption as ours.
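For intuition, here is a minimal plaintext sketch of the matching predicate such a protocol evaluates: a cosine similarity followed by a smooth polynomial approximation of the comparison. The threshold, the comparison polynomial, and the packing are all illustrative assumptions here, not the paper's actual homomorphic evaluation.

```python
import numpy as np

def cosine_similarity(u, v):
    # Plaintext cosine similarity; in FPHE this inner product would be
    # evaluated homomorphically over packed ciphertexts.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def approx_sign(x, iters=8):
    # Low-degree polynomial iteration converging to sign(x) on (-1, 1);
    # a stand-in for the approximated comparison mentioned in the abstract
    # (the paper's actual comparison polynomial is not specified here).
    for _ in range(iters):
        x = 1.5 * x - 0.5 * x ** 3
    return x

def fuzzy_match(u, v, threshold=0.85):
    # Accept when cos(u, v) >= threshold; the difference is halved so the
    # input to the sign approximation stays inside (-1, 1).
    return approx_sign((cosine_similarity(u, v) - threshold) / 2) > 0

rng = np.random.default_rng(0)
a = rng.normal(size=512)                     # a 512-dimensional embedding
b = a + 0.1 * rng.normal(size=512)           # a nearby embedding
c = rng.normal(size=512)                     # an unrelated embedding
print(fuzzy_match(a, b), fuzzy_match(a, c))  # expected: True False
```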
New Quantum Cryptanalysis of Binary Elliptic Curves (Extended Version)
This paper improves upon the quantum circuits required for Shor's attack on binary elliptic curves. We present two types of quantum point addition, taking both qubit count and circuit depth into consideration.
In summary, we propose an in-place point addition that improves upon the work of Banegas et al. from CHES'21, reducing the qubit count-depth product by $73\%$ to $81\%$, depending on the variant. Furthermore, we develop an out-of-place point addition that uses additional qubits. This method achieves the lowest circuit depth and offers an improvement of over $92\%$ in the qubit count-depth product (for a single step).
To the best of our knowledge, our work improves upon all previous works (including the CHES'21 paper by Banegas et al., the IEEE Access'22 paper by Putranto et al., and the CT-RSA'23 paper by Taguchi and Takayasu) in terms of circuit depth and qubit count-depth product.
Equipped with these implementations, we discuss the post-quantum security of binary elliptic curve cryptography. Under the MAXDEPTH metric (proposed by the US government's NIST), the quantum circuit with the highest depth in our work is $2^{24}$, which is significantly lower than the MAXDEPTH limit of $2^{40}$. For the gate count-full depth product, a metric for estimating quantum attack cost (proposed by NIST), the highest complexity in our work is $2^{60}$ for the curve of degree 571 (which is comparable to AES-256 in terms of classical security), considerably below the post-quantum security level 1 threshold (of the order of $2^{156}$).
Founding Zero-Knowledge Proofs of Training on Optimum Vicinity
Zero-knowledge proofs of training (zkPoT) allow a party to prove that a model is trained correctly on a committed dataset without revealing any additional information about the model or the dataset. Existing zkPoT protocols prove the entire training process in zero knowledge; i.e., they prove that the final model was obtained in an iterative fashion starting from the training data and a random seed (and potentially other parameters) and applying the correct algorithm at each iteration. This approach inherently requires the prover to perform work linear in the number of iterations.
In this paper, we take a different approach to proving the correctness of model training. Our approach is motivated by efficiency but also more urgently by the observation that the prover's ability to pick the random seed used for training introduces the potential for it to bias the model. In other words, if the input to the training algorithm is biased, the resulting model will be biased even if the prover correctly ran the training algorithm. Rather than prove the correctness of the training process, we thus directly prove the correctness of the trained model using a notion we call optimum vicinity, which bounds the distance between the trained model and the mathematically optimal model for models that can be viewed as the solution to a convex optimization problem. We show both theoretically and experimentally that this ensures the trained model behaves similarly to the optimal model, and show this is not true for existing approaches. We also demonstrate significant performance improvements as compared to the existing zkPoT paradigm: the statement proven in ZK in our protocol has a size independent of the number of training iterations, and our Boolean (respectively arithmetic) circuit size is up to $246\times$ (respectively $5\times$) smaller than that of a baseline zkPoT protocol that verifies the whole training process.
More Efficient Lattice-Based Electronic Voting from NTRU
In recent years, there has been much focus on developing core cryptographic primitives based on lattice assumptions, driven by the NIST call for post-quantum key encapsulation and digital signature algorithms. However, more work must be conducted on efficient privacy-preserving protocols based on quantum-safe assumptions.
Electronic voting is one such privacy-preserving protocol whose adoption is increasing across the democratic world. E-voting offers a fast and convenient alternative to postal voting while ensuring cryptographic privacy of votes and offering full verifiability of the process. Owing to the sensitivity of voting and its infrastructure challenges, it is crucial that security against quantum computers is baked into e-voting solutions.
We present an e-voting scheme from quantum-safe assumptions based on the hardness of the RLWE and NTRU lattice problems, providing concrete parameters and an efficient implementation. Our design achieves a factor $5.3 \times$ reduction in ciphertext size, $2.5 \times$ reduction in total communication cost, and $2 \times$ reduction in total computation time compared to the state-of-the-art lattice-based voting scheme by Aranha et al. (ACM CCS 2023). We argue that the efficiency of this scheme makes it suitable for real-world elections.
Our scheme makes use of non-ternary NTRU secrets to achieve optimal parameters. In order to compute the security of our design, we extend the ternary-NTRU work of Ducas and van Woerden (ASIACRYPT 2021) by determining the concrete fatigue point (for general secrets) of NTRU to be $q = 0.0058 \cdot \sigma^2 \cdot d^{2.484}$ (above which parameters become overstretched) for modulus $q$, ring dimension $d$, and secrets drawn from a Gaussian of parameter $\sigma$. We consider this relation to be of independent interest and demonstrate its significance by improving the efficiency of the (partially) blind signature scheme by del Pino and Katsumata (CRYPTO 2022).
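As a quick aid for applying the stated relation, the following sketch evaluates the fatigue point and flags overstretched moduli; the example parameters are hypothetical and not taken from the paper.

```python
import math

def ntru_fatigue_point(sigma: float, d: int) -> float:
    # Fatigue point q = 0.0058 * sigma^2 * d^2.484 from the abstract; moduli
    # above this value put (general-secret) NTRU in the overstretched regime.
    return 0.0058 * sigma ** 2 * d ** 2.484

# Hypothetical example parameters, not parameters proposed in the paper.
sigma, d, q = 1.5, 1024, 2 ** 13
fatigue = ntru_fatigue_point(sigma, d)
print(f"fatigue point ~ 2^{math.log2(fatigue):.1f}, q overstretched: {q > fatigue}")
```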
Proximity Gaps in Interleaved Codes
A linear error-correcting code exhibits proximity gaps if each affine line of words either consists entirely of words which are close to the code or else contains almost no such words. In this short note, we prove that for each linear code which exhibits proximity gaps within the unique decoding radius, that code's interleaved code also does. Combining our result with a recent argument of Angeris, Evans and Roh ('24), we extend those authors' sharpening of the tensor-based proximity gap of Diamond and Posen (Commun. Cryptol. '24) up to the unique decoding radius, at least in the Reed–Solomon setting.
AGATE: Augmented Global Attested Trusted Execution in the Universal Composability framework
A Trusted Execution Environment (TEE) is a security technology, implemented by CPU manufacturers, which guarantees integrity and confidentiality of a restricted execution environment to any remote verifier through attestation. TEEs are deployed on various consumer and commercial hardware platforms, and have been widely adopted as a component in the design of cryptographic protocols, both theoretical and practical.
Within the provable security community, the use of TEEs as a setup assumption has converged to a standard ideal definition in the Universal Composability setting ($G_{\mathsf{att}}$, defined by Pass et al., Eurocrypt '17). However, it is unclear whether any real TEE design can actually realise such a level of security, or whether the diverse capabilities of today's TEE implementations will in fact converge to a single standard. It is therefore necessary for cryptographers and protocol designers to specify what assumptions the TEE they use must satisfy in order to support the correctness and security of their protocol.
To this end, this paper provides a more careful treatment of trusted execution than the existing literature, focusing on the capabilities of enclaves and adversaries. Our goal is to provide meaningful patterns for comparing different classes of TEEs, particularly how a weaker TEE functionality can implement a stronger one given an appropriate mechanism to bridge the two. We introduce a new, "modular" definition of TEEs that captures a broad range of pre-existing functionalities defined in the literature while maintaining their high level of abstraction. While our goal is not to directly model implementations of specific commercial TEE providers, our modular definition provides a way to capture more meaningful and realistic hardware capabilities. We propose to characterise TEE capabilities along the following dimensions:
- the set of trusted features available to the enclave;
- the set of possible attacks on an enclave;
- the content of attestation signatures.
We then define various possible ideal modular $G_{\mathsf{att}}$ functionality instantiations that capture existing variants in the literature. Finally, we conclude the paper by constructing a protocol template to realise stronger $G_{\mathsf{att}}$ setups from weaker ones, and provide an example of removing an attack.
Extreme Algebraic Attacks
When designing filter functions in Linear Feedback Shift Register (LFSR) based stream ciphers, algebraic criteria of Boolean functions such as the Algebraic Immunity (AI) become key characteristics because they guarantee the security of ciphers against powerful algebraic attacks. In this article, we investigate a generalization of the algebraic attacks proposed by Courtois and Meier on filtered LFSRs twenty years ago. We consider how the standard algebraic attack can be generalized beyond filtered LFSRs to stream ciphers applying a Boolean filter function to an updated state. Depending on the update process, we can use different sets of annihilators than those used in the standard algebraic attack; this leads to a generalization of the concept of algebraic immunity and to more efficient attacks. To illustrate these strategies, we focus on one of these generalizations and introduce a new notion called Extreme Algebraic Immunity (EAI).
We perform a theoretical study of the EAI criterion and explore its relation to other algebraic criteria. We prove an upper bound on the EAI of an $n$-variable Boolean function and further show that the EAI can be lower bounded by the AI restricted to a subset, as defined by Carlet, Méaux and Rotella at FSE 2017. We also exhibit functions whose EAI is guaranteed to be lower than their AI; in particular, we highlight a pathological case of functions with optimal algebraic immunity but EAI only $n/4$. As applications, we determine the EAI of the filter functions of some existing stream ciphers and discuss how extreme algebraic attacks using EAI could apply to some ciphers.
Our generalized algebraic attack does not give a better complexity than Courtois and Meier's result on existing stream ciphers. However, we see this work as a study to help avoid such weaknesses in future stream cipher designs.
Separating Broadcast from Cheater Identification
Secure Multiparty Computation (MPC) protocols that achieve Identifiable Abort (IA) guarantee that, if honest parties are denied output, they will be notified of the identity of at least one corrupt party responsible for the abort. Cheater identification provides recourse in the event of a protocol failure, and in some cases can even be preferred over Guaranteed Output Delivery. However, protocols in the literature typically make use of broadcast as a necessary tool in identifying cheaters. In many deployments, the broadcast channel itself may be the most expensive component.
In this work, we investigate if it is inherent that MPC with IA should bear the full complexity of broadcast. As the implication of broadcast from IA has been established in previous work, we relax our target to circumvent this connection: we allow honest parties to differ in which cheaters they identify, nonetheless retaining the ability to prove claims of cheating to an auditor.
We show that in the honest majority setting, our notion of Provable Identifiable Selective Abort (PISA) can be achieved without a traditional broadcast channel. Indeed, broadcast in this setting---which we term Broadcast with Selective Identifiable Abort (BC-IA)---is achievable in only two point-to-point rounds with a simple echoing technique. On the negative side, we also prove that BC-IA is impossible to achieve in the dishonest majority setting.
As a general result, we show that any MPC protocol that achieves IA with $r$ broadcasts can be compiled to one that achieves PISA with $2(r+1)$ point-to-point rounds. As a practical application of this methodology, we design, implement, and benchmark a six-round honest-majority threshold ECDSA protocol that achieves PISA and can be deployed in any environment with point-to-point communication alone.
Post-Quantum Privacy for Traceable Receipt-Free Encryption
Traceable Receipt-free Encryption (TREnc) has recently been introduced as a verifiable public-key encryption primitive endowed with a unique security model. In a nutshell, TREnc allows randomizing ciphertexts in transit in order to remove any subliminal information, up to a public trace that ensures the non-malleability of the underlying plaintext. A remarkable property of TREnc is the indistinguishability of the randomization of chosen ciphertexts against traceable chosen-ciphertext attacks (TCCA). The main application lies in voting systems: voters can encrypt their votes, trace whether a published ballot takes their choices into account, and yet are prevented from proving how they voted. While being a very promising primitive, the few existing TREnc mechanisms rely solely on discrete-logarithm-related assumptions, making them vulnerable to the well-known record-now/decrypt-later attack in anticipation of quantum computers.
We address this limitation by building the first TREnc whose privacy withstands the advent of quantum adversaries in the future. To design our construction, we first generalize the original TREnc primitive, which is too restrictive to be easily compatible with built-in lattice-based semantically-secure encryption. Our more flexible model keeps all the ingredients that generically imply receipt-free voting. Our instantiation relies on Ring Learning With Errors (RLWE) together with pairing-based statistical zero-knowledge simulation-sound Groth-Sahai proofs, and further enjoys a public-coin common reference string, removing the need for a trusted setup.
Black-Box Registered ABE from Lattices
This paper presents the first black-box registered ABE for circuits from lattices. Its selective security is based on the evasive LWE assumption [EUROCRYPT'22, CRYPTO'22]. The only prior Reg-ABE scheme from lattices is derived from a non-black-box construction based on function-binding hash functions and witness encryption [CRYPTO'23]. Technically, we first extend the black-box registration-based encryption from standard LWE [CRYPTO'23] so that a public key can be registered together with a function; this yields an LWE-based Reg-ABE with ciphertexts of size $L \cdot \mathsf{polylog}(L)$, where $L$ is the number of users. We then exploit the special structure of its ciphertext to reduce its size to $\mathsf{polylog}(L)$ via an algebraic obfuscator based on evasive LWE [CRYPTO'24].
Concretely Efficient Private Set Union via Circuit-based PSI
Private set intersection (PSI) is a type of private set operation (PSO) for which concretely efficient linear-complexity protocols do exist. However, the situation is currently less satisfactory for other relevant PSO problems such as private set union (PSU): For PSU, the most promising protocols either rely entirely on computationally expensive public-key operations or suffer from substantial communication overhead.
In this work, we present the first PSU protocol that is mainly based on efficient symmetric-key primitives yet enjoys comparable communication as public-key-based alternatives. Our core idea is to re-purpose state-of-the-art circuit-based PSI to realize a multi-query reverse private membership test (mq-RPMT), which is instrumental for building PSU. We carefully analyze a privacy leakage issue resulting from the hashing paradigm commonly utilized in circuit-based PSI and show how to mitigate this via oblivious pseudorandom function (OPRF) and new shuffle sub-protocols. Our protocol is modularly designed as a sequential execution of different building blocks that can be easily replaced by more efficient variants in the future, which will directly benefit the overall performance.
We implement our resulting PSU protocol, showing a run-time improvement of 10% over the state-of-the-art public-key-based protocol of Chen et al. (PKC'24) for input sets of size $2^{20}$. Furthermore, we improve communication by $1.6\times$ over the state-of-the-art symmetric-key-based protocol of Zhang et al. (USENIX Sec'23).
Cryptojacking detection using local interpretable model-agnostic explanations
Cryptojacking, the unauthorised use of computing resources to mine cryptocurrency, has emerged as a critical threat in today’s digital landscape. These attacks not only compromise system integrity but also result in increased costs, reduced hardware lifespan, and heightened network security risks. Early and accurate detection is essential to mitigate the adverse effects of cryptojacking. This study focuses on developing a semi-supervised machine learning (ML) approach that leverages an autoencoder for feature extraction and a random forest (RF) model for classification. The objective is to enhance cryptojacking detection while maintaining a balance between accuracy and interpretability. The proposed methodology is further enhanced with explainable artificial intelligence (XAI) techniques such as local interpretable model-agnostic explanations (LIME) to offer insights into model predictions. Results from datasets such as UGRansome and BitcoinHeist indicate that the semi-supervised approach achieves accuracy rates ranging from 70% to 99%. The study demonstrates that the proposed model provides an efficient, interpretable, and scalable solution for real-time cryptojacking detection across various scenarios.
ProbeShooter: A New Practical Approach for Probe Aiming
Electromagnetic side-channel analysis is a powerful method for monitoring processor activity and compromising cryptographic systems in air-gapped environments. As analytical methodologies and target devices evolve, the importance of leakage localization and probe aiming becomes increasingly apparent for capturing only the desired signals with a high signal-to-noise ratio. Despite its importance, there remains substantial reliance on unreliable heuristic approaches and inefficient exhaustive searches. Furthermore, related studies often fall short in terms of feasibility, practicality, and performance, and are limited to controlled DUTs and low-end MCUs.
To address the limitations and inefficiencies of the previous approaches, we propose a novel methodology―${\rm P{\tiny ROBE}S{\tiny HOOTER}}$―for leakage localization and probe aiming. This approach leverages new insights into the spatial characteristics of amplitude modulation and intermodulation distortion in processors. As a result, ${\rm P{\tiny ROBE}S{\tiny HOOTER}}$ provides substantial improvements in various aspects: $\boldsymbol 1)$ it is applicable not only to simple MCUs but also to complex SoCs, $\boldsymbol 2)$ it effectively handles multi-core systems and dynamic frequency scaling, $\boldsymbol 3)$ it works with uncontrollable DUTs, making it viable for constrained real-world attacks, and $\boldsymbol 4)$ it performs significantly faster than previous methods. To demonstrate this, we experimentally evaluate ${\rm P{\tiny ROBE}S{\tiny HOOTER}}$ on a high-end MCU (the NXP i.MX RT1061 featuring a single ARM Cortex-M7 core) and a complex SoC (the Broadcom BCM2711 that equips the Raspberry Pi 4 Model B, featuring four ARM Cortex-A72 cores).
Theoretical Approaches to Solving the Shortest Vector Problem in NP-Hard Lattice-Based Cryptography with Post-SUSY Theories of Quantum Gravity in Polynomial Time
The Shortest Vector Problem (SVP) is a cornerstone of lattice-based cryptography, underpinning the security of numerous cryptographic schemes like NTRU. Given its NP-hardness, efficient solutions to SVP have profound implications for both cryptography and computational complexity theory. This paper presents an innovative framework that integrates concepts from quantum gravity, noncommutative geometry, spectral theory, and post-SUSY particle physics to address SVP. By mapping high-dimensional lattice points to spin foam networks and by means of Hamiltonian engineering, it is theoretically possible to devise new algorithms that leverage the interactions of topologically protected Majorana fermions with the gravitational field, through the spectral action principle, to loop through these spin foam networks, where SVP vectors could then be encoded onto the spectrum of the corresponding Dirac-like dilation operators within the system. We establish a novel approach that leverages post-SUSY physics and theories of quantum gravity to achieve algorithmic speedups beyond those expected of conventional quantum computers. This interdisciplinary methodology not only proposes potential polynomial-time algorithms for SVP but also bridges gaps between theoretical physics and cryptographic applications, providing further insights into the Riemann Hypothesis (RH) and the Hilbert-Polya Conjecture.
Quantum-safe Signatureless DNSSEC
We present $\mathsf{SL\text{-}DNSSEC}$: a backward-compatible protocol that leverages a quantum-safe KEM and a MAC to perform signature-less $\mathsf{(SL)}$ DNSSEC validations in a single UDP query/response style. Our experiments targeting NIST level I security for QTYPE A query resolution show that $\mathsf{SL\text{-}DNSSEC}$ is practically equivalent to the presently deployed RSA-2048 in terms of bandwidth usage and resolution speeds. Compared to post-quantum signatures, $\mathsf{SL\text{-}DNSSEC}$ reduces bandwidth consumption and resolution times by up to $95\%$ and $60\%$, respectively. Moreover, with response size $<$ query size $\leq 1232$ bytes, $\mathsf{SL\text{-}DNSSEC}$ obviates the long-standing issues of IP fragmentation, TCP re-transmits and DDoS amplification attacks.
On the gap between terms in an addition chain
In this paper, we study the distribution of the \textit{gap} between terms in an addition chain. In particular, we show that if $1,2,\ldots,s_{\delta(n)}=n$ is an addition chain of length $\delta(n)$ leading to $n$, then $$\underset{1\leq l\leq \delta(n)}{\mathrm{sup}}(s_{l+k}-s_l)\gg k\frac{n}{\delta(n)}$$ and $$\underset{1\leq l\leq \delta(n)}{\mathrm{inf}}(s_{l+k}-s_l)\ll k\frac{n}{\delta(n)}$$ for fixed $k\geq 1$.
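To make the quantities concrete, the following sketch builds a standard binary (double-and-add) addition chain, which is not a construction from this paper, and evaluates the gaps $s_{l+k}-s_l$ whose extremes the bounds above concern.

```python
def binary_addition_chain(n: int) -> list[int]:
    # Standard left-to-right double-and-add chain for n (each term is a sum of
    # two earlier terms); used only to illustrate the quantities in the bounds
    # above, not a construction from the paper.
    chain = [1]
    for bit in bin(n)[3:]:             # iterate over the bits after the leading 1
        chain.append(chain[-1] * 2)    # doubling step
        if bit == "1":
            chain.append(chain[-1] + 1)
    return chain

def gap_extremes(chain: list[int], k: int = 1) -> tuple[int, int]:
    # sup and inf of s_{l+k} - s_l over the chain, for a fixed k >= 1.
    gaps = [chain[l + k] - chain[l] for l in range(len(chain) - k)]
    return max(gaps), min(gaps)

n = 1_000_003
chain = binary_addition_chain(n)
print(len(chain) - 1, gap_extremes(chain, k=1))   # chain length and (sup, inf)
```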
Advancements in Distributed RSA Key Generation: Enhanced Biprimality Tests
RSA is widely used in modern cryptographic practice, and certain RSA-based protocols rely on the secrecy of $p$ and $q$. A common approach is to use secure multiparty computation so that $p$ and $q$ remain private.
Among distributed RSA modulus generation protocols, the biprimality test for Blum integers $N=pq$, where $p\equiv q\equiv 3 \pmod 4$ are primes, proposed by Boneh and Franklin (2001) is the most commonly used. Over the past 20 years, the worst-case acceptance rate of this test has been consistently assumed to be $1/2$ under the condition $\gcd(pq,p+q-1)=1$. This paper demonstrates that the acceptance probability of the Boneh-Franklin test is at most $1/4$, rather than $1/2$, except in the specific case $p = q = 3$. Notably, $1/4$ is shown to be the tightest upper bound. This result substantially improves the practical effectiveness of the Boneh-Franklin test: achieving the same level of soundness for the RSA modulus now requires only half the iterations previously considered necessary.
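For concreteness, below is a centralised (non-distributed) sketch of the Boneh-Franklin biprimality check discussed above; in the actual protocols the exponent is derived from secret shares of $p$ and $q$, whereas here $p+q$ is supplied in the clear purely for illustration.

```python
import random

def jacobi(a: int, n: int) -> int:
    # Jacobi symbol (a/n) for odd n > 0, via the standard binary algorithm.
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def boneh_franklin_test(N: int, p_plus_q: int, rounds: int = 40) -> bool:
    # For a Blum integer N = p*q with p = q = 3 (mod 4), every gamma with
    # Jacobi symbol (gamma/N) = 1 satisfies gamma^((N - p - q + 1)/4) = +-1 mod N.
    # In the distributed protocol this exponent is computed from shares of p
    # and q; here p + q is passed in the clear purely for illustration.
    e = (N - p_plus_q + 1) // 4
    for _ in range(rounds):
        gamma = random.randrange(2, N - 1)
        if jacobi(gamma, N) != 1:
            continue
        if pow(gamma, e, N) not in (1, N - 1):
            return False    # certainly not a product of two Blum primes
    return True             # accepted in every round

p, q = 1019, 1031            # primes congruent to 3 mod 4
print(boneh_franklin_test(p * q, p + q))   # expected: True
```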
Furthermore, we propose a generalized biprimality test based on the Lucas sequence. In the worst case, the acceptance rate of the proposed test is at most $1/4 + 1.25/(p_{\min} -3)$, where $p_{\min}$ is the smallest prime factor of $N$. To validate our approach, we implemented a variant of the Miller-Rabin test, the Boneh-Franklin test, and our proposed test, and performed pairwise comparisons of their effectiveness. Simulations indicate that the proposed test is generally more efficient than the Boneh-Franklin test in detecting cases where $N$ is not an RSA modulus. Additionally, the test is applicable to generating RSA moduli for arbitrary odd primes $p,q$.
A corresponding protocol is developed for this test, shown to be resilient against semi-honest adversaries, and shown to be applicable to most known distributed RSA modulus generation protocols. A thorough analysis and comparison with well-known protocols for Blum integers, including the protocol of Burkhardt et al. (CCS 2023) and the Boneh-Franklin protocol, shows that our protocol is competitive for generating distributed RSA moduli.
ABLE: Optimizing Mixed Arithmetic and Boolean Garbled Circuit
Privacy and security have become critical priorities in many scenarios. Privacy-preserving computation (PPC) is a powerful solution that allows functions to be computed directly on encrypted data. Garbled circuit (GC) is a key PPC technology that enables secure, confidential computing. GC comes in two forms: Boolean GC supports all operations by expressing functions as logic circuits; arithmetic GC is a newer technique to efficiently compute a set of arithmetic operations like addition and multiplication. Mixed GC combines both Boolean and arithmetic GC, in an attempt to optimize performance by computing each function in its most natural form. However, this introduces additional costly conversions between the two forms. It remains unclear if and when the efficiency gains of arithmetic GC outweigh these conversion costs.
In this paper, we present Arithmetic Boolean Logic Exchange for Garbled Circuit (ABLE), the first real implementation of mixed GC. ABLE profiles the performance of Boolean and arithmetic GC operations along with their conversions. We assess not only communication but also computation latency, a crucial factor in evaluating the end-to-end runtime of GC. Based on these insights, we propose a method to determine whether it is more efficient to use general Boolean GC or mixed GC for a given application. Rather than implementing both approaches to identify the better solution for each unique case, our method enables users to select the most suitable GC realization early in the design process. This method evaluates whether the benefits of transitioning operations from Boolean to arithmetic GC offset the associated conversion costs. We apply this approach to a neural network task as a case study.
Implementing ABLE reveals opportunities for further GC optimization. We propose a heuristic to reduce the number of primes in arithmetic GC, cutting communication by 14.1% and compute latency by 15.7% in a 16-bit system. Additionally, we optimize mixed GC conversions with a row reduction technique, achieving a 48.6% reduction in garbled table size for bit-decomposition and a 50% reduction for the bit-composition operation. These improvements reduce communication overhead in stream GC and free up storage in the GC-with-preprocessing approach. We open-source our code for community use.
Time-Lock Puzzles from Lattices
Time-lock puzzles (TLP) are a cryptographic tool that allow one to encrypt a message into the future, for a predetermined amount of time $T$. At present, we have only two constructions with provable security: One based on the repeated squaring assumption and the other based on obfuscation. Basing TLP on any other assumption is a long-standing question, further motivated by the fact that known constructions are broken by quantum algorithms.
In this work, we propose a new approach to construct time-lock puzzles based on lattices, and therefore with plausible post-quantum security. We obtain the following main results:
* In the preprocessing model, where a one-time public-coin preprocessing is allowed, we obtain a time-lock puzzle with encryption time $\log(T)$.
* In the plain model, where the encrypter does all the computation, we obtain a time-lock puzzle with encryption time $\sqrt{T}$.
Both constructions assume the existence of any sequential function $f$, and the hardness of the circular small-secret learning with errors (LWE) problem. At the heart of our results is a new construction of succinct randomized encodings (SRE) for $T$-folded repeated circuits, where the complexity of the encoding is $\sqrt{T}$. This is the first construction of SRE where the overall complexity of the encoding algorithm is sublinear in the runtime $T$, and which is not based on obfuscation. As a direct corollary, we obtain a non-interactive RAM delegation scheme with sublinear complexity (in the number of steps $T$).
The Meta-Complexity of Secret Sharing
A secret-sharing scheme allows the distribution of a secret $s$ among $n$ parties, such that only certain predefined “authorized” sets of parties can reconstruct the secret, while all other “unauthorized” sets learn nothing about $s$. The collection of authorized/unauthorized sets is defined by a monotone function $f: \{0,1\}^n \rightarrow \{0,1\}$. It is known that any monotone function can be realized by a secret-sharing scheme; thus, the smallest achievable \emph{total share size}, $S(f)$, serves as a natural complexity measure.
In this paper, we initiate the study of the following meta-complexity question: Given a monotone function $f$, is it possible to efficiently distinguish between cases where the secret-sharing complexity of $f$ is small versus large? We examine this question across several computational models, yielding the following main results.
(Hardness for formulas and circuits): Given a monotone formula $f$ of size $L$, it is coNP-hard to distinguish between ``cheap'' functions, where the maximum share size is 1 bit and the total share size is $O(L^{0.01})$, and ``expensive'' functions, where the maximum share size is $\Omega(\sqrt{L})$ and the total share size is $\Omega(L/\log L)$.
This latter bound nearly matches known secret-sharing constructions yielding a total share size of $L$ bits. For monotone circuits, we strengthen the bound on the expensive case to a maximum share size of $\Omega(L/\log L)$ and a total share size of $\Omega(L^2/\log L)$. These results rule out the existence of instance-optimal compilers that map a formula $f$ to a secret-sharing scheme with complexity polynomially related to $S(f)$.
(Hardness for truth tables): Under cryptographic assumptions, either (1) every $n$-bit slice function can be realized by a $\text{poly}(n)$-size secret-sharing scheme, or (2) given a truth-table representation of $f$ of size $N = 2^n$, it is computationally infeasible to distinguish in time $\text{poly}(N)$ between cases where $S(f) = \text{poly}(n)$ and $S(f) = n^{\omega(1)}$. Option (1) would be considered a breakthrough result, as the best-known construction for slices has a sub-exponential complexity of $2^{\tilde{O}(\sqrt{n})}$ (Liu, Vaikuntanathan, and Wee; Eurocrypt 2018). Our proof introduces a new worst-case-to-average-case reduction for slices, which may be of independent interest.
(Hardness for graphs): We examine the simple case where $f$ is given as a 2-DNF, represented by a graph $G$ whose edges correspond to 2-terms, and ask whether it is possible to distinguish between cases where the share size is constant and those where the share size is large, say $\Omega(\log n)$. We establish several connections between this question and questions in communication complexity. For instance, we show that graphs admitting constant-cost secret sharing form a subclass of graphs with constant randomized communication complexity and constant-size adjacency sketches (Harms, Wild, and Zamaraev; STOC 2022). We leverage these connections to establish new lower bounds for specific graph families, derive a combinatorial characterization of graphs with constant-size linear secret-sharing schemes, and show that a natural class of myopic algorithms fails to distinguish cheap graphs from expensive ones.
Use of Simple Arithmetic Operations to Construct Efficiently Implementable Boolean functions Possessing High Nonlinearity and Good Resistance to Algebraic Attacks
We describe a new class of Boolean functions which provide the presently best known trade-off between low computational complexity, nonlinearity and (fast) algebraic immunity. In particular, for $n\leq 20$, we show that there are functions in the family achieving a combination of nonlinearity and (fast) algebraic immunity which is superior to what is achieved by any other efficiently implementable function. The main novelty of our approach is to apply a judicious combination of simple integer and binary field arithmetic to Boolean function construction.
IND-CPA$^{\text{C}}$: A New Security Notion for Conditional Decryption in Fully Homomorphic Encryption
Fully Homomorphic Encryption (FHE) allows a server to perform computations directly over encrypted data. In general FHE protocols, the client is tasked with decrypting the computation result using its secret key. However, certain FHE applications benefit from the server knowing this result, especially without the aid of the client. Providing the server with the secret key would allow it to decrypt all the data, including the client's private input. Protocols such as that of Goldwasser et al. (STOC'13) have shown that it is possible to provide the server with a conditional decryption capability that allows it to decrypt the result of some pre-defined computation and nothing else. While this capability helps an honest-but-curious server provide better services, a malicious server may exploit it to mount active attacks on the overall FHE application and leak secret information. Existing security notions fail to capture this scenario since they assume that only the client has the ability to decrypt FHE ciphertexts. Therefore, in this paper, we propose a new security notion named IND-CPA$^{\text{C}}$, which provides the adversary with access to a conditional decryption oracle. We then show that none of the practical exact FHE schemes are secure under this notion by demonstrating a generic attack that utilizes this restricted decryption capability to recover the underlying errors in the FHE ciphertexts. Since the security guarantee of the underlying (R)LWE hardness problem collapses once these error values leak, we obtain a full key recovery attack. Finally, we propose a technique to convert any IND-CPA secure FHE scheme into an IND-CPA$^\text{C}$ secure FHE scheme. Our technique utilizes a Succinct Non-Interactive Argument of Knowledge, where the server is forced to generate a proof of honest computation of the requested function along with the computation result. During decryption, the proof is verified, and the decryption proceeds only if the verification holds. Both the verification and decryption steps run inside a Garbled Circuit and thus are not observable or controllable by the server.
NICE-PAKE: On the Security of KEM-Based PAKE Constructions without Ideal Ciphers
The interest in realizing generic PQC KEM-based PAKEs has increased significantly in the last few years. One such PAKE is the CAKE protocol, proposed by Beguinet et al. (ACNS ’23). However, despite its simple design based on the well-studied PAKE protocol EKE by Bellovin and Merritt (IEEE S&P ’92), both CAKE and its variant OCAKE do not fully protect against quantum adversaries, as they rely on the Ideal Cipher (IC) model. Related and follow-up works, including Pan and Zeng (ASIACRYPT ’23), Dos Santos et al. (EUROCRYPT ’23), Alnahawi et al. (CANS ’24), and Arragia et al. (IACR ’24/308), although touching on that issue, still rely on an IC. Considering the lack of a quantum IC model and the difficulty of using the classical IC to achieve secure instantiations on public keys in general, and in PQC in particular, we set out to eliminate it from PAKE design. In this paper, we present the No IC Encryption (NICE)-PAKE, a (semi)-generic PAKE framework providing a quantum-safe alternative to the IC, utilizing simpler cryptographic components for the authentication step. To give a formal proof for our construction, we introduce the notions of A-Part-Secrecy (A-SEC-CCA), Splittable Collision Freeness (A-CFR-CCA) and Public Key Uniformity (SPLIT-PKU) for splittable LWE KEMs. We show the relation of the former to the Non-uniform LWE and the Weak Hint LWE assumptions, as well as its application to ring and module LWE. Notably, this side quest led to some surprising discoveries, as we concluded that the new notion is not directly interchangeable between the LWE variants, or at least not in a straightforward manner. Further, we show that our approach requires some tedious tweaking of the parameter choices in both FrodoKEM and CRYSTALS-Kyber to obtain a secure PAKE construction. We also address some fundamental issues with the common IC usage and identify differences between lattice KEMs regarding their suitability for generic PQC PAKEs, especially regarding the structure of their public keys. We believe that this work marks a further step towards achieving complete security against quantum adversaries in PQC PAKEs.
Registered ABE and Adaptively-Secure Broadcast Encryption from Succinct LWE
Registered attribute-based encryption (ABE) is a generalization of public-key encryption that enables fine-grained access control to encrypted data (like standard ABE), but without needing a central trusted authority. In a key-policy registered ABE scheme, users choose their own public and private keys and then register their public keys together with a decryption policy with an (untrusted) key curator. The key curator aggregates all of the individual public keys into a short master public key which serves as the public key for an ABE scheme.
Currently, we can build registered ABE for restricted policies (e.g., Boolean formulas) from pairing-based assumptions and for general policies using witness encryption or indistinguishability obfuscation. In this work, we construct a key-policy registered ABE for general policies (specifically, bounded-depth Boolean circuits) from the $\ell$-succinct learning with errors (LWE) assumption in the random oracle model. The ciphertext size in our registered ABE scheme is $\mathsf{poly}(\lambda, d)$, where $\lambda$ is a security parameter and $d$ is the depth of the circuit that computes the policy circuit $C$. Notably, this is independent of the length of the attribute $\mathbf{x}$ and is optimal up to the $\mathsf{poly}(d)$ factor.
Previously, the only lattice-based instantiation of registered ABE uses witness encryption, which relies on private-coin evasive LWE, a stronger assumption than $\ell$-succinct LWE. Moreover, the ciphertext size in previous registered ABE schemes that support general policies (i.e., from obfuscation or witness encryption) scales with $\mathsf{poly}(\lambda, |\mathbf{x}|, |C|)$. The ciphertext size in our scheme depends only on the depth of the circuit (and not the length of the attribute or the size of the policy). This enables new applications to identity-based distributed broadcast encryption.
Our techniques are also useful for constructing adaptively-secure (distributed) broadcast encryption, and we give the first scheme from the $\ell$-succinct LWE assumption in the random oracle model. Previously, the only lattice-based broadcast encryption scheme with adaptive security relied on witness encryption in the random oracle model. All other lattice-based broadcast encryption schemes only achieved selective security.
Bitwise Garbling Schemes --- A Model with $\frac{3}{2}\kappa$-bit Lower Bound of Ciphertexts
At Eurocrypt 2015, Zahur, Rosulek, and Evans proposed the model of Linear Garbling Schemes. This model proved a $2\kappa$-bit lower bound on ciphertexts for a broad class of garbling schemes. At Crypto 2021, Rosulek and Roy presented the innovative "three-halves" garbling scheme, in which AND gates cost $1.5\kappa+5$ bits and XOR gates are free. A noteworthy aspect of their scheme is the slicing-and-dicing technique, which is applicable universally to all AND gates when garbling a Boolean circuit. Following this work, Rosulek and Roy presented several open problems. Our research primarily addresses one of them: ``Is $1.5\kappa$ bits optimal for garbled AND gates in a more inclusive model than Linear Garbling Schemes?''
In this paper, we propose Bitwise Garbling Schemes. Our key finding is that $1.5\kappa$ bits is indeed optimal for arbitrary garbled AND gates in our model. Moreover, we prove the necessity of the free-XOR technique: if free-XOR is forbidden, we prove a $2\kappa$-bit lower bound.
As an extension, we apply our idea to construct a model for fan-in-3 gates. Somewhat unexpectedly, we prove a $\frac{7}{4}\kappa$-bit lower bound. Unfortunately, the corresponding construction is not suitable for 3-input AND gates. This construction may be of independent interest.
Adaptor Signatures: New Security Definition and A Generic Construction for NP Relations
An adaptor signature (AS) scheme is an extension of digital signatures that allows the signer to generate a pre-signature for an instance of a hard relation. This pre-signature can later be adapted to a full signature with a corresponding witness. Meanwhile, the signer can extract a witness from both the pre-signature and the signature. AS have recently garnered more attention due to their scalability and interoperability. Dai et al. [INDOCRYPT 2022] proved that AS can be constructed for any NP relation using a generic construction. However, their construction has a shortcoming: the associated witness is exposed by the adapted signature. This flaw limits the applications of AS, even in its motivating setting, i.e., blockchain, where the adapted signature is typically uploaded to the blockchain and is public to everyone.
To address this issue, in this work we augment the security definition of AS by a natural property which we call witness hiding. We then prove the existence of AS for any NP relation, assuming the existence of one-way functions. Concretely, we propose a generic construction of witness-hiding AS from signatures and a weak variant of trapdoor commitments, which we term trapdoor commitments with a specific adaptable message. We instantiate the latter based on the Hamiltonian cycle problem. Since the Hamiltonian cycle problem is NP-complete, we can obtain witness-hiding adaptor signatures for any NP relation.
Voting with coercion resistance and everlasting privacy using linkable ring signatures
We propose an e-voting protocol based on a novel linkable ring signature scheme with unconditional anonymity. In our system, all voters create private credentials and register their public counterparts. To vote, they create a ring (anonymity set) consisting of public credentials, together with a proof of knowledge of their secret credential via our signature. Its unconditional anonymity prevents an attacker, no matter how powerful, from deducing the identity of the voter, thus attaining everlasting privacy. Additionally, our protocol provides coercion resistance in the JCJ framework: when an adversary tries to coerce a voter, the voter can evade the attack by creating a signature with a fake but indistinguishable credential and, during a moment of privacy, cast their real vote. Our scheme also provides verifiability and ballot secrecy.
Semi-Quantum Tokenized Signatures
Quantum tokenized signature schemes (Ben-David and Sattath, QCrypt 2017) allow a sender to generate and distribute quantum unclonable states which grant their holder a one-time permission to sign in the name of the sender. Such schemes are a strengthening of public-key quantum money schemes, as they imply public-key quantum money where some channels of communication in the system can be made classical.
An even stronger primitive is semi-quantum tokenized signatures, where the sender is classical and can delegate the generation of the token to a (possibly malicious) quantum receiver. Semi-quantum tokenized signature schemes imply a powerful version of public-key quantum money satisfying two key features:
1. The bank is classical and the scheme can execute on a completely classical communication network. In addition, the bank is \emph{stateless} and, after the creation of a banknote, does not hold any information or trapdoors except the balance of accounts in the system. Such a quantum money scheme solves the main open problem presented by Radian and Sattath (AFT 2019).
2. Furthermore, the classical-communication transactions between users in the system are \emph{direct} and do not need to go through the bank. This enables the transactions to be both classical and private.
While fully-quantum tokenized signatures (where the sender is quantum and generates the token by itself) are known based on quantum-secure indistinguishability obfuscation and injective one-way functions, the semi-quantum version was previously not known under any computational assumption. In this work we construct a semi-quantum tokenized signature scheme based on quantum-secure indistinguishability obfuscation and the sub-exponential hardness of the Learning with Errors problem.
In the process, we show new properties of quantum coset states and a new hardness result on indistinguishability obfuscation of classical subspace membership circuits.
SoK: Time to be Selfless?! Demystifying the Landscape of Selfish Mining Strategies and Models
Selfish mining attacks present a serious threat to Bitcoin security, enabling a miner with less than 51% of the network hashrate to gain higher rewards than when mining honestly. A growing body of works has studied the impact of such attacks and presented numerous strategies under a variety of model settings. This led to a complex landscape with conclusions that are often exclusive to certain model assumptions. This growing complexity makes it hard to comprehend the state of the art and distill its impact and trade-offs.
In this paper, we demystify the landscape of selfish mining by presenting a systematization framework for existing studies and evaluating their strategies under realistic model adaptations. To the best of our knowledge, our work is the first of its kind. We develop a multi-dimensional systematization framework assessing prior works based on their strategy formulation and targeted models. We then distill a number of insights and gaps, clarifying open questions and understudied areas. Among them, we find that most studies target the block-reward setting, do not account for transaction fees, and generally study the single-attacker case. To bridge this gap, we evaluate many of the existing strategies in the no-block-reward setting—so miners' incentives come solely from transaction fees—for both the single- and multi-attacker scenarios. We also extend their models to include realistic honest-but-rational miners, showing how such adaptations can yield more performant strategy variations.
Siniel: Distributed Privacy-Preserving zkSNARK
Zero-knowledge Succinct Non-interactive Argument of Knowledge (zkSNARK) is a powerful cryptographic primitive, in which a prover convinces a verifier that a given statement is true without leaking any additional information. However, existing zkSNARKs suffer from high computation overhead in the proof generation. This limits the applications of zkSNARKs, such as private payments, private smart contracts, and anonymous credentials. Private delegation has become a prominent way to accelerate proof generation.
In this work, we propose Siniel, an efficient private delegation framework for zkSNARKs constructed from polynomial interactive oracle proofs (PIOP) and polynomial commitment schemes (PCS). Our protocol allows a computationally limited prover (a.k.a. delegator) to delegate its expensive prover computation to several workers without leaking any information about the private witness. Most importantly, compared with the recent work EOS (USENIX'23), the state-of-the-art zkSNARK prover delegation framework, a prover in Siniel need not engage in the MPC protocol after sending its shares of the private witness. This means that a Siniel prover can outsource the entire computation to the workers.
We compare Siniel with EOS and show significant performance advantages of the former. The experimental results show that, under low-bandwidth conditions (10 MBps), Siniel saves delegators about 16% of the time compared with EOS, whereas under high-bandwidth conditions (1000 MBps), Siniel saves about 80%.
UTRA: Universe Token Reusability Attack and Verifiable Delegatable Order-Revealing Encryption
As dataset sizes grow, users increasingly rely on encrypted data and secure range queries on cloud servers, raising privacy concerns about potential data leakage.
Order-revealing encryption (ORE) enables efficient operations on numerical datasets, and Delegatable ORE (DORE) extends this functionality to multi-client environments, but it faces risks of token forgery. Secure DORE (SEDORE) and Efficient DORE (EDORE) address some vulnerabilities, with EDORE improving speed and storage efficiency. However, we find that both schemes remain susceptible to token forgery.
To address this issue, we propose the concept of Verifiable Delegatable Order-Revealing Encryption (VDORE), with a formal definition of token unforgeability. We then construct a new VDORE scheme, $\mathsf{TUDORE}$ (Token-Unforgeable DORE), which ensures token unforgeability. Furthermore, our $\mathsf{TUDORE}$ achieves about a 1.5× speed-up in token generation compared to SEDORE and EDORE.
Extending Groth16 for Disjunctive Statements
The two most common ways to design non-interactive zero-knowledge (NIZK) proofs are based on Sigma ($\Sigma$)-protocols (an efficient way to prove algebraic statements) and zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs) (an efficient way to prove arithmetic statements). However, in cryptocurrency applications such as privacy-preserving credentials, privacy-preserving audits, and blockchain-based voting systems, zk-SNARKs for general statements are usually implemented together with encryption, commitment, or other algebraic cryptographic schemes. Moreover, zk-SNARKs for many different arithmetic statements may also need to be implemented together. A typical solution is to extend the zk-SNARK circuit to include the code for the algebraic part. However, complex cryptographic operations in the algebraic algorithms significantly increase the circuit size, which leads to impractically large proving time and CRS size. Thus, we need a proof system flexible enough for composite statements that include both algebraic and arithmetic components. Unfortunately, while the conjunction of zk-SNARKs is relatively natural and numerous effective solutions are currently available (e.g., by utilizing the commit-and-prove technique), the disjunction of zk-SNARKs is rarely discussed in detail.
In this paper, we focus on disjunctive statements for Groth16 and propose a Groth16 variant, CompGroth16, which provides a framework for Groth16 to prove disjunctive statements consisting of a mix of algebraic and arithmetic components. Specifically, CompGroth16 can be directly combined with a $\Sigma$-protocol, or even with another CompGroth16 instance, just like the logical composition of $\Sigma$-protocols. From this, we gain several desirable properties, such as broader expressiveness, better prover efficiency, and a shorter CRS. In addition, for the combination of CompGroth16 and $\Sigma$-protocols, we present two representative application scenarios to demonstrate the practicality of our construction.
Structural Results for Maximal Quaternion Orders and Connecting Ideals of Prime Power Norm in $B_{p,\infty}$
Fix odd primes $p, \ell$ with $p \equiv 3 \mod 4$ and $\ell \neq p$. Consider the rational quaternion algebra ramified at $p$ and $\infty$ with a fixed maximal order $\mathcal{O}_{1728}$. We give explicit formulae for bases of all cyclic norm $\ell^n$ ideals of $\mathcal{O}_{1728}$ and their right orders, in Hermite Normal Form (HNF). Further, in the case $\ell \mid p+1$, or more generally, $-p$ is a square modulo $\ell$, we derive a parametrization of these bases along paths of the $\ell$-ideal graph, generalising the results of [1]. With such orders appearing as the endomorphism rings of supersingular elliptic curves defined over $\overline{\mathbb{F}_{p}}$, we note several potential applications to isogeny-based cryptography including fast ideal sampling algorithms. We also demonstrate how our findings may lead to further structural observations, by using them to prove a result on the successive minima of endomorphism rings of supersingular curves defined over $\mathbb{F}_p$.
[1] = https://eprint.iacr.org/2025/033
Keyed-Verification Anonymous Credentials with Highly Efficient Partial Disclosure
An anonymous credential (AC) system with partial disclosure allows users to prove possession of a credential issued by an issuer while selectively disclosing a subset of their attributes to a verifier in a privacy-preserving manner. In keyed-verification AC (KVAC) systems, the issuer and verifier share a secret key. Existing KVAC schemes rely on computationally expensive zero-knowledge proofs during credential presentation, with the presentation size growing linearly with the number of attributes. In this work, we propose two highly efficient KVAC constructions that eliminate the need for zero-knowledge proofs during the credential presentation and achieve constant-size presentations.
Our first construction adapts the approach of Fuchsbauer et al. (JoC'19), which achieved constant-size credential presentation in a publicly verifiable setting using their proposed structure-preserving signatures on equivalence classes (SPS-EQ) and set commitment schemes, to the KVAC setting. We introduce structure-preserving message authentication codes on equivalence classes (SP-MAC-EQ) and designated-verifier set commitments (DVSC), resulting in a KVAC system with constant-size credentials (2 group elements) and presentations (4 group elements). To avoid the bilinear groups and pairing operations required by SP-MAC-EQ, our second construction uses a homomorphic MAC with a simplified DVSC. While this sacrifices constant-size credentials ($n+2$ group elements, where $n$ is the number of attributes), it retains constant-size presentations (2 group elements) in a pairingless setting.
We formally prove the security of both constructions and provide open-source implementation results demonstrating their practicality. We extensively benchmarked our KVAC protocols and, additionally, benchmarked the efficiency of our SP-MAC-EQ scheme against the original SPS-EQ scheme, showcasing significant performance improvements.
On Quantum Simulation-Soundness
Non-interactive zero-knowledge (NIZK) proof systems are a cornerstone of modern cryptography, but their security has received little attention in the quantum setting. Motivated by improving our understanding of this fundamental primitive against quantum adversaries, we propose a new definition of security against quantum adversaries. Specifically, we define the notion of quantum simulation soundness (SS-NIZK), which allows the adversary to access the simulator in superposition.
We show a separation between post-quantum and quantum security of SS-NIZK, and prove that Sahai’s construction for SS-NIZK (in the CRS model) can be made quantumly simulation-sound. As an immediate application of our new notion, we prove the security of the Naor-Yung paradigm in the quantum setting, with respect to a strong quantum IND-CCA security notion. This provides the quantum analogue of the classical dual-key approach to proving the security of encryption schemes. Along the way, we introduce a new notion of quantum-query advantage functions, which may serve as a general framework to show classical/quantum separations for other cryptographic primitives and may be of independent interest.
Designing a General-Purpose 8-bit (T)FHE Processor Abstraction
Making the most of TFHE programmable bootstrapping to evaluate functions or operators otherwise challenging to perform with only the native addition and multiplication of the scheme is a very active line of research. In this paper, we systematize this approach and apply it to build an 8-bit FHE processor abstraction, i.e., a software entity that works over FHE-encrypted 8-bit data and presents itself to the programmer by means of a conventional-looking assembly instruction set. In doing so, we provide several homomorphic LookUp Table (LUT) dereferencing operators based on variants of the tree-based method and show that they are the most efficient option for manipulating encryptions of 8-bit data (optimally represented as two base-16 digits). We then systematically apply this approach over a set of around 50 instructions, including, notably, conditional assignments, divisions, and fixed-point arithmetic operations. We further test the approach on several simple algorithms, including the execution of a neuron with a sigmoid activation function with 16-bit precision. We conclude the paper by comparing our work to the FHE compilers available in the state of the art. Finally, this work reveals that a very limited set of functional bootstrapping patterns is versatile and efficient enough to achieve general-purpose FHE computations beyond the boolean circuit approach. As such, these patterns may be an appropriate target for further works on advanced software optimizations or hardware implementations.
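As a rough, cleartext illustration of the radix-16 layout mentioned above (not the paper's homomorphic implementation), an 8-bit value can be split into two base-16 digits and an 8-bit LUT dereferenced in two levels from those digits; in the FHE setting each digit would be encrypted and the lookup performed with (tree-based) programmable bootstrapping. All names below are illustrative.

```python
# Cleartext sketch of the base-16 digit layout for 8-bit values and of a
# two-level LUT dereference driven by the two digits. Data layout only; the
# homomorphic tree-based LUT evaluation of the paper is not reproduced here.

def to_digits(x):
    """Split an 8-bit value into (low, high) base-16 digits."""
    assert 0 <= x < 256
    return x & 0xF, x >> 4

def eval_lut(table256, x):
    lo, hi = to_digits(x)
    # 16x16 view of the table, indexed first by the high digit, then the low one.
    table16x16 = [table256[h * 16: h * 16 + 16] for h in range(16)]
    return table16x16[hi][lo]

square_mod_256 = [(v * v) % 256 for v in range(256)]
assert eval_lut(square_mod_256, 13) == (13 * 13) % 256
```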
Bundled Authenticated Key Exchange: A Concrete Treatment of (Post-Quantum) Signal's Handshake Protocol
The Signal protocol relies on a special handshake protocol, formerly X3DH and now PQXDH, to set up secure conversations. Prior analyses of these protocols (or proposals for post-quantum alternatives) have all used highly tailored models to the individual protocols and generally made ad-hoc adaptations to "standard" AKE definitions, making the concrete security attained unclear and hard to compare between similar protocols. Indeed, we observe that some natural Signal handshake protocols cannot be handled by these tailored models. In this work, we introduce Bundled Authenticated Key Exchange (BAKE), a concrete treatment of the Signal handshake protocol. We formally model prekey bundles and states, enabling us to define various levels of security in a unified model. We analyze Signal's classically secure X3DH and harvest-now-decrypt-later-secure PQXDH, and show that they do not achieve what we call optimal security (as is documented). Next, we introduce RingXKEM, a fully post-quantum Signal handshake protocol achieving optimal security; as RingXKEM shares states among many prekey bundles, it could not have been captured by prior models. Lastly, we provide a security and efficiency comparison of X3DH, PQXDH, and RingXKEM.
Deletions and Dishonesty: Probabilistic Data Structures in Adversarial Settings
Probabilistic data structures (PDS) are compact representations of high-volume data that provide approximate answers to queries about the data. They are commonplace in today's computing systems, finding use in databases, networking and more. While PDS are designed to perform well under benign inputs, they are frequently used in applications where inputs may be adversarially chosen. This may lead to a violation of their expected behaviour, for example an increase in false positive rate.
In this work, we focus on PDS that handle approximate membership queries (AMQ). We consider adversarial users with the capability of making adaptive insertions, deletions and membership queries to AMQ-PDS, and analyse the performance of AMQ-PDS under such adversarial inputs.
We argue that deletions significantly empower adversaries, presenting a challenge to enforcing honest behaviour when compared to insertion-only AMQ-PDS. To address this, we introduce a new concept of an honest setting for AMQ-PDS with deletions. By leveraging simulation-based security definitions, we then quantify how much harm can be caused by adversarial users to the functionality of AMQ-PDS. Our resulting bounds only require calculating the maximal false positive probability and insertion failure probability achievable in our novel honest setting.
We apply our results to Cuckoo filters and Counting filters. We show how to protect these AMQ-PDS at low cost, by replacing or composing the hash functions with keyed pseudorandom functions in their construction. This strategy involves establishing practical bounds for the probabilities mentioned above. Using our new techniques, we demonstrate that achieving security against adversarial users making both insertions and deletions remains practical.
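To convey the flavour of the keyed-PRF hardening strategy described above, the following toy sketch (a Bloom-style filter, not the Cuckoo or Counting filter constructions analysed in the paper) replaces public hash functions with HMAC under a secret key, so that the cells an item maps to become unpredictable to adversarial users. Class and function names are illustrative.

```python
# Toy illustration of replacing a public hash function with a keyed PRF
# (here HMAC-SHA256) inside a Bloom-style filter. Illustrative sketch only,
# not the constructions or bounds analysed in the paper.
import hmac, hashlib, secrets

class KeyedBloom:
    def __init__(self, m_bits=1024, k_hashes=4):
        self.key = secrets.token_bytes(32)   # PRF key, kept secret from users
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)

    def _positions(self, item):
        for i in range(self.k):
            tag = hmac.new(self.key, bytes([i]) + item, hashlib.sha256).digest()
            yield int.from_bytes(tag[:8], "big") % self.m

    def insert(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def query(self, item):
        return all(self.bits[p] for p in self._positions(item))

f = KeyedBloom()
f.insert(b"alice")
assert f.query(b"alice")
```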
Compact Proofs of Partial Knowledge for Overlapping CNF Formulae
At CRYPTO '94, Cramer, Damgaard, and Schoenmakers introduced a general technique for constructing honest-verifier zero-knowledge proofs of partial knowledge (PPK), where a prover Alice wants to prove to a verifier Bob that she knows $\tau$ witnesses for $\tau$ claims out of $k$ claims, without revealing the indices of those $\tau$ claims. Their solution starts from a base honest-verifier zero-knowledge proof of knowledge $\Sigma$ and requires running $k$ executions of the base protocol in parallel, giving a complexity of $O(k\gamma(\Sigma))$, where $\gamma(\Sigma)$ is the communication complexity of the base protocol.
However, modern practical scenarios require communication-efficient zero-knowledge proofs tailored to handle partial knowledge in specific application-dependent formats.
In this paper we propose a technique to compose a large class of $\Sigma$-protocols for atomic statements into $\Sigma$-protocols for PPK over formulae in conjunctive normal form (CNF) that overlap, in the sense that there is a common subset of literals among all clauses of the formula.
In such formulae, the statement is expressed as a conjunction of $m$ clauses, each of which consists of a disjunction of $k$ literals (i.e., each literal is an atomic statement) and $\ell$ literals are shared among clauses.
The prover, for a threshold parameter $\tau \le k$, proves knowledge of at least $\tau$ witnesses for $\tau$ distinct literals in each clause.
At the core of our protocol is a new technique to compose $\Sigma$-protocols for regular CNF relations (i.e., when $\tau=1$) that exploits the overlap among clauses, and that we then generalize to formulae where $\tau>1$, providing improvements over state-of-the-art constructions.
VDORAM: Towards a Random Access Machine with Both Public Verifiability and Distributed Obliviousness
Verifiable random access machines (vRAMs) serve as a foundational model for expressing complex computations with provable security guarantees, supporting applications in areas such as secure electronic voting, financial auditing, and privacy-preserving smart contracts. However, no existing vRAM provides distributed obliviousness, a critical need in scenarios where multiple provers seek to prevent disclosure against both other provers and the verifiers.
Implementing a publicly verifiable distributed oblivious RAM (VDORAM) presents several challenges. Firstly, the development of VDORAM is hindered by the limited availability of sophisticated publicly verifiable multi-party computation (MPC) protocols. Secondly, the lack of readily available front-end implementations for multi-prover zero-knowledge proofs (ZKPs) poses a significant obstacle to developing practical applications. Finally, directly adapting existing RAM designs to the VDORAM paradigm may prove either impractical or inefficient due to the inherent complexities of reconciling oblivious computation with the generation of publicly verifiable proofs.
To address these challenges, we introduce CompatCircuit, the first multi-prover ZKP front-end implementation to our knowledge. CompatCircuit integrates collaborative zkSNARKs to implement publicly verifiable MPC protocols with rich functionalities beyond those of an arithmetic circuit, enabling the development of multi-prover ZKP applications. Building upon CompatCircuit, we present VDORAM, the first publicly verifiable distributed oblivious RAM. By combining distributed oblivious architectures with verifiable RAM, VDORAM achieves an efficient RAM design that balances communication overhead and proof generation time. We have implemented CompatCircuit and VDORAM in approximately 15,000 lines of code, demonstrating usability with a practical and efficient implementation. Our performance evaluation reveals that the system provides moderate performance while achieving distributed obliviousness.
Forking the RANDAO: Manipulating Ethereum's Distributed Randomness Beacon
Proof-of-stake consensus protocols often rely on distributed randomness beacons (DRBs) to generate randomness for leader selection. This work analyses the manipulability of Ethereum's DRB implementation, RANDAO, in its current consensus mechanism. Despite its efficiency, RANDAO remains vulnerable to manipulation through the deliberate omission of blocks from the canonical chain. Previous research has shown that economically rational players can withhold blocks (known as a block withholding attack, or selfish mixing) when the manipulated RANDAO outcome yields greater financial rewards.
We introduce and evaluate a new manipulation strategy, the RANDAO forking attack. Unlike block withholding, whereby validators opt to hide a block, this strategy relies on selectively forking out an honest proposer's block to maximize transaction fee revenues and block rewards.
In this paper, we draw attention to the fact that the forking attack is significantly more harmful than selfish mixing for two reasons. Firstly, it exacerbates the unfairness among validators. More importantly, it significantly undermines the reliability of the blockchain for the average user by frequently causing already published blocks to be forked out. By doing so, the attacker can fork the chain without losing slots, and we demonstrate that these are later fully compensated for. Our empirical measurements, investigating such manipulations on Ethereum mainnet, revealed no statistically significant traces of these attacks to date.
Blink: An Optimal Proof of Proof-of-Work
Designing light clients to securely and efficiently read Proof-of-Work blockchains has been a foundational problem since the inception of blockchains. Nakamoto themselves, in the original Bitcoin paper, presented the first client protocol, i.e., the Simplified Payment Verification (SPV), which consumes bandwidth, computation, and storage that grow linearly in the system's lifetime $\mathcal{C}$.
Today, the blockchain ecosystem is more mature and presents a variety of applications and protocols deployed on-chain and, often, cross-chain. In this landscape, light clients have become the cornerstone of decentralized bridges, playing a pivotal role in the security and efficiency of cross-chain operations.
These new use cases, combined with the growth of blockchains over time, raise the need for more minimalist clients, which further reduce the resource requirements and, when applicable, on-chain costs.
Over the years, the light client resource consumption has been reduced from $\mathcal{O}( \mathcal{C})$ to $\mathcal{O}(\text{polylog}( \mathcal{C}))$, and then down to $\mathcal{O}(1)$ with zero-knowledge techniques at the cost of often assuming a trusted setup.
In this paper, we present Blink, the first interactive provably secure $\mathcal{O}(1)$ PoW light client without trusted setup. Blink can be used for a variety of applications ranging from payment verification and bootstrapping, to bridges.
We prove Blink secure in the Bitcoin Backbone model, and we evaluate its proof size demonstrating that, at the moment of writing, Blink obtains a commitment to the current state of Bitcoin by downloading only 1.6KB, instead of 67.3MB and 197KB for SPV and zk-based clients, respectively.
A Framework for Generating S-Box Circuits with Boyar-Peralta Algorithm-Based Heuristics, and Its Applications to AES, SNOW3G, and Saturnin
In many lightweight cryptography applications, low area and latency are required for efficient implementation. The gate count in the cipher and the circuit depth must be low to minimize these two metrics. Many optimization strategies have been developed for the linear layer, led by the Boyar-Peralta (BP) algorithm. The Advanced Encryption Standard (AES) has been a focus of extensive research in this area. However, while the linear layer uses only XOR gates, the S-box, which is an essential nonlinear component in symmetric cryptography, uses various gate types, making optimization challenging, particularly as the bit size increases.
In this paper, we propose a new framework for a heuristic search to optimize the circuit depth or XOR gate count of S-box circuits. Existing S-box circuit optimization studies have divided the nonlinear and linear layers of the S-box and optimized each separately, but limitations remain in optimizing large S-box circuits. To extend the optimization target from individual internal components to the entire S-box circuit, we extract the XOR information of each node in the target circuit and reconstruct the nodes based on nonlinear gates. Next, we extend the BP algorithm-based heuristics to address nonlinear gates and incorporate this into the framework. Notably, our framework achieves these gains while leaving the AND gate count and AND depth unchanged.
To demonstrate the effectiveness of the proposed framework, we apply it to the AES, SNOW3G, and Saturnin S-box circuits. Our results include depth improvements of about 40% and 11% compared to the existing AES S-box [BP10] and Saturnin super S-box [CDL+20] circuits, respectively. We implement a new circuit for the SNOW3G S-box, which had not previously been developed, and apply our framework to reduce its depth. We expect the proposed framework to contribute to the design and implementation of various symmetric-key cryptography solutions.
Cauchyproofs: Batch-Updatable Vector Commitment with Easy Aggregation and Application to Stateless Blockchains
Stateless blockchain designs have emerged to address the challenge of growing blockchain size by utilizing succinct global states. Previous works have developed vector commitments that support proof updates and aggregation to be used as such states. However, maintaining proofs for multiple users still demands significant computational resources, particularly in updating proofs with every transaction. This paper introduces Cauchyproofs, a batch-updatable vector commitment enabling proof-serving nodes to efficiently update proofs in quasi-linear time relative to the number of users and transactions, utilizing an optimized KZG scheme to achieve complexity $O\left(\left(\left|\vec{\alpha}\right| + \left|\vec{\beta}\right|\right) \log^2 (\left|\vec{\alpha}\right| + \left|\vec{\beta}\right|)\right)$, compared to previous $O\left(\left|\vec{\alpha}\right|\cdot|\vec{\beta}|\right)$ approaches. This advancement reduces the computational burden on proof-serving nodes, allowing for efficient proof maintenance across large user groups. We demonstrate that our approach is approximately five times faster than the naive approach at the Ethereum-level block size. Additionally, we present a novel matrix representation for KZG proofs utilizing Cauchy matrices, enabling faster all-proof computations with reduced elliptic curve operations. Finally, we propose an algorithm for history proof query, supporting retrospective proof generation with high efficiency. Our contributions substantially enhance the scalability and practicality of proof-serving nodes in stateless blockchain frameworks.
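For readers unfamiliar with the Cauchy matrices mentioned above, the minimal sketch below builds one over a prime field; the entry structure $C_{ij} = 1/(x_i - y_j)$ is the kind of structure such representations build on, while the paper's batch-update and all-proof algorithms are not reproduced here. The prime and evaluation points are arbitrary illustrative choices.

```python
# Minimal sketch of a Cauchy matrix over a prime field: C[i][j] = 1/(x_i - y_j),
# defined whenever all x_i and y_j are pairwise distinct.
P = 2**61 - 1  # a Mersenne prime, chosen here only for illustration

def cauchy_matrix(xs, ys, p=P):
    assert len(set(xs) | set(ys)) == len(xs) + len(ys), "x_i and y_j must be distinct"
    return [[pow((x - y) % p, -1, p) for y in ys] for x in xs]

C = cauchy_matrix([1, 2, 3], [4, 5, 6])
# Every square submatrix of a Cauchy matrix is invertible, the structural
# property typically exploited for fast evaluation and interpolation.
```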
Chosen-Ciphertext Security for Inner Product FE: Multi-Client and Multi-Input, Generically
Functional Encryption is a powerful cryptographic primitive that allows for fine-grained access control over encrypted data. In the multi-user setting, especially Multi-Client and Multi-Input, a plethora of works have been proposed to study on concrete function classes, improving security, and more. However, the CCA-security for such schemes is still an open problem, where the only known works are on Public-Key Single-Client FE ($\mathit{e.g.}$ Benhamouda, Bourse, and Lipmaa, PKC'17).
This work provides the first generic construction of CCA-secure Multi-Client FE for Inner Products, and Multi-Input FE for Inner Products, with instantiations from $\mathsf{SXDH}$ and $\mathsf{DLIN}$ for the former, and $\mathsf{DDH}$ or $\mathsf{DCR}$ for the latter. Surprisingly, in the case of secret key MIFE we attain the same efficiency as in the public key single-client setting. In the MCFE setting, a toolkit of CCA-bootstrapping techniques is developed to achieve CCA-security in its $\mathit{secret~key}$ setting.
A Survey of Interactive Verifiable Computing: Utilizing Randomness in Low-Degree Polynomials
This survey provides a comprehensive examination of verifiable computing, tracing its evolution from foundational complexity theory to modern zero-knowledge succinct non-interactive arguments of knowledge (ZK-SNARKs). We explore key developments in interactive proof systems, knowledge complexity, and the application of low-degree polynomials in error detection and verification protocols. The survey delves into essential mathematical frameworks such as the Cook-Levin Theorem, the sum-check protocol, and the GKR protocol, highlighting their roles in enhancing verification efficiency and soundness. By systematically addressing the limitations of traditional NP-based proof systems and then introducing advanced interactive proof mechanisms to overcome them, this work offers an accessible step-by-step introduction for newcomers while providing detailed mathematical analyses for researchers. Ultimately, we synthesize these concepts to elucidate the GKR protocol, which serves as a foundation for contemporary verifiable computing models. This survey not only reviews the historical and theoretical advancements in verifiable computing over the past three decades but also lays the groundwork for understanding recent innovations in the field.
All-You-Can-Compute: Packed Secret Sharing for Combined Resilience
Unprotected cryptographic implementations are vulnerable to implementation attacks, such as passive side-channel attacks and active fault injection attacks. Recently, countermeasures like polynomial masking and duplicated masking have been introduced to protect implementations against combined attacks that exploit leakage and faults simultaneously.
While duplicated masking requires $O(t \cdot e)$ shares to resist an adversary capable of probing $t$ values and faulting $e$ values, polynomial masking requires only $O(t + e)$ shares, which is particularly beneficial for affine computation.
At CHES'$24$, Arnold et al. showed how to further improve the efficiency of polynomial masking in the presence of combined attacks by embedding two secrets into one polynomial sharing. This essentially halves the complexity of previous constructions. The authors also observed that techniques from packed secret sharing (Grosso et al., CHES'$13$) cannot easily be used to achieve combined resilience while encoding an arbitrary number of secrets in one polynomial encoding.
In this work, we resolve these challenges and show that it is possible to embed an arbitrary number of secrets in one encoding and propose gadgets that are secure against combined attacks. We present two constructions that are generic and significantly improve the computational and randomness complexity of existing compilers, such as the laOla compiler presented by Berndt et al. at CRYPTO'$23$ and its improvement by Arnold et al.
For example, for an AES evaluation that protects against $t$ probes and $e$ faults, we improve the randomness complexity of the state-of-the-art construction when $t+e>3$, leading to an improvement of up to a factor of $2.41$.
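As background for the packing idea referenced above, the following minimal sketch shows packed Shamir secret sharing over a prime field, where several secrets are embedded in a single polynomial so that each party holds one field element. The leakage- and fault-resilient gadgets of the paper go far beyond this toy encoding; evaluation points and names are illustrative choices.

```python
# Minimal sketch of packed Shamir secret sharing: k secrets are embedded in one
# polynomial of degree t + k - 1. Encoding only; no combined-resilience gadgets.
import secrets

P = 2**61 - 1

def interpolate(points, x, p=P):
    """Evaluate at x the unique polynomial through the given (xi, yi) points."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x - xj) % p) % p
                den = den * ((xi - xj) % p) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

def packed_share(secrets_vec, t, n, p=P):
    # secrets sit at points -1..-k, randomness at 1..t, shares at t+1..t+n
    anchor = [(-(i + 1) % p, s) for i, s in enumerate(secrets_vec)]
    anchor += [(i + 1, secrets.randbelow(p)) for i in range(t)]
    return [(t + 1 + j, interpolate(anchor, t + 1 + j, p)) for j in range(n)]

def packed_reconstruct(shares, k, p=P):
    return [interpolate(shares, -(i + 1) % p, p) for i in range(k)]

shares = packed_share([7, 11, 13], t=2, n=6)
assert packed_reconstruct(shares[:5], k=3) == [7, 11, 13]  # t + k shares suffice
```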
Adversary Resilient Learned Bloom Filters
The Learned Bloom Filter is a recently proposed data structure that combines the Bloom Filter with a learning model while preserving the Bloom Filter's one-sided error guarantees. Creating an adversary-resilient construction of the Learned Bloom Filter with provable guarantees is an open problem. We define a strong adversarial model for the Learned Bloom Filter. Our adversarial model extends an existing adversarial model from prior work, designed for the classical (i.e., not ``learned'') Bloom Filter, and considers computationally bounded adversaries that run in probabilistic polynomial time (PPT). Using our model, we construct an adversary-resilient variant of the Learned Bloom Filter called the Downtown Bodega Filter. We show that if pseudo-random permutations exist, then an adversary-resilient Learned Bloom Filter can be constructed with $2\lambda$ extra bits of memory and at most one extra pseudo-random permutation in the critical path. We construct a hybrid adversarial model for the case where a fraction of the query workload is chosen by an adversary, and show realistic scenarios where using the Downtown Bodega Filter gives better performance guarantees compared to alternative approaches in this hybrid model.
Scalable Post-Quantum Oblivious Transfers for Resource-Constrained Receivers
It is imperative to modernize traditional core cryptographic primitives, such as Oblivious Transfer (OT), to address the demands of the new digital era, where privacy-preserving computations are executed on low-power devices. This modernization is not merely an enhancement but a necessity to ensure security, efficiency, and continued relevance in an ever-evolving technological landscape.
This work introduces two scalable OT schemes: (1) Helix OT, a $1$-out-of-$n$ OT, and (2) Priority OT, a $t$-out-of-$n$ OT. Both schemes provide unconditional security, ensuring resilience against quantum adversaries. Helix OT achieves a receiver-side download complexity of $O(1)$. In big data scenarios, where certain data may be more urgent or valuable, we propose Priority OT. With a receiver-side download complexity of $O(t)$, this scheme allows data to be received based on specified priorities. By prioritizing data transmission, Priority OT ensures that the most important data is received first, optimizing bandwidth, storage, and processing resources. Performance evaluations indicate that Helix OT completes the transfer of 1 out of $n=$ 16,777,216 messages in 9 seconds, and Priority OT handles $t=$ 1,048,576 out of $n$ selections in 30 seconds. Both outperform existing $t$-out-of-$n$ OTs (when $t\geq 1$), underscoring their suitability for large-scale applications. To the best of our knowledge, Helix OT and Priority OT introduce unique advancements that distinguish them from previous schemes.
Statistical testing of random number generators and their improvement using randomness extraction
Random number generators (RNGs) are notoriously challenging to build and test, especially for cryptographic applications. While statistical tests cannot definitively guarantee an RNG’s output quality, they are a powerful verification tool and the only universally applicable testing method. In this work, we design, implement, and present various post-processing methods, using randomness extractors, to improve the RNG output quality and compare them through statistical testing. We begin by performing intensive tests on three RNGs—the 32-bit linear feedback shift register (LFSR), Intel’s ‘RDSEED,’ and IDQuantique’s ‘Quantis’—and compare their performance. Next, we apply the different post-processing methods to each RNG and conduct further intensive testing on the processed output. To facilitate this, we introduce a comprehensive statistical testing environment, based on existing test suites, that can be parametrised for lightweight (fast) to intensive testing.
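As a much simpler stand-in for the extractor-based post-processing studied in the paper, the classical von Neumann debiasing step below illustrates what post-processing of a biased bit source looks like; it is not one of the extractors actually benchmarked, and the bias parameter is an arbitrary illustrative choice.

```python
# Von Neumann extractor: outputs unbiased bits from independent but biased
# input bits by keeping only the 01/10 pairs. Illustration of post-processing,
# not the randomness extractors evaluated in the paper.
import random

def biased_bits(n, p_one=0.7):
    return [1 if random.random() < p_one else 0 for _ in range(n)]

def von_neumann(bits):
    out = []
    for b0, b1 in zip(bits[0::2], bits[1::2]):
        if b0 != b1:          # (0,1) -> 0, (1,0) -> 1; 00 and 11 are discarded
            out.append(b0)
    return out

raw = biased_bits(100_000)
post = von_neumann(raw)
print(sum(raw) / len(raw), sum(post) / len(post))  # ~0.70 vs ~0.50
```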
Zero-Knowledge Proofs for Set Membership: Efficient, Succinct, Modular
We consider the problem of proving in zero knowledge that an element of a public set satisfies a given property without disclosing the element, i.e., for some $u$, ``$u \in S$ and $P(u)$ holds''. This problem arises in many applications (anonymous cryptocurrencies, credentials or whitelists) where, for privacy or anonymity reasons, it is crucial to hide certain data while ensuring properties of such data.
We design new \textit{modular} and \textit{efficient} constructions for this problem through new \textit{commit-and-prove zero-knowledge systems for set membership}, i.e. schemes proving $u \in S$ for a value $u$ that is in a public commitment $c_u$. We also extend our results to support {\em non-membership proofs}, i.e. proving $u \notin S$.
Being commit-and-prove, our solutions can act as plug-and-play modules in statements of the form ``$u \in S$ and $P(u)$ holds'' by combining our set (non-)membership systems with any other commit-and-prove scheme for $P(u)$. Also, they work with Pedersen commitments over prime order groups which makes them compatible with popular systems such as Bulletproofs or Groth16.
We implemented our schemes as a software library and experimentally tested their performance. Compared to previous work that achieves similar properties---the clever techniques combining zkSNARKs and Merkle Trees in Zcash---our solutions offer more flexibility, shorter public parameters and $3.7\times$--$30\times$ faster proving time for a set of size $2^{64}$.
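For readers unfamiliar with the commitment type these systems compose with, here is a minimal Pedersen commitment sketch over a toy prime-order subgroup; the parameters are for illustration only (deployments use elliptic-curve groups), and the scheme shown is the standard Pedersen commitment, not the paper's set (non-)membership arguments.

```python
# Pedersen commitment C = g^u * h^r in a prime-order subgroup of Z_p^*.
# Toy parameters; hiding comes from r, binding from log_g(h) being unknown.
import secrets

p, q = 2039, 1019          # toy safe prime p = 2q + 1; never use in practice
g = 4                      # generator of the order-q subgroup of Z_p^*
h = pow(g, secrets.randbelow(q - 1) + 1, p)   # discrete log of h must stay unknown

def commit(u, r):
    return pow(g, u % q, p) * pow(h, r % q, p) % p

u, r = 42, secrets.randbelow(q)
C = commit(u, r)
assert C == commit(u, r)   # opening: reveal (u, r)
# Additively homomorphic: commit(u1, r1) * commit(u2, r2) % p == commit(u1 + u2, r1 + r2)
```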
BIP32-Compatible Threshold Wallets
Cryptographic wallets are an essential tool to securely store and maintain users’ secret keys and, consequently, their funds in blockchain networks. A compelling approach to construct such wallets is to share the user’s secret key among several devices, such that an adversary must corrupt multiple machines to extract the entire secret key. Indeed, many leading cryptocurrency companies such as Coinbase, Binance, or ZenGo have started offering such distributed wallets to their customers. An important feature of a cryptographic wallet is its compatibility with the so-called BIP32 specification, the most widely adopted standard for cryptographic wallets. Essentially, BIP32 specifies the notion of a hierarchical deterministic wallet, which allows one to create a key hierarchy in a deterministic fashion. Unfortunately, despite significant interest, no practically efficient solution exists for a fully distributed wallet scheme that also follows the BIP32 standard.
In this work, we show the first concretely efficient construction of a fully distributed wallet that is compliant with the BIP32 standard. To this end, we first provide a game-based notion of threshold signatures with rerandomizable keys and show an instantiation via the Gennaro and Goldfeder threshold ECDSA scheme (CCS’18). We then observe that one of BIP32’s key derivation mechanisms, the so-called hardened derivation, cannot efficiently be translated to the threshold setting. Instead, we devise a novel and efficient hardened derivation mechanism for the threshold setting that satisfies the same properties as the original mechanism as specified by BIP32. As a final contribution, we evaluate our solution with respect to its running time and communication cost.
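For context on why hardened derivation resists a threshold treatment, the sketch below shows the BIP32 hardened child-key derivation step in simplified form (serialization edge cases and the non-hardened, public-key-based path are omitted): the parent private key itself enters the HMAC, so a naively shared key would have to be hashed inside MPC. This is background on BIP32, not the paper's new threshold mechanism.

```python
# Simplified BIP32 hardened child derivation (index >= 2**31).
import hmac, hashlib

SECP256K1_N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def ckd_priv_hardened(k_par, c_par, index):
    """The *private* parent key enters the HMAC; non-hardened derivation
    instead hashes the serialized parent *public* key, which is what makes it
    amenable to distributed computation."""
    assert index >= 2**31
    data = b"\x00" + k_par.to_bytes(32, "big") + index.to_bytes(4, "big")
    I = hmac.new(c_par, data, hashlib.sha512).digest()
    il, c_child = int.from_bytes(I[:32], "big"), I[32:]
    k_child = (il + k_par) % SECP256K1_N   # rare case il >= group order ignored here
    return k_child, c_child

k_child, c_child = ckd_priv_hardened(k_par=0x1234, c_par=b"\x01" * 32, index=2**31)
```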
Hash your Keys before Signing: BUFF Security of the Additional NIST PQC Signatures
In this work, we analyze the so-called Beyond UnForgeability Features (BUFF) security of the submissions to the current standardization process of additional signatures by NIST. The BUFF notions formalize security against maliciously generated keys and have various real-world use cases, where security can be guaranteed despite misuse potential on a protocol level. Consequently, NIST declared the security against the BUFF notions as desirable features. Despite NIST's interest, only $6$ out of $40$ schemes consider BUFF security at all, but none give a detailed analysis. We close this gap by analyzing the schemes based on codes, isogenies, lattices, and multivariate equations. The results vary from schemes that achieve neither notion (e.g., Wave) to schemes that achieve all notions (e.g., PROV). In particular, we dispute certain claims by SQUIRRELS and VOX regarding their BUFF security. Resulting from our analysis, we observe that three schemes (CROSS, HAWK and PROV) achieve BUFF security without having the hash of public key and message as part of the signature, as BUFF transformed schemes would have. HAWK and PROV essentially use the lighter PS-3 transform by Pornin and Stern (ACNS'05). We further point out whether this transform suffices for the other schemes to achieve the BUFF notions, with both positive and negative results.
ZODA: Zero-Overhead Data Availability
We introduce ZODA, short for 'zero-overhead data availability,' which is a protocol for proving that symbols received from an encoding (for tensor codes) were correctly constructed. ZODA has optimal overhead for both the encoder and the samplers. Concretely, the ZODA scheme incurs essentially no incremental costs (in either computation or size) beyond those of the tensor encoding itself. In particular, we show that a slight modification to the encoding scheme for tensor codes allows sampled rows and columns of this modified encoding to become proofs of their own correctness. When used as part of a data availability sampling protocol, both encoders (who encode some data using a tensor code with some slight modifications) and samplers (who sample symbols from the purported encoding and verify that these samples are correctly encoded) incur no incremental communication costs and only small additional computational costs over having done the original tensor encoding. ZODA additionally requires no trusted setup and is plausibly post-quantum secure.
Parametrizing Maximal Orders Along Supersingular $\ell$-Isogeny Paths
Suppose you have a supersingular $\ell$-isogeny graph with vertices given by $j$-invariants defined over $\mathbb{F}_{p^2}$, where $p = 4 \cdot f \cdot \ell^e - 1$ and $\ell \equiv 3 \pmod{4}$. We give an explicit parametrization of the maximal orders in $B_{p,\infty}$ appearing as endomorphism rings of the elliptic curves in this graph that are $\leq e$ steps away from a root vertex with $j$-invariant 1728. This is the first explicit parametrization of this kind and we believe it will be an aid in better understanding the structure of supersingular $\ell$-isogeny graphs that are widely used in cryptography. Our method makes use of the inherent directions in the supersingular isogeny graph induced via Bruhat-Tits trees, as studied in [1]. We also discuss how in future work other interesting use cases, such as $\ell=2$, could benefit from the same methodology.
Concurrent Security of Anonymous Credentials Light, Revisited
We revisit the concurrent security guarantees of the well-known Anonymous Credentials Light (ACL) scheme (Baldimtsi and Lysyanskaya, CCS'13). This scheme was originally proven secure when executed sequentially, and its concurrent security was left as an open problem.
A later work of Benhamouda et al. (EUROCRYPT'21) gave an efficient attack on ACL when executed concurrently, seemingly resolving this question once and for all.
In this work, we point out a subtle flaw in the attack of Benhamouda et al. on ACL and show, in spite of popular opinion, that it can be proven concurrently secure.
Our modular proof in the algebraic group model uses an ID scheme as an intermediate step and leads to a major simplification of the complex security argument for Abe's Blind Signature scheme by Kastner et al. (PKC'22).
A New Paradigm for Server-Aided MPC
The server-aided model for multiparty computation (MPC) was introduced to capture a real-world scenario where clients wish to off-load the heavy computation of MPC protocols to dedicated servers. A rich body of work has studied various trade-offs between security guarantees (e.g., semi-honest vs malicious), trust assumptions (e.g., the threshold on corrupted servers), and efficiency.
However, all existing works make the assumption that all clients must agree on employing the same servers, and accept the same corruption threshold. In this paper, we challenge this assumption and introduce a new paradigm for server-aided MPC, where each client can choose their own set of servers and their own threshold of corrupted servers. In this new model, the privacy of each client is guaranteed as long as their own threshold is satisfied, regardless of the other servers/clients. We call this paradigm per-party private server-aided MPC to highlight both a security and efficiency guarantee: (1) per-party privacy, which means that each party gets their own privacy guarantees that depend on their own choice of servers; (2) per-party complexity, which means that each party only needs to communicate with their chosen servers. Our primary contribution is a new theoretical framework for server-aided MPC. We provide two protocols to show feasibility, but leave it as future work to investigate protocols that focus on concrete efficiency.
Round-Optimal Compiler for Semi-Honest to Malicious Oblivious Transfer via CIH
A central question in the theory of cryptography is whether we can build protocols that achieve stronger security guarantees, e.g., security against malicious adversaries, by combining building blocks that achieve much weaker security guarantees, e.g., security only against semi-honest adversaries; and with the minimal number of rounds. An additional focus is whether these building blocks can be used only as a black-box. Since Oblivious Transfer (OT) is the necessary and sufficient building block to securely realize any two-party (and multi-party) functionality, theoreticians often focus on proving whether maliciously secure OT can be built from a weaker notion of OT.
There is a rich body of literature that provides (black-box) compilers that build malicious OT from OTs that achieve weaker security such as semi-malicious OT and defensibly secure OT, within the minimal number of rounds. However, no round-optimal compiler exists that builds malicious OT from the weakest notion of semi-honest OT, in the plain model.
Correlation intractable hash (CIH) functions are special hash functions whose properties allow instantiating the celebrated Fiat-Shamir transform, and hence reduce the round complexity of public-coin proof systems.
In this work, we devise the first round-optimal compiler from semi-honest OT to malicious OT, via a novel application of CIH for collapsing rounds in the plain model. We make the following contributions. First, we provide a new CIH-based round-collapsing construction for general cut-and-choose. This gadget can be used generically to prove the correctness of the evaluation of a function. Then, we use our gadget to build the first round-optimal compiler from semi-honest OT to malicious OT.
Our compiler uses the semi-honest OT protocol and the other building blocks in a black-box manner. However, for technical reasons, the underlying CIH construction requires an upper bound on the circuit size of the semi-honest OT protocol used. The need for this upper bound makes our protocol not fully black-box, and hence incomparable with existing, fully black-box compilers.
A Key-Recovery Attack on a Leaky Seasign Variant
We present a key-recovery attack on a variant of the Seasign signature scheme presented by [Kim24], which attempts to avoid rejection sampling by presampling vectors $\mathbf{f}$ such that $\mathbf{f}-\mathbf{e}$ lies within an acceptable bound, where $\mathbf{e}$ is the secret key. We show that this choice biases these vectors such that, from a small number of signatures, the secret key can either be completely recovered or its keyspace substantially reduced. In particular, on average, given $20$ signatures with parameter set II of their paper, the attack reduces the private key to 128 possibilities.
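As a toy one-dimensional illustration of the kind of conditional-sampling bias described above (it is not the actual isogeny-based Seasign signing equation, only the statistical phenomenon behind such key recovery), consider the following; the bound and secret are arbitrary illustrative values.

```python
# If the masking value f is presampled conditioned on f - e lying in a fixed
# interval [-B, B], then f is uniform on [e - B, e + B] and its empirical mean
# converges to the secret coefficient e, so averages over many signatures leak e.
import random

B, e = 100, 7          # public bound and (toy) secret coefficient
samples = [random.randint(e - B, e + B) for _ in range(2000)]
print(round(sum(samples) / len(samples)))   # ~7: the secret leaks from averages
```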
Blind accumulators for e-voting
We present a novel cryptographic primitive, the blind accumulator, aimed at constructing e-voting systems. Blind accumulators collect private keys of eligible voters in a decentralized manner without learning any information about the keys. Once the accumulation is complete, a voter processes the resulting accumulator and derives a public key which refers to a private key previously added by this voter. Public keys are derived deterministically and can therefore stand as fixed voter pseudonyms. The voter can prove that the derived key refers to some accumulated private key without revealing either that key or the voter's identity. The voter uses the accumulated private key to sign a ballot. The corresponding public key is used to verify the signature. Since the public key is fixed, it is easy to achieve verifiability, to protect against multiple submissions of ballots by the same voter or, conversely, to allow multiple submissions but count only the last one. We suggest a syntax of blind accumulators and security requirements for them. We embed blind accumulators in the Pseudonymous Key Generation (PKG) protocol, which details the use of accumulators in practical settings close to e-voting. We propose an instantiation of the blind accumulator scheme whose main computations resemble the Diffie-Hellman protocol. We justify the security of the proposed instantiation.
Delegated Multi-party Private Set Intersection from Secret Sharing
In this work, we address the problem of Delegated PSI (D-PSI), where a cloud server is introduced to handle most computational and communication tasks. D-PSI enables users to securely delegate their private sets to the cloud, ensuring the privacy of their data while allowing efficient computation of the intersection. The cloud operates under strict security requirements, learning nothing about the individual sets or the intersection result. Moreover, D-PSI minimizes user-to-user communication and supports "silent" processing, where the cloud can perform computations independently without user interaction, apart from set delegation and result retrieval.
We formally define the D-PSI problem and propose a novel construction that extends beyond two-party scenarios to support multi-party settings. Our construction adheres to the D-PSI requirements, including security against semi-honest adversaries, and achieves computational and communication complexities close to the ideal "perfect" D-PSI protocol. Additionally, we demonstrate the practicality of our approach through a baseline implementation and an optimized version that further reduces computational overhead. Our results establish a strong foundation for secure and efficient PSI in real-world cloud computing scenarios.
Compressing Encrypted Data Over Small Fields
$\newcommand{\gen}{\mathsf{Gen}}\newcommand{\enc}{\mathsf{Enc}}\newcommand{\dec}{\mathsf{Dec}}$
A recent work of Fleischhacker, Larsen, and Simkin (Eurocrypt 2023) shows how to efficiently compress encrypted sparse vectors.
Subsequently, Fleischhacker, Larsen, Obremski, and Simkin (Eprint 2023) improve upon their work and provide more efficient constructions solving the same problem.
Being able to efficiently compress such vectors is very useful in a variety of applications, such as private information retrieval, searchable encryption, and oblivious message retrieval.
Concretely, assume one is given a vector $(m_1, \dots, m_n)$ with at most $t$ distinct indices $i \in [n]$, such that $m_i \neq 0$ and assume $(\gen,\enc, \dec)$ is an additively homomorphic encryption scheme.
The authors show that one can compress $(\enc(k, m_1), \dots, \enc(k, m_n))$, without knowing the secret key $k$, into a digest with size dependent on the upper bound on the sparsity $t$, but not on the total vector length $n$.
Unfortunately, all existing constructions either only work for encryption schemes that have sufficiently large plaintext spaces or require additional encrypted auxiliary information about the plaintext vector.
In this work, we show how to generalize the results of Fleischhacker, Larsen, and Simkin to encryption schemes with arbitrarily small plaintext spaces.
Our construction follows the same general ideas laid out in previous works but modifies them in a way that allows compressing the encrypted vectors correctly with high probability, independently of the size of the plaintext space.
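To illustrate only the additively homomorphic interface such compression relies on, the toy Paillier-style sketch below (tiny, insecure parameters) shows that ciphertexts can be combined without the secret key, so any public linear function of an encrypted vector is computable under encryption; the actual digest construction and decoding of these works are not reproduced.

```python
# Toy Paillier-style additively homomorphic encryption: multiplying ciphertexts
# adds plaintexts. Interface illustration only; parameters are insecure.
import math, random

p, q = 1_000_003, 1_000_033            # toy primes; never use in practice
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(1 + n, lam, n2) - 1) // n, -1, n)

def enc(m):
    r = random.randrange(2, n)
    return pow(1 + n, m, n2) * pow(r, n, n2) % n2

def dec(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

def add(c1, c2):                       # homomorphic addition of plaintexts
    return c1 * c2 % n2

# Encrypted sparse vector and a public linear aggregate computed under encryption.
x = [0, 0, 5, 0, 9, 0, 0, 0]
cts = [enc(v) for v in x]
sketch = cts[0]
for c in cts[1:]:
    sketch = add(sketch, c)
assert dec(sketch) == sum(x)
```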
Highly Efficient Server-Aided Multiparty Subfield VOLE Distribution Protocol
In recent developments of secure multi-party computation (MPC), pseudorandom correlations of the subfield vector oblivious linear evaluation (sVOLE) type have become popular due to their wide applicability in multi-dimensional MPC protocols, such as privacy-preserving biometric identification and privacy-preserving machine learning protocols. In this paper, we introduce a novel way of distributing VOLE in three-party and four-party honest-majority settings with the aid of a trusted server. This new method significantly decreases the communication cost and the memory storage of random sVOLE instances. It also enables a streamlined distribution process that can generate a sVOLE instance of arbitrary length, which results in a 100 percent utilization rate of random sVOLE in multi-dimensional MPC protocols while preserving complete precomputability.
Efficient Multi-party Private Set Union Resistant to Maximum Collusion Attacks
Multi-party Private Set Union (MPSU) enables multiple participants to jointly compute the union of their private sets without leaking any additional information beyond the resulting union. Liu et al. (ASIACRYPT 2023) proposed the first scalable MPSU protocol fully based on symmetric key encryption (SKE), which designates one participant as the "leader" responsible for obtaining the final union. However, the protocol assumes that the leader does not collude with other participants, which weakens its practicality. In this work, we design a scalable MPSU protocol, $\Pi_\text{MPSU}^\text{one-leader}$, which tolerates maximum collusion. The protocol relies primarily on SKE supplemented with additive homomorphic encryption (AHE), with the designated leader obtaining the union result. Furthermore, to address the issue of fairness in scenarios where obtaining the result early provides an advantage, we extend $\Pi_\text{MPSU}^\text{one-leader}$ and propose a protocol that allows all participants to receive the union result simultaneously, called $\Pi_\text{MPSU}^\text{leaderless}$. We implement our proposed schemes and conduct a comprehensive comparison against state-of-the-art solutions. The results show that, for input sizes of $2^{12}$ at a comparable security level, $\Pi_\text{MPSU}^{\text{one-leader}}$ achieves a $663$-times speedup in online runtime compared to the state of the art, and remains $22$ times faster than the half-collusion-tolerant protocol.
Constant time lattice reduction in dimension 4 with application to SQIsign
In this paper we propose a constant time lattice reduction algorithm for integral dimension-4 lattices. Motivated by its application in the SQIsign post-quantum signature scheme, we provide for the first time a constant time LLL-like algorithm with guarantees on the length of the shortest output vector. We implemented our algorithm and ensured through various tools that it indeed operates in constant time. Our experiments suggest that in practice our implementation outputs a Minkowski reduced basis and thus can replace a non constant time lattice reduction subroutine in SQIsign.
How to use your brain for cryptography without trustworthy machines
In this work, we study cryptosystems that can be executed securely without fully trusting all machines, but only trusting the user's brain. This paper focuses on signature schemes. We first introduce a new concept called ``server-aided in-brain signature,'' a cryptographic protocol between a human brain and multiple servers to sign a message securely even if the user's device and the servers are not completely trusted. Second, we propose a concrete scheme that is secure against mobile hackers in servers and malware infecting the user's devices.
A Heuristic Proof of P $\neq$ NP
The question of whether the complexity class P equals NP is a major unsolved problem in theoretical computer science. In this paper, we introduce a new language, the Add/XNOR problem, which has the simplest structure and perfect randomness, obtained by extending the subset sum problem. We prove that P $\neq$ NP by showing that square-root complexity is necessary to solve the Add/XNOR problem. That is, problems that are verifiable in polynomial time are not necessarily solvable in polynomial time.
Furthermore, by giving up commutative and associative properties, we design a magma equipped with a permutation and successfully achieve Conjecture 1. Based on this conjecture, we obtain the Add/XOR/XNOR problem and one-way functions, which are supposed to be solvable or invertible only by exhaustive search.
Multi-Client Functional Encryption with Public Inputs and Strong Security
Recent years have witnessed significant developments in functional encryption (FE) in the multi-user setting, particularly with multi-client functional encryption (MCFE). The challenge becomes more important when combined with access control, such as attribute-based encryption (ABE), which is covered neither syntactically by the public-key FE framework nor semantically by the secret-key MCFE framework. On the other hand, as for complex primitives, many works have studied the admissibility of adversaries to ensure that the security model encompasses all realistic attacks.
- At a conceptual level, by adding a public input to FE/MCFE, we cover many previous primitives, notably attribute-based function classes. Furthermore, with the strongest admissibility for inner-product functionality, our framework is quite versatile, as it encrypts multiple sub-vectors, allows repetitions and corruptions, and eventually also encompasses public-key FE and classical ABE, bridging the $\mathit{private}$ setting of MCFE with the $\mathit{public}$ setting of FE and ABE.
- Finally, we propose an MCFE with public inputs with the class of functions that combines inner-products (on private inputs) and attribute-based access-control (on public inputs) for LSSS policies. We achieve the first AB-MCFE for inner products with strong admissibility (from Nguyen et al., ACNS'23) and with adaptive security.
In the end, our concrete MCFE leads to MIFE for inner products, public-key single-input inner-product FE with LSSS key-policy, and KP-ABE for LSSS, with adaptive security. Previous AB-MCFE constructions are either restricted to a weaker admissibility notion (Nguyen et al., ASIACRYPT'22) or consider a slightly larger functionality of attribute-weighted sums but with only selective security (Agrawal et al., CRYPTO'23).
Efficient CPA Attack on Hardware Implementation of ML-DSA in Post-Quantum Root of Trust
Side-channel attacks (SCA) pose a significant threat to cryptographic implementations, including those designed to withstand the computational power of quantum computers.
This paper introduces the first side-channel attack on an industry-grade post-quantum cryptography implementation.
Specifically, we present a Correlation Power Analysis (CPA) attack targeting the open-source hardware implementation of ML-DSA within a Silicon Root of Trust framework developed through a multi-party collaboration involving leading technology companies.
Our attack focuses on the modular reduction process that follows the Number Theoretic Transform-based polynomial pointwise multiplication.
By exploiting side-channel leakage from a distinctive reduction algorithm and leveraging the zeroization mechanism used to securely erase sensitive information by clearing internal registers, we significantly enhance the attack's efficacy.
Our findings reveal that an adversary can extract the secret keys using only 10,000 power traces.
With access to these keys, an attacker could forge signatures for certificate generation, thereby compromising the integrity of the root of trust.
This work highlights the vulnerabilities of industry-standard root-of-trust systems to side-channel attacks. It underscores the urgent need for robust countermeasures to secure commercially deployed systems against such threats.
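As generic background on CPA (not the paper's ML-DSA-specific leakage model, target intermediate, or trace acquisition), the sketch below runs the statistical core of such an attack on simulated traces: Pearson correlation between hypothesised Hamming-weight leakage and noisy measurements singles out the correct key guess. All names and parameters are illustrative.

```python
# Generic Correlation Power Analysis sketch on simulated traces.
import numpy as np

rng = np.random.default_rng(0)
SECRET = 0x3A

def hw(x):
    return bin(int(x)).count("1")

inputs = rng.integers(0, 256, size=2000)
leak = np.array([hw(x ^ SECRET) for x in inputs])       # true leakage model
traces = leak + rng.normal(0, 2.0, size=len(inputs))     # noisy measurements

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Rank all 256 key guesses by correlation between predicted and measured leakage.
scores = [corr(np.array([hw(x ^ g) for x in inputs]), traces) for g in range(256)]
print(hex(int(np.argmax(scores))))   # expected to print 0x3a
```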
How to Compress Garbled Circuit Input Labels, Efficiently
Garbled Circuits are essential building blocks in cryptography, and extensive research has explored their construction from both applied and theoretical perspectives. However, a challenge persists: While theoretically designed garbled circuits offer optimal succinctness--remaining constant in size regardless of the underlying circuit’s complexity--and are reusable for multiple evaluations, their concrete computational costs are prohibitively high. On the other hand, practically efficient garbled circuits, inspired by Yao’s garbled circuits, encounter limitations due to substantial communication bottlenecks and a lack of reusability.
To strike a balance, we propose a novel concept: online-offline garbling. This approach leverages instance-independent and (partially) reusable preprocessing during an offline phase, to enable the creation of constant-size garbled circuits in an online phase, while maintaining practical efficiency. Specifically, during the offline stage, the garbler generates and transmits a reference string, independent of the computation to be performed later. Subsequently, in the online stage, the garbler efficiently transforms a circuit into a constant-size garbled circuit. The evaluation process relies on both the reference string and the garbled circuit.
We demonstrate that by leveraging existing tools such as those introduced by Applebaum et al. (Crypto’13) and Chongwon et al. (Crypto’17), online-offline garbling can be achieved under a variety of assumptions, including the hardness of Learning With Errors (LWE), Computational Diffie-Hellman (CDH), and factoring. In contrast, without the help of an offline phase, constant-size garbling is only feasible under the LWE and circular security assumptions, or the existence of indistinguishability obfuscation. However, these schemes are still very inefficient, several orders of magnitude more costly than Yao-style garbled circuits.
To address this, we propose a new online-offline garbling scheme based on Ring LWE. Our scheme offers both asymptotic and concrete efficiency. It serves as a practical alternative to Yao-style garbled circuits, especially in scenarios where online communication is constrained. Furthermore, we estimate the concrete latency using our approach in realistic settings and demonstrate that it is 2-20X faster than using Yao-style garbled circuits. This improvement is estimated without taking into account parallelization of computation, which can lead to further performance improvement using our scheme.
Attribute-Based Threshold Issuance Anonymous Counting Tokens and Its Application to Sybil-Resistant Self-Sovereign Identity
Self-sovereign identity (SSI) systems empower users to (anonymously) establish and verify their identity when accessing both digital and real-world resources, emerging as a promising privacy-preserving solution for user-centric identity management. Recent work by Maram et al. proposes the privacy-preserving Sybil-resistant decentralized SSI system CanDID (IEEE S&P 2021). While this is an important step, notable shortcomings undermine its efficacy. The two most significant among them are the following: First, unlinkability breaks in the presence of a single malicious issuer. Second, it introduces interactiveness, as the users are required to communicate each time with issuers to collect credentials intended for use in interactions with applications. This contradicts the goal of SSI, whose aim is to give users full control over their identities. This paper first introduces the concept of publicly verifiable attribute-based threshold anonymous counting tokens (tACT). Unlike recent approaches confined to centralized settings (Benhamouda et al., ASIACRYPT 2023), tACT operates in a distributed-trust environment. Accompanied by a formal security model and a provably secure instantiation, tACT introduces a novel dimension to token issuance, which, we believe, holds independent interest. Next, the paper leverages the proposed tACT scheme to construct an efficient Sybil-resistant SSI system. This system supports various functionalities, including threshold issuance, unlinkable multi-show selective disclosure, and non-interactive, non-transferable, constant-size credentials. Finally, our benchmark results show an efficiency improvement in our construction when compared to CanDID, all while accommodating a greater number of issuers and additionally reducing to a one-round protocol that can be run in parallel with all issuers.
Committing AE from Sponges: Security Analysis of the NIST LWC Finalists
Committing security has gained considerable attention in the field of authenticated encryption (AE). This can be traced back to a line of recent attacks, which entail that AE schemes used in practice should not only provide confidentiality and authenticity, but also committing security. Roughly speaking, a committing AE scheme guarantees that ciphertexts will decrypt only for one key. Despite the recent research effort in this area, the finalists of the NIST lightweight cryptography standardization process have not been put under consideration yet. We close this gap by providing an analysis of these schemes with respect to their committing security. Despite the structural similarities the finalists exhibit, our results are of a quite heterogeneous nature: We break four of the schemes with effectively no costs, while for two schemes our attacks are costlier, yet still efficient. For the remaining three schemes ISAP, Ascon, and (a slightly modified version of) Schwaemm, we give formal security proofs. Our analysis reveals that sponges are well-suited for building committing AE schemes. Furthermore, we show several negative results when applying the zero-padding method to the NIST finalists.
Attestation Proof of Association – provability that attestation keys are bound to the same hardware and person
We propose a wallet-provider-issued attestation called Wallet Trust Evidence (WTE) and three related specific instructions for the European Digital Identity (EUDI) Wallet cryptographic hardware, most notably the generation of a Proof of Association (PoA). These allow the EUDI Wallet to provide verifiable assurance to third parties (issuers, relying parties) that attestation private keys are not only bound to conformant cryptographic hardware but also that they are bound to the same such hardware. This allows the EUDI Wallet to meet eIDAS Level of Assurance ``high'' as well as to operate in a privacy-friendly manner. The instructions specified in this document cater for convenient implementation in all envisioned EUDI Wallet architectures, including those based on a GlobalPlatform-based Secure Element such as an eID card or an embedded SIM (eSIM). By their simplicity, the three instructions also allow for convenient Common Criteria certification. This document is a further refinement and cryptographic concretization of the WTE/PoA logic specified in the wallet Architecture and Reference Framework (ARF), which is based on the EPIC-09 result developed in a cooperation between the NI-Scy consortium and the eIDAS expert group. However, the present draft document is meant for discussion only and is not approved by the NI-Scy consortium, the eIDAS expert group or the Dutch government. This paper concentrates on irrefutable PoAs but also indicates how refutable PoAs can be formed, providing plausible deniability, which can be beneficial in some use cases.
As a side note, this paper introduces in an annex the construction of Self-Generated Verifiable Pseudonyms (SGVPs). These allow a wallet/user to generate pseudonyms based on information agreed with a relying party, e.g., a URL, and to prove that these are correctly formed. Together with the proof of association, this allows cryptographically binding (disclosed parts of) attestations to these pseudonyms. This enables various use cases, such as an employee representing an organisation in a privacy-friendly way using a chamber of commerce attestation cryptographically bound to a separate SGVP pseudonym. Such functionality currently forms the privacy basis of the Dutch eRecognition scheme (eherkenning.nl).
Quantum-resistant secret handshakes with dynamic joining, leaving, and banishment: GCD revisited
Secret handshakes, introduced by Balfanz et al. [3], allow users associated with various groups to determine if they share a common affiliation. These protocols ensure crucial properties such as fairness (all participants learn the result simultaneously), affiliation privacy (failed handshakes reveal no affiliation information), and result-hiding (even participants within a shared group cannot infer outcomes of unrelated handshakes). Over time, various secret-handshake schemes have been proposed, with a notable advancement being the modular framework by Tsudik and Xu. Their approach integrates three key components: group signature schemes, centralized secure channels for each group, and decentralized group key-agreement protocols.
Building upon this modularity, we propose significant updates. By addressing hidden complexities and revising the security model, we enhance both the efficiency and the privacy guarantees of the protocol. Specifically, we achieve the novel property of self-distinction—the ability to distinguish between two users in a session without revealing their identities—by replacing the group signature primitive with a new construct, the List MAC. This primitive is inherently untraceable, necessitating adjustments to the original syntax to support stronger privacy guarantees. Consequently, we introduce the Traitor Catching paradigm, where the transcript of a handshake reveals only the identity of a traitor, preserving the anonymity of all other participants.
To showcase the flexibility and robustness of our updated framework, we present two post-quantum instantiations (a hash-based one and another based on lattices). Our approach not only corrects prior limitations but also establishes a new benchmark for privacy and security in secret handshakes.
The cool and the cruel: separating hard parts of LWE secrets
Sparse binary LWE secrets are under consideration for standardization for Homomorphic Encryption and its applications to private computation [20]. Known attacks on sparse binary LWE secrets include the sparse dual attack [5] and the hybrid sparse dual meet-in-the-middle attack [19], which requires significant memory. In this paper, we provide a new statistical attack with low memory requirements. The attack relies on some initial lattice reduction. The key observation is that, after lattice reduction is applied to the rows of a q-ary-like embedded random matrix A, the entries with high variance are concentrated in the early columns of the extracted matrix. This allows us to separate out the “hard part” of the LWE secret. We can first solve the sub-problem of finding the “cruel” bits of the secret in the early columns, and then find the remaining “cool” bits in linear time. We use statistical techniques to distinguish distributions and thereby identify both the cruel and cool bits of the secret. We recover secrets in dimensions n = 256, 512, 768, 1024 and provide concrete attack timings. For the lattice reduction stage, we leverage recent improvements in lattice reduction (flatter [34]) applied in parallel. We also apply our new attack to RLWE with 2-power cyclotomic rings, showing that these RLWE instances are much more vulnerable to this attack than LWE.
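To make the column-variance observation concrete, here is a small NumPy sketch (not the authors' code): it splits the columns of a reduced matrix into high-variance "cruel" and low-variance "cool" positions. The toy matrix and the threshold are stand-ins for the reduced q-ary-like embedding described above.

```python
# Illustrative sketch (not the paper's code): split the columns of a reduced
# matrix into high-variance "cruel" and low-variance "cool" positions.
import numpy as np

def split_cruel_cool(reduced_A, threshold_ratio=0.5):
    col_var = reduced_A.var(axis=0)            # per-column empirical variance
    cutoff = threshold_ratio * col_var.max()
    cruel = np.where(col_var >= cutoff)[0]     # early, high-variance columns
    cool = np.where(col_var < cutoff)[0]       # remaining low-variance columns
    return cruel, cool

# toy stand-in: early columns get larger entries to mimic the observed concentration
rng = np.random.default_rng(0)
A = np.concatenate([rng.integers(-50, 50, (100, 40)),
                    rng.integers(-2, 2, (100, 216))], axis=1)
cruel, cool = split_cruel_cool(A)
print(len(cruel), "cruel columns,", len(cool), "cool columns")
```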
Concrete Quantum Cryptanalysis of Shortest Vector Problem
This paper presents quantum circuits for the Nguyen–Vidick (NV) sieve algorithm to solve the Shortest Vector Problem (SVP) in lattice-based cryptography. We focus on optimizing the circuit depth of the quantum NV sieve, leveraging Grover’s algorithm to reduce the search complexity. Using the proposed quantum NV sieve, we estimate the quantum resources required to solve SVP for various dimensions. Specifically, for a dimension size of 512 (the parameter for Kyber-512), our implementation achieves a quantum attack cost of $2^{126.0045}$ in terms of the gate count–depth product metric used by the National Institute of Standards and Technology (NIST). To optimize circuit depth, we employ carry-lookahead and carry-save adders for efficient multi-addition operations. Further, our quantum NV sieve performs precise sieving by implementing fixed-point arithmetic, incorporating essential components (such as input setting, up-scaling, and two’s complement). To the best of our knowledge, previous work on quantum cryptanalysis of SVP using the sieve algorithm has remained theoretical, without proposing quantum circuits. Our work humbly demonstrates that the post-quantum security of lattice-based cryptography (with respect to the quantum attack complexity) falls between that of multivariate-based and code-based cryptography.
Cryptography is Rocket Science: Analysis of BPSec
Space networking has become an increasing area of development with the advent of commercial satellite networks such as those hosted by Starlink and Kuiper, and increased satellite and space presence by governments around the world. Yet, historically such network designs have not been made public, leading to limited formal cryptographic analysis of the security offered by them. One of the few public protocols used in space networking is the Bundle Protocol, which is secured by Bundle Protocol Security (BPSec), an Internet Engineering Task Force (IETF) standard. We undertake a first analysis of BPSec under its default security context, building a model of the secure channel security goals stated in the IETF standard, and note issues therein with message loss detection. We prove BPSec secure, and also provide a stronger construction, one that supports the Bundle Protocol's functionality goals while also ensuring destination awareness of missing message components.
Security Guidelines for Implementing Homomorphic Encryption
Fully Homomorphic Encryption (FHE) is a cryptographic primitive that allows performing arbitrary operations on encrypted data. Since the conception of the idea in [RAD78], it has been considered a holy grail of cryptography. After the first construction in 2009 [Gen09], it has evolved to become a practical primitive with strong security guarantees. Most modern constructions are based on well-known lattice problems such as Learning with Errors (LWE). Besides its academic appeal, in recent years FHE has also attracted significant attention from industry, thanks to its applicability to a considerable number of real-world use cases. An upcoming standardization effort by ISO/IEC aims to support the wider adoption of these techniques. However, one of the main challenges that standards bodies, developers, and end users usually encounter is establishing parameters. This is particularly hard in the case of FHE because the parameters are related not only to the security level of the system, but also to the type of operations that the system is able to handle. In this paper, we provide examples of parameter sets for LWE targeting particular security levels that can be used in the context of FHE constructions. We also give examples of complete FHE parameter sets, including the parameters relevant for correctness and performance, alongside those relevant for security. As an additional contribution, we survey the parameter selection support offered in open-source FHE libraries.
Leveled Functional Bootstrapping via External Product Tree
Multi-input and large-precision lookup table (LUT) evaluation poses significant challenges in Fully Homomorphic Encryption (FHE). Currently, two modes are employed to address this issue. One is tree-based functional bootstrapping (TFBS), which uses multiple blind rotations to construct a tree for LUT evaluation. The second is circuit bootstrapping, which decomposes all inputs into bits and utilizes a CMux tree for LUT evaluation. In this work, we propose a novel mode, leveled functional bootstrapping (LFBS), which utilizes an external product tree to perform multi-input functional bootstrapping. We implement TFBS and LFBS within the OpenFHE library. The results demonstrate that our method outperforms TFBS in both computational efficiency and bootstrapping key size. Specifically, for 12-bit and 16-bit input LUTs, our method is approximately two to three orders of magnitude faster than TFBS.
Finally, we introduce a novel scheme-switching framework that supports large-precision polynomial and non-polynomial evaluations. The framework leverages digital extraction techniques to enable seamless switching between the BFV and LFBS schemes.
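For intuition only, the sketch below evaluates a multi-input LUT with a binary selection tree in the clear; in the paper each level's selection would be carried out homomorphically (e.g., via external products or CMux gates), which this plaintext analogue does not model. The 3-bit example function is an arbitrary illustrative choice.

```python
# Plaintext analogue of multi-input LUT evaluation via a binary selection tree
# (structure only; in the paper each selection is performed homomorphically,
# e.g. with external products, which this sketch does not model).
def lut_tree_eval(table, bits):
    """table: 2**len(bits) outputs; bits: selector bits, least significant first."""
    layer = list(table)
    for b in bits:                                        # one tree level per input bit
        layer = [layer[2 * i + 1] if b else layer[2 * i]  # CMux-style selection
                 for i in range(len(layer) // 2)]
    return layer[0]

table = [(x * x) % 8 for x in range(8)]                   # 3-bit LUT for f(x) = x^2 mod 8
x = 5
bits = [(x >> i) & 1 for i in range(3)]
assert lut_tree_eval(table, bits) == (x * x) % 8
```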
On Witness Encryption and Laconic Zero-Knowledge Arguments
Witness encryption (WE) (Garg et al., STOC’13) is a powerful cryptographic primitive that is closely related to the notion of indistinguishability obfuscation (Barak et al., JACM’12; Garg et al., FOCS’13). For a given NP-language $L$, WE for $L$ enables encrypting a message $m$ using an instance $x$ as the public key, while ensuring that efficient decryption is possible by anyone possessing a witness for $x \in L$, and if $x\notin L$, then the encryption is hiding. We show that this seemingly sophisticated primitive is equivalent to a communication-efficient version of one of the most classic cryptographic primitives—namely that of a zero-knowledge argument (Goldwasser et al., SIAM’89; Brassard et al., JCSS’88): for any NP-language $L$, the following are equivalent:
- There exists a witness encryption for L;
- There exists a laconic (i.e., the prover communication is bounded by $O(\log n)$) special-honest verifier zero-knowledge (SHVZK) argument for $L$.
Our approach is inspired by an elegant (one-sided) connection between (laconic) zero-knowledge arguments and public-key encryption established by Berman et al. (CRYPTO’17) and Cramer-Shoup (EUROCRYPT’02), and the equivalence between a notion of so-called “predictable arguments” and witness encryption by Faonio, Nielsen, and Venturi (PKC’17).
Efficient Authentication Protocols from the Restricted Syndrome Decoding Problem
In this paper, we introduce an oracle version of the Restricted Syndrome Decoding Problem (RSDP) and propose novel authentication protocols based on the hardness of this problem. They follow the basic structure of the HB-family of authentication protocols and later improvements but demonstrate several advantages.
An appropriate choice of multiplicative subgroup and ring structure gives rise to a very efficient hardware implementation compared to other \emph{Learning Parity with Noise} based approaches. In addition, the new protocols also have lower key size, lower communication costs, and potentially better completeness/soundness compared to learning-based alternatives. This is appealing in the context of low-cost, low-powered authenticating devices such as radio frequency identification (RFID) systems.
Lastly, we show that, under additional assumptions, RSDP can be used to instantiate an authentication protocol secure against man-in-the-middle attacks.
Asymptotically Optimal Adaptive Asynchronous Common Coin and DKG with Silent Setup
This paper presents the first optimally resilient, adaptively secure asynchronous common coin protocol with $O(\lambda n^2)$ communication complexity and $O(1)$ rounds, requiring only a public silent setup. Our protocol immediately implies a sequence of quadratic-communication, constant-round asynchronous Byzantine agreement protocols and asynchronous distributed key generation with a silent setup. Along the way, we formulate a new primitive called asynchronous subset alignment and introduce a simple framework to reason about specific composition security suitable for asynchronous common coin, which may be of independent interest.
Lossy Cryptography from Code-Based Assumptions
Over the past few decades, we have seen a proliferation of advanced cryptographic primitives with lossy or homomorphic properties built from various assumptions such as Quadratic Residuosity, Decisional Diffie-Hellman, and Learning with Errors. These primitives imply hard problems in the complexity class $\mathcal{SZK}$ (statistical zero-knowledge); as a consequence, they can only be based on assumptions that are broken in $\mathcal{BPP}^{\mathcal{SZK}}$. This poses a barrier for building advanced primitives from code-based assumptions, as the only known such assumption is Learning Parity with Noise (LPN) with an extremely low noise rate $\frac{\log^2 n}{n}$, which is broken in quasi-polynomial time.
In this work, we propose a new code-based assumption: Dense-Sparse LPN, that falls in the complexity class $\mathcal{BPP}^{\mathcal{SZK}}$ and is conjectured to be secure against subexponential time adversaries. Our assumption is a variant of LPN that is inspired by McEliece's cryptosystem and random $k\mbox{-}$XOR in average-case complexity. Roughly, the assumption states that
\[(\mathbf{T}\, \mathbf{M}, \mathbf{s} \,\mathbf{T}\, \mathbf{M} + \mathbf{e}) \quad \text{is indistinguishable from}\quad (\mathbf{T} \,\mathbf{M}, \mathbf{u}),\] for a random (dense) matrix $\mathbf{T}$, random sparse matrix $\mathbf{M}$, and sparse noise vector $\mathbf{e}$ drawn from the Bernoulli distribution with inverse polynomial noise probability.
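A minimal NumPy sampler for the distribution above may help fix notation; the dimensions, the column sparsity of $\mathbf{M}$ ($k$ nonzeros per column), and the constant noise rate below are toy illustrative choices, not the paper's parameters.

```python
# Minimal sampler for the distribution above over F_2 (toy parameters; the
# column sparsity of M and the noise rate are illustrative choices, not the paper's).
import numpy as np

rng = np.random.default_rng(1)

def dense_sparse_lpn_sample(n=64, m=256, k=3, eps=0.05, real=True):
    T = rng.integers(0, 2, (n, n))                    # dense random matrix T
    M = np.zeros((n, m), dtype=int)
    for j in range(m):                                # sparse M: k nonzeros per column
        M[rng.choice(n, k, replace=False), j] = 1
    A = (T @ M) % 2                                   # public matrix T*M
    if real:
        s = rng.integers(0, 2, n)
        e = (rng.random(m) < eps).astype(int)         # sparse Bernoulli noise e
        b = (s @ A + e) % 2                           # s*T*M + e
    else:
        b = rng.integers(0, 2, m)                     # uniform vector u
    return A, b

A, b = dense_sparse_lpn_sample()
print(A.shape, b.shape)
```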
We leverage our assumption to build lossy trapdoor functions (Peikert-Waters STOC 08). This gives the first post-quantum alternative to the lattice-based construction in the original paper. Lossy trapdoor functions, being a fundamental cryptographic tool, are known to enable a broad spectrum of both lossy and non-lossy cryptographic primitives; our construction thus implies these primitives in a generic manner. In particular, we achieve collision-resistant hash functions with plausible subexponential security, improving over a prior construction from LPN with noise rate $\frac{\log^2 n}{n}$ that is only quasi-polynomially secure.
Leakage-Free Probabilistic Jasmin Programs
This paper presents a semantic characterization of leakage-freeness through timing side-channels for Jasmin programs. Our characterization covers probabilistic Jasmin programs that are not constant-time. In addition, we provide a characterization in terms of probabilistic relational Hoare logic and prove the equivalence between both definitions. We also prove that our new characterizations are compositional and relate our new definitions to existing ones from prior work, which could only be applied to deterministic programs.
To provide practical evidence, we use the Jasmin framework to develop a rejection sampling algorithm and provide an EasyCrypt proof that ensures the algorithm's implementation is leakage-free while not being constant-time.
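As a simple analogue of such a rejection sampler (in Python rather than Jasmin, and not taken from the paper's artifact), the loop below runs a variable number of iterations whose count depends only on fresh randomness, never on secret data; this is the intuition behind being leakage-free without being constant-time. The modulus is an arbitrary illustrative choice.

```python
# Rejection-sampling analogue in Python (not the paper's Jasmin artifact): the
# loop length varies, but it depends only on fresh public randomness, never on
# secret data -- the intuition behind "leakage-free yet not constant-time".
import secrets

def sample_uniform_mod(q, byte_len=2):
    bound = (256 ** byte_len // q) * q    # largest multiple of q we accept
    while True:                           # variable number of iterations
        r = int.from_bytes(secrets.token_bytes(byte_len), "little")
        if r < bound:                     # reject to avoid modulo bias
            return r % q

print(sample_uniform_mod(3329))           # e.g. a Kyber-style modulus
```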
Ringtail: Practical Two-Round Threshold Signatures from Learning with Errors
A threshold signature scheme splits the signing key among $\ell$ parties, such that any $t$-subset of parties can jointly generate signatures on a given message. Designing concretely efficient post-quantum threshold signatures is a pressing question, as evidenced by NIST's recent call.
In this work, we propose, implement, and evaluate a lattice-based threshold signature scheme, Ringtail, which is the first to achieve a combination of desirable properties:
(i) The signing protocol consists of only two rounds, where the first round is message-independent and can thus be preprocessed offline.
(ii) The scheme is concretely efficient and scalable to $t \leq 1024$ parties. For $128$-bit security and $t = 1024$ parties, we achieve $13.4$ KB signature size and $10.5$ KB of online communication.
(iii) The security is based on the standard learning with errors (LWE) assumption in the random oracle model. This improves upon the state-of-the-art (with comparable efficiency) which either has a three-round signing protocol [Eurocrypt'24] or relies on a new non-standard assumption [Crypto'24].
To substantiate the practicality of our scheme, we conduct the first WAN experiment deploying a lattice-based threshold signature, across 8 countries in 5 continents. We observe that an overwhelming majority of the end-to-end latency is consumed by network latency, underscoring the need for round-optimized schemes.
Fiat-Shamir Bulletproofs are Non-Malleable (in the Random Oracle Model)
Bulletproofs (Bünz et al. IEEE S&P 2018) are a celebrated ZK proof system that allows for short and efficient proofs, and have been implemented and deployed in several real-world systems. In practice, they are most often implemented in their non-interactive version obtained using the Fiat-Shamir transform. A security proof for this setting is necessary for ruling out malleability attacks. These attacks can lead to very severe vulnerabilities, as they allow an adversary to forge proofs re-using or modifying parts of the proofs provided by the honest parties. An earlier version of this work (Ganesh et al. EUROCRYPT 2022) provided evidence for non-malleability of Fiat-Shamir Bulletproofs. This was done by proving simulation-extractability, which implies non-malleability, in the algebraic group model.
In this work, we generalize the former result and prove simulation extractability in the programmable random oracle model, removing the need for the algebraic group model. Along the way, we establish a generic chain of reductions for Fiat-Shamir-transformed multi-round public-coin proofs to be simulation-extractable in the (programmable) random oracle model, which may be of independent interest.
Foundations of Platform-Assisted Auctions
Today, many auctions are carried out with the help of intermediary platforms like Google and eBay. These platforms serve as a rendezvous point for buyers and sellers, and charge a fee for their services. We refer to such auctions as platform-assisted auctions. Traditionally, the auction theory literature mainly focuses on designing auctions that incentivize the buyers to bid truthfully, assuming that the platform always faithfully implements the auction. In practice, however, platforms have been found to manipulate the auctions to earn more profit, resulting in high-profile anti-trust lawsuits.
We propose a new model for studying platform-assisted auctions in the permissionless setting, where anyone can register and participate in the auction. We explore whether it is possible to design a dream auction in this new model, such that honest behavior is the utility-maximizing strategy for each individual buyer, the platform, the seller, as well as platform-seller or platform-buyer coalitions. Through a collection of feasibility and infeasibility results, we carefully characterize the mathematical landscape of platform-assisted auctions.
Interestingly, our work reveals exciting connections between cryptography and mechanism design. We show how cryptography can lend itself to the design of an efficient platform-assisted auction with dream properties. Although a line of works has also used multi-party computation (MPC) or the blockchain to remove the reliance on a trusted auctioneer, our work is distinct in nature in several dimensions. First, we initiate a systematic exploration of the game-theoretic implications when the service providers (e.g., nodes that implement the MPC or blockchain protocol) are strategic and can collude with sellers or buyers. Second, we observe that the full simulation paradigm used in the standard MPC literature is too stringent and leads to high asymptotic costs. Specifically, because every player has a different private outcome in an auction protocol, to the best of our knowledge, running any generic MPC protocol among the players would incur at least $n^2$ total cost, where $n$ is the number of buyers. We propose a new notion of simulation called {\it utility-dominated emulation} that is sufficient for guaranteeing the game-theoretic properties needed in an auction. Under this new notion of simulation, we show how to design auction protocols with quasilinear efficiency, which gives an $n$-fold improvement over any generic approach.
Privacy-Preserving Dijkstra
Given a graph $G(V,E)$, represented as a secret-sharing of an adjacency list, we show how to obliviously convert it into an alternative, MPC-friendly secret-shared representation, so-called $d$-normalized replicated adjacency list (which we abbreviate to $d$-normalized), where the size of our new data-structure is only 4x larger -- compared to the original (secret-shared adjacency list) representation of $G$. Yet, this new data structure enables us to execute oblivious graph algorithms that simultaneously improve underlying graph algorithms' round, computation, and communication complexity. Our $d$-normalization proceeds in two steps:
First, we show how for any graph $G$, given a secret-shared adjacency list, where vertices are arbitrary alphanumeric strings that fit into a single RAM memory word, we can efficiently and securely rename vertices to integers from $1$ to $V$ that will appear in increasing order in the resulting secret-shared adjacency list. The secure renaming takes $O(\log V)$ rounds and $O((V+E)\log V)$ secure operations. Our algorithm also outputs two secret-shared arrays: a mapping from integers to alphanumeric names and its sorted inverse. Of course, if the adjacency list is already in such a format, this step could be omitted.
Second, given a secret-shared adjacency list for any graph $G$, where vertices are integers from $1$ to $V$ and are sorted, we show an oblivious $d$-normalization algorithm that takes $O(1)$ rounds and $O(V+E)$ secure operations.
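For intuition, a plaintext analogue of the first conversion step looks as follows (illustration only; the actual protocol performs the sort and renaming obliviously over secret shares, and the example graph is a made-up input):

```python
# Plaintext analogue of the renaming step (not the oblivious protocol): map
# alphanumeric vertex names to integers 1..V in sorted order and rewrite the
# adjacency list, keeping the mapping and its inverse, as in the paper.
def rename_vertices(adj):                  # adj: {name: [neighbor names]}
    names = sorted(adj)                    # the secure version uses an oblivious sort
    to_int = {v: i + 1 for i, v in enumerate(names)}
    renamed = {to_int[v]: sorted(to_int[w] for w in nbrs) for v, nbrs in adj.items()}
    return renamed, names, to_int

adj = {"nyc": ["sfo", "lax"], "lax": ["nyc"], "sfo": ["nyc", "lax"]}
renamed, int_to_name, name_to_int = rename_vertices(adj)
print(renamed)                             # integer-labelled adjacency list
```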
We believe that both conversions are of independent interest. We demonstrate the power of our data structures by designing a privacy-preserving Dijkstra's single-source shortest-path algorithm that simultaneously achieves $O((V +E) \cdot \log V)$ secure operations and $O(V \cdot \log V \cdot \log \log\log V)$ rounds. Our secure Dijkstra algorithm works for any adjacency list representation as long as all vertex labels and weights can individually fit into a constant number of RAM memory words. Our algorithms work for two or a constant number of servers in the honest but curious setting. The limitation of our result (to only a constant number of servers) is due to our reliance on linear work and constant-round secure shuffle.
On the Independence Assumption in Quasi-Cyclic Code-Based Cryptography
Cryptography based on the presumed hardness of decoding codes -- i.e., code-based cryptography -- has recently seen increased interest due to its plausible security against quantum attackers. Notably, of the four proposals for the NIST post-quantum standardization process that were advanced to the fourth round for further review, two were code-based. The most efficient proposals -- including HQC and BIKE, the NIST submissions alluded to above -- in fact rely on the presumed hardness of decoding structured codes. Of particular relevance to our work, HQC is based on quasi-cyclic codes, which are codes generated by matrices consisting of two cyclic blocks.
In particular, the security analysis of HQC requires a precise understanding of the Decryption Failure Rate (DFR), whose analysis relies on the following heuristic: given random "sparse" vectors $e_1,e_2$ (say, each coordinate is i.i.d. Bernoulli) multiplied by fixed "sparse" quasi-cyclic matrices $A_1,A_2$, the weight of the resulting vector $e_1A_1+e_2A_2$ is very concentrated around its expectation. In the documentation, the authors model the distribution of $e_1A_1+e_2A_2$ as a vector with independent coordinates (and correct marginal distribution). However, we uncover cases where this modeling fails. While this does not invalidate the (empirically verified) heuristic that the weight of $e_1A_1+e_2A_2$ is concentrated, it does suggest that the behavior of the noise is a bit more subtle than previously predicted. Lastly, we also discuss implications of our result for potential worst-case to average-case reductions for quasi-cyclic codes.
How Much Public Randomness Do Modern Consensus Protocols Need?
Modern blockchain-based consensus protocols aim for efficiency (i.e., low communication and round complexity) while maintaining security against adaptive adversaries. These goals are usually achieved using a public randomness beacon to select roles for each participant. We examine to what extent this randomness is necessary.
Specifically, we provide tight bounds on the amount of entropy a Byzantine Agreement protocol must consume from a beacon in order to enjoy efficiency and adaptive security. We first establish that no consensus protocol can simultaneously be efficient, be adaptively secure, and use $O(\log n)$ bits of beacon entropy. We then show that this bound is tight and that, in fact, these requirements form a trilemma, by presenting three consensus protocols that achieve any two of the three properties.
Stronger Security and Constructions of Multi-Designated Verifier Signatures
Off-the-Record (OTR) messaging is a two-party message authentication protocol that also provides plausible deniability: there is no record that can later convince a third party what messages were actually sent. To extend OTR to group messaging we need to consider issues that are not present in the 2-party case. In group OTR (as in two-party OTR), the sender should be able to authenticate (or sign) his messages so that group members can verify who sent a message (that is, signatures should be unforgeable, even by group members). Also as in the two-party case, we want the off-the-record property: even if some verifiers are corrupt and collude, they should not be able to prove the authenticity of a message to any outsider. Finally, we need consistency, meaning that a corrupt sender cannot create confusion in the group as to what he said: if any group member accepts a signature, then all of them do.
To achieve these properties it is natural to consider Multi-Designated Verifier Signatures (MDVS), which intuitively seem to target exactly the properties we require. However, existing literature defines and builds only limited notions of MDVS, where (a) the off-the-record property (referred to as source hiding) only holds when all verifiers could conceivably collude, and (b) the consistency property is not considered.
The contributions of this paper are two-fold: stronger definitions for MDVS, and new constructions meeting those definitions. We strengthen source-hiding to support any subset of corrupt verifiers, and give the first formal definition of consistency.
We give several constructions of our stronger notion of MDVS: one from generic standard primitives such as pseudorandom functions, pseudorandom generators, key agreement and NIZKs; one from specific instances of these primitives (for concrete efficiency); and one from functional encryption. The third construction requires an involved trusted setup step — including verification keys derived from a master secret — but this trusted setup buys us verifier-identity-based signing, for which such trusted setup is unavoidable. Additionally, in the third construction, the signature size can be made smaller by assuming a bound on colluding verifiers.
Probabilistic Attacks and Enhanced Security for "Private Set Intersection in the Internet Setting from Lightweight Oblivious PRF"
Private Set Intersection (PSI) has been an important research topic within privacy-preserving computation. Its main function is to allow two parties to compute the intersection of their private sets without revealing any other private information. As such, PSI can be applied to various real-world scenarios.
Chase and Miao presented an impressive construction, ``Private set intersection in the Internet setting from lightweight oblivious PRF'' (CM20 for short), at Crypto 2020, highlighting its convenient structure and optimal communication cost. However, it does have some security vulnerabilities. Let $X$ be the private set of user $P_1$ and $Y$ be the private set of user $P_2$. The CM20 protocol uses a pseudorandom function (PRF) to encrypt each private element $x\in X$ of $P_1$ into $D_1$ and each private element $y\in Y$ of $P_2$ into $D_2$, with $D_1 = D_2$ whenever $x=y$. It then sends random data $F_1$ to user $P_1$ and random data $F_2$ to user $P_2$ using a random oblivious transfer technique. User $P_2$ computes $\delta=D_2\oplus F_2$ and sends $\delta$ to user $P_1$, and user $P_1$ uses $\delta$ and $F_1$ to re-encrypt $D_1$. This is repeated until $P_1$ has re-encrypted every element of its private set $X$; the results are packaged and sent to $P_2$, who completes the private set intersection. However, if an adversary obtains $\delta$ and $F_2$ by any means, they can recover the PRF encryption of the user's private element, and the recovery process is non-trivial. This significantly weakens the security of the CM20 protocol.
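The XOR masking relation underlying this observation can be illustrated in a few lines of Python; all values below are random stand-ins in the abstract's simplified notation, and CM20 itself operates on structured PRF outputs, which is why the paper's actual recovery is more involved.

```python
# Toy illustration of the XOR masking relation above (random stand-in values in
# the abstract's simplified notation; CM20 itself operates on structured PRF
# outputs, so the paper's actual recovery is more involved).
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

D2 = os.urandom(32)           # stand-in for the PRF encoding of P2's element
F2 = os.urandom(32)           # random string P2 obtains from the oblivious transfer
delta = xor(D2, F2)           # the value P2 sends to P1

assert xor(delta, F2) == D2   # holding both delta and F2 reveals D2
```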
In this paper, we make three main contributions. First, based on the above analysis, we present a method for attacking CM20, called {\em probabilistic attacks}. This attack is based on estimating and analysing the frequency distribution of the encrypted data from the PRF and the probability distribution of the original private data, and determining the relationship between the two. Although not 100\% effective, this method of attack poses a significant threat to the security of user data.
Secondly, we introduce a new tool called the {\em perturbed pseudorandom generator} (PPRG). We show that the PPRG can overcome probabilistic attacks by replacing the random oblivious transfer and one of the hash functions (originally there were two) in CM20.
Finally, we provide a dedicated indistinguishability-against-chosen-plaintext-attack (IND-CPA) security model for this PSI protocol. The efficiency analysis shows that the proposed PSI is comparable to CM20's PSI, whether run on a PC, Mac, tablet, or mobile phone.
Leuvenshtein: Efficient FHE-based Edit Distance Computation with Single Bootstrap per Cell
This paper presents a novel approach to calculating the Levenshtein (edit) distance within the framework of Fully Homomorphic Encryption (FHE), specifically targeting third-generation schemes like TFHE. Edit distance computations are essential in applications across finance and genomics, such as DNA sequence alignment. We introduce an optimised algorithm, called Leuvenshtein, that significantly reduces the cost of edit distance calculations. This algorithm specifically reduces the number of programmable bootstraps (PBS) needed per cell of the calculation, lowering it from approximately 28 operations—required by the conventional Wagner-Fischer algorithm—to just 1. Additionally, we propose an efficient method for performing equality checks on characters, reducing ASCII character comparisons to only 2 PBS operations. Finally, we explore the potential for further performance improvements by utilizing preprocessing when one of the input strings is unencrypted. Our Leuvenshtein algorithm achieves up to $205\times$ faster performance compared to the best available TFHE implementation and up to $39\times$ faster than an optimised implementation of the Wagner-Fischer algorithm. Moreover, when offline preprocessing is possible due to the presence of one unencrypted input on the server side, an additional $3\times$ speedup can be achieved.
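For reference, the conventional Wagner-Fischer recurrence that the per-cell cost above is measured against looks as follows in the clear; this is only the plaintext baseline, not the paper's FHE circuit.

```python
# Plaintext Wagner-Fischer baseline for reference (the conventional algorithm
# whose per-cell cost the paper reduces; this is not the FHE circuit).
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution (0 if equal)
        prev = cur
    return prev[-1]

assert edit_distance("kitten", "sitting") == 3
```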
Dynamically Available Common Subset
Internet-scale consensus protocols used by blockchains are designed to remain operational in the presence of unexpected temporary crash faults (the so-called sleepy model of consensus) -- a critical feature for the latency-sensitive financial applications running on these systems.
However, their leader-based architecture, where a single block proposer is responsible for creating the block at each height, makes them vulnerable to short-term censorship attacks, in which the proposers profit at the application layer by excluding certain transactions.
In this work, we introduce an atomic broadcast protocol, secure in the sleepy model, that ensures the inclusion of all transactions within a constant expected time, provided that at least one participating node is non-censoring at all times.
Unlike traditional approaches, our protocol avoids designating a single proposer per block height, instead leveraging a so-called dynamically available common subset (DACS) protocol -- the first of its kind in the sleepy model. Additionally, our construction guarantees deterministic synchronization -- once an honest node confirms a block, all other honest nodes do so within a constant time, thus addressing a shortcoming of many low-latency sleepy protocols.
Publicly-Detectable Watermarking for Language Models
We present a publicly-detectable watermarking scheme for LMs: the detection algorithm contains no secret information, and it is executable by anyone. We embed a publicly-verifiable cryptographic signature into LM output using rejection sampling and prove that this produces unforgeable and distortion-free (i.e., undetectable without access to the public key) text output. We make use of error-correction to overcome periods of low entropy, a barrier for all prior watermarking schemes. We implement our scheme and find that our formal claims are met in practice.
A New Method for Solving Discrete Logarithm Based on Index Calculus
The Index Calculus (IC) algorithm is the most effective probabilistic algorithm for solving discrete logarithms over prime finite fields, and it has been widely applied to cryptosystems based on elliptic curves. Since the IC algorithm was proposed in 1920, research on it has never stopped, especially on the discretization of prime numbers over finite fields; both the algorithm itself and its applications have been greatly developed. There has also been some research on elliptic curves, but with little success. For the IC algorithm, scholars have paid most attention to improving the probability of a successful solution and reducing the computational time complexity. The IICA is the first to study the optimization of the IC algorithm by means of rounding to integers. However, the IICA only studies the case of rounding up and fails to consider the case of rounding down. We find that the rounding direction of the IICA can be either up or down, but the concept of modular multiplication is needed when rounding down. After optimizing the IICA, the probability of successfully solving a discrete logarithm is increased by nearly a factor of 2, and the number of transformations is also reduced to a certain extent, thus reducing the time complexity of the solution. The re-optimized IC algorithm greatly improves the probability of a successful IC solution. This result poses a serious challenge to cryptosystems based on prime finite fields.
Leverage Staking with Liquid Staking Derivatives (LSDs): Opportunities and Risks
In the Proof of Stake (PoS) Ethereum ecosystem, users can stake ETH on Lido to receive stETH, a Liquid Staking Derivative (LSD) that represents staked ETH and accrues staking rewards. LSDs improve the liquidity of staked assets by facilitating their use in secondary markets, such as for collateralized borrowing on Aave or asset exchanges on Curve. The composability of Lido, Aave, and Curve enables an emerging strategy known as leverage staking, an iterative process that enhances financial returns while introducing potential risks. This paper establishes a formal framework for leverage staking with stETH and identifies 442 such positions on Ethereum over 963 days. These positions represent a total volume of 537,123 ETH (877m USD). Our data reveal that 81.7% of leverage staking positions achieved an Annual Percentage Rate (APR) higher than conventional staking on Lido. Despite the high returns, we also recognize the potential risks. For example, the Terra crash incident demonstrated that token devaluation can impact the market. Therefore, we conduct stress tests under extreme conditions of significant stETH devaluation to evaluate the associated risks. Our simulations reveal that leverage staking amplifies the risk of cascading liquidations by triggering intensified selling pressure through liquidation and deleveraging processes. Furthermore, this dynamic not only accelerates the decline of stETH prices but also propagates a contagion effect, endangering the stability of both leveraged and ordinary positions.
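A back-of-the-envelope sketch of the iterative loop described above is given below; the staking rate, borrow rate, and loan-to-value cap are hypothetical inputs chosen for illustration (not the paper's measurements), and stETH is idealized as pegged 1:1 to ETH.

```python
# Back-of-the-envelope sketch of the leverage-staking loop (hypothetical rates
# and loan-to-value cap, stETH idealized as pegged 1:1 to ETH; these are not
# the paper's measured figures).
def leverage_staking(initial_eth, ltv, staking_apr, borrow_apr, rounds=10):
    staked, borrowed, deposit = 0.0, 0.0, initial_eth
    for _ in range(rounds):
        staked += deposit            # stake ETH on Lido, receive stETH
        new_loan = deposit * ltv     # use the stETH as collateral to borrow ETH
        borrowed += new_loan
        deposit = new_loan           # ...and stake the borrowed ETH again
    net_apr = (staked * staking_apr - borrowed * borrow_apr) / initial_eth
    return staked, borrowed, net_apr

staked, borrowed, net_apr = leverage_staking(100, ltv=0.7, staking_apr=0.04, borrow_apr=0.02)
print(f"exposure {staked:.1f} ETH, debt {borrowed:.1f} ETH, net APR {net_apr:.2%}")
```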
SPY-PMU: Side-Channel Profiling of Your Performance Monitoring Unit to Leak Remote User Activity
The Performance Monitoring Unit (PMU), a standard feature in all modern computing systems, presents significant security risks by leaking sensitive user activities through microarchitectural event data. This work demonstrates the feasibility of remote side-channel attacks leveraging PMU data, revealing vulnerabilities that compromise user privacy and enable covert surveillance without physical access to the target machine. By analyzing the PMU feature space, we create distinct micro-architectural fingerprints for benchmark applications, which are then utilized in machine learning (ML) models to detect the corresponding benchmarks. This approach allows us to build a pre-trained model for benchmark detection using the unique micro-architectural fingerprints derived from PMU data. Subsequently, when an attacker remotely accesses the victim’s PMU data, the pre-trained model enables the identification of applications used by the victim with high accuracy. In our proof-of-concept demonstration, the pre-trained model successfully identifies applications used by a victim when the attacker remotely accesses PMU data, showcasing the potential for malicious exploitation of PMU data. We analyze stress-ng benchmarks and build our classifiers using logistic regression, decision tree, k-nearest neighbors, and random forest ML models. Our proposed models achieve an average prediction accuracy of 98%, underscoring the potential risks associated with remote side-channel analysis using PMU data and emphasizing the need for more robust safeguards. This work underscores the urgent need for robust countermeasures to protect against such vulnerabilities and provides a foundation for future research in micro-architectural security.
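The classification pipeline described above can be sketched with scikit-learn on synthetic stand-in data (random features with class-dependent means rather than real PMU traces); the model choice and train/test split below are illustrative, not the paper's exact configuration.

```python
# Hedged sketch of the classification pipeline on synthetic stand-in
# "fingerprints" (random features with class-dependent means, not real PMU
# traces; models and split are illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_apps, n_counters, samples_per_app = 5, 20, 200
X = np.vstack([rng.normal(loc=app, scale=1.0, size=(samples_per_app, n_counters))
               for app in range(n_apps)])            # one mean offset per "application"
y = np.repeat(np.arange(n_apps), samples_per_app)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```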
Wave Hello to Privacy: Efficient Mixed-Mode MPC using Wavelet Transforms
This paper introduces new protocols for secure multiparty computation (MPC) leveraging Discrete Wavelet Transforms (DWTs) for computing nonlinear functions over large domains. By employing DWTs, the protocols significantly reduce the overhead typically associated with Lookup Table-style (LUT) evaluations in MPC. We state and prove foundational results for DWT-compressed LUTs in MPC, present protocols for 9 of the most common activation functions used in ML, and experimentally evaluate the performance of our protocols for large domain sizes in the LAN and WAN settings. Our protocols are extremely fast -- for instance, when considering 64-bit inputs, computing 1000 parallel instances of the sigmoid function with an error less than $2^{-24}$ takes only a few hundred milliseconds and incurs just 29\,KiB of online communication (40 bytes per evaluation).
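As a plaintext illustration of DWT-compressing a lookup table (the compression idea only; none of the MPC machinery is modeled), a one-level Haar transform of a sigmoid LUT keeps the pairwise averages plus a small fraction of the largest detail coefficients; the domain, table size, and retention ratio below are illustrative choices.

```python
# Plaintext illustration of DWT-compressing a sigmoid LUT: a one-level Haar
# transform keeps the pairwise averages plus only the largest detail
# coefficients (compression idea only; no MPC machinery is modeled).
import numpy as np

x = np.linspace(-8, 8, 1024)
lut = 1.0 / (1.0 + np.exp(-x))                     # sigmoid lookup table

avg = (lut[0::2] + lut[1::2]) / 2                  # one-level Haar transform
detail = (lut[0::2] - lut[1::2]) / 2

keep = np.abs(detail) >= np.quantile(np.abs(detail), 0.9)   # keep top 10% of details
detail_c = np.where(keep, detail, 0.0)

recon = np.empty_like(lut)                         # inverse transform
recon[0::2] = avg + detail_c
recon[1::2] = avg - detail_c
print("max reconstruction error:", np.abs(recon - lut).max())
```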
A Note on the Minimality of One-Way Functions in Post-Quantum Cryptography
In classical cryptography, one-way functions (OWFs) play a central role as the minimal primitive that (almost) all primitives imply. The situation is more complicated in quantum cryptography, in which honest parties and adversaries can use quantum computation and communication, and it is known that analogues of OWFs in the quantum setting might not be minimal.
In this work we ask whether OWFs are minimal for the intermediate setting of post-quantum cryptography, in which the protocols are classical while they shall resist quantum adversaries. We show that for a wide range of natural settings, if a primitive Q implies OWFs, then so does its (uniformly or non-uniformly secure) post-quantum analogue. In particular, we show that if a primitive Q implies any other primitive P that has a 2-message security game (e.g., OWFs) through a black-box classical security reduction R, then one can always (efficiently) turn any polynomial-size quantum adversary breaking P into a polynomial-size quantum adversary breaking Q. Note that this result holds even if the implementation of P using that of Q is arbitrarily non-black-box.
We also prove extensions of this result for the case when the reduction R expects its oracle adversary to be deterministic, whenever either of the following conditions holds: (1) the adversary needs to win the security game of Q only with non-negligible probability (e.g., Q is collision-resistant hashing), or (2) either of P and Q has a "falsifiable" security game (this is the case when P is OWFs). Our work leaves open answering our main question when Q implies OWFs through a non-black-box security reduction, or when P uses a more complicated security game than a two-message one.