All papers (24037 results)
Vector Commitment Design, Analysis, and Applications: A Survey
Due to their widespread applications in decentralized and privacy-preserving technologies, commitment schemes have become increasingly important cryptographic primitives. With a wide variety of applications, many new constructions have been proposed, each enjoying different features and security guarantees. In this paper, we systematize the designs, features, properties, and applications of vector commitments (VCs). We define vector, polynomial, and functional commitments and discuss the relationships between these types of commitment schemes. We first provide an overview of the definitions of the commitment schemes we will consider, as well as their security notions and the various properties they can have. We proceed to compare popular constructions, taking into account the properties each one enjoys, their proof/update information sizes, and their proof/commitment complexities. We also consider their effectiveness in various decentralized and privacy-preserving applications. Finally, we conclude by discussing some potential directions for future work.
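As a rough illustration of the vector-commitment functionality surveyed here (commit to a vector, then open individual positions with short proofs), the following minimal Python sketch implements a Merkle-tree-based VC. It is a generic textbook construction, not a scheme from the survey, and the helper names are our own.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def commit(vector):
    """Commit to a list of byte strings; returns (root, tree levels)."""
    level = [H(b"leaf:" + v) for v in vector]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate the last node on odd-sized levels
            level = level + [level[-1]]
            levels[-1] = level
        level = [H(b"node:" + level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return level[0], levels

def open_position(levels, i):
    """Authentication path (the 'proof') for position i."""
    path = []
    for level in levels[:-1]:
        path.append(level[i ^ 1])               # sibling at this level
        i //= 2
    return path

def verify(root, i, value, path):
    node = H(b"leaf:" + value)
    for sibling in path:
        node = H(b"node:" + node + sibling) if i % 2 == 0 else H(b"node:" + sibling + node)
        i //= 2
    return node == root

vec = [b"a", b"b", b"c", b"d"]
root, levels = commit(vec)
proof = open_position(levels, 2)
assert verify(root, 2, b"c", proof)             # position 2 opens to b"c"
```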
Adaptive Robustness of Hypergrid Johnson-Lindenstrauss
Johnson and Lindenstrauss (Contemporary Mathematics, 1984) showed that for , a scaled random projection from to is an approximate isometry on any set of size at most exponential in . If is larger, however, its points can contract arbitrarily under . In particular, the hypergrid is expected to contain a point that is contracted by a factor of , where .
We give evidence that finding such a point exhibits a statistical-computational gap precisely up to . On the algorithmic side, we design an online algorithm achieving , inspired by a discrepancy minimization algorithm of Bansal and Spencer (Random Structures \& Algorithms, 2020). On the hardness side, we show evidence via a multiple overlap gap property (mOGP), which in particular captures online algorithms; and a reduction-based lower bound, which shows hardness under standard worst-case lattice assumptions.
As a cryptographic application, we show that the rounded Johnson-Lindenstrauss embedding is a robust property-preserving hash function (Boyle, Lavigne and Vaikuntanathan, TCC 2019) on the hypergrid for the Euclidean metric in the computationally hard regime. Such hash functions compress data while preserving distances between inputs up to some distortion factor, with the guarantee that even knowing the hash function, no computationally bounded adversary can find any pair of points that violates the distortion bound.
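As a quick numerical illustration of the phenomenon described above (a scaled random projection preserves the norms of a fixed small point set, yet some points of a large structured set such as a hypergrid are noticeably contracted), here is a small numpy sketch; the dimensions and the brute-force search over random hypergrid points are arbitrary choices for demonstration, not the paper's parameters or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 32                                      # ambient dimension n, target dimension m
A = rng.standard_normal((m, n)) / np.sqrt(m)        # scaled Gaussian JL projection

# Norm preservation on a handful of fixed points.
X = rng.standard_normal((10, n))
ratios = np.linalg.norm(X @ A.T, axis=1) / np.linalg.norm(X, axis=1)
print("fixed points, ||Ax||/||x|| in", ratios.min().round(3), "-", ratios.max().round(3))

# Search random {0,1}-hypergrid points for one that is contracted the most.
best = 1.0
for _ in range(20000):
    x = rng.integers(0, 2, size=n).astype(float)
    if not x.any():
        continue
    best = min(best, np.linalg.norm(A @ x) / np.linalg.norm(x))
print("most contracted hypergrid point found:", round(best, 3))
```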
MProve-Nova: A Privacy-Preserving Proof of Reserves Protocol for Monero
A proof of reserves (PoR) protocol enables a cryptocurrency exchange to prove to its users that it owns a certain amount of coins, as a first step towards proving that it is solvent. We present the design, implementation, and security analysis of MProve-Nova, a PoR protocol for Monero that leverages the Nova recursive SNARK to achieve two firsts (without requiring any trusted setup). It is the first Monero PoR protocol that reveals only the number of outputs owned by an exchange; no other information about the outputs or their key images is revealed. It is also the first Monero PoR protocol where the proof size and proof verification time are constant, i.e. they are independent of the number of outputs on the Monero blockchain and the number of outputs owned by the exchange. To achieve constant verification times, MProve-Nova requires a pre-processing step which creates two Merkle trees from all the outputs and key images on the Monero blockchain.
MProve-Nova consists of two Nova-based subprotocols, a reserves commitment generator (RCG) protocol used to compute a commitment to the total reserves owned by an exchange and a non-collusion (NC) protocol used to prove non-collusion between two exchanges. For the RCG protocol, we observed proof sizes of about 28 KB and verification times of 4.3 seconds. For the NC protocol, we observed proof sizes of about 24 KB and verification times of 0.2 seconds. Proving times for both protocols increase linearly with the number of outputs owned by the exchange but remain independent of the number of outputs on the Monero blockchain. On average, the RCG protocol required about 42 minutes per 1000 outputs and the NC protocol required about 5 minutes per 1000 outputs.
Publicly Verifiable Generalized Secret Sharing Schemes and Their Applications
Generalized secret sharing (GSS), which accommodates monotone access structures, has been under-explored in distributed computing over the past decades. In this paper, we propose the publicly verifiable generalized secret sharing (PVGSS) scheme, enhancing the applicability of GSS in transparent systems. PVGSS not only enables a dealer to share a secret with fine-grained access structures, but also allows anyone to verify whether the dealer and shareholders are acting honestly or not. We begin by introducing two approaches to implement GSS schemes: one based on recursive Shamir secret sharing and another utilizing a linear secret sharing scheme (LSSS). Then, we present PVGSS constructions by integrating non-interactive zero-knowledge proofs into the GSS schemes. Further, we prove that the proposed PVGSS schemes achieve IND1-secrecy under the DDH assumption. To showcase the practical applicability of PVGSS schemes, we implement a decentralized exchange (DEX) protocol that enables fair atomic swaps of ERC-20 tokens. A sophisticated access structure is devised to: (1) enable fair atomic swaps during normal protocol execution, and (2) incorporate fault-tolerant passive watchers to provide accountable arbitration when disputes occur. Our benchmarks on the BN128 curve demonstrate the computational efficiency of PVGSS schemes, while Ethereum gas cost analysis confirms the viability of the DEX implementation.
Intermundium-DL: Assessing the Resilience of Current Schemes to Discrete-Log-Computation Attacks on Public Parameters
We consider adversaries able to perform a nonzero but small number of discrete logarithm computations, as would be expected with near-term quantum computers. Schemes with public parameters consisting of a few group elements are now at risk; could an adversary knowing the discrete logarithms of these elements go on to easily compromise the security of many users? We study this question for known schemes and find, across them, a perhaps surprising variance in the answers. In a first class are schemes, including Okamoto and Katz-Wang signatures, that we show fully retain security even when the discrete logs of the group elements in their parameters are known to the adversary. In a second class are schemes like Cramer-Shoup encryption and the SPAKE2 password-authenticated key exchange protocol that we show retain some partial but still meaningful and valuable security. In the last class are schemes we show by attack to totally break. The distinctions uncovered by these results shed light on the security of classical schemes in a setting of immediate importance, and help make choices moving forward.
Attribute-Based Publicly Verifiable Secret Sharing
Can a dealer share a secret without knowing the shareholders? We provide a positive answer to this question by introducing the concept of an attribute-based secret sharing (AB-SS) scheme. With AB-SS, a dealer can distribute a secret based on attributes rather than specific individuals or shareholders. Only authorized users whose attributes satisfy a given access structure can recover the secret. Furthermore, we introduce the concept of attribute-based publicly verifiable secret sharing (AB-PVSS). An AB-PVSS scheme allows external users to verify the correctness of all broadcast messages from the dealer and shareholders, similar to a traditional PVSS scheme. Additionally, AB-SS (or AB-PVSS) distinguishes itself from traditional SS (or PVSS) by enabling a dealer to generate shares according to an arbitrary monotone access structure. To construct an AB-PVSS scheme, we first implement a decentralized ciphertext-policy attribute-based encryption (CP-ABE) scheme. The proposed CP-ABE scheme offers a smaller ciphertext size and requires fewer computational operations, although it is not fully-fledged as a trade-off. We then incorporate non-interactive zero-knowledge (NIZK) proofs to enable public verification of the CP-ABE ciphertext. Based on the CP-ABE and NIZK proofs, we construct an AB-PVSS primitive. Furthermore, we present an intuitive implementation of optimistic fair exchange based on the AB-PVSS scheme. Finally, we conduct security analysis and comprehensive experiments on the proposed CP-ABE and AB-PVSS schemes. The results demonstrate that both schemes exhibit plausible performance compared to related works.
An LLM Framework For Cryptography Over Chat Channels
Recent advancements in Large Language Models (LLMs) have transformed communication, yet their role in secure messaging remains underexplored, especially in surveillance-heavy environments. At the same time, many governments all over the world are proposing legislation to detect, backdoor, or even ban encrypted communication, which emphasizes the need for alternative ways to communicate securely and covertly over open channels. We propose a novel cryptographic embedding framework that enables covert public-key or symmetric-key encrypted communication over public chat channels using human-like generated texts. Some unique properties of our framework are: 1. It is LLM-agnostic, i.e., it allows participants to use different local LLM models independently; 2. It is pre- or post-quantum agnostic; 3. It ensures indistinguishability from human-like chat-produced texts. Thus, it offers a viable alternative where traditional encryption is detectable and restricted.
Eccfrog512ck2: An Enhanced 512-bit Weierstrass Elliptic Curve
Whilst many key exchange and digital signature methods use the NIST P-256 (secp256r1) and secp256k1 curves, there is often a demand for increased security. With these curves, we have 128-bit security. These security levels can be increased to 256-bit security with NIST P-521, Curve 448, and Brainpool-P512. This paper outlines a new curve, Eccfrog512ck2, which provides 256-bit security and enhanced performance over NIST P-521. Along with this, it has side-channel resistance and is designed to avoid weaknesses such as those related to the MOV attack. It shows that Eccfrog512ck2 can have a 61.5% speed-up on scalar multiplication and a 33.3% speed-up on point generation over the NIST P-521 curve.
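To make the setting concrete, the sketch below implements textbook affine point addition and double-and-add scalar multiplication on a short Weierstrass curve y^2 = x^3 + ax + b over a prime field. The tiny curve parameters are illustrative only and are not the actual Eccfrog512ck2 (or P-521) parameters, which the paper specifies.

```python
# Toy short Weierstrass curve arithmetic (parameters are illustrative, NOT Eccfrog512ck2).
p, a, b = 97, 2, 3                      # curve y^2 = x^3 + 2x + 3 over GF(97)
O = None                                # point at infinity

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mul(k, P):
    """Double-and-add (not constant-time; real curves need side-channel hardening)."""
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

G = (3, 6)                              # 6^2 = 36 and 3^3 + 2*3 + 3 = 36 mod 97, so G is on the curve
print(scalar_mul(5, G))
```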
Scalable and Fine-Tuned Privacy Pass from Group Verifiable Random Functions
Anonymous token schemes are cryptographic protocols for limiting access to online resources to credible users. The resource provider issues a set of access tokens to the credible user that they can later redeem anonymously, i.e., without the provider being able to link their redemptions. When combined with credibility tests such as CAPTCHAs, anonymous token schemes can significantly increase user experience and provider security, without exposing user access patterns to providers.
Current anonymous token schemes such as the Privacy Pass protocol by Davidson et al. rely on oblivious pseudorandom functions (OPRFs), which let server and user jointly compute random-looking access tokens. For those protocols, token issuing costs are linear in the number of requested tokens.
In this work, we propose a new approach for building anonymous token schemes. Instead of relying on two-party computation to realize a privacy-preserving pseudorandom function evaluation, we propose to offload token generation to the user by using group verifiable random functions (GVRFs). GVRFs are a new cryptographic primitive that allows users to produce verifiable pseudorandomness. As opposed to standard VRFs, verification is anonymous within the group of credible users. We give a construction of group VRFs from the Dodis-Yampolskiy VRF and Equivalence-Class Signatures, based on pairings and a new Diffie-Hellman inversion assumption that we analyze in the Generic Group Model. Our construction enjoys compact public keys and proofs, while evaluation and verification costs are only slightly increased compared to the Dodis-Yampolskiy VRF.
By deploying a group VRF instead of an OPRF, we obtain an anonymous token scheme where communication as well as server-side computation during the issuing phase is constant and independent of the number of tokens a user requests. Moreover, by means of our new concept of updatable token policies, the number of unspent tokens in circulation can retrospectively (i.e., even after the credibility check) be decreased or increased in order to react to the current or expected network situation. Our tokens are further countable and publicly verifiable. This comes at the cost of higher computational efforts for token redemption and verification as well as somewhat weaker unlinkability guarantees compared to Privacy Pass.
Efficient Verifiable Mixnets from Lattices, Revisited
Mixnets are powerful building blocks for providing anonymity in applications like electronic voting and anonymous messaging. The encryption schemes upon which traditional mixnets are built, as well as the zero-knowledge proofs used to provide verifiability, will, however, soon become insecure once a cryptographically relevant quantum computer is built. In this work, we construct the most compact verifiable mixnet that achieves privacy and verifiability through encryption and zero-knowledge proofs based on the hardness of lattice problems, which are believed to be quantum-safe.
A core component of verifiable mixnets is a proof of shuffle. The starting point for our construction is the proof of shuffle of Aranha et al. (CT-RSA 2021). We first identify an issue with the soundness proof in that work, which is also present in the adaptation of this proof in the mixnets of Aranha et al. (ACM CCS 2023) and Hough et al. (IACR CiC 2025). The issue is that one cannot directly adapt classical proofs of shuffle to the lattice setting due to the splitting structure of the rings used in lattice-based cryptography. This is not just an artifact of the proof, but a problem that manifests itself in practice, and we successfully mount an attack against the implementation of the first of the mixnets. We fix the problem and introduce a general approach for proving shuffles in splitting rings that can be of independent interest.
The efficiency improvement of our mixnet over prior work is achieved by switching from re-encryption mixnets (as in the works of Aranha et al. and Hough et al.) to decryption mixnets with very efficient layering based on the hardness of the LWE and LWR problems over polynomial rings. The ciphertexts in our scheme are smaller by approximately a factor of 10X and 2X over the aforementioned instantiations, while the linear-size zero-knowledge proofs are smaller by a factor of 4X and 2X.
Key Derivation Functions Without a Grain of Salt
Key derivation functions (KDFs) are integral to many cryptographic protocols. Their functionality is to turn raw key material, such as a Diffie-Hellman secret, into a strong cryptographic key that is indistinguishable from random. This guarantee was formalized by Krawczyk together with the seminal introduction of HKDF (CRYPTO 2010), in a model where the KDF only takes a single key material input. Modern protocol designs, however, regularly need to combine multiple secrets, possibly even from different sources, with the guarantee that the derived key is secure as long as at least one of the inputs is good. This is particularly relevant in settings like hybrid key exchange for quantum-safe migration. Krawczyk's KDF formalism does not capture this goal, and there has been surprisingly little work on the security considerations for KDFs since then.
In this work, we thus revisit the syntax and security model for KDFs to treat multiple, possibly correlated inputs. Our syntax is assertive: We do away with salts, which are needed in theory to extract from arbitrary sources in the standard model, but in practice, they are almost never used (or even available) and sometimes even misused, as we argue. We use our new model to analyze real-world multi-input KDFs—in Signal's X3DH protocol, ETSI's TS 103 744 standard, and MLS' combiner for pre-shared keys—as well as new constructions we introduce for specialized settings—e.g., a purely blockcipher-based one. We further discuss the importance of collision resistance for KDFs and finally apply our multi-input KDF model to show how hybrid KEM key exchange can be analyzed from a KDF perspective.
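As a hedged illustration of the multi-input goal described above (the derived key should be strong as long as at least one input secret is good), the following sketch chains HKDF-style extract-and-expand steps over several inputs using only Python's hmac/hashlib. It is a generic combiner pattern for intuition, not one of the analyzed constructions (X3DH, TS 103 744, MLS) or a construction from this paper.

```python
import hmac, hashlib

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def multi_input_kdf(secrets, context: bytes) -> bytes:
    """Chain extractions so a single high-entropy input already randomizes the chaining key."""
    chaining_key = b"\x00" * 32
    for s in secrets:
        chaining_key = hkdf_extract(chaining_key, s)
    return hkdf_expand(chaining_key, context)

# E.g., combining a classical ECDH secret with a post-quantum KEM secret (hybrid setting).
key = multi_input_kdf([b"ecdh-shared-secret", b"pq-kem-shared-secret"], b"hybrid handshake v1")
print(key.hex())
```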
Unbounded Multi-Hop Proxy Re-Encryption with HRA Security: An LWE-Based Optimization
Proxy re-encryption (PRE) schemes enable a semi-honest proxy to transform a ciphertext of one user to another user while preserving the privacy of the underlying message. Multi-hop PRE schemes allow a legal ciphertext to undergo multiple transformations, but for lattice-based multi-hop PREs, the number of transformations is typically bounded due to the increase of error terms. Recently, Zhao et al. (Esorics 2024) introduced a lattice-based unbounded multi-hop (homomorphic) PRE scheme that supports an unbounded number of hops. Nevertheless, their scheme only achieves the selective CPA security. In contrast, Fuchsbauer et al. (PKC 2019) proposed a generic framework for constructing HRA-secure unbounded multi-hop PRE schemes from FHE. Despite this, when instantiated with state-of-the-art FHEW-like schemes, the overall key size and efficiency remain unsatisfactory.
In this paper, we present a lattice-based unbounded multi-hop PRE scheme with the stronger adaptive HRA security (i.e., security against honest re-encryption attacks), which is more suitable for practical applications. Our scheme features an optimized re-encryption process based on the FHEW-like blind rotation, which resolves the incompatibility between the noise flooding technique and Fuchsbauer et al.'s framework when instantiated with FHEW-like schemes. This results in reduced storage requirements for public keys and offers higher efficiency. Moreover, our optimized unbounded multi-hop PRE scheme can be modified to an unbounded homomorphic PRE, a scheme allowing for arbitrary homomorphic computations over fresh, re-encrypted, and evaluated ciphertexts.
Taking AI-Based Side-Channel Attacks to a New Dimension
This paper revisits the Hamming Weight (HW) labelling function for machine-learning-assisted side-channel attacks. Contrary to what has been suggested by previous works, our investigation shows that, when paired with modern deep learning architectures and appropriate pre-processing and normalization techniques, it can perform as well as the popular identity labelling function and sometimes even beat it. In fact, we hereby introduce a new machine learning method, dubbed, that helps solve the class imbalance problem associated with HW, while significantly improving the performance of unprofiled attacks. We additionally release our new, easy-to-use Python package that we used in our experiments, implementing a broad variety of machine-learning-driven side-channel attacks, as open source, along with a new dataset, AES_nRF, acquired on the nRF52840 SoC.
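To see the class-imbalance issue mentioned above, the short numpy sketch below labels uniformly random 8-bit intermediate values (e.g., S-box outputs) with their Hamming weight and prints the resulting class frequencies, which follow a binomial distribution. This only illustrates the labelling function and its imbalance, not the paper's new method or dataset.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
# Hamming-weight labels of uniformly random bytes: 9 classes 0..8, heavily imbalanced.
values = rng.integers(0, 256, size=100_000)
hw_labels = np.array([bin(v).count("1") for v in values])
print(Counter(hw_labels.tolist()))
# Expected proportions are C(8, w) / 256: class 4 is ~70x more frequent than class 0 or 8.
```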
ECDSA Cracking Methods
The ECDSA (Elliptic Curve Digital Signature Algorithm) is used in many blockchain networks for digital signatures, including the Bitcoin and Ethereum blockchains. While it has good performance levels and strong current security, it should be handled with care. This care typically relates to the usage of the nonce value which is used to create the signature. This paper outlines the methods that can be used to break ECDSA signatures, including revealed nonces, weak nonce choice, nonce reuse, two keys and shared nonces, and fault attacks.
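As a concrete example of one of the listed methods, the sketch below shows the nonce-reuse attack algebra: given two ECDSA signatures (r, s1) and (r, s2) on message hashes z1 and z2 that share the same nonce k, one recovers k = (z1 - z2)/(s1 - s2) mod n and then the private key d = (s1*k - z1)/r mod n. Only the modular algebra is demonstrated: n is the secp256k1 group order, but r is chosen arbitrarily instead of being computed as the x-coordinate of k*G.

```python
# Nonce-reuse key recovery from two ECDSA signatures (modular algebra only).
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 order
d = 0x1234567890ABCDEF        # "private key" (demo value)
k = 0x42424242                # nonce reused for both signatures
r = 0x9999999999              # stand-in for the x-coordinate of k*G

def sign(z):
    # s = k^{-1} (z + r*d) mod n
    return (r, pow(k, -1, n) * (z + r * d) % n)

z1, z2 = 0x1111, 0x2222
(_, s1), (_, s2) = sign(z1), sign(z2)

k_rec = (z1 - z2) * pow(s1 - s2, -1, n) % n      # recover the shared nonce
d_rec = (s1 * k_rec - z1) * pow(r, -1, n) % n    # then the private key
assert (k_rec, d_rec) == (k, d)
print(hex(d_rec))
```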
Fission: Distributed Privacy-Preserving Large Language Model Inference
The increased popularity of large language models (LLMs) raises serious privacy concerns, where users' private queries are sent to untrusted servers. Many cryptographic techniques have been proposed to provide privacy, such as secure multiparty computation (MPC), which enables the evaluation of LLMs directly on private data. However, cryptographic techniques have been deemed impractical as they introduce large communication and computation. On the other hand, many obfuscation techniques have been proposed, such as split inference, where part of the model is evaluated on edge devices to hide the input data from untrusted servers, but these methods provide limited privacy guarantees.
We propose Fission, a privacy-preserving framework that improves latency while providing strong privacy guarantees. Fission utilizes an MPC network for linear computations, while nonlinearities are computed on a separate evaluator network that receives shuffled values in the clear and returns the nonlinear functions evaluated at these values back to the MPC network. As a result, each evaluator only gets access to parts of the shuffled data, while the model weights remain private. We evaluate Fission on a wide set of LLMs and compare it against prior works. Fission results in up to eight times faster inference and eight times reduced bandwidth compared to prior works while retaining high accuracy. Finally, we construct an attack on obfuscation techniques from related works that shows significant information leakage, and we demonstrate how Fission enhances privacy.
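A highly simplified numpy sketch of the division of labor described above follows: a linear layer is computed on additive secret shares (standing in for the MPC network), while the nonlinearity is evaluated in the clear on shuffled values by a separate party and the results are unshuffled afterwards. This is only meant to convey the data flow; the real protocol, its security guarantees, and its sharing/shuffling details are in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))          # model weights (held by the MPC servers)
x = rng.standard_normal(8)               # private client input

# --- "MPC network": linear layer on two additive shares of x ---
x0 = rng.standard_normal(8)
x1 = x - x0                              # x = x0 + x1
y_shares = (W @ x0, W @ x1)              # each server works on one share only

# --- "Evaluator network": nonlinearity on shuffled cleartext values ---
y = y_shares[0] + y_shares[1]            # reconstructed inside the MPC layer
perm = rng.permutation(len(y))
shuffled = y[perm]                       # evaluator sees values without their positions
activated = np.maximum(shuffled, 0.0)    # e.g., ReLU
out = np.empty_like(activated)
out[perm] = activated                    # MPC network undoes the shuffle

assert np.allclose(out, np.maximum(W @ x, 0.0))
print(out)
```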
MultiCent: Secure and Scalable Centrality Measures on Multilayer Graphs
As real-world networks such as social networks and computer networks are often complex and distributed, modeling them as multilayer graphs is gaining popularity. For instance, when studying social interactions across platforms like LinkedIn, Facebook, TikTok, and Bluesky, users may be connected on several of these platforms. To identify important nodes/users, the platforms might wish to analyze user interactions using, e.g., centrality measures while accounting for connections across all platforms. This raises the challenge for platforms to perform such computation while simultaneously protecting their user data, both to shelter their own business and to uphold data protection laws. This necessitates designing solutions that allow for performing secure computation on a multilayer graph which is distributed among mutually distrusting parties while keeping each party's data hidden.
The work of Asharov et al. (WWW'17) addresses this problem by designing secure solutions for centrality measures that involve computing the truncated Katz score and reach score on multilayer graphs. However, we identify several limitations in that work which render the solution inefficient or even unfeasible for realistic networks with significantly more than 10k nodes. We address these limitations by designing secure solutions that are significantly more efficient and scalable. In more detail, given that real-world graphs are known to be sparse, our solutions move away from an expensive matrix-based representation to a more efficient list-based representation. We design novel, secure, and efficient solutions for computing centrality measures and prove their correctness. Our solutions drastically reduce the asymptotic complexity from the prohibitive even for the fastest solution by Asharov et al. down to , for nodes. To design our solutions, we extend upon the secure graph computation framework of Koti et al. (CCS'24), providing a novel framework with improved capabilities in multiple directions. Finally, we provide an end-to-end implementation of our secure graph analysis framework and establish concrete efficiency improvements over prior work, observing several orders of magnitude improvement.
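For intuition about the measure involved, here is a plain-Python computation of a truncated Katz score on a sparse, list-based graph representation: node i receives the sum over walk lengths 1..T of alpha^length times the number of walks of that length ending at i. This is an insecure, single-party reference computation and does not reflect the secure protocols or the multilayer setting of the paper.

```python
# Truncated Katz score on an adjacency-list graph (insecure reference computation).
adj = {0: [1, 2], 1: [2], 2: [0], 3: [2]}    # directed edges u -> v
n, alpha, T = 4, 0.1, 4

score = [0.0] * n
walks_ending_at = [1.0] * n                  # length-0 walks: one per starting node
for length in range(1, T + 1):
    nxt = [0.0] * n
    for u, neighbours in adj.items():
        for v in neighbours:                 # extend every walk ending at u by the edge u -> v
            nxt[v] += walks_ending_at[u]
    walks_ending_at = nxt
    for i in range(n):
        score[i] += (alpha ** length) * walks_ending_at[i]

print([round(s, 5) for s in score])
```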
Low-Latency Bootstrapping for CKKS using Roots of Unity
We introduce a new bootstrapping equation for the CKKS homomorphic encryption scheme of approximate numbers. The original bootstrapping approach for CKKS consists in homomorphically evaluating a polynomial that approximates the modular reduction modulo q. In contrast, our new bootstrapping equation directly embeds the additive group modulo q into the complex roots of unity, which can be evaluated natively in the CKKS scheme. Due to its reduced multiplicative depth, our new bootstrapping equation achieves a 7x latency improvement for a single slot compared to the original CKKS bootstrapping, though it scales less efficiently when applied to a larger number of slots.
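The core idea, evaluating modular reduction through roots of unity rather than a polynomial approximation of x mod q, can be illustrated in plain (non-homomorphic) arithmetic: map an integer x to exp(2*pi*i*x/q), which depends only on x mod q, and read x mod q back from the angle. The sketch below assumes numpy and obviously omits everything that makes this work inside CKKS.

```python
import numpy as np

q = 17
x = np.array([3, 20, -14, 1000])          # values congruent to various residues mod q

# Embed the additive group Z_q into the complex unit circle: x -> exp(2*pi*i*x/q).
z = np.exp(2j * np.pi * x / q)            # depends only on x mod q

# Recover x mod q from the angle of the root of unity.
recovered = np.round(np.angle(z) / (2 * np.pi) * q).astype(int) % q
assert (recovered == x % q).all()
print(recovered)                          # [ 3  3  3 14]
```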
ADC-BE: Optimizing Worst-Case Bandwidth in Broadcast Encryption with Boolean Functions
Recently, Dupin and Abelard proposed a broadcast encryption scheme which outperforms the Complete Subtree-based and Subset Difference broadcast encryption in terms of encryption cost and bandwidth requirement. However, Dupin and Abelard acknowledge that the worst-case bound for the bandwidth requirement of the Complete Subtree approach can be reached in their scheme as well. In this paper, we answer the call to further reduce this bandwidth bottleneck. We first provide a concrete analysis to show how this worst-case upper bound is reached from concrete Boolean functions. Then we present two improved broadcast encryption schemes to significantly reduce this worst-case bandwidth consumption, further optimizing Dupin and Abelard's technique. Our proposed approach ADC-BE, composed of two algorithms, AD-BE and AC-BE, can significantly optimize this worst-case complexity from n/2 down to 1 for a system of n users. This is efficient especially for a large number of users in the system. Our proposed schemes combine the algebraic normal form, disjunctive normal form, and conjunctive normal form to optimize a Boolean function to its minimized representation. In addition, our approaches can be made secure against quantum adversaries and are therefore post-quantum, where both algorithms AD-BE and AC-BE require minimal assumptions based on the existence of one-way functions.
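As a toy illustration of the Boolean-function view used above, the sketch below encodes which of 8 users are authorized as minterms over 3 index bits and asks sympy for minimal DNF (sum-of-products) and CNF (product-of-sums) forms; smaller forms translate into fewer broadcast subsets. This merely demonstrates normal-form minimization on a tiny example, assuming sympy is available, and is not the AD-BE/AC-BE algorithms themselves.

```python
from sympy import symbols
from sympy.logic.boolalg import SOPform, POSform

# 3 index bits address 8 users; the authorized (non-revoked) users are the minterms.
b2, b1, b0 = symbols("b2 b1 b0")
authorized = [0, 1, 2, 3, 6, 7]                      # user IDs allowed to decrypt
minterms = [[(u >> 2) & 1, (u >> 1) & 1, u & 1] for u in authorized]

dnf = SOPform([b2, b1, b0], minterms)                # minimal sum-of-products
cnf = POSform([b2, b1, b0], minterms)                # minimal product-of-sums
print("DNF:", dnf)                                   # e.g. ~b2 | b1
print("CNF:", cnf)
```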
Guaranteed Termination Asynchronous Complete Secret Sharing with Lower Communication and Optimal Resilience
Asynchronous Complete Secret Sharing (ACSS) is a foundational module for asynchronous networks, playing a critical role in cryptography. It is essential for Asynchronous Secure Multi-Party Computation (AMPC) and, with termination, is widely applied in Validated Asynchronous Byzantine Agreement (VABA) and Asynchronous Distributed Key Generation (ADKG) to support secure distributed systems.
Currently, there are relatively few statistically secure ACSS protocols that can guarantee termination, and their communication complexity is relatively high. To reduce communication complexity, we propose a new multi-receiver signature scheme, ARICP, which supports linear operations on signatures. Leveraging the ARICP scheme and the properties of symmetric polynomials, we propose an ACSS protocol that ensures termination and optimal resilience ( ) with bits per sharing. Compared with the best-known result of ACSS protocols that guarantee termination [CP23], the amortized communication complexity of our protocol is reduced by a factor of .
HQC Beyond the BSC: Towards Error Structure-Aware Decoding
In Hamming Quasi-Cyclic (HQC), one of the finalists in the NIST competition for the standardization of post-quantum cryptography, decryption relies on decoding a noisy codeword through a public error-correcting code. The noise vector has a special form that depends on the secret key (a pair of sparse polynomials). However, the decoder, which is currently employed in HQC, is agnostic to the secret key, operating under the assumption that the error arises from a Binary Symmetric Channel (BSC). In this paper, we demonstrate that this special noise structure can instead be leveraged to develop more powerful decoding strategies.
We first study the problem from a coding-theoretic perspective. The current code design, which admits a non-zero decryption failure rate, is close to optimal in the setting of a decoder that is agnostic to the error structure. We show that there are code-decoder pairs with a considerably shorter code length that can guarantee unique decoding by taking the error structure into account. This result is non-constructive, i.e., we do not provide an explicit code construction and it remains open whether efficient decoding is possible. Nevertheless, it highlights the potential that can be tapped by taking the error structure into account.
We then argue that, in practice, the matter of decoding in HQC can be related to solving an instance of the noisy syndrome decoding problem, in which the parity-check matrix is constituted by the polynomials in the secret key. We show that, using decoders for Low-Density Parity-Check (LDPC) and Moderate-Density Parity-Check (MDPC) codes, one can significantly reduce the entity of the noise and, de facto, also the Decoding Failure Rate (DFR) of the HQC decoder.
This preliminary study leaves some open questions and problems. While it shows that decoding in HQC can be improved, the modeling of the DFR gets more complicated: even for the basic decoder we propose in this paper, we have not been able to devise a reliable DFR model. This is likely due to the fact that the decoder structure resembles the iterative nature of LDPC/MDPC decoders, for which devising a reliable DFR estimation is a well-known difficult problem.
Anamorphic Voting: Ballot Freedom Against Dishonest Authorities
Electronic voting schemes typically ensure ballot privacy by assuming that the decryption key is distributed among tallying authorities, preventing any single authority from decrypting a voter's ballot. However, this assumption may fail in a fully dishonest environment where all tallying authorities collude to break ballot privacy.
In this work, we introduce the notion of anamorphic voting, which enables voters to convey their true voting intention to an auditor while casting an (apparently) regular ballot. We present new cryptographic techniques demonstrating that several existing voting schemes can support anamorphic voting.
Secret-Key PIR from Random Linear Codes
Private information retrieval (PIR) allows a client to privately read a chosen bit from an -bit database with bits of communication. Lin, Mook, and Wichs (STOC 2023) showed that by preprocessing into an encoded database , it suffices to access only bits of per query. This requires , and a prohibitively large server circuit size.
We consider an alternative preprocessing model (Boyle et al. and Canetti et al., TCC 2017), where the encoding depends on a client's short secret key. In this secret-key PIR (sk-PIR) model we construct a protocol with communication, for any constant , from the Learning Parity with Noise assumption in a parameter regime not known to imply public-key encryption. This is evidence against public-key encryption being necessary for sk-PIR.
Under a new conjecture related to the hardness of learning a hidden linear subspace of with noise, we construct sk-PIR with similar communication and encoding size in which the server is implemented by a Boolean circuit of size . This is the first candidate PIR scheme with such a circuit complexity.
GIGA Protocol: Unlocking Trustless Parallel Computation in Blockchains
The scalability of modern decentralized blockchain systems is constrained by the requirement that the participating nodes execute the entire chain's transactions without the ability to delegate the verification workload across multiple actors trustlessly. This is further limited by the need for sequential transaction execution and repeated block validation, where each node must re-execute all transactions before accepting blocks, also leading to delayed broadcasting in many architectures.
Consequently, throughput is limited by the capacity of individual nodes, significantly preventing scalability.
In this paper, we introduce GIGA, a SNARK-based protocol that enables trustless parallel execution of transactions, processing non-conflicting operations concurrently, while preserving security guarantees and state consistency. The protocol organizes transactions into non-conflicting batches which are executed and proven in parallel, distributing execution across multiple decentralized entities. These batch proofs are recursively aggregated into a single succinct proof that validates the entire block.
As a result, the protocol both distributes the execution workload and removes redundant re-execution from the network, significantly improving blockchain throughput while not affecting decentralization.
Performance estimates demonstrate that, under the same system assumptions (e.g., consensus, networking, and virtual machine architecture) and under high degrees of transaction parallelism (i.e., when most transactions operate on disjoint parts of the state), our protocol may achieve over a 10000x throughput improvement compared to popular blockchain architectures that use sequential execution models, and over a 500x improvement compared to blockchain architectures employing intra-node parallelization schemes.
Furthermore, our protocol enables a significant increase in transaction computational complexity, unlocking a wide range of use cases that were previously unfeasible on traditional blockchain architectures due to the limited on-chain computational capacity.
Additionally, we propose a reward mechanism that ensures the economic sustainability of the proving network, dynamically adjusting to computational demand while fostering competition among provers based on cost-efficiency and reliability.
Attacking at non-harmonic frequencies in screaming-channel attacks
Screaming-channel attacks enable Electromagnetic (EM) Side-Channel Attacks (SCAs) at larger distances due to higher EM leakage energies than traditional SCAs, relaxing the requirement of close access to the victim. This attack can be mounted on devices integrating Radio Frequency (RF) modules on the same die as digital circuits, where the RF can unintentionally capture, modulate, amplify, and transmit the leakage along with legitimate signals. Leakage results from digital switching activity, so the hypothesis of previous works was that this leakage would appear at multiples of the digital clock frequency, i.e., harmonics. This work demonstrates that compromising signals appear not only at the harmonics and that leakage at non-harmonics can be exploited for successful attacks. Indeed, the transformations undergone by the leaked signal are complex due to propagation effects through the substrate and power and ground planes, so the leakage also appears at other frequencies. We first propose two methodologies to locate frequencies that contain leakage and demonstrate that it appears at non-harmonic frequencies. Then, our experimental results show that screaming-channel attacks at non-harmonic frequencies can be as successful as at harmonics when retrieving a 16-byte AES key. As the RF spectrum is polluted by interfering signals, we run experiments and show successful attacks in a more realistic, noisy environment where harmonic frequencies are contaminated by multi-path fading and interference. These attacks at non-harmonic frequencies increase the attack surface by providing attackers with an increased number of potential frequencies where attacks can succeed.
Obfuscation for Deep Neural Networks against Model Extraction: Attack Taxonomy and Defense Optimization
Well-trained deep neural networks (DNN), including large language models (LLM), are valuable intellectual property assets. To defend against model extraction attacks, one of the major ideas proposed in a large body of previous research is obfuscation: splitting the original DNN and storing the components separately. However, systematically analyzing these methods' security against various attacks and optimizing the efficiency of defenses remain challenging. In this paper, we propose a taxonomy of model-based extraction attacks, which enables us to identify vulnerabilities of several existing obfuscation methods. We also propose an extremely efficient model obfuscation method called O2Splitter using a trusted execution environment (TEE). The secrets we store in the TEE have O(1) size, i.e., independent of the model size. Although O2Splitter relies on a pseudo-random function to provide a quantifiable guarantee for protection and noise compression, it does not need any complicated training or filtering of the weights. Our comprehensive experiments show that O2Splitter can mitigate norm-clipping and fine-tuning attacks. Even for small noise (ϵ = 50), the accuracy of the obfuscated model is close to random guessing, and the tested attacks cannot extract a model with comparable accuracy. In addition, the empirical results also shed light on the relation between DP parameters in obfuscation and the risks of concrete extraction attacks.
A Meta-Complexity Characterization of Quantum Cryptography
We prove the first meta-complexity characterization of a quantum cryptographic primitive. We show that one-way puzzles exist if and only if there is some quantum samplable distribution of binary strings over which it is hard to approximate Kolmogorov complexity. Therefore, we characterize one-way puzzles by the average-case hardness of an uncomputable problem. This brings to the quantum setting a recent line of work that characterizes classical cryptography with the average-case hardness of a meta-complexity problem, initiated by Liu and Pass. Moreover, since the average-case hardness of Kolmogorov complexity over classically polynomial-time samplable distributions characterizes one-way functions, this result poses one-way puzzles as a natural generalization of one-way functions to the quantum setting. Furthermore, our equivalence goes through probability estimation, giving us the additional equivalence that one-way puzzles exist if and only if there is a quantum samplable distribution over which probability estimation is hard. We also observe that the oracle worlds of defined by Kretschmer et al. rule out any relativizing characterization of one-way puzzles by the hardness of a problem in or , which means that it may not be possible with current techniques to characterize one-way puzzles with another meta-complexity problem.
Scalable Non-Fungible Tokens on Bitcoin
This paper presents a protocol for scaling the creation, management, and trading of non-fungible tokens (NFTs) on Bitcoin by extending bridgeless minting patterns previously used on other blockchains. The protocol leverages on-chain Bitcoin data to handle all aspects of token ownership, including trading, while integrating a secondary consensus system for minting and optionally modifying token metadata. To minimize its on-chain footprint, the protocol utilizes the OP_RETURN mechanism for ownership records, while complementary NFT-related actions are stored on the LAOS blockchain. All data remains permanently on-chain, with no reliance on bridges or third-party operators.
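To make the on-chain footprint concrete, the snippet below builds the raw script of an OP_RETURN output that embeds a short protocol payload (OP_RETURN is opcode 0x6a, followed by a direct push of the data; standard relay policy limits the payload to 80 bytes). The payload format shown is made up for illustration; the protocol's actual record layout and its interaction with the LAOS blockchain are defined in the paper.

```python
# Build a raw Bitcoin OP_RETURN output script carrying a small ownership record.
OP_RETURN = 0x6a

def op_return_script(payload: bytes) -> bytes:
    assert len(payload) <= 75, "single-byte direct push only (and <= 80 bytes for standardness)"
    return bytes([OP_RETURN, len(payload)]) + payload

payload = b"NFT:transfer:token=42:to=bc1q..."   # hypothetical record format
script = op_return_script(payload)
print(script.hex())
```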
Multi-Party Private Set Operations from Predicative Zero-Sharing
Typical protocols in the multi-party private set operations (MPSO) setting enable parties to perform certain secure computation on the intersection or union of their private sets, realizing a very limited range of MPSO functionalities. Most works in this field focus on just one or two specific functionalities, resulting in a large variety of isolated schemes and a lack of a unified framework in MPSO research. In this work, we present an MPSO framework, which allows parties, each holding a set, to securely compute any set formulas (arbitrary compositions of a finite number of binary set operations, including intersection, union and difference) on their private sets. Our framework is highly versatile and can be instantiated to accommodate a broad spectrum of MPSO functionalities. To the best of our knowledge, this is the first framework to achieve such a level of flexibility and generality in MPSO, without relying on generic secure multi-party computation (MPC) techniques.
Our framework exhibits favorable theoretical and practical performance. The computation and communication complexity scale linearly with the set size , and it achieves optimal complexity that is on par with the naive solution for widely used functionalities, such as multi-party private set intersection (MPSI), MPSI with cardinality output (MPSI-card), and MPSI with cardinality and sum (MPSI-card-sum), in the standard semi-honest model. Furthermore, the instantiations of our framework mainly from symmetric-key techniques yield efficient protocols for MPSI, MPSI-card, MPSI-card-sum, and multi-party private set union (MPSU), with online performance surpassing or matching the state of the art.
At the technical core of our framework is a newly introduced primitive called predicative zero-sharing. This primitive captures the universality of a number of MPC protocols and is composable. We believe it may be of independent interest.
Cryptomania v.s. Minicrypt in a Quantum World
We prove that it is impossible to construct perfect-complete quantum public-key encryption (QPKE) with classical keys from quantumly secure one-way functions (OWFs) in a black-box manner, resolving a long-standing open question in quantum cryptography.
Specifically, in the quantum random oracle model (QROM), no perfect-complete QPKE scheme with classical keys and classical/quantum ciphertexts can be secure. This improves on previous works, which require either unproven conjectures or impose restrictions on key generation algorithms. This impossibility even extends to QPKE with quantum public keys if the public key can be uniquely determined by the secret key, and is thus tight with respect to all existing QPKE constructions.
Round-Efficient Adaptively Secure Threshold Signatures with Rewinding
A threshold signature scheme allows distributing a signing key to users, such that any of them can jointly sign, but any cannot. It is desirable to prove \emph{adaptive security} of threshold signature schemes, which considers adversaries that can adaptively corrupt honest users even after interacting with them. For a class of signatures that relies on security proofs with rewinding, such as Schnorr signatures, proving adaptive security entails significant challenges.
This work proposes two threshold signature schemes that are provably adaptively secure with rewinding proofs. Our proofs are solely in the random oracle model (ROM), without relying on the algebraic group model (AGM).
- We give a 3-round scheme based on the algebraic one-more discrete logarithm (AOMDL) assumption. The scheme outputs a standard Schnorr signature.
- We give a 2-round scheme based on the DL assumption. Signatures output by the scheme contain one more scalar than a Schnorr signature.
We follow the recent work by Katsumata, Reichle, and Takemure (Crypto 2024) that proposed the first threshold signature scheme with a rewinding proof of full adaptive security. Their scheme is a 5-round threshold Schnorr scheme based on the DL assumption. Our results significantly improve the round complexity.
Katsumata et al.'s protocol can be viewed as applying a masking technique to Sparkle, a threshold Schnorr signature scheme by Crites, Komlo, and Maller (Crypto 2023). This work shows wider applications of the masking technique. Our first scheme is obtained by masking FROST, a threshold Schnorr protocol by Komlo and Goldberg (SAC 2020). The second scheme is obtained by masking a threshold version of HBMS, a multi-signature scheme by Bellare and Dai (Asiacrypt 2021).
Katsumata et al. masked Sparkle at the cost of 2 additional rounds. Our main insight is that this cost varies across schemes, especially depending on how to simulate signing in the security proofs. The cost is 1 extra round for our first scheme, and is 0 for our second scheme.
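For background on the threshold structure referenced throughout, here is a minimal Shamir sharing of a signing key with Lagrange reconstruction at zero; the listed schemes (FROST, HBMS, Sparkle, and the new masked variants) build interactive signing on top of such shares, which this sketch does not attempt to reproduce.

```python
import random

p = 2**127 - 1                    # a prime field large enough for the demo
t, n = 3, 5                       # any t of n shares reconstruct; fewer reveal nothing

def share(secret):
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

sk = random.randrange(p)
shares = share(sk)
assert reconstruct(shares[:t]) == sk        # any t shares suffice
assert reconstruct(shares[2:]) == sk
```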
A Study of Blockchain Consensus Protocols
When Nakamoto invented Bitcoin, the first generation of cryptocurrencies followed it in applying POW (Proof of Work) consensus mechanism; due to its excessive energy consumption and heavy carbon footprints, new innovations evolved like Proof of Space, POS (Proof of Stake), and a lot more with many variants for each. Furthermore, the emergence of more blockchain applications and kinds beyond just cryptocurrencies needed more consensus mechanisms that is optimized to fit requirements of each application or blockchain kind; examples range from IoT (Internet of Things) blockchains for sustainability applications that often use variants of BFT (Byzantine Fault Tolerance) algorithm, and consensus needed to relay transactions and/or assets between different blockchains in interoperability solutions. Previous studies concentrated on surveying and/or proposing different blockchain consensus rules, on a specific consensus issue like attacks, randomization, or on deriving theoretical results. Starting from discussing most important theoretical results, this paper tries to gather and organize all significant existing material about consensus in the blockchain world explaining design challenges, tradeoffs and research areas. We realize that the topic could fit for a complete textbook, so we summarize the basic concepts and support with tables and appendices. Then we highlight some case examples from interoperability solutions to show how flexible and wide the design space is to fit both general and special purpose systems. The aim is to provide researchers with a comprehensive overview of the topic, along with the links to go deeper into every detail.
Impossible Differential Attack on SAND-64
SAND is an AND-RX-based lightweight block cipher proposed by Chen et al. There are two variants of SAND, namely SAND-64 and SAND-128, due to structural differences. In this paper, we search for impossible differential distinguishers of SAND-64 using Constraint Programming (CP) and reveal 56 types of impossible differential distinguishers up to 11 rounds. Furthermore, we demonstrate a key recovery attack on 17-round SAND-64. The complexities of the attack are data, encryptions, and bytes of memory, respectively. Although this result currently achieves the best attack on round-reduced SAND-64, this attack does not threaten the security of SAND-64 against impossible differential attacks.
Towards Scalable YOSO MPC via Packed Secret-Sharing
The YOSO (You Only Speak Once) model, introduced by Gentry et al. (CRYPTO 2021), helps to achieve strong security guarantees in cryptographic protocols for distributed settings, like blockchains, with a large number of parties. YOSO protocols typically employ smaller anonymous committees to execute individual rounds of the protocol instead of having all parties execute the entire protocol. After completing their tasks, parties encrypt protocol messages for the next anonymous committee and erase their internal state before publishing ciphertexts, thereby enhancing security in dynamically changing environments.
In this work, we consider the problem of secure multi-party computation (MPC), a fundamental problem in cryptography and distributed computing. We assume honest majority among the committee members, and work in the online-offline, i.e., preprocessing, setting.
In this context, we present the first YOSO MPC protocol where efficiency---measured as communication complexity---improves as the number of parties increases. Specifically, for and an adversary corrupting out of parties, our MPC protocol exhibits enhanced scalability as increases, where the online phase communication becomes independent of .
Prior YOSO MPC protocols considered as large as , but a significant hurdle persisted in obtaining YOSO MPC with communication that does not scale linearly with the number of committee members, a challenge that is exacerbated when the committee size is large per YOSO's requirements.
We show that, by considering a small ``gap'' of , the sizes of the committees are only marginally increased, while online communication is significantly reduced.
Furthermore, we explicitly consider fail-stop adversaries, i.e., honest participants who may inadvertently fail due to reasons such as denial of service or software/hardware errors. In prior YOSO work, these adversaries were grouped with fully malicious parties. Adding explicit support for them allows us to achieve even better scalability.
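The packing idea underlying this work (encoding several secrets in a single Shamir-style polynomial so that communication is amortized) can be sketched in a few lines of field arithmetic. The parameters, the "gap", and the committee mechanics of the actual protocol are more involved, so treat this purely as an illustration of packed secret sharing.

```python
import random

# Packed Shamir sharing: k secrets hidden in one degree-(t+k-1) polynomial over GF(p).
p = 2**61 - 1                                    # prime field modulus
t, k, n = 2, 3, 8                                # privacy threshold t, k packed secrets, n parties

def interpolate_at(points, x):
    """Evaluate, at x, the unique polynomial through the given (xi, yi) points."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

secrets = [11, 22, 33]                           # the k secrets sit at evaluation points -1, -2, -3
anchors = [(-m - 1, s) for m, s in enumerate(secrets)]
anchors += [(n + 1 + r, random.randrange(p)) for r in range(t)]   # t random anchors for privacy
shares = [(i, interpolate_at(anchors, i)) for i in range(1, n + 1)]

# Any t + k = 5 shares determine the degree-(t+k-1) polynomial, hence all k secrets at once.
subset = shares[:t + k]
recovered = [interpolate_at(subset, -m - 1) for m in range(k)]
assert recovered == secrets
```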
Cryptography based on 2D Ray Tracing
We introduce a novel symmetric-key cryptographic scheme involving a light ray's interaction with a 2D Cartesian coordinate setup: several smaller boxes within this setup, each of either reflection or refraction type, and , or degree polynomial curves inside each of these smaller boxes. We also incorporate Boolean logic gates of types XOR, NOT-Shift, and Permutation, which get applied to the light ray after each interaction with a reflecting or refracting polynomial curve. This alternating interaction between optical gates (polynomial curves) and non-optical gates creates a complex and secure cryptographic system. Furthermore, we design and launch customized attacks on our cryptographic system and discuss its robustness against them.
Hybrid-query bounds with partial input control - framework and application to tight M-eTCR
In this paper, we present an improved framework for proving query bounds in the Quantum Random Oracle Model (QROM) for algorithms with both quantum and classical query interfaces, where the classical input is partially controlled by the adversary. By extending existing techniques, we develop a method to bound the progress an adversary can make with such partial-control classical queries. While this framework is applicable to different hash function properties, we decided to demonstrate the impact of the new techniques by giving an analysis of the multi-target extended target collision resistance property (m-eTCR). This new approach allows us to achieve an improved bound that significantly reduces the required function key size. Our proof is tight in terms of query complexity and has significant implications for cryptographic applications, especially for signature schemes in the hash-and-sign paradigm, enabling more efficient instantiations with reduced salt sizes and smaller signature lengths. For an example of multi-signature aggregation, we achieve a signature size that is 30 kB smaller.
On breaking McEliece keys using brute force
In the McEliece public-key encryption scheme, a private key is almost always not determined uniquely by its associated public key. This paper gives a structural characterization of equivalent private keys, generalizing a result known for the more approachable special case .
These equivalences reduce the cost estimate for a simple private-key search using the support-splitting algorithm (SSA) by a polynomial but practically very substantial factor.
We provide an optimized software implementation of the SSA for this kind of key search and demonstrate its capabilities in practice by solving a key-recovery challenge with a naïve a-priori cost estimate of bit operations in just core days, testing private-key candidates per core per second in the process.
We stress that the speedup from those equivalences is merely polynomial and does not indicate any weakness in realistic instantiations of the McEliece cryptosystem, whose parameter choices are primarily constrained by decoding attacks rather than ludicrously more expensive key-recovery attacks.
Dyna-hinTS: Silent Threshold Signatures for Dynamic Committees
The works of Garg et al. [S&P'24] (aka hinTS) and Das et al. [CCS'23] introduced the notion of silent threshold signatures (STS) - where a set of signers silently perform local computation to generate a public verification key. To sign a message, any set of signers sign the message non-interactively and these are aggregated into a constant-sized signature. This paradigm avoids performing expensive Distributed Key Generation procedure for each set of signers while keeping the public verification key constant-sized.
In this work, we propose the notion of committee-based silent threshold signature (c-STS) scheme. In a c-STS scheme, a set of signers initially perform a one-time setup to generate the verification key, and then a subset of signers are randomly chosen for an epoch to perform the threshold signing while the other signers are not authorized to sign during that epoch. This captures existing systems like Ethereum Altair and Dfinity where only a specific committee is authorized to sign in a designated epoch. The existing STS schemes cannot be extended to the committee setting because the signature verification only attests to the number of signing parties, not which committee they belong to.
So, we upgrade hinTS to the committee setting by proposing Dyna-hinTS. It is a c-STS scheme that requires a one-time silent setup and generates a one-time public verification key that does not vary with the committee. Assuming a set of 1024 signers (with 682 corrupt signers), hinTS generates an aggregated signature in 1.7s whereas Dyna-hinTS generates it in s within a committee of signers. This yields a improvement over hinTS for signature generation at the cost of increasing signature verification time by over hinTS. Dyna-hinTS supports general access structures, weighted signatures, and improves existing multiverse threshold signatures.
Charge Your Clients: Payable Secure Computation and Its Applications
The online realm has witnessed a surge in the buying and selling of data, prompting the emergence of dedicated data marketplaces. These platforms cater to servers (sellers), enabling them to set prices for access to their data, and clients (buyers), who can subsequently purchase these data, thereby streamlining and facilitating such transactions. However, current data markets are primarily confronted with the following issues. First, they fail to protect client privacy, presupposing that clients submit their queries in plaintext. Second, these models are susceptible to being impacted by malicious client behavior, for example, enabling clients to potentially engage in arbitrage activities.
To address the aforementioned issues, we propose payable secure computation, a novel secure computation paradigm specifically designed for data pricing scenarios. It grants the server the ability to securely procure essential pricing information while protecting the privacy of client queries. Additionally, it fortifies the server's privacy against potential malicious client activities. As specific applications, we have devised customized payable protocols for two distinct secure computation scenarios: Keyword Private Information Retrieval (KPIR) and Private Set Intersection (PSI).
We implement our two payable protocols and compare them with the state-of-the-art related protocols that do not support pricing as a baseline. Since our payable protocols are more powerful in the data pricing setting, the experiment results show that they do not introduce much overhead over the baseline protocols.
Our payable KPIR achieves the same online cost as the baseline, while its setup is about slower. Our payable PSI needs about more communication than the baseline protocol, while its runtime is slower, depending on the network setting.
Audience Injection Attacks: A New Class of Attacks on Web-Based Authorization and Authentication Standards
We introduce audience injection attacks, a novel class of vulnerabilities that impact widely used Web-based authentication and authorization protocols, including OAuth 2.0, OpenID Connect, FAPI, CIBA, the Device Authorization Grant, and various well-established extensions, such as Pushed Authorization Requests, Token Revocation, Token Introspection, and their numerous combinations.
These protocols underpin services for billions of users across diverse ecosystems worldwide, spanning low-risk applications like social logins to high-risk domains such as open banking, insurance, and healthcare.
Audience injection attacks exploit a critical weakness in a core security mechanism of these protocols - the handling of so-called audiences in signature-based client authentication mechanisms. This vulnerability allows attackers to compromise fundamental security objectives whenever these mechanisms are utilized across two or more server endpoints. They enable the attacker to impersonate users and gain unauthorized access to their resources, even in high-security protocol families specifically designed for sensitive applications.
We responsibly disclosed these vulnerabilities to the relevant standardization bodies, which recognized their severity.
In collaboration with these organizations, we developed fixes and supported a coordinated response, leading to an ongoing effort to update a dozen standards, numerous major implementations, and far-reaching ecosystems.
Improving the Masked Division for the FALCON Signature
FALCON is a post-quantum signature scheme selected by the National Institute of Standards and Technology (NIST). Although its side-channel resilience has been studied and a masking countermeasure proposed, the division is a major performance bottleneck. This work proposes a different approach to the masked FALCON division: we use Newton's method and a convergent sequence to approximate this operation. The performance of the masked division is improved by a factor of 6.7 for two shares and 6.98 for three shares. For the Gaussian sampler, the improvements are by factors of 1.45 for two shares and 1.43 for three shares. Formal security proofs using the MIMO-SNI criteria are also provided.
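To make the division approach concrete, here is a minimal unmasked Python sketch of the Newton recurrence x_{k+1} = x_k(2 - d*x_k), which converges quadratically to 1/d and lets a division be computed from multiplications and additions only; the initial guess, the iteration count, and the use of floating point are illustrative assumptions, and this is not the paper's masked gadget.

```python
# Illustrative, unmasked sketch (not the paper's masked gadget): the Newton
# recurrence x_{k+1} = x_k * (2 - d * x_k) converges quadratically to 1/d,
# so a division a/d can be computed from multiplications and additions only,
# operations that are comparatively cheap to mask.
def newton_reciprocal(d: float, iterations: int = 5) -> float:
    """Approximate 1/d for d in [0.5, 1) using Newton's method."""
    x = 48.0 / 17.0 - (32.0 / 17.0) * d   # classical linear initial guess on [0.5, 1)
    for _ in range(iterations):
        x = x * (2.0 - d * x)             # each step roughly doubles the correct bits
    return x

def approx_div(a: float, d: float) -> float:
    return a * newton_reciprocal(d)

if __name__ == "__main__":
    print(approx_div(1.0, 0.7), 1.0 / 0.7)   # the two printed values agree closely
```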
Everlasting Fully Dynamic Group Signatures
Group signatures allow a user to sign anonymously on behalf of a group of users while allowing a tracing authority to trace the signer's identity in case of misuse. In Chaum and van Heyst's original model (EUROCRYPT'91), the group needs to stay fixed. Through various attempts, including partially dynamic group signatures and revocation mechanisms, Bootle et al. (ACNS'16, J. Cryptol.) formalized the notion of fully dynamic group signatures (FDGS), enabling both enrolling and revoking users of the group. However, in their scheme, the verification process needs to take into account the latest system information, and a previously generated signature is invalidated as soon as, for example, there is a change in the group. We therefore raise a research question: Is it possible to construct an FDGS under which the validity of a signature can survive future changes in the system information?
In this paper, we propose Everlasting Fully Dynamic Group Signatures (EFDGS) that allow signers to generate signatures that do not require verification with any specific epoch. Specifically, once the signatures are created, they are valid forever. It also guarantees that the signer can only output such a signature when she is a valid user of the system. We realize the above new model by constructing a plausibly post-quantum standard-lattice-based EFDGS.
Tree-based Quantum Carry-Save Adder
Quantum computing is regarded as one of the most significant upcoming advancements in computer science.
Although fully operational quantum computers have yet to be realized, they are expected to solve specific problems that are difficult to solve using classical computers.
Given the limitations of quantum computing resources, it is crucial to design compact quantum circuits for core operations, such as quantum arithmetic.
In this paper, we focus on optimizing the circuit depth of quantum multi-operand addition, which is a fundamental component in quantum implementations (for example, SHA-2).
Building on the foundational quantum carry-save approach by Phil Gossett, we introduce a tree-based quantum carry-save adder.
Our design integrates the Wallace and Dadda trees to optimize carry handling during multi-operand additions.
To further reduce circuit depth, we utilize additional ancilla qubits for parallel operations and introduce an efficient technique for reusing these ancilla qubits.
Our tree-based carry-save adder achieves the lowest circuit depth ( -depth) and provides an improvement of over 82% (up to 99%) in the qubit count–circuit depth product for multi-operand addition.
Furthermore, we apply our method to multiplication, achieving the lowest circuit depth and an improvement of up to 87% in the qubit count–circuit depth product.
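For readers unfamiliar with carry-save addition, the following classical (non-quantum) Python sketch shows the 3:2 compressor that Wallace and Dadda trees are built from; it only illustrates the underlying arithmetic idea, not the quantum circuit proposed in the paper.

```python
# Classical analogue only (not a quantum circuit): a 3:2 carry-save compressor
# reduces three addends to two (a "sum" word and a "carry" word) using bitwise
# operations, so many operands can be reduced before a single final
# carry-propagating addition.
def compress_3_to_2(a: int, b: int, c: int):
    """Full adder applied bitwise: returns (sum_word, carry_word)."""
    sum_word = a ^ b ^ c                                  # XOR of the three bits per column
    carry_word = ((a & b) | (a & c) | (b & c)) << 1       # majority, shifted to the next column
    return sum_word, carry_word

def carry_save_sum(operands):
    """Repeatedly apply the 3:2 compressor until two addends remain, then add once.
    Wallace/Dadda trees perform the same reduction, arranged in parallel layers
    to minimize depth."""
    ops = list(operands)
    while len(ops) > 2:
        a, b, c = ops.pop(), ops.pop(), ops.pop()
        s, cy = compress_3_to_2(a, b, c)
        ops.extend([s, cy])
    return sum(ops)  # single carry-propagating addition at the end

if __name__ == "__main__":
    xs = [13, 7, 22, 5, 9, 31]
    assert carry_save_sum(xs) == sum(xs)
    print(carry_save_sum(xs))
```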
FHECAP: An Encrypted Control System with Piecewise Continuous Actuation
We propose an encrypted controller framework for linear time-invariant systems with actuator non-linearity based on fully homomorphic encryption (FHE). While some existing works explore the use of partially homomorphic encryption (PHE) in implementing linear control systems, the impacts of the non-linear behaviors of the actuators on the systems are often left unaddressed. In particular, when the inputs to the controller become too small or too large, actuators may burn out due to unstable system state oscillations. To solve this dilemma, we design and implement FHECAP, an FHE-based controller framework that can homomorphically apply non-linear functions to the actuators to rectify the system inputs. In FHECAP, we first design a novel data encoding scheme tailored for efficient gain matrix evaluation. Then, we propose a high-precision homomorphic algorithm to apply non-arithmetic piecewise functions to realize the actuator normalization. In the experiments, compared with the existing state-of-the-art encrypted controllers, FHECAP achieves -- reduction in computational latency. We evaluate the effectiveness of FHECAP in the real-world application of encrypted control for spacecraft rendezvous. The simulation results show that FHECAP achieves real-time spacecraft rendezvous with negligible accuracy loss.
Trapdoor one-way functions from tensors
Weyman and Zelevinsky generalised Vandermonde matrices to higher dimensions; we call the resulting objects Vandermonde-Weyman-Zelevinsky tensors.
We generalise Lagrange interpolation to higher dimensions by devising a nearly linear time algorithm that given a Vandermonde-Weyman-Zelevinsky tensor and a sparse target vector, finds a tuple of vectors that hit the target under tensor evaluation. Tensor evaluation to us means evaluating the usual multilinear form associated with the tensor in all but one chosen dimension. Yet, this interpolation problem phrased with respect to a random tensor appears to be a hard multilinear system. Leveraging this dichotomy, we propose preimage sampleable trapdoor one-way functions in the spirit of Gentry-Peikert-Vaikuntanathan (GPV) lattice trapdoors. We design and analyse ``Hash-and-Sign'' digital signatures from such trapdoor one-way functions, yielding short signatures whose lengths scale nearly linearly in the security parameter. We also describe an encryption scheme.
Our trapdoor is a random Vandermonde-Weyman-Zelevinsky tensor over a finite field and a random basis change. We hide the Vandermonde-Weyman-Zelevinsky tensor under the basis change and publish the resulting pseudorandom tensor. The one-way function is the tensor evaluation derived from the public tensor, restricted so as to only map to sparse vectors. We then design the domain sampler and preimage sampler demanded by the GPV framework. The former samples inputs that map to uniform images under the one-way function. The latter samples preimages given supplementary knowledge of the trapdoor. Preimage sampling is a randomised version of interpolation, and knowing the basis change allows efficient translation between interpolation corresponding to the public and trapdoor tensors. An adversary seeking a preimage must solve a pseudorandom multilinear system, which seems cryptographically hard.
CertainSync: Rateless Set Reconciliation with Certainty
Set reconciliation is a fundamental task in distributed systems, particularly in blockchain networks, where it enables the synchronization of transaction pools among peers and facilitates block dissemination. Existing traditional set reconciliation schemes are either statistical, providing success probability as a function of the communication overhead and the size of the symmetric difference, or require parametrization and estimation of the size of the symmetric difference, which can be prone to error. In this paper, we present CertainSync, a novel reconciliation framework that, to the best of our knowledge, is the first to guarantee successful set reconciliation without any parametrization or estimators in use. The framework is rateless and adapts to the unknown symmetric difference size. The set reconciliation is guaranteed to be completed successfully whenever the communication overhead reaches a lower bound derived from the symmetric difference size and the universe size. Our framework is based on recent constructions of Invertible Bloom Lookup Tables (IBLTs) ensuring successful element listing as long as the number of elements is bounded. We provide a theoretical analysis to prove the certainty in the set reconciliation for multiple constructions. The approach is also validated by simulations, showing the ability to synchronize sets with efficient communication costs while maintaining reconciliation guarantees compared to other baseline schemes for set reconciliation. To further improve communication overhead for large universes such as blockchain networks, CertainSync is extended with a universe reduction technique to minimize communication overhead. We compare and validate the extended framework UniverseReduceSync against the basic CertainSync framework through simulations using real blockchain transaction hash data from the Ethereum blockchain network. The results illustrate a trade-off between improved communication costs and maintaining reconciliation guarantees without relying on parametrization or estimators, offering a comprehensive solution for set reconciliation in diverse scenarios.
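As background on the building block, here is a minimal Python sketch of an Invertible Bloom Lookup Table with insertion and peeling-based listing; the cell count, the three SHA-256-derived index functions, and the omission of a checksum field are simplifying assumptions, and this is not the paper's construction.

```python
# Minimal Invertible Bloom Lookup Table sketch (illustration only): each
# element is XORed into k cells; if the number of stored elements is small
# enough, cells with count 1 can be "peeled" to list all elements, which is
# the property set-reconciliation protocols rely on.
import hashlib

def _positions(x: int, m: int, k: int = 3):
    # k toy index functions derived from SHA-256 (an assumption for illustration)
    return [int(hashlib.sha256(f"{i}:{x}".encode()).hexdigest(), 16) % m for i in range(k)]

class IBLT:
    def __init__(self, m: int = 64):
        self.m = m
        self.count = [0] * m
        self.key_sum = [0] * m

    def insert(self, x: int):
        for p in _positions(x, self.m):
            self.count[p] += 1
            self.key_sum[p] ^= x

    def delete(self, x: int):
        for p in _positions(x, self.m):
            self.count[p] -= 1
            self.key_sum[p] ^= x

    def list_entries(self):
        """Peel pure cells (count == 1) until nothing changes; may fail if overloaded."""
        out = []
        progress = True
        while progress:
            progress = False
            for i in range(self.m):
                if self.count[i] == 1:
                    x = self.key_sum[i]
                    out.append(x)
                    self.delete(x)
                    progress = True
        return out

if __name__ == "__main__":
    t = IBLT()
    for v in [101, 202, 303, 404]:
        t.insert(v)
    print(sorted(t.list_entries()))  # expected: [101, 202, 303, 404]
```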
Byzantine Reliable Broadcast and Tendermint Consensus with trusted components
Byzantine Reliable Broadcast is one of the most popular communication primitives in distributed systems. Byzantine reliable broadcast ensures that processes agree to deliver a message from an initiator, even if some processes (possibly including the initiator) are Byzantine. In asynchronous settings, it has been known since the prominent work of Bracha \cite{Bracha87} that Byzantine reliable broadcast can be implemented deterministically if the total number of processes, denoted by n, satisfies n ≥ 3t + 1, where t is an upper bound on the number of Byzantine processes. Here, we study Byzantine Reliable Broadcast when processes are equipped with \emph{trusted components}, special software or hardware designed to prevent equivocation. Our contribution is threefold. First, we show that, despite common belief, when each process is equipped with a trusted component, Bracha's algorithm still needs . Second, we present a novel algorithm that uses a single trusted component (at the initiator) that implements Byzantine Reliable Asynchronous Broadcast with .
Lastly, building on our broadcast algorithm, we present TenderTee, a transformation of the Tendermint consensus algorithm that uses a trusted component to obtain better Byzantine resilience. TenderTee works with , where Tendermint needed .
SPHINCSLET: An Area-Efficient Accelerator for the Full SPHINCS+ Digital Signature Algorithm
This work presents SPHINCSLET, the first fully standard-compliant and area-efficient hardware implementation of the SLH-DSA algorithm, formerly known as SPHINCS+, a post-quantum digital signature scheme. SPHINCSLET is designed to be parameterizable across different security levels and hash functions, offering a balanced trade-off between area efficiency and performance. Existing hardware implementations either feature a large area footprint to achieve fast signing and verification or adopt a coprocessor-based approach that significantly slows down these operations. SPHINCSLET addresses this gap by delivering a 4.7× reduction in area compared to high-speed designs while achieving a 2.5× to 5× improvement in signing time over the most efficient coprocessor-based designs for a SHAKE256-based SPHINCS+ implementation. The SHAKE256-based SPHINCS+ FPGA implementation targeting the AMD Artix-7 requires fewer than 10.8K LUTs for any security level of SLH-DSA. Furthermore, the SHA-2-based SPHINCS+ implementation achieves a 2× to 4× speedup in signature generation across various security levels compared to existing SLH-DSA hardware, all while maintaining a compact area footprint of 6K to 15K LUTs. This makes it the fastest SHA-2-based SLH-DSA implementation to date. With an optimized balance of area and performance, SPHINCSLET can assist resource-constrained devices in transitioning to post-quantum cryptography.
Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs
(Preprint) Zero-Knowledge Proofs (ZKPs) are rapidly gaining importance in privacy-preserving and verifiable computing. ZKPs enable a proving party to prove the truth of a statement to a verifying party without revealing anything else. ZKPs have applications in blockchain technologies, verifiable machine learning, and electronic voting, but have yet to see widespread adoption due to the computational complexity of the proving process. Recent works have accelerated the key primitives of state-of-the-art ZKP protocols on GPU and ASIC. However, the protocols accelerated thus far face one of two challenges: they either require a trusted setup for each application, or they generate larger proof sizes with higher verification costs, limiting their applicability in scenarios with numerous verifiers or strict verification time constraints. This work presents an accelerator, zkSpeed, for HyperPlonk, a state-of-the-art ZKP protocol that supports both one-time, universal setup and small proof sizes for typical ZKP applications in publicly verifiable, consensus-based systems. We accelerate the entire protocol, including two major primitives: SumCheck and Multi-scalar Multiplications (MSMs). We develop a full-chip architecture using 366.46 mm² of chip area and 2 TB/s of bandwidth to accelerate the entire proof generation process, achieving geometric mean speedups of 801× over CPU baselines.
Making BBS Anonymous Credentials eIDAS 2.0 Compliant
eIDAS 2.0 (electronic IDentification, Authentication and trust Services) is a very ambitious regulation aimed at equipping European citizens with a personal digital identity wallet (EU Digital Identity Wallet) on a mobile phone that not only needs to achieve a high level of security, but also needs to be available as soon as possible for a large number of citizens and respect their privacy (as per GDPR - General Data Protection Regulation).
In this paper, we introduce the foundations of a digital identity wallet solution that could help move closer to this objective by leveraging the proven anonymous credentials system BBS (Eurocrypt 2023), also known as BBS+, but modifying it to avoid the limitations that have hindered its widespread adoption, especially in certified infrastructures requiring trusted hardware implementation.
In particular, the solution we propose, which we call BBS#, does not rely, contrary to BBS/BBS+, on bilinear maps and pairing-friendly curves (which are not supported by existing hardware) and only depends on the hardware implementation of well-known digital signature schemes such as ECDSA (ISO/IEC 14888-3) or ECSDSA (also known as ECSchnorr, ISO/IEC 14888-3) using classical elliptic curves. More precisely, BBS# can be rolled out without requiring any change in existing hardware or the algorithms that hardware supports.
BBS#, which is proven secure in the random oracle model, retains the well-known security property (unforgeability of the credentials under the (gap) q-SDH assumption) and anonymity properties (multi-show full unlinkability and statistical anonymity of presentation proofs) of BBS/BBS+.
By implementing BBS# on several smartphones using different secure execution environments, we show that it is possible to achieve eIDAS 2.0 transactions which are not only efficient (around 70 ms on Android StrongBox), secure and certifiable at the highest level but also provide strong (optimal) privacy protection for all European ID Wallet users.
Anonymous Self-Credentials and their Application to Single-Sign-On
Modern life makes having a digital identity no longer optional, whether one needs to manage a bank account or subscribe to a newspaper. As the number of online services increases, it is fundamental to safeguard user privacy and equip service providers (SPs) with mechanisms enforcing Sybil resistance, i.e., preventing a single entity from posing as many.
Current approaches, such as anonymous credentials and self-sovereign identities, typically rely on identity providers or identity registries trusted not to track users' activities. However, this assumption of trust is no longer appropriate in a world where user data is considered a valuable asset.
To address this challenge, we introduce a new cryptographic notion, Anonymous Self-Credentials (ASC) along with two implementations. This approach enables users to maintain their privacy within an anonymity set while allowing SPs to obtain Sybil resistance. Then, we present a User-issued Unlinkable Single Sign-On (U2SSO) implemented from ASC that solely relies on an identity registry to immutably store identities. A U2SSO solution allows users to generate unlinkable child credentials for each SP using only one set of master credentials.
We demonstrate the practicality and efficiency of our U2SSO solution by providing a complete proof-of-concept.
Multi-Screaming-Channel Attacks: Frequency Diversity for Enhanced Attacks
Side-channel attacks consist of retrieving internal data from a victim system by analyzing its leakage, which usually requires proximity to the victim in the range of a few millimetres.
Screaming channels are EM side channels transmitted at a distance of a few meters.
They appear on mixed-signal devices integrating an RF module on the same silicon die as the digital part.
Consequently, the side channels are modulated by legitimate RF signal carriers and appear at the harmonics of the digital clock frequency.
While initial works have only considered collecting leakage at these harmonics, later work has demonstrated that the leakage is also present at frequencies other than these harmonics.
This result significantly increases the number of available frequencies to perform a screaming-channel attack, which can be convenient in an environment where multiple harmonics are polluted.
This work studies how this diversity of frequencies carrying leakage can be used to improve attack performance.
We first study how to combine multiple frequencies.
Second, we demonstrate that frequency combination can improve attack performance and evaluate this improvement according to the performance of the combined frequencies.
Finally, we demonstrate the interest of frequency combination in attacks at and, for the first time to the best of our knowledge, at meters.
One last important observation is that this frequency combination divides by the number of traces needed to reach a given attack performance.
State Machine Replication Without Borders
A set of unacquainted parties, some of which may misbehave, communicate with each other over an unauthenticated and unreliable gossip network. They wish to jointly replicate a state machine so that each one of them has fair access to its operation. Specifically, assuming parties' computational power is measured as queries to an oracle machine , parties can issue symbols to the state machine in proportion to their queries to at a given fixed rate. Moreover, if such access to the state machine is provided continuously in expected constant time installments we qualify it as fast fairness.
A state machine replication (SMR) protocol in this permissionless setting is expected to offer consistency across parties and reliably process all symbols that honest parties wish to add to it in a timely manner despite continuously fluctuating participation and in the presence of an adversary who commands less than half of the total queries to per unit of time.
A number of protocols strive to offer the above guarantee together with fast settlement — notably, the Bitcoin blockchain offers a protocol that settles against Byzantine adversaries in polylogarithmic rounds, while fairness only holds in a fail-stop adversarial model (due to the fact that Byzantine behavior can bias access to the state machine in the adversary's favor). In this work, we put forth the first Byzantine-resilient protocol solving SMR in this setting with both expected-constant-time settlement and fast fairness. In addition, our protocol is self-sufficient in the sense of performing its own time keeping while tolerating an adaptively fluctuating set of parties.
From at Least to at Most : Correcting the Algebraic Immunity of the Hidden Weight Bit Function
Weightwise degree- functions are Boolean functions that, on each set of fixed Hamming weight, coincide with a function of degree at most . They generalize both symmetric functions and the Hidden Weight Bit Function (HWBF), which has been studied in cryptography for its favorable properties.
In this work, we establish a general upper bound on the algebraic immunity of such functions, a key security parameter against algebraic attacks on stream ciphers like filtered Linear Feedback Shift Registers (LFSRs). We construct explicit low-degree annihilators for WWdd functions with small , and show how to generalize these constructions. As an application, we prove that the algebraic immunity of the HWBF is upper bounded by , disproving a result from 2011 that claimed a lower bound of . We then apply our technique to several generalizations of the HWBF proposed since 2021 for homomorphically friendly constructions and LFSR-based ciphers, refining or refuting results from six prior works.
Highly Efficient Actively Secure Two-Party Computation with One-Bit Advantage Bound
Secure two-party computation (2PC) enables two parties to jointly evaluate a function while maintaining input privacy. Despite recent significant progress, a notable efficiency gap remains between actively secure and passively secure protocols. In S\&P'12, Huang, Katz, and Evans formalized the notion of \emph{active security with one-bit leakage}, providing a promising approach to bridging this gap. Protocols derived from this notion have become foundational in designing highly efficient actively secure 2PC protocols. However, a critical challenge identified by Huang, Katz, and Evans remains unexplored: these protocols face significant weaknesses in ensuring fairness for honest parties when employed in standalone settings rather than as components within larger protocols. While the authors proposed two potential solutions to mitigate this issue, both approaches are prohibitively expensive and lack formalization of security guarantees.
In this paper, we first formally define an enhanced notion called \emph{active security with one-bit-advantage bound}, in which the adversaries' advantages are strictly bounded to at most one bit beyond what honest parties obtain. This bound is enforced through a \emph{progressive revelation} mechanism, where the evaluation result is disclosed incrementally bit by bit. In addition, we propose a novel approach leveraging label structures within garbled circuits to design a highly efficient constant-round 2PC protocol that achieves active security with one-bit advantage bound. Our protocol demonstrates \emph{runtime performance nearly identical to that of passively secure garbled-circuit counterparts} in duplex networks (\eg for the {\tt SHA256} circuit in LAN), with \emph{low overhead} for output progressive revelation (only communicated bytes per bit release).
With its strengthened security guarantees and minimal overhead, our protocol is highly suitable for practical 2PC applications.
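To illustrate the progressive-revelation idea in isolation, the toy Python sketch below assumes the output bits end up XOR-shared between the two parties and are opened one at a time, so an aborting party is ahead by at most one bit; the sharing format and message order are assumptions, not the paper's protocol.

```python
# Toy sketch of bit-by-bit output release (an assumption-laden illustration,
# not the paper's protocol): suppose the 2PC ends with each party holding XOR
# shares of every output bit; the bits are then opened one at a time.
import secrets

def share_bits(output_bits):
    """Split each output bit into two XOR shares (stand-in for the 2PC output)."""
    shares_a, shares_b = [], []
    for b in output_bits:
        r = secrets.randbits(1)
        shares_a.append(r)
        shares_b.append(b ^ r)
    return shares_a, shares_b

def progressive_reveal(shares_a, shares_b, abort_after=None):
    """Open bits one by one; optionally simulate an abort after some bit index."""
    learned_a, learned_b = [], []
    for i, (sa, sb) in enumerate(zip(shares_a, shares_b)):
        learned_a.append(sa ^ sb)       # A reconstructs bit i first
        if abort_after is not None and i == abort_after:
            break                        # abort: A has learned at most one bit more than B
        learned_b.append(sa ^ sb)       # then B reconstructs bit i
    return learned_a, learned_b

if __name__ == "__main__":
    out = [1, 0, 1, 1, 0]
    a, b = share_bits(out)
    print(progressive_reveal(a, b, abort_after=2))  # A knows 3 bits, B knows 2
```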
Low-Latency Rate-Distortion-Perception Trade-off: A Randomized Distributed Function Computation Application
Semantic communication systems, which focus on transmitting the semantics of data rather than its exact reconstruction, redefine the design of communication networks for transformative efficiency in bandwidth-limited and latency-critical applications. Addressing these goals, we tackle the rate-distortion-perception (RDP) problem for image compression, a critical challenge in achieving perceptually realistic reconstructions under rate constraints. Formulated within the randomized distributed function computation (RDFC) framework, we establish an achievable non-asymptotic RDP region, providing finite blocklength trade-offs between rate, distortion, and perceptual quality, aligning with semantic communication objectives. We extend this region to also include a secrecy constraint, providing strong secrecy guarantees against eavesdroppers via physical-layer security methods, ensuring resilience against quantum attacks. Our contributions include (i) establishing achievable bounds for non-asymptotic RDP regions under realism and distortion constraints; (ii) extending these bounds to provide strong secrecy guarantees; (iii) characterizing the asymptotic secure RDP region under a perfect realism constraint; and (iv) illustrating significant reductions in rates and the effects of secrecy constraints and finite blocklengths. Our results provide actionable insights for designing low-latency, high-fidelity, and secure image compression systems with realistic outputs, advancing applications, e.g., in privacy-critical domains.
More NTRU+Sign Signatures from Cyclotomic Trinomials
Recently, NTRU+Sign was proposed as a new compact signature scheme, following the `Fiat-Shamir with Aborts' (FSwA) framework. Its compactness is mainly based on a novel NTRU-based key structure that fits well with bimodal distributions in the FSwA framework. However, despite its compactness, NTRU+Sign fails to provide a diverse set of parameters that can meet some desired security levels. This limitation stems from its reliance on a ring , where is restricted to powers of two, limiting the flexibility in selecting appropriate security levels. To overcome this limitation, we propose a revised version of NTRU+Sign by adopting a ring from cyclotomic trinomials, where for some positive integers and . Our parameterization offers three distinct security levels: approximately , , and bits, while preserving the compactness in . We implement these re-parameterized NTRU+Sign schemes, showing that the performance of NTRU+Sign from cyclotomic trinomials remains comparable to that of previous lattice-based signature schemes such as and .
Proving CPU Executions in Small Space
zkVMs are SNARKs for verifying CPU execution. They allow an untrusted prover to show that it correctly ran a specified program on a witness, where the program is given as bytecode conforming to an instruction set architecture like RISC-V. Existing zkVMs still struggle with high prover resource costs, notably large runtime and memory usage. We show how to implement Jolt—an advanced, sum-check-based zkVM—with a significantly reduced memory footprint, without relying on SNARK recursion, and with only modest runtime overhead (potentially well below a factor of two). We discuss benefits of this approach compared to prevailing recursive techniques.
Clubcards for the WebPKI: smaller certificate revocation tests in theory and practice
CRLite is a low-bandwidth, low-latency, privacy-preserving mechanism for distributing certificate revocation data. A CRLite aggregator periodically encodes revocation data into a compact static hash set, or membership test, which can be downloaded by clients and queried privately. We present a novel data structure for membership tests, which we call a clubcard, and we evaluate the encoding efficiency of clubcards using data from Mozilla's CRLite infrastructure.
As of November 2024, the WebPKI contains over 900 million valid certificates and over 8 million revoked certificates. We describe an instantiation of CRLite that encodes the revocation status of these certificates in a 6.7 MB package. This is smaller than the original instantiation of CRLite presented at the 2017 IEEE Symposium on Security and Privacy, and it is smaller than the lower bound claimed in that work.
A sequence of clubcards can encode a dynamic dataset like the WebPKI revocation set. Using data from late 2024 again, we find that clubcards encoding 6 hour delta updates to the WebPKI can be compressed to 26.8 kB on average---a size that makes CRLite truly practical.
We have extended Mozilla's CRLite infrastructure so that it can generate clubcards, and we have added client-side support for this system to Firefox. We report on some performance aspects of our implementation, which is currently the default revocation checking mechanism in Firefox Nightly, and we propose strategies for further reducing the bandwidth requirements of CRLite.
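For context, the sketch below illustrates the older filter-cascade idea behind CRLite-style revocation membership tests, in which layered approximate filters give exact answers over a known universe; the filter sizes and hash choices are arbitrary, and this is not the clubcard data structure introduced in the paper.

```python
# Illustration of the filter-cascade idea behind CRLite-style revocation tests
# (not the clubcard structure from the paper): because the full sets of valid
# and revoked certificates are known at build time, layered approximate
# filters can answer membership exactly for every certificate in that universe.
import hashlib

class Bloom:
    def __init__(self, items, m, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m)
        for x in items:
            for p in self._pos(x):
                self.bits[p] = 1

    def _pos(self, x):
        return [int(hashlib.sha256(f"{i}|{x}".encode()).hexdigest(), 16) % self.m
                for i in range(self.k)]

    def __contains__(self, x):
        return all(self.bits[p] for p in self._pos(x))

def build_cascade(revoked, valid):
    """Alternate filters of 'included' and 'falsely included' sets until no
    false positives remain among the known universe."""
    levels, include, exclude = [], set(revoked), set(valid)
    while include:
        f = Bloom(include, m=max(8, 8 * len(include)))
        levels.append(f)
        include, exclude = {x for x in exclude if x in f}, include
    return levels

def is_revoked(levels, cert):
    for i, f in enumerate(levels):
        if cert not in f:
            return i % 2 == 1   # absent at an even level (0-based): not revoked
    return False                # only reachable for certs outside the build universe

if __name__ == "__main__":
    revoked = [f"rev{i}" for i in range(50)]
    valid = [f"ok{i}" for i in range(500)]
    cascade = build_cascade(revoked, valid)
    assert all(is_revoked(cascade, c) for c in revoked)
    assert not any(is_revoked(cascade, c) for c in valid)
    print(f"{len(cascade)} levels")
```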
Random Oracle Combiners: Merkle-Damgård Style
A Random Oracle Combiner (ROC), introduced by Dodis et al. (CRYPTO ’22), takes two hash functions h1 and h2 from m bits to n bits and outputs a new hash function C from m' to n' bits. This function C is guaranteed to be indifferentiable from a fresh random oracle as long as one of h1 and h2 (say, h1) is a random oracle, while the other h2 can “arbitrarily depend” on h1.
The work of Dodis et al. also built the first length-preserving ROC, where n′ = n. Unfortunately, despite this feasibility result, this construction has several deficiencies. From the practical perspective, it could not be directly applied to existing Merkle-Damgård-based hash functions, such as SHA2 or SHA3. From the theoretical perspective, it required h1 and h2 to have input length m > 3λ, where λ is the security parameter.
To overcome these limitations, Dodis et al. conjectured — and left as the main open question — that the following (salted) construction is a length-preserving ROC:
where the two salts are random strings of appropriate length, and MD denotes the Merkle-Damgård extension of a given compression function. As our main result, we resolve this conjecture in the affirmative. For practical use, this makes the resulting combiner applicable to existing, Merkle-Damgård-based hash functions. On the theory side, it shows the existence of ROCs only requiring optimal input length λ + O(1).
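As a rough illustration of a salted, Merkle-Damgård-style XOR combiner (the displayed construction itself is not reproduced in this listing), the Python sketch below iterates two compression-function chains over salted input and XORs the results; the salt names Z1 and Z2, the padding, and the use of SHA-256/SHA3-256 as stand-in compression functions are all assumptions made for illustration.

```python
# Hedged sketch of a salted XOR combiner over two Merkle-Damgard style chains
# (Z1, Z2, the padding, and the SHA-256/SHA3-256 stand-ins are assumptions;
# this is not necessarily the construction conjectured by Dodis et al.).
import hashlib, os

Z1 = os.urandom(32)   # public random salt for the first chain
Z2 = os.urandom(32)   # public random salt for the second chain

def md_chain(hash_name: str, message: bytes, salt: bytes) -> bytes:
    """Stand-in for MD^h(M || Z): iterate a compression function over blocks."""
    state = b"\x00" * 32
    data = message + salt
    for i in range(0, len(data), 32):
        block = data[i:i + 32].ljust(32, b"\x00")
        state = hashlib.new(hash_name, state + block).digest()
    return state

def combined(message: bytes) -> bytes:
    a = md_chain("sha256", message, Z1)
    b = md_chain("sha3_256", message, Z2)
    return bytes(x ^ y for x, y in zip(a, b))  # XOR of the two salted chains

if __name__ == "__main__":
    print(combined(b"hello world").hex())
```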
On some non-linear recurrences over finite fields linked to isogeny graphs
This paper presents new results that establish connections between isogeny graphs and nonlinear recurrences over finite fields. Specifically, we prove several theorems that link these two areas, offering deeper insights into the structure of isogeny graphs and their relationship with nonlinear recurrence sequences. We further provide two related conjectures which may be worthy of further research. These findings contribute to a better understanding of the endomorphism ring of a curve, advancing progress toward the resolution of the Endomorphism Ring Problem, which aims to provide a computational characterization of the endomorphism ring of a supersingular elliptic curve.
Analytic and Simulation Results of a Gaussian Physically Unclonable Constant Based on Resistance Dispersion
Physically Unclonable Constants (PUCs) are a special type of Physically Unclonable Function that can be used to embed secret bit-strings in chips. Most PUCs are an array of cells, where each cell is a digital circuit that evolves spontaneously toward one of two states, the chosen state being a function of random manufacturing process variations. In this paper we propose an Analog Physically Unclonable Constant (APUC) whose output is an analog value to be transformed into digital form by a digitizer circuit. The rationale behind this proposal is that an APUC cell has the potential of providing more than one bit, reducing the required footprint. Preliminary theoretical analysis and simulation results are presented. The proposed APUC has interesting performance (e.g., it can provide up to 5 bits per cell) that warrants further investigation.
An attack on ML-DSA using an implicit hint
The security of ML-DSA, like most signature schemes, is partially based on the fact that the nonce used to generate the signature is unknown to any attacker. In this work, we exhibit a lattice-based attack that is possible if the nonces share implicit or explicit information. From a collection of signatures whose nonces share certain coefficients, it is indeed possible to build a collection of non full-rank lattices. Intersecting them, we show how to create a low-rank lattice that contains one of the polynomials of the secret key, which in turn can be recovered using lattice reduction techniques.
There are several interpretations of this result: firstly, it can be seen as a generalization of a fault-based attack on BLISS presented at SAC'16 by Thomas Espitau et al. Alternatively, it can be understood as a side-channel attack on ML-DSA, in the case where an attacker is able to recover only one of the coefficients of the nonce used during the generation of the signature. For ML-DSA-II, we show that signatures and a few hours of computation are sufficient to recover the secret key on a desktop computer. Lastly, our result shows that simple countermeasures, such as permuting the generation of the nonce coefficients, are not sufficient.
Laconic Cryptography with Preprocessing
Laconic cryptography focuses on designing two-message protocols that allow secure computation on large datasets while minimizing communication costs. While laconic cryptography protocols achieve asymptotically optimal communication complexity for many tasks, their concrete costs are prohibitively high due to the heavy use of public-key techniques or the non-black-box use of cryptographic primitives.
In this work, we initiate the study of "laconic cryptography with preprocessing", introducing a model that includes an offline phase to generate database-dependent correlations, which are then used in a lightweight online phase. These correlations are conceptually simple, relying on linear-algebraic techniques. This enables us to develop a protocol for private laconic vector oblivious linear evaluation (plvOLE). In such a protocol, the receiver holds a large database , and the sender has two messages and , along with an index . The receiver learns the value without revealing other information.
Our protocol, which draws from ideas developed in the context of private information retrieval with preprocessing, serves as the backbone for two applications of interest: laconic private set intersection (lPSI) for large universes and laconic function evaluation for RAM-programs (RAM-LFE). Based on our plvOLE protocol, we provide efficient instantiations of these two primitives in the preprocessing model.
On the success rate of simple side-channel attacks against masking with unlimited attack traces
Side-channel attacks following a classical differential power analysis (DPA) style are well understood, along with the effect the masking countermeasure has on them. However, simple attacks (SPA), where the target variable does not vary thanks to a known value, such as the plaintext, are less studied. In this paper, we investigate how the masking countermeasure affects the success rate of simple attacks. To this end, we provide theoretical, simulated, and practical experiments. Interestingly, we will see that masking can allow us to asymptotically recover more information on the secret than in the case of an unprotected implementation, depending on the masking type. We will see that this is true for masking encodings that add non-linearity with respect to the leakages, such as arithmetic masking, while it is not for Boolean masking. We believe this context provides interesting results, as the average information of arithmetic encoding is proven less informative than the Boolean one.
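The following Monte Carlo sketch, under a noiseless Hamming-weight leakage model (an assumption made for illustration, not the paper's model), shows the qualitative effect: for two secrets of equal Hamming weight, the leakage distribution of Boolean shares is identical, whereas that of arithmetic shares typically differs, so unlimited traces can reveal strictly more than the Hamming weight.

```python
# Monte Carlo illustration under a noiseless Hamming-weight leakage model.
# For secrets of equal Hamming weight, the leakage distribution of Boolean
# shares (x ^ r, r) is provably identical, while the distribution of
# arithmetic shares ((x - r) mod 256, r) typically differs between such
# secrets, which is the non-linearity the abstract refers to.
import random
from collections import Counter

def hw(v: int) -> int:
    return bin(v).count("1")

def leakage_histogram(x: int, arithmetic: bool, trials: int = 100_000) -> Counter:
    hist = Counter()
    for _ in range(trials):
        r = random.randrange(256)
        s = (x - r) % 256 if arithmetic else x ^ r
        hist[hw(s) + hw(r)] += 1          # observed leakage: sum of share weights
    return hist

if __name__ == "__main__":
    # 0x03 and 0x81 have the same Hamming weight (2)
    for label, arith in (("boolean", False), ("arithmetic", True)):
        h1 = leakage_histogram(0x03, arith)
        h2 = leakage_histogram(0x81, arith)
        print(label, dict(sorted(h1.items())))
        print(label, dict(sorted(h2.items())))
```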
Mobile Byzantine Agreement in a Trusted World
In this paper, we address the Byzantine Agreement problem in synchronous systems where Byzantine agents can move from process to process, corrupting their host.
We focus on three representative models: \emph{Garay's}, \emph{Bonnet's} and \emph{Buhrman's} models.
In \emph{Garay's model}, when a process has been left by the Byzantine agent, it is in the \emph{cured} state: it is aware of its condition and can thus remain silent for a round to prevent the dissemination of wrong information.
In \emph{Bonnet's model} a cured process may send messages (based on a state corrupted by the malicious agent), however it will behave correctly in the way it sends those messages: i.e., send messages according to the algorithm.
In \emph{Buhrman's model} Byzantine agents move together with the message.
It has been shown that in order to solve Byzantine Agreement in the \emph{Garay's model} at least processors are needed, for \emph{Bonnet's model} at least processors are needed, while for \emph{Buhrman's model} at least processors are needed.
In this paper we aim to increase the tolerance to mobile Byzantine agents by integrating a trusted counter abstraction into the above models. This abstraction prevents nodes from equivocating. In the new models we prove that at least , respectively , and processors are needed to tolerate mobile Byzantine agents. Furthermore, we propose novel Mobile Byzantine Agreement algorithms that match these new lower bounds for \emph{Garay's}, \emph{Bonnet's} and \emph{Buhrman's} models.
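As a toy illustration of the trusted counter abstraction, the Python sketch below binds each attested message to a strictly increasing counter value, so a Byzantine host cannot produce two different attested messages for the same sequence number; the use of HMAC as a stand-in for the component's attestation mechanism is an assumption, and this is not the paper's algorithm.

```python
# Toy sketch of a trusted monotonic counter (HMAC stands in for the trusted
# component's attestation; real deployments would use a signature verifiable
# by other processes).  The counter only moves forward, so a Byzantine host
# cannot obtain two attested messages with the same sequence number.
import hmac, hashlib, os

class TrustedCounter:
    def __init__(self):
        self._key = os.urandom(32)   # held inside the trusted component
        self._value = 0

    def attest(self, message: bytes):
        """Bind the next counter value to `message` and return (value, tag)."""
        self._value += 1
        tag = hmac.new(self._key, self._value.to_bytes(8, "big") + message,
                       hashlib.sha256).digest()
        return self._value, tag

    def verify(self, value: int, message: bytes, tag: bytes) -> bool:
        expected = hmac.new(self._key, value.to_bytes(8, "big") + message,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    tc = TrustedCounter()
    v1, t1 = tc.attest(b"propose A")
    v2, t2 = tc.attest(b"propose B")
    assert v1 != v2                              # distinct messages get distinct counters
    assert tc.verify(v1, b"propose A", t1)
    assert not tc.verify(v1, b"propose B", t1)   # counter v1 cannot be reused for B
```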
Lattice-Based Sanitizable Signature Schemes: Chameleon Hash Functions and More
Sanitizable Signature Schemes (SSS) enable a designated party, the sanitizer, to modify predefined parts of a signed message without invalidating the signature, making them useful for applications like pseudonymization and redaction. Since their introduction by Ateniese et al. (ESORICS'05), several classical SSS constructions have been proposed, but none have been instantiated from quantum-resistant assumptions. In this work, we develop the first quantum-secure sanitizable signature schemes based on lattice assumptions. Our primary focus is on SSS constructions that rely on chameleon hash functions (CHFs), a key component for enabling the controlled modification of messages. While lattice-based CHFs exist, they do not meet the required security guarantees for SSS, becoming insecure under adversarial access to an adapt oracle. To address this, we construct a novel lattice-based CHF that achieves collision resistance even in such settings, called full collision resistance. However, our CHF lacks the uniqueness property, a limitation we show to be inherent in lattice-based CHFs. As a result, our SSS constructions initially fall short of achieving the critical security property of accountability. To overcome this, we apply a transformation based on verifiable ring signatures (VRS), for which we present the first lattice-based instantiation. Additionally, we provide a comprehensive analysis of existing classical SSS constructions, explore their potential for post-quantum instantiations, and present new attacks on previously assumed secure SSS schemes. Our work closes the gap in constructing quantum-secure SSS and lays the groundwork for further research into advanced cryptographic primitives based on lattice assumptions.
PHOENIX: Crypto-Agile Hardware Sharing for ML-KEM and HQC
The transition to quantum-safe public-key cryptography has begun: for key agreement, NIST has standardized ML-KEM and selected HQC for future standardization. The relative immaturity of these schemes encourages crypto-agile implementations, to facilitate easy transitions between them. Intelligent crypto-agility requires efficient sharing strategies to compute operations from different cryptosystems using the same resources. This is particularly challenging for cryptosystems with distinct mathematical foundations, like lattice-based ML-KEM and code-based HQC.
We introduce PHOENIX, the first crypto-agile hardware coprocessor for lattice- and code-based cryptosystems--specifically, ML-KEM and HQC, at all three NIST security levels--with an effective agile sharing strategy.
PHOENIX accelerates polynomial multiplication, which is the main operation in both cryptosystems, and the current bottleneck of HQC. To maximise sharing, we replace HQC's Karatsuba-based polynomial multiplication with the Frobenius Additive FFT (FAFFT), which is similar on an abstract level to ML-KEM's Number Theoretic Transform (NTT).
We show that the FAFFT already brings substantial performance improvements in software. In hardware, our sharing strategy for the FAFFT and NTT is based on a new SuperButterfly unit that seamlessly switches between these two FFT variants over completely different rings. This is, to our knowledge, the first FAFFT hardware accelerator of any kind. We have integrated PHOENIX in a real System-on-Chip FPGA scenario, where our performance measurements show that efficient crypto-agility for lattice- and code-based KEMs can be achieved with low overhead.
Improved Round-by-round Soundness IOPs via Reed-Muller Codes
We give an IOPP (interactive oracle proof of proximity) for trivariate Reed-Muller codes that achieves the best known query complexity in some range of security parameters. Specifically, for degree and security parameter , our IOPP has round-by-round soundness, queries, rounds and length. This improves upon the FRI [Ben-Sasson, Bentov, Horesh, Riabzev, ICALP 2018] and the STIR [Arnon, Chiesa, Fenzi, Yogev, Crypto 2024] IOPPs for Reed-Solomon codes, that have larger query and round complexity standing at and respectively. We use our IOPP to give an IOP for the NP-complete language Rank-1-Constraint-Satisfaction with the same parameters.
Our construction is based on the line versus point test in the low-soundness regime. Compared to the axis parallel test (which is used in all prior works), the general affine lines test has improved soundness, which is the main source of our improved soundness.
Using this test involves several complications, most significantly that projection to affine lines does not preserve individual degrees, and we show how to overcome these difficulties. En route, we extend some existing machinery to more general settings. Specifically, we give proximity generators for Reed-Muller codes, show a more systematic way of handling "side conditions" in IOP constructions, and generalize the compiling procedure of [Arnon, Chiesa, Fenzi, Yogev, Crypto 2024] to general codes.
Insecurity of One Decentralized Attribute-based Signature Scheme for Social Co-governance
We show that the attribute-based signature scheme [Information Sciences, 654(2024), 119839] is insecure, because an adversary can generate valid signatures for any message even though he cannot access the signer's secret key. The four components of a signature are not tightly bound to the target message and the signer's public key, and the dependency between the signer's public key and secret key is not properly used to construct any intractable problem. This inherent flaw means that the adversary can find an efficient signing algorithm functionally equivalent to the valid signing algorithm.
Nominal State-Separating Proofs
State-separating proofs are a powerful tool for structuring cryptographic arguments so that they are amenable to mechanization, as has been shown through implementations such as SSProve. However, the treatment of separation for heaps has never been satisfactorily addressed. In this work, we present the first comprehensive treatment of nominal state separation in state-separating proofs using nominal sets. We provide a Coq library, called Nominal-SSProve, that builds on nominal state separation and supports mechanized proofs that appear more concise and arguably more elegant.
SoK: Self-Generated Nudes over Private Chats: How Can Technology Contribute to a Safer Sexting?
More and more people take advantage of mobile apps to strike up relationships and casual contacts. This sometimes results in the sharing of self-generated nudes. While this opens a way for sexual exploration, it also raises concerns. In this paper, we review existing technology-assisted permissive proposals/features that provide security, privacy or accountability benefits when sharing nudes online. To do so, we performed a systematic literature review combing through 10,026 search results and cross-references, and we identified real-world solutions by surveying OS features and 52 dating, messaging and social network apps. We systematized knowledge by defining a sexting threat model, deriving a taxonomy of the proposals/features, discussing some of their shortcomings, organizing privacy-related concepts, and providing take-aways with some directions for future research and development. Our study found a very diverse ecosystem of academic proposals and app features, showing that safer sexting goes far beyond nude detection. None of the techniques represents the ultimate solution for all threats, but each contributes to a safer sexting in a different way.
Highway to Hull: An Algorithm for Solving the General Matrix Code Equivalence Problem
The matrix code equivalence problem consists, given two matrix spaces of dimension , in finding invertible matrices and such that . Recent signature schemes such as MEDS and ALTEQ relate their security to the hardness of this problem. Naranayan et al. recently published an algorithm solving this problem in the case in operations. We present a different algorithm which solves the problem in the general case. Our approach consists in reducing the problem to the matrix code conjugacy problem, i.e. the case . For the latter problem, similarly to the permutation code equivalence problem in the Hamming metric, a natural invariant based on the \emph{Hull} of the code can be used. Next, the equivalence of codes can be deduced using a usual list collision argument. For , our algorithm achieves the same complexity as in the aforementioned reference. However, it extends to a much broader range of parameters.
Partial Key Exposure Attacks on UOV and Its Variants
In CRYPTO 2022, Esser et al. proposed a partial key exposure attack on several post-quantum cryptographic schemes including Rainbow which is a variant of UOV. The task of the attack is to recover a full secret key from its partial information such as a secret key with symmetric/asymmetric bit errors. One of the techniques Esser et al. developed is a partial enumeration that combines the standard algorithms to solve the MQ problem with enumeration.
Although an efficient attack on Rainbow was proposed, UOV and its variants have still received much attention, since UOV and three of its variants, i.e., MAYO, QR-UOV and SNOVA, were selected as Round 2 candidates in NIST's additional call for digital signature proposals.
In this paper, we analyze partial key exposure attacks on UOV, MAYO, and QR-UOV. Although our proposed attacks use the partial enumeration, we refine their enumeration strategy. We employ two enumeration strategies and analyze the complexity of the proposed attacks. Then, we find a structural difference between UOV and its variants to resist partial enumeration. Specifically, the partial enumeration is effective if the number of vinegar variables is smaller than the number of equations and the order of a finite field is small.
As a result, the proposed attack is the most effective on MAYO. While our attacks on UOV and QR-UOV are effective only when the symmetric error probabilities are 0.11 and 0.05, respectively, that on MAYO is effective even when the probability is close to 0.5.
Efficient SNARKs for Boolean Circuits via Sumcheck over Tower Fields
In this paper, we present efficient SNARKs for Boolean circuits, achieving significant improvements in the prover efficiency. The core of our technique is a novel tower sumcheck protocol and a tower zero-check protocol tailored for tower fields, which enable this efficiency boost. When instantiated with Wiedemann's binary tower fields with the base field of and the top-level field , assuming the quadratic complexity of multiplications in the top-level field with bits, the prover time of our sumcheck protocol is . It is faster than the standard sumcheck protocol over the large field with the complexity of . To achieve a reasonable security level, is usually set to .
Leveraging this advancement, we improve the efficiency of IOP protocols over the binary or small characteristic fields for Plonkish, CCS, and GKR-based constraint systems. Moreover, to further improve the prover efficiency of the SNARKs, we introduce a basis-switching mechanism that efficiently transforms polynomial evaluations on the base-field polynomial to evaluations on the tower-field polynomial. With the basis-switching, we are able to compile the binary-field IOPs to SNARKs using large-field polynomial commitment schemes (PCS) that batch the witness over the base field. The size of the large-field PCS is only of the size of the witness over the base field. Combining the IOP and the PCS, the overall prover time of our SNARKs for Boolean circuits is significantly faster than the naive approach of encoding Boolean values in a large field.
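For orientation, here is a minimal Python sketch of the standard sumcheck protocol over a prime field for a multilinear polynomial given by its hypercube evaluations; the prime modulus and the honest-prover framing are illustrative assumptions, and the sketch does not attempt to reproduce the tower-field version described above.

```python
# Minimal sumcheck sketch over a prime field (the protocol being optimized;
# the paper's version works over binary tower fields, not reproduced here).
import random

P = 2**61 - 1  # a prime modulus chosen for illustration

def sumcheck(evals):
    """Run an honest prover/verifier sumcheck for a multilinear polynomial
    given by its evaluations on the Boolean hypercube (list of length 2^n)."""
    n = (len(evals) - 1).bit_length()
    claimed = sum(evals) % P
    current, challenges, running_claim = list(evals), [], claimed
    for _ in range(n):
        half = len(current) // 2
        lo, hi = current[:half], current[half:]          # values at X_1 = 0 and X_1 = 1
        g0, g1 = sum(lo) % P, sum(hi) % P                # univariate g(0), g(1)
        assert (g0 + g1) % P == running_claim            # verifier's round check
        r = random.randrange(P)                          # verifier's challenge
        challenges.append(r)
        # fold: restrict the polynomial to X_1 = r (multilinear interpolation)
        current = [(a + r * (b - a)) % P for a, b in zip(lo, hi)]
        running_claim = (g0 + r * ((g1 - g0) % P)) % P   # g(r) becomes the next claim
    assert current[0] % P == running_claim               # final oracle check
    return claimed, challenges, current[0]

if __name__ == "__main__":
    table = [random.randrange(P) for _ in range(8)]      # n = 3 variables
    s, ch, v = sumcheck(table)
    print("sum =", s)
```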
Oblivious Immutable Memory
An oblivious RAM (ORAM) compiler is a cryptographic tool that transforms a program running in time into an equivalent program , with the property that the sequence of memory addresses read from/written to by reveal nothing about 's data (Goldreich and Ostrovsky, JACM'96). An efficient ORAM compiler should achieve some combination of the following:
- Low bandwidth blow-up: should read/write a similar amount of data as does P.
- Low latency: should incur a similar number of roundtrips to the memory as does P.
- Low space complexity: should run in as few words of local memory as possible.
It is well known that for a generic compiler (i.e. one that works for any RAM program ), certain combinations of efficiencies are impossible. Any generic ORAM compiler must incur bandwidth blow-up, and any ORAM compiler with no latency blow-up must incur either bandwidth blow-up and/or local space. Moreover, while a bandwidth blow-up compiler is known, it requires the assumption that one-way functions exist and incurs enormous constant factors.
To circumvent the above problems and improve efficiency of particular ORAM programs, we develop a compiler for a specific class of programs. Let be a program that interacts with an immutable memory. Namely, may write values to memory, then read them back, but it cannot change values that were already written. Using only information-theoretic techniques, we compile any such into an oblivious form with a combination of efficiencies that no generic ORAM compiler can achieve:
- incurs amortized bandwidth blow-up.
- incurs amortized latency blow-up.
- runs in words of local space ( incurs an error with probability ).
We show that this, for instance, implies that any pure functional program can be compiled with the same asymptotics.
Our work builds on and is compatible with prior work (Appan et al., CCS'24) that showed similar results for pointer machine programs that manipulate objects with constant in-degree (i.e., the program may only maintain a constant number of pointers to any one memory cell; our immutable memory approach does not have this limitation). By combining techniques, we can consider programs that interact with a mixed memory that allows each memory cell to be updated until it is frozen, after which it becomes immutable, allowing further reads to be compiled with the above asymptotics, even when in-degree is high. Many useful algorithms/data structures can be naturally implemented as mixed memory programs, including suffix trees (powerful data structures used in computational biology) and deterministic finite automata (DFAs).
DSM: Decentralized State Machine - The Missing Trust Layer of the Internet
The modern internet relies heavily on centralized trust systems controlled by corporations, governments, and intermediaries to manage authentication, identity, and value transfer. These models introduce fundamental vulnerabilities, including censorship, fraud, and systemic insecurity. The Decentralized State Machine (DSM) addresses these issues by introducing a mathematically enforced trust layer that eliminates the need for consensus mechanisms, third-party validators, and centralized infrastructure. DSM enables quantum-resistant, deterministic state transitions for digital identity and value exchange—offering immediate finality, offline capability, and tamper-proof forward-only state progression.
DSM replaces traditional blockchain execution models with deterministic, pre-committed state transitions, enabling secure, multi-path workflows without requiring Turing-completeness or global consensus. The protocol architecture is based on a straight hash chain with sparse indexing and Sparse Merkle Trees (SMTs), ensuring efficient verification, scalability, and privacy. A bilateral isolation model supports asynchronous, offline operation with built-in consistency guarantees. DSM introduces a sustainable, gas-free economic model based on cryptographic subscription commitments.
This paper outlines the architecture, cryptographic foundations, and security guarantees of DSM, and demonstrates how it achieves verifiable, trustless interaction between peers—both online and offline. By decoupling security from consensus and enabling self-validating state transitions, DSM offers a practical and scalable alternative to conventional internet trust models.
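As a toy illustration of a forward-only hash chain with a sparse index (field names and the checkpoint interval are assumptions made for illustration; this is not the DSM protocol), consider the following Python sketch.

```python
# Minimal sketch of a forward-only hash chain with sparse checkpoints: each
# transition commits to the previous head, so state can only progress forward,
# and only some heights are indexed for cheap lookups.
import hashlib, json

class HashChainState:
    def __init__(self, genesis: dict, checkpoint_every: int = 4):
        self.height = 0
        self.state = dict(genesis)
        self.head = hashlib.sha256(json.dumps(genesis, sort_keys=True).encode()).hexdigest()
        self.checkpoint_every = checkpoint_every
        self.checkpoints = {0: self.head}     # sparse index: height -> chain head

    def apply(self, transition: dict):
        """Advance the chain by one deterministic, append-only transition."""
        payload = json.dumps({"prev": self.head, "op": transition}, sort_keys=True)
        self.head = hashlib.sha256(payload.encode()).hexdigest()
        self.height += 1
        self.state.update(transition)
        if self.height % self.checkpoint_every == 0:
            self.checkpoints[self.height] = self.head   # only some heights are indexed
        return self.head

if __name__ == "__main__":
    chain = HashChainState({"balance": 100})
    for delta in ({"balance": 90}, {"balance": 75}, {"balance": 75}, {"balance": 40}):
        chain.apply(delta)
    print(chain.height, chain.head[:16], chain.checkpoints)
```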
ColliderVM: Stateful Computation on Bitcoin without Fraud Proofs
Bitcoin script cannot easily access and store state information onchain without an upgrade such as BIP-347 (OP_CAT); this makes performing general (stateful) computation on Bitcoin impossible to do directly. Despite this limitation, several approaches have been proposed to bypass it, with BitVM being the closest to production. BitVM enables fraud-proof-based computation on Bitcoin, relying on a 1-out-of-n honesty assumption.
This left the question of whether it is possible to achieve computation under the same honesty assumption without requiring onlookers to ensure validity through fraud proofs. In this note, we answer this question affirmatively by introducing ColliderVM, a new approach for performing computation on Bitcoin today. Crucially, this approach eliminates some capital inefficiency concerns stemming from reliance on fraud proofs.
For our construction, a key point is to replace the Lamport or Winternitz signature-based storage component in contemporary protocols with a hash collision-based commitment. Our techniques are inspired by ColliderScript, but are more efficient, reducing the number of hash evaluations required by at least . With it, we estimate that the Bitcoin script length for STARK proof verification becomes nearly practical, allowing it to be used alongside other, pairing-based proof systems common today in applications.
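For comparison, the Python sketch below shows a textbook one-bit Lamport one-time signature, the kind of hash-based commitment component that the abstract says contemporary protocols use and ColliderVM replaces with hash-collision-based commitments; it is background illustration only, not ColliderVM's construction.

```python
# Textbook one-bit Lamport one-time signature: committing to a bit by
# revealing one of two hash preimages.  Shown only to illustrate the kind of
# component being replaced, not ColliderVM's construction.
import hashlib, os

def keygen():
    sk = (os.urandom(32), os.urandom(32))             # one preimage per bit value
    pk = tuple(hashlib.sha256(s).digest() for s in sk)
    return sk, pk

def sign(sk, bit: int) -> bytes:
    return sk[bit]                                     # reveal the preimage for that bit

def verify(pk, bit: int, sig: bytes) -> bool:
    return hashlib.sha256(sig).digest() == pk[bit]

if __name__ == "__main__":
    sk, pk = keygen()
    sig = sign(sk, 1)
    assert verify(pk, 1, sig) and not verify(pk, 0, sig)
    print("ok")
```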
Secure graph computation enables computing on graphs while hiding the graph topology as well as the associated node/edge data. This facilitates collaborative analysis among multiple data owners, who may only hold a private partial view of the global graph. Several works address this problem using the technique of secure multiparty computation (MPC) in the presence of 2 or 3 parties. However, when moving to the multiparty setting, as required for collaborative analysis among multiple data owners, these solutions are no longer scalable. This remains true with respect to the state-of-the-art framework of (Koti et al., CCS 2024) as well. Specifically, incurs a round complexity linear in the number of parties or data owners. This is due to its reliance on a secure shuffle protocol, which constitutes a bottleneck in the multiparty setting. Additionally, has a prohibitively expensive initialisation phase due to its reliance on a secure sort, with a round complexity dependent on both the graph size and the number of parties.
We propose , a generic framework for secure graph computation in the multiparty setting that eliminates the need for shuffle and, instead, relies on a weaker primitive known as . Further, is designed to have a lightweight initialisation that eliminates the need for sorting, making its round complexity independent of the graph size and number of parties. Unlike any of the prior works, achieving a round complexity independent of the number of parties is what makes scalable.
Finally, we implement and benchmark the performance of for the application of PageRank computation and showcase its efficiency and scalability improvements over . Concretely, we witness improvements of up to in runtime in comparison to state-of-the-art framework . Further, we observe that takes under a minute to perform 10 iterations of PageRank computation on a graph of size that is distributed among parties/data owners, making it highly practical for secure graph computation in the multiparty setting.
Defeating AutoLock: From Simulation to Real-World Cache-Timing Exploits against TrustZone
In this article, we present for the first time a cross-core Prime+Probe attack on ARM TrustZone, which bypasses the AutoLock mechanism. We introduce our simulation-driven methodology based on gem5 for vulnerability analysis. We demonstrate its utility in reverse engineering a SoC platform in order to study its microarchitectural behavior (caches, etc.) inside a simulator, in spite of hardware protection. We present a novel vulnerability analysis technique, which takes into account the cache set occupancy for the targeted victim executable. This proves to be essential in identifying information leakage in the presence of AutoLock. The above tool also identifies the cache lines leaking a maximum amount of information. A cross-core Prime+Probe attack is then mounted on these max-leakage cache lines, both in simulation for fine-tuning and in real hardware. We validate our analysis and attack method on OP-TEE, an open-source trusted execution environment running on the RockPi4, a board based on the RK3399 SoC. More specifically, we target the RSA subroutine in the MbedTLS library used inside OP-TEE. Despite the presence of AutoLock, multiplier obfuscation, and assuming a cross-core attack, we are able to retrieve 30% of the key bits, which can later be used in Branch-and-Prune methods to recover the full key.
A Place for Everyone vs Everyone in its Place: Measuring and Attacking the Ethereum Global Network
The Ethereum Global Network (EGN) is the peer-to-peer (P2P) network underlying Ethereum and thousands of subsequent blockchain services. Deviating from traditional single-service P2P networks, EGN's multi-service architecture has gained widespread acceptance for supposedly improving node discovery efficiency and security. This paper challenges this belief by critically examining EGN's design and its purported benefits. Our analysis reveals significant shortcomings in EGN's node discovery process. EGN nodes struggle to connect with peers offering the desired service: over three-quarters of connection attempts reach nodes of other services. In an extreme case, one node spent an average of connection attempts to find each neighbor. Moreover, this blended architecture compromises EGN's security. The network demonstrates high susceptibility to DHT pollution and partition attacks. Even with only malicious nodes in EGN, an attacker can isolate thousands of nodes, significantly hindering recovery. In contrast, such a small number of malicious nodes has minimal impact on every single-service P2P network. We propose solutions to improve EGN's node discovery efficiency and strengthen its resilience against attacks.
Lifeboats on the Titanic Cryptography
The Titanic was the ship that "could not sink"; fortunately, its designers installed lifeboats (not enough of them) despite having no logical grounding for this waste of space and material. It was out of respect for unforeseen surprises. The NIST post-quantum ciphers represent the best and the brightest in world crypto intelligence. They are certified as good for their purpose, and likely so, alas, not surely so. If we could find a crypto equivalent of the Titanic's lifeboats, should we not load them up for our journey? Indeed, pattern-devoid cryptography is the crypto equivalent of the lifeboats that mitigated the Titanic disaster. Pattern-devoid cryptography (PDC) may be deemed inelegant, inconvenient, and bloated, but it will hold up against quantum computers more powerful than expected and, more importantly, against adversarial mathematical talent greater than expected. That is why we should put up with its negatives and install it, just in case the Titanic story repeats itself in cyberspace. This article elaborates on this proposition.
Heuristic Algorithm for Solving Restricted SVP and its Applications
In lattice-based cryptography, many attacks are performed by finding a short enough vector in a specific lattice. However, length is not always the only restriction on the vector to be found. A typical example is SVP with the infinity norm: since most SVP solving algorithms only aim to find a short vector under the Euclidean norm, the infinity-norm bound is in effect an additional restriction on the vector. In the literature, such problems are usually solved by exhaustive search over a list of short vectors generated by lattice sieving. However, the sieving list might be either too large or too small to pass the additional restriction, which makes the solving algorithm inefficient in some cases.
Our contributions in this work are as follows: (1) We formally define a new lattice hard problem called restricted SVP and show that it generalizes many lattice hard problems, including SVP with a non-Euclidean norm and Kannan's embedding for approximate CVP. (2) We extend the dimension-for-free technique and the enumerate-then-slice technique to approximate SVP, where the goal is to output a list of short vectors of a certain size. (3) We give a heuristic algorithm for solving restricted SVP and design a hardness estimator for this algorithm, which can be used to estimate the concrete hardness of signature forgery against Dilithium and other lattice-based signatures. Using this estimator, we present the first concrete security analysis of Dilithium against signature forgery under the gate-count model. Our estimate matches well with the security estimate from the core-SVP model in the Dilithium documentation, and we also combine our estimator with the rescaling technique to obtain a tighter estimate.
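One hedged way to read the infinity-norm example above as an instance of a "restricted" shortest-vector problem is sketched below; the paper's formal definition may differ, and the predicate form is only an assumption for illustration.

```latex
% A plausible reading of the infinity-norm example: a Euclidean-length bound
% plus an additional predicate on the vector (hedged; not the paper's definition).
\[
  \text{Given a lattice } \mathcal{L},\ \text{a bound } \gamma > 0
  \text{ and a predicate } P:\mathbb{R}^n \to \{0,1\},\quad
  \text{find } \mathbf{v} \in \mathcal{L}\setminus\{\mathbf{0}\}
  \text{ such that } \|\mathbf{v}\|_2 \le \gamma \text{ and } P(\mathbf{v}) = 1.
\]
\[
  \text{Taking } P(\mathbf{v}) = 1 \iff \|\mathbf{v}\|_\infty \le \beta
  \text{ recovers the infinity-norm restriction described above.}
\]
```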
Adaptively-Secure Big-Key Identity-Based Encryption
Key-exfiltration attacks on cryptographic keys are a significant threat to computer security. One proposed defense against such attacks is big-key cryptography, which seeks to make cryptographic secrets so large that it is infeasible for an adversary to exfiltrate the key (without being detected). However, this also introduces an inconvenience to the user, who must now store the large key on all of their different devices. The work of Döttling, Garg, Sekar and Wang (TCC 2022) introduces an elegant solution to this problem in the form of big-key identity-based encryption (IBE). Here, there is a large master secret key, but very short identity keys. The user can now store the large master secret key as her long-term key, and can provision each of her devices with short ephemeral identity keys (say, corresponding to the current date). In this way, the long-term secret key is protected by conventional big-key cryptography, while the user only needs to distribute short ephemeral keys to her different devices. Döttling et al. introduce and construct big-key IBE from standard pairing-based assumptions. However, their scheme only satisfies selective security, where the adversary has to declare its challenge set of identities at the beginning of the security game. The more natural notion is adaptive security, where the adversary can adaptively choose which identities it wants to challenge after seeing the public parameters (and part of the master secret key).
In this work, we give the first adaptively-secure construction of big-key IBE from standard cryptographic assumptions. Our first construction relies on indistinguishability obfuscation (and one-way functions), while our second construction relies on witness encryption for NP together with standard pairing-based assumptions (i.e., the SXDH assumption). To prove adaptive security, we show how to implement the classic dual-system methodology with indistinguishability obfuscation as well as witness encryption.
The Singularity Random Number Generator: Bridging Determinism and Unpredictability to Redefine Randomness, Secure Systems, and Adaptive Intelligence
The Singularity Random Number Generator (SRNG) represents a groundbreaking advancement in the generation of random numbers by integrating two key properties - computational irreducibility and seed independence - into a deterministic algorithm. Unlike conventional pseudorandom number generators (PRNGs) whose randomness is intrinsically linked to seed quality or chaotic sensitivity, SRNG transforms even low-entropy seeds into complex, unpredictable outputs. SRNG demonstrates high-quality randomness can emerge independently of seed entropy or size. This paper explores how SRNG not only challenges classical paradigms of predictability in deterministic systems but also offers transformative applications in cryptography, artificial intelligence (AI), and interdisciplinary research. Furthermore, by drawing parallels with cognitive variability research - such as insights from the Forbes article “Why A ‘Productively Distracted’ Brain Is A Superpower” - we discuss how the emergent unpredictability of SRNG may contribute to enhanced adaptive learning and decision-making processes in AI systems. Ultimately, SRNG is presented as a model that bridges the realms of science and mystery, inviting a new understanding of randomness and the limits of scientific inquiry, thereby expanding the frontiers of interdisciplinary research.
Counter Galois Onion (CGO) for Tor: Fast Non-Malleable Onion Encryption
In 2012, the Tor project expressed the need to upgrade Tor's onion encryption scheme to protect against tagging attacks and thereby strengthen its end-to-end integrity protection. Tor proposal 261, where each encryption layer is processed by a strongly secure, yet relatively expensive tweakable wide-block cipher, is the only concrete candidate replacement to be backed by formal, yet partial, security proofs (Degabriele and Stam, EUROCRYPT 2018, and Rogaway and Zhang, PoPETS 2018).
We propose an alternative onion encryption scheme, called Counter Galois Onion (CGO), that follows a minimalistic, modular design and includes several improvements over proposal 261. CGO's underlying primitive is an updatable tweakable split-domain cipher accompanied by a new security notion that augments the recently introduced rugged pseudorandom permutation (Degabriele and Karadžić, CRYPTO 2022). Thus, we relax the security compared to a tweakable wide-block cipher, allowing for more efficient designs. We suggest a concrete instantiation for the updatable tweakable split-domain cipher and report on our experiments comparing the performance of CGO with Tor's existing onion encryption scheme.
Release the Power of Rejected Signatures: An Efficient Side-Channel Attack on Dilithium
The Module-Lattice-Based Digital Signature Standard (ML-DSA), formerly known as CRYSTALS-Dilithium, is a lattice-based post-quantum cryptographic scheme. In August 2024, the National Institute of Standards and Technology (NIST) officially standardized ML-DSA under FIPS 204. Dilithium generates one valid signature and multiple rejected signatures during the signing process. Most Side-Channel Attacks targeting Dilithium have focused solely on the valid signature, while neglecting the hints contained in rejected signatures. In this paper, we propose a method for recovering the private key by simultaneously leveraging side-channel leakages from both valid signatures and rejected signatures. This approach minimizes the number of signing attempts required for full key recovery. We construct a factor graph incorporating all relevant side-channel leakages and apply the Belief Propagation (BP) algorithm for private key recovery.
We conducted a proof-of-concept experiment on a Cortex-M4 chip, where the results demonstrate that utilizing rejected signatures reduces the number of traces required for full key recovery by at least . As few as a single trace can recover the private key with a success rate of . Our findings highlight that protecting rejected signatures is crucial, as their leakage provides valuable side-channel information. We strongly recommend implementing countermeasures for rejected signatures during the signing process to mitigate potential threats.
Reusable Dynamic Multi-Party Homomorphic Encryption
Homomorphic Encryption (HE) is a promising primitive for evaluating arbitrary circuits while preserving the user's privacy. We investigate how to use HE in the multi-party setting where data is encrypted under several distinct keys. One may use Multi-Key Homomorphic Encryption (MKHE) in this setting, but it has a space/computation overhead of for the number of users , which makes it impractical when grows large. In contrast, Multi-Party Homomorphic Encryption (MPHE) is the other homomorphic encryption primitive for the multi-party setting, with a space/computation overhead of ; however, it is limited in terms of ciphertext reusability and dynamicity: ciphertexts are encrypted only for a fixed group of parties and cannot be reused for other purposes, and additional parties cannot join the computation dynamically.
Contrary to MKHE, where the secret key owners engage only in the decryption phase, we consider a more relaxed setting in which the secret key owners can communicate before the computation. In that case, we can reduce the ciphertext size and the evaluation complexity from to , as in a single-key HE setting. We call this primitive Reusable Dynamic Multi-Party Homomorphic Encryption, which is more suitable for real-world scenarios.
We show that 1) the procedures before the computation can be completed in very few rounds of communication, 2) the evaluation/space complexities are independent of the number of users, and 3) the functionalities are as efficient as in MKHE, supported both by asymptotic analysis and by implementation.
Efficient Revocable Identity-Based Encryption from Middle-Product LWE
The Middle-Product Learning with Errors (MPLWE) assumption is a variant of the Learning with Errors (LWE) assumption. The MPLWE assumption reduces the key size of corresponding LWE-based schemes by setting keys as sets of polynomials. Moreover, MPLWE has more robust security than other LWE variants such as Ring-LWE and Module-LWE. Lombardi et al. proposed an identity-based encryption (IBE) scheme (LVV-IBE) based on the MPLWE assumption in the random oracle model (ROM), following Gentry et al.'s LWE-based IBE scheme (GPV-IBE). Due to the benefit of MPLWE, LVV-IBE has a shorter master public key and secret key than GPV-IBE without changing the ciphertext size. However, Lombardi et al.'s proof is not tight in the ROM, while Katsumata et al. proved that GPV-IBE achieves tight adaptive anonymity in the quantum ROM (QROM). Revocable IBE (RIBE) is a variant of IBE supporting a key revocation mechanism to remove malicious users from the system. Takayasu proposed the most efficient LWE-based RIBE scheme (Takayasu-RIBE), achieving tight adaptive anonymity in the QROM. Although a concrete RIBE scheme based on MPLWE has not been proposed, one can be constructed (LVV-based RIBE) by applying Ma and Lin's generic transformation to LVV-IBE. Due to the benefit of MPLWE, LVV-based RIBE has an asymptotically shorter master public key and secret key than Takayasu-RIBE, although the former has a larger ciphertext than the latter. Moreover, its security proof is neither tight nor anonymous and holds only in the ROM, owing to the security proofs of Ma-Lin and Lombardi et al. In this paper, we propose a concrete RIBE scheme based on MPLWE. Compared with the above RIBE schemes, the proposed scheme is asymptotically the most efficient, since the sizes of its master public key and secret key (resp. ciphertext) match those of the LVV-based RIBE scheme (resp. Takayasu-RIBE). Moreover, we prove the tight adaptive anonymity of the proposed RIBE scheme in the QROM. For this purpose, we also prove the tight adaptive anonymity of LVV-IBE in the QROM.
REGKYC: Supporting Privacy and Compliance Enforcement for KYC in Blockchains
Know Your Customer (KYC) is a core component of the Anti-Money Laundering (AML) framework, designed to prevent illicit activities within financial systems. However, enforcing KYC and AML on blockchains remains challenging due to difficulties in establishing accountability and preserving user privacy. This study proposes REGKYC, a privacy-preserving Attribute-Based Access Control (ABAC) framework that balances user privacy with externally mandated KYC and AML requirements. REGKYC leverages a structured ABAC model to support the flexible verification of KYC attributes and the enforcement of compliance policies, providing benefits to multiple stakeholders. First, it enables legitimate users to meet compliance requirements while preserving the privacy of their on-chain activities. Second, it empowers Crypto-asset Service Providers (CASPs) to tailor compliance policies to operational needs, ensuring adaptability to evolving regulations. Finally, it enhances regulatory accountability by enabling authorized deanonymization of malicious actors. We hope this work inspires future research to harmonize user privacy and regulatory compliance in blockchain systems.
Efficient Garbled Pseudorandom Functions and Lookup Tables from Minimal Assumption
Yao's garbled circuits have received huge attention in both theory and practice. While garbled circuits can be constructed from minimal assumptions (i.e., the existence of pseudorandom functions or one-way functions), the state-of-the-art constructions (e.g., Rosulek-Roy, Crypto 2021) are based on stronger assumptions. In particular, the ``Free-XOR'' technique (Kolesnikov-Schneider, ICALP 2008) is essential to these state-of-the-art constructions, and their security can only be proven in the random oracle model or under the ``circular-correlation robust hash'' assumption.
In this paper, we aim to develop new techniques to construct efficient garbling schemes from minimal assumptions. Instead of generically replacing the Free-XOR technique, we focus on garbling schemes for specific functionalities. We successfully eliminate the need for Free-XOR in several state-of-the-art schemes, including one-hot garbling (Heath and Kolesnikov, CCS 2021), garbled pseudorandom functions, and garbled lookup tables (Heath, Kolesnikov and Ng, Eurocrypt 2024). Our schemes are based on minimal assumptions, i.e., standard pseudorandom functions (PRFs)---removing the need for circular security. Our schemes are almost as efficient as the best known results, up to a small constant factor. Namely, for any lookup table , our scheme takes bits of communication, where is the security parameter of the PRF.
Making GCM Great Again: Toward Full Security and Longer Nonces
The GCM authenticated encryption (AE) scheme is one of the most widely used AE schemes in the world, yet it suffers from the risk of nonce misuse, a short maximum message length per encryption, and an insufficient level of security. The goal of this paper is to design new AE schemes that achieve stronger provable security in the standard model and accept longer nonces (or provide nonce misuse resistance), while retaining the design rationale behind GCM.
As a result, we propose two enhanced variants of GCM and GCM-SIV, dubbed eGCM and eGCM-SIV, respectively. eGCM and eGCM-SIV are built on top of a new CENC-type encryption mode, dubbed eCTR: using 2n-bit counters, eCTR enjoys beyond-birthday-bound security without significant loss of efficiency. eCTR is combined with an almost uniform and almost universal hash function, yielding a variable-input-length, variable-output-length pseudorandom function, dubbed HteC. eGCM and eGCM-SIV are constructed using eCTR and HteC as building blocks.
eGCM and eGCM-SIV accept nonces of arbitrary length, and provide almost the full security (namely, n-bit security when they are based on an n-bit block cipher) for a constant maximum input length, under the assumption that the underlying block cipher is a pseudorandom permutation (PRP). Their efficiency is also comparable to GCM in terms of the rate and the overall speed.
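As plain background for the CENC/CTR structure that eCTR refines, the toy sketch below shows ordinary counter-mode encryption with a hash-based PRF standing in for the block-cipher keystream. It is not the paper's eCTR (which uses 2n-bit counters inside a block cipher); all names and parameters are illustrative assumptions.

```python
# Toy counter-mode encryption with a hash-based PRF as a stand-in keystream
# generator -- background for the CTR/CENC structure that eCTR builds on,
# not the paper's eCTR construction.
import hashlib

def prf(key: bytes, nonce: bytes, counter: int) -> bytes:
    # 16-byte keystream block derived from (key, nonce, counter)
    return hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()[:16]

def ctr_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(plaintext), 16):
        block = plaintext[i:i + 16]
        keystream = prf(key, nonce, i // 16)
        out.extend(b ^ k for b, k in zip(block, keystream))
    return bytes(out)

# CTR decryption is the same operation with the same keystream.
key, nonce = b"k" * 16, b"n" * 12
ct = ctr_encrypt(key, nonce, b"attack at dawn")
assert ctr_encrypt(key, nonce, ct) == b"attack at dawn"
```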
Pre-Constructed Publicly Verifiable Secret Sharing and Applications
Conventional Publicly Verifiable Secret Sharing (PVSS) protocols allow a dealer to share a secret among parties without interaction, ensuring that any parties (where ) can recover the secret, while anyone can publicly verify the validity of both the individual shares and the reconstructed secret. PVSS schemes are shown to be a key tool in a wide range of practical applications. In this paper, we introduce Pre-constructed PVSS (PPVSS), an extension of standard PVSS schemes, highlighting its enhanced utility and efficiency in various protocols. Unlike standard PVSS, PPVSS requires the dealer to publish a commitment or encryption of the main secret and incorporates a novel secret reconstruction method. We show that these refinements make PPVSS more practical and versatile than conventional PVSS schemes.
To build a PPVSS scheme, we first point out that the well-known PVSS scheme by Schoenmakers (CRYPTO'99) and its pairing-based variant presented by Heidarvand and Villar (SAC'08) can be seen as special cases of PPVSS, where the dealer also publishes a commitment to the main secret. However, these protocols are not practical for many applications due to efficiency limitations and are less flexible compared to a standard PPVSS scheme. To address this, we propose a general strategy for transforming a Shamir-based PVSS scheme into a PPVSS scheme. Using this strategy, we construct two practical PPVSS schemes in both the Random Oracle (RO) and plain models, grounded in state-of-the-art PVSS designs. Leveraging the new RO-based PPVSS scheme, we revisit some applications and present more efficient variants. Notably, we propose a new universally verifiable e-voting protocol that improves on the alternative scheme by Schoenmakers (CRYPTO'99), reducing the verification complexity with voters from to exponentiations--a previously unattainable goal with standard PVSS schemes. Our implementation results demonstrate that both our proposed PPVSS schemes and the new universally verifiable e-voting protocol significantly outperform existing alternatives in terms of efficiency.
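Since the constructions above are described as Shamir-based, the sketch below shows the plain Shamir sharing and Lagrange reconstruction that such (P)PVSS schemes wrap with encryption and public validity proofs. The verifiability layer itself is omitted, and the field size and parameters are toy assumptions.

```python
# Plain Shamir secret sharing over a prime field -- the base layer that
# Shamir-based (P)PVSS schemes build on; the public-verifiability layer
# (encrypted shares plus proofs) is omitted entirely.
import random

P = 2**127 - 1  # a Mersenne prime; large enough for a toy field

def share(secret: int, t: int, n: int):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```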
Wagner's Algorithm Provably Runs in Subexponential Time for SIS
At CRYPTO 2015, Kirchner and Fouque claimed that a carefully tuned variant of the Blum-Kalai-Wasserman (BKW) algorithm (JACM 2003) should solve the Learning with Errors problem (LWE) in slightly subexponential time for modulus and narrow error distribution, when given enough LWE samples. Taking a modular view, one may regard BKW as a combination of Wagner's algorithm (CRYPTO 2002), run over the corresponding dual problem, and the Aharonov-Regev distinguisher (JACM 2005). Hence the subexponential Wagner step alone should be of interest for solving this dual problem - namely, the Short Integer Solution problem (SIS) - but this appears to be undocumented so far.
We re-interpret this Wagner step as walking backward through a chain of projected lattices, zigzagging through some auxiliary superlattices. We further randomize the bucketing step using Gaussian randomized rounding to exploit the powerful discrete Gaussian machinery. This approach avoids sample amplification and turns Wagner's algorithm into an approximate discrete Gaussian sampler for -ary lattices.
For an SIS lattice with equations modulo , this algorithm runs in subexponential time to reach a Gaussian width parameter only requiring many SIS variables. This directly provides a provable algorithm for solving the Short Integer Solution problem in the infinity norm ( ) for norm bounds . This variant of SIS underlies the security of the NIST post-quantum cryptography standard Dilithium. Despite its subexponential complexity, Wagner's algorithm does not appear to threaten Dilithium's concrete security.
Buffalo: A Practical Secure Aggregation Protocol for Asynchronous Federated Learning
Federated Learning (FL) has become a crucial framework for collaboratively training Machine Learning (ML) models while ensuring data privacy. Traditional synchronous FL approaches, however, suffer from delays caused by slower clients (called stragglers), which hinder the overall training process.
Specifically, in a synchronous setting, model aggregation happens once all the intended clients have submitted their local updates to the server. To address these inefficiencies, Buffered Asynchronous FL (BAsyncFL) was introduced, allowing clients to update the global model as soon as they complete local training. In such a setting, the new global model is obtained once the buffer is full, thus removing synchronization bottlenecks. Despite these advantages, existing Secure Aggregation (SA) techniques—designed to protect client updates from inference attacks—rely on synchronized rounds, making them unsuitable for asynchronous settings.
In this paper, we present Buffalo, the first practical SA protocol tailored for BAsyncFL. Buffalo leverages lattice-based encryption to handle scalability challenges in large ML models and introduces a new role, the assistant, to support the server in securely aggregating client updates. To protect against an actively corrupted server, we enable clients to verify that their local updates have been correctly integrated into the global model. Our comprehensive evaluation—incorporating theoretical analysis and real-world experiments on benchmark datasets—demonstrates that Buffalo is an efficient and scalable privacy-preserving solution in BAsyncFL environments.
Forking Lemma in EasyCrypt
Formal methods are becoming an important tool for ensuring correctness and security of cryptographic constructions. However, the support for certain advanced proof techniques, namely rewinding, is scarce among existing verification frameworks, which hinders their application to complex schemes such as multi-party signatures and zero-knowledge proofs.
We expand the support for rewinding in EasyCrypt by implementing a version of the general forking lemma by Bellare and Neven. We demonstrate its usability by proving EUF-CMA security of Schnorr signatures.
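As an informal companion to that result, the sketch below shows toy Schnorr signatures together with the extraction step at the heart of the forking lemma: two accepting transcripts that share a commitment but carry different challenges reveal the secret key. The group parameters and helper names are assumptions for illustration; this is not the EasyCrypt development.

```python
# Toy Schnorr signatures plus the extraction step behind the forking lemma:
# two valid transcripts sharing a commitment r but with distinct challenges
# e1 != e2 reveal the secret key. Toy group parameters; illustration only.
import hashlib, random

q = 1019            # prime order of the subgroup
p = 2 * q + 1       # 2039, also prime
g = 4               # generator of the order-q subgroup of Z_p^*

def H(r: int, msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(r.to_bytes(2, "big") + msg).digest(), "big") % q

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)          # secret key x, public key y = g^x

def sign(x: int, msg: bytes):
    k = random.randrange(1, q)
    r = pow(g, k, p)                # commitment
    e = H(r, msg)                   # challenge
    s = (k + x * e) % q             # response
    return r, e, s

def verify(y: int, msg: bytes, sig) -> bool:
    r, e, s = sig
    return pow(g, s, p) == (r * pow(y, e, p)) % p and e == H(r, msg)

def extract(sig1, sig2):
    """Forking-lemma extraction: same commitment, different challenges -> key."""
    (_, e1, s1), (_, e2, s2) = sig1, sig2
    inv = pow((e1 - e2) % q, -1, q)
    return (s1 - s2) * inv % q

x, y = keygen()
assert verify(y, b"hello", sign(x, b"hello"))

# Simulate a rewound adversary: reuse the same commitment with a fresh challenge.
k = random.randrange(1, q)
r = pow(g, k, p)
e1, e2 = 3, 7                                # two distinct (hypothetical) challenges
s1, s2 = (k + x * e1) % q, (k + x * e2) % q
assert extract((r, e1, s1), (r, e2, s2)) == x
```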
Zinnia: An Expressive and Efficient Tensor-Oriented Zero-Knowledge Programming Framework
Zero-knowledge proofs (ZKPs) are cryptographic protocols that enable a prover to convince a verifier of a statement's truth without revealing any details beyond its validity. Typically, the statement is encoded as an arithmetic circuit, allowing the prover to demonstrate that the circuit evaluates to true without revealing its inputs. Despite their potential to enhance privacy and security, ZKPs are difficult to write and optimize, limiting their adoption in machine learning and data science. To address these challenges, we introduce Zinnia, a zero-knowledge programming framework with high utility, expressiveness and efficiency for tensor-oriented computation. Zinnia provides a high-level programming language that enables developers to easily write ZKP programs, and it employs a novel symbolic-execution-inspired approach to extract semantics from these programs and generate arithmetic circuits. Zinnia supports tensor-oriented computations and provides a rich set of programming constructs, optimizations, and a powerful static type system for expressing and optimizing complex logic. We evaluate Zinnia across 25 real-world programming tasks and a user study, comparing it to existing solutions, including DSLs and zkVMs (Halo2, SP1, and RISC0). Our results demonstrate that Zinnia outperforms these baselines in utility, expressiveness, and efficiency, with a statistically significant reduction in development time, shorter code length, 19.3% smaller circuit size, and up to faster proving time compared to zkVMs, paving the way for practical ZKP applications in various domains.
Universally Composable Relaxed Asymmetric Password-Authenticated Key Exchange
Password-Authenticated Key Exchange (PAKE) establishes a secure channel between two parties who share a password. Asymmetric PAKE is a variant of PAKE in which one party stores a hash of the password, so that security is preserved in case that party is compromised. The security of PAKE and asymmetric PAKE is often analyzed in the framework of universal composability (UC).
Abdalla et al. (CRYPTO '20) relaxed the UC security of PAKE and showed that the relaxed security still guarantees reasonable properties. This relaxation makes it possible to prove the security in the UC framework for several PAKE protocols.
In this paper, we propose a relaxed functionality of asymmetric PAKE by following the approach of Abdalla et al. We prove that the SPAKE2+ protocol UC-realizes this functionality. We also define a more relaxed functionality and prove that a variant of the AuCPace protocol UC-realizes it.
Partial Key Overwrite Attacks in Microcontrollers: a Survey
Embedded devices can be exposed to a wide range of attacks. Some classes of attacks can be mitigated using security features or dedicated countermeasures; examples include Trusted Execution Environments and masking countermeasures against physical side-channel attacks. However, a system that incorporates such secure components is not automatically a secure system. Partial Key Overwrite attacks are one class of attacks that specifically target the interfaces between different components of the security system. These attacks may allow an adversary to extract otherwise protected cryptographic keys through careful manipulation of memory-mapped registers. So far, this powerful class of attacks has received little attention in the academic literature. In this work, we provide an overview of known Partial Key Overwrite vulnerabilities and how they were used in real-world attacks. Additionally, we evaluate 31 common microcontrollers and embedded microprocessors from eleven distinct vendors and detail our findings. Based on a first high-level evaluation, we selected 15 SoCs for an in-depth evaluation, which revealed that at least eight of these SoCs are vulnerable to partial key overwrite attacks.
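To illustrate the attack class surveyed above, the Python sketch below simulates a partial key overwrite against a hypothetical peripheral whose key register is attacker-writable: zeroing every key byte except one reduces each secret byte to a 256-guess brute force. The device model, the HMAC stand-in for the hardware cipher, and all names are assumptions for the sketch, not any specific SoC from the survey.

```python
# Toy simulation of a partial key overwrite attack: a hypothetical device keeps
# a secret key in secure storage, loads it into a writable key register, and
# exposes a keyed operation (HMAC here as a stand-in for a hardware cipher).
import hmac, hashlib, os

class ToyDevice:
    def __init__(self):
        self._secret = os.urandom(16)            # key in secure storage (unreadable)
        self._reg = bytearray(self._secret)      # key register (partially writable)
    def reload_key(self):
        self._reg = bytearray(self._secret)      # restore the register from storage
    def overwrite_key_byte(self, index: int, value: int):
        self._reg[index] = value                 # attacker-controlled partial write
    def mac(self, msg: bytes) -> bytes:
        return hmac.new(bytes(self._reg), msg, hashlib.sha256).digest()

def recover_key(dev: ToyDevice, key_len: int = 16) -> bytes:
    probe = b"probe"
    recovered = bytearray(key_len)
    for i in range(key_len):
        dev.reload_key()
        for j in range(key_len):
            if j != i:
                dev.overwrite_key_byte(j, 0)      # zero every byte except byte i
        target = dev.mac(probe)
        for guess in range(256):                  # brute-force the lone unknown byte
            candidate = bytearray(key_len)
            candidate[i] = guess
            if hmac.new(bytes(candidate), probe, hashlib.sha256).digest() == target:
                recovered[i] = guess
                break
    return bytes(recovered)

dev = ToyDevice()
assert recover_key(dev) == dev._secret            # check against the hidden key
```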
Solving Data Availability Limitations in Client-Side Validation with UTxO Binding
Issuing tokens on Bitcoin remains a highly sought-after goal, driven by its market dominance and robust security. However, Bitcoin's limited on-chain storage and functionality pose significant challenges. Among the various approaches to token issuance on Bitcoin, client-side validation (CSV) has emerged as a prominent solution. CSV delegates data storage and functionalities beyond Bitcoin’s native capabilities to off-chain clients, while leveraging the blockchain to validate tokens and prevent double-spending. Nevertheless, these protocols require participants to maintain token ownership and transactional data, rendering them vulnerable to data loss and malicious data withholding. In this paper, we propose UTxO binding, a novel framework that achieves both robust data availability and enhanced functionality compared to existing CSV designs. This approach securely binds a Bitcoin UTxO, which prevents double-spending, to a UTxO on an auxiliary blockchain, providing data storage and programmability. We formally prove its security and implement our design using Nervos CKB as the auxiliary blockchain.
An in-depth security evaluation of the Nintendo DSi gaming console
The Nintendo DSi is a handheld gaming console released by Nintendo in 2008. In Nintendo's line-up, the DSi served as a successor to the DS and was later succeeded by the 3DS. The security systems of both the DS and the 3DS have been fully analysed and defeated, but for over 14 years the security systems of the Nintendo DSi remained standing and were never fully analysed. To that end, this work builds on existing research and demonstrates the use of a second-order fault injection attack to extract the ROM bootloaders stored in the custom system-on-chip used by the DSi. We analyse the effect of the induced fault and compare it to theoretical fault models. Additionally, we present a security analysis of the extracted ROM bootloaders and develop a modchip using cheap off-the-shelf components. The modchip makes it possible to jailbreak the console and, more importantly, to resurrect consoles previously assumed to be irreparable.