Papers updated in last 183 days (Page 2 of 1758 results)

Last updated:  2025-05-23
Scalable Time-Lock Puzzles
Aydin Abadi, Dan Ristea, Artem Grigor, and Steven J. Murdoch
Time-Lock Puzzles (TLPs) enable a client to lock a message such that a server can unlock it only after a specified time. They have diverse applications, such as scheduled payments, secret sharing, and zero-knowledge proofs. In this work, we present a scalable TLP designed for real-world scenarios involving a large number of puzzles, where clients or servers may lack the computational resources to handle high workloads. Our contributions are both theoretical and practical. From a theoretical standpoint, we formally define the concept of a "Delegated Time-Lock Puzzle (D-TLP)", establish its fundamental properties, and introduce an upper bound for TLPs, addressing a previously overlooked aspect. From a practical standpoint, we introduce the "Efficient Delegated Time-Lock Puzzle" (ED-TLP) protocol, which implements the D-TLP concept. This protocol enables both the client and server to securely outsource their resource-intensive tasks to third-party helpers. It enables real-time verification of solutions and guarantees their delivery within predefined time limits by integrating an upper bound and a fair payment algorithm. ED-TLP allows combining puzzles from different clients, enabling a solver to process them sequentially, significantly reducing computational resources, especially for a large number of puzzles or clients. ED-TLP is the first protocol of its kind. We have implemented ED-TLP and conducted a comprehensive analysis of its performance for up to 10,000 puzzles. The results highlight its significant efficiency in TLP applications, demonstrating that ED-TLP securely delegates 99% of the client’s workload and 100% of the server’s workload with minimal overhead.
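The time-lock primitive underlying such protocols is classically built from repeated squaring in an RSA group (Rivest-Shamir-Wagner). A minimal illustrative sketch, with toy parameters and a hypothetical XOR-based encryption step (not the paper's ED-TLP construction):

```python
def create_puzzle(p, q, T, message, a=65537):
    """Client: lock `message` so that unlocking takes T sequential squarings.
    Knowing phi(n) lets the client shortcut the exponentiation."""
    n, phi = p * q, (p - 1) * (q - 1)
    e = pow(2, T, phi)             # reduce the exponent 2^T modulo phi(n)
    key = pow(a, e, n)             # b = a^(2^T) mod n, computed cheaply
    return n, a, T, message ^ key  # toy XOR "encryption" under the key

def solve_puzzle(n, a, T, ciphertext):
    """Server: T sequential squarings -- believed non-parallelizable."""
    b = a
    for _ in range(T):
        b = pow(b, 2, n)
    return ciphertext ^ b

# toy 40-bit modulus from two known primes; real TLPs use 2048-bit RSA moduli
n, a, T, c = create_puzzle(999983, 1000003, 1000, 42)
```

The asymmetry is the point: the client does one modular exponentiation with a reduced exponent, while the server must perform all T squarings in sequence.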
Last updated:  2025-05-23
XS-circuits in Block Ciphers
Sergey Agievich
XS-circuits describe block ciphers that use two operations: X) bitwise modulo 2 addition of binary words and S) substitution of words using key-dependent S-boxes with possibly complicated internal structure. We propose a model of XS-circuits which, despite its simplicity, covers a rather wide range of block ciphers. In our model, several instances of a simple round circuit, which contains only one S operation, are linked together and form a compound circuit called a cascade. S operations of a cascade are interpreted as independent round oracles. We deal with diffusion characteristics of cascades. These characteristics are related to the cryptographic strength of the corresponding block ciphers. We obtain results on invertibility, transitivity and 2-transitivity of mappings induced by round circuits and their cascades. We provide estimates on the first and second activation times, where the i-th activation time is the minimum number of rounds which guarantees that at least i round oracles get different queries while processing two different cascade inputs. The activation times are related to differential cryptanalysis. We introduce the similarity and duality relations between round circuits. Cascades of related circuits have the same or dual diffusion characteristics. We find canonical representatives of classes of similar circuits and show that the duality between circuits is related to the duality between differential and linear attacks against the corresponding block ciphers. We discuss families of circuits with a growing number of inputs. Such families can be used to build wide-block ciphers.
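As a concrete toy instance of this model, a Feistel step is a round circuit with one X and one S operation; chaining several copies yields a cascade, and the induced mapping is invertible even when the round oracle S is not. A minimal sketch with a hypothetical S-box:

```python
def S(x):
    """Stand-in round oracle on a byte; any function of a word works."""
    return (x * 167 + 13) % 256

def round_fwd(l, r):
    return r, l ^ S(r)       # the round circuit: one X, one S

def round_inv(l, r):
    return r ^ S(l), l       # invertible even though S need not be

def cascade(l, r, rounds=4):
    """Chain `rounds` copies of the round circuit into a cascade."""
    for _ in range(rounds):
        l, r = round_fwd(l, r)
    return l, r

def cascade_inv(l, r, rounds=4):
    for _ in range(rounds):
        l, r = round_inv(l, r)
    return l, r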
Last updated:  2025-05-23
LeOPaRd: Towards Practical Post-Quantum Oblivious PRFs via Interactive Lattice Problems
Muhammed F. Esgin, Ron Steinfeld, Erkan Tairi, and Jie Xu
In this work, we introduce a more efficient post-quantum oblivious PRF (OPRF) design, called LeOPaRd. Our proposal is round-optimal and supports verifiability and partial obliviousness, all of which are important for practical applications. The main technical novelty of our work is a new method for computing samples of MLWE (Module Learning With Errors) in a two-party setting. To do this, we introduce a new family of interactive lattice problems, called interactive MLWE with re-use (iMLWE-RU). We rigorously study the hardness of iMLWE-RU and reduce it (under Gaussian error sampling) to a more standard MLWE-like problem where the adversary is additionally given access to a randomized MLWE PRF oracle. We believe the iMLWE-RU problem family can be of independent interest for other interactive protocols. LeOPaRd exploits an iMLWE-RU assumption to realize a lattice-based OPRF design without relying on heavy machinery such as noise flooding and fully homomorphic encryption used in earlier works. LeOPaRd can feature around 136 KB total communication, compared to 300+ KB in earlier works. We also identify gaps in some existing constructions and models, and propose appropriate fixes.
Last updated:  2025-05-23
Attacking Poseidon via Graeffe-Based Root-Finding over NTT-Friendly Fields
Antonio Sanso and Giuseppe Vitto
This paper explores the algebraic structure of the Poseidon and Poseidon2 permutations over NTT-friendly finite fields, with a focus on preimage recovery via root-finding techniques. We introduce an algorithm for efficiently identifying single roots of high-degree univariate polynomials that emerge from these constructions, based on the Graeffe transform and the tangent Graeffe method. Our approach is evaluated on reduced-round bounty instances of these permutations at various security levels, as proposed by the Ethereum Foundation, demonstrating practical effectiveness. These results yield new insights into the security of permutation-based cryptographic primitives instantiated over NTT-friendly prime fields.
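At the core of the method, one Graeffe step maps a polynomial f to a polynomial g whose roots are the squares of the roots of f, computed as the even part of f(x)·f(−x); iterating this moves roots into small subgroups where they are easier to locate. A minimal sketch over a small NTT-friendly prime (toy parameters, not the paper's tangent-Graeffe implementation):

```python
def polmul(f, g, p):
    """Schoolbook product of two polynomials (coefficients low-to-high) mod p."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % p
    return h

def graeffe(f, p):
    """One Graeffe step: roots of the output are the squares of roots of f.
    (Stated for even-degree f; odd degrees need an extra sign fix.)"""
    fm = [c if i % 2 == 0 else (-c) % p for i, c in enumerate(f)]  # f(-x)
    h = polmul(f, fm, p)   # f(x) * f(-x) is an even polynomial
    return h[::2]          # keep even-power coefficients: g with g(x^2) = f(x)f(-x)

p = 7681                   # small NTT-friendly prime: 7681 = 15 * 2^9 + 1
f = [15, (-8) % p, 1]      # f(x) = (x - 3)(x - 5) = x^2 - 8x + 15 over F_p
g = graeffe(f, p)          # roots of g should be 3^2 = 9 and 5^2 = 25
```

Here f has roots 3 and 5, so one Graeffe step yields a polynomial vanishing at 9 and 25.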
Last updated:  2025-05-23
Justvengers: Batched VOLE ZK Disjunctions in Communication
Yibin Yang
Recent progress on zero-knowledge proofs (ZKPs) based on vector oblivious linear evaluation (VOLE) offers a promising paradigm for scaling ZKPs over extremely large statements. In particular, VOLE-based ZK is currently the best choice in terms of end-to-end execution time. However, VOLE-based ZK incurs high communication overhead — it usually scales linearly with the circuit size. To mitigate this, the existing literature considers VOLE-based ZK over structured statements. In this work, we focus on the batched disjunctive statement: the prover and verifier agree on a batch of circuits over a field, each of bounded size and number of inputs. The prover's goal is to demonstrate knowledge of a witness for each instance in the batch satisfying one of the circuits, where neither the witnesses nor the satisfied branches are revealed. Batched disjunctive statements are effective, e.g., in emulating CPU execution inside ZK. Note that the naïve solution results in a circuit whose size grows with both the batch size and the number of branches. To prove such a statement using VOLE-based ZK, the prior state-of-the-art protocol (Weng et al., CCS'22) reduced communication by additionally relying on AHE, whereas Yang et al. (CCS'23) achieved low communication using only VOLE. In this work, we combine these two protocols non-trivially and present a novel protocol — targeting the batched disjunctive statement — that incurs low communication and prover computation, using AHE and VOLE.
Last updated:  2025-05-23
Admissible Parameters for the Crossbred Algorithm and Semi-regular Sequences over Finite Fields
John Baena, Daniel Cabarcas, Sharwan K. Tiwari, Javier Verbel, and Luis Villota
Multivariate public key cryptography (MPKC) is one of the most promising alternatives to build quantum-resistant signature schemes, as evidenced in NIST's call for additional post-quantum signature schemes. The main assumption in MPKC is the hardness of the Multivariate Quadratic (MQ) problem, which seeks a common root of a system of quadratic polynomials over a finite field. Although the Crossbred algorithm is among the most efficient algorithms to solve MQ over small fields, its complexity analysis stands on shaky ground. In particular, it is not clear for what parameters it works and under what assumptions. In this work, we provide a rigorous analysis of the Crossbred algorithm over any finite field. We provide a complete explanation of the series of admissible parameters proposed in previous literature and explicitly state the regularity assumptions required for its validity. Moreover, we show that the series does not tell the whole story; hence, we propose an additional condition for Crossbred to work. Additionally, we define and characterize a notion of regularity for systems over a small field, which is one of the main building blocks in the series of admissible parameters.
Last updated:  2025-05-23
CL-SCA: A Contrastive Learning Approach for Profiled Side-Channel Analysis
Annv Liu, An Wang, Shaofei Sun, Congming Wei, Yaoling Ding, Yongjuan Wang, and Liehuang Zhu
Side-channel analysis (SCA) based on machine learning, particularly neural networks, has gained considerable attention in recent years. However, previous works predominantly focus on establishing connections between labels and related profiled traces. These approaches primarily capture label-related features and often overlook the connections between traces of the same label, resulting in the loss of some valuable information. Moreover, the attack traces also contain valuable information that can be used in the training process to assist model learning. In this paper, we propose a profiled SCA approach based on contrastive learning named CL-SCA to address these issues. This approach extracts features by emphasizing the similarities among traces, thereby improving the effectiveness of key recovery while maintaining the advantages of the original SCA approach. Through experiments on different datasets from different platforms, we demonstrate that CL-SCA significantly outperforms other approaches. Moreover, by incorporating attack traces into the training process using our approach, we can further enhance its performance. This extension can improve the effectiveness of key recovery, which is fully verified through experiments on different datasets.
Last updated:  2025-05-23
Side-channel safe conditional moves and swaps
David Santos and Michael Scott
Constant-time implementations are a cornerstone of secure cryptographic systems, particularly in the context of key exchange protocols and digital signature schemes. These implementations are designed to eliminate timing side-channel vulnerabilities by ensuring that the program’s execution time is independent of secret data. A fundamental building block for achieving constant-time behavior is the conditional move operation. Unlike traditional branching constructs (such as if statements), which may introduce data-dependent timing variations, conditional moves allow developers to write logic that behaves identically at the hardware level regardless of input values. As a result, they are widely used in cryptographic libraries and standards to ensure both functional correctness and resistance to timing attacks. In this work, we describe our efforts to implement elliptic curve cryptography with some immunity against certain power leakage side-channel attacks, using standard C and Rust code.
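The standard branch-free pattern materializes the condition as a full-width mask and selects with AND/OR (or swaps with XOR). The sketch below illustrates the arithmetic in Python; actual constant-time guarantees require a low-level language and inspection of the compiled code:

```python
MASK64 = (1 << 64) - 1   # emulate a fixed 64-bit word in Python's big ints

def ct_cmov(flag, a, b):
    """Return a if flag == 1 else b, with no branch on the secret flag."""
    mask = (-flag) & MASK64          # flag=1 -> all ones, flag=0 -> all zeros
    return (a & mask) | (b & ~mask & MASK64)

def ct_cswap(flag, a, b):
    """Conditionally swap (a, b) when flag == 1, branch-free via XOR."""
    mask = (-flag) & MASK64
    t = (a ^ b) & mask               # t = a^b when swapping, else 0
    return a ^ t, b ^ t
```

The same masking idiom is what cryptographic libraries express in C or assembly so that the executed instruction sequence is identical for both values of the secret flag.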
Last updated:  2025-05-22
Fast elliptic curve scalar multiplications in SN(T)ARK circuits
Liam Eagen, Youssef El Housni, Simon Masson, and Thomas Piellard
Proof systems of arbitrary computations have found many applications in recent years. However, the proving algorithm has a consequent complexity tied to the size of the computation being proved. Thus, proving large computations is quite inefficient. One of these large computations is the scalar multiplication over an elliptic curve. In this work, we provide new techniques for reducing the time corresponding to proving a scalar multiplication, using integer lattice reduction or a (half) extended Euclidean algorithm in a ring of integers. We investigate optimizations in the case of small (complex multiplication) discriminant curves, and its generalization for multi scalar multiplications as used in signature verification. We provide an optimized Golang implementation for different elliptic curves in different proof systems settings. The speed-up in proving time is between 22% and 53% compared to the previous state-of-the-art.
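The (half) extended Euclidean step referred to here can be illustrated by GLV-style scalar decomposition: running the extended Euclidean algorithm on (n, λ) and stopping near √n yields short vectors that split one scalar into two half-length ones. A sketch with hypothetical toy parameters (real instances take λ to be the eigenvalue of an efficient curve endomorphism):

```python
import math

def short_basis(n, lam):
    """Half extended Euclid on (n, lam): stop near sqrt(n), producing two
    short vectors (a, b) with a + b*lam ≡ 0 (mod n)."""
    r0, r1, t0, t1 = n, lam, 0, 1
    sq = math.isqrt(n)
    while r1 >= sq:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    return (r1, -t1), (r0, -t0)

def decompose(k, n, lam):
    """Split k = k1 + k2*lam (mod n) with k1, k2 of size about sqrt(n),
    turning one full-length scalar mul into two half-length ones."""
    (a1, b1), (a2, b2) = short_basis(n, lam)
    det = a1 * b2 - a2 * b1          # always ±n
    c1 = round(k * b2 / det)         # Babai-style rounding
    c2 = round(-k * b1 / det)
    k1 = k - c1 * a1 - c2 * a2
    k2 = -c1 * b1 - c2 * b2
    return k1, k2

n, lam, k = 1000003, 123456, 987654  # toy group order and eigenvalue
k1, k2 = decompose(k, n, lam)
```

The congruence k1 + k2·λ ≡ k (mod n) holds regardless of the rounding; the rounding only makes k1 and k2 short, which is what shrinks the in-circuit work.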
Last updated:  2025-05-22
A Formal Analysis of Apple’s iMessage PQ3 Protocol
Felix Linker, Ralf Sasse, and David Basin
We present the formal verification of Apple’s iMessage PQ3, a highly performant, device-to-device messaging protocol offering strong security guarantees even against an adversary with quantum computing capabilities. PQ3 leverages Apple’s identity services together with a custom, post-quantum secure initialization phase and afterwards it employs a double ratchet construction in the style of Signal, extended to provide post-quantum, post-compromise security. We present a detailed formal model of PQ3, a precise specification of its fine-grained security properties, and machine-checked security proofs using the TAMARIN prover. Particularly novel is the integration of post-quantum secure key encapsulation into the relevant protocol phases and the detailed security claims along with their complete formal analysis. Our analysis covers both key ratchets, including unbounded loops, which was believed by some to be out of scope of symbolic provers like TAMARIN (it is not!).
Last updated:  2025-05-22
Integral cryptanalysis in characteristic
Tim Beyne and Michiel Verbauwhede
Integral and ultrametric integral cryptanalysis are generalized to finite rings of prime characteristic that are isomorphic to a product of fields. This extends, for instance, the complete state of the art in integral cryptanalysis from to , for all prime powers . A compact representation of transition matrices, based on convex polyhedra, is introduced to ensure that the proposed methods are computationally efficient even for large . Automated tools are developed and applied to a few generic and several concrete primitives. The analysis shows that previous degree estimates for Feistel-GMiMC, HadesMiMC, AES-Prime, small-pSquare and mid-pSquare are overly optimistic. Furthermore, except for AES-Prime, these primitives do not meet their design criteria unless their number of rounds is substantially increased.
Last updated:  2025-05-22
CTng: Secure Certificate and Revocation Transparency
Jie Kong, Damon James, Hemi Leibowitz, Ewa Syta, and Amir Herzberg
We present CTng, an evolutionary and practical PKI design that efficiently addresses multiple key challenges faced by deployed PKI systems. CTng ensures strong security properties, including guaranteed transparency of certificates and guaranteed, unequivocal revocation, achieved under NTTP-security, i.e., without requiring trust in any single CA, logger, or relying party. These guarantees hold even in the presence of arbitrary corruptions of these entities, assuming only a known bound (f) of corrupt monitors (e.g., f=8), with minimal performance impact. CTng also enables offline certificate validation and preserves relying-party privacy, while providing scalable and efficient distribution of revocation updates. Furthermore, CTng is post-quantum ready, maintaining efficiency even with high-overhead quantum-secure signature schemes. These properties significantly improve upon current PKI designs. In particular, while Certificate Transparency (CT) aims to eliminate single points of trust, the existing specification still assumes benign loggers. Addressing this through log redundancy is possible, but rather inefficient, limiting deployed configurations to f ≤ 2. We present a security analysis and an evaluation of our open-source CTng prototype, showing that it is efficient and scalable under realistic deployment conditions.
Last updated:  2025-05-22
Practical Zero-Trust Threshold Signatures in Large-Scale Dynamic Asynchronous Networks
Offir Friedman, Avichai Marmor, Dolev Mutzari, Yehonatan Cohen Scaly, and Yuval Spiizer
Threshold signatures have become a critical tool in cryptocurrency systems, offering enhanced security by distributing the signing process among multiple signers. In this work, we distribute this process between a client and a permissionless decentralized blockchain, and present novel protocols for ECDSA and EdDSA/Schnorr signatures in this setting. Typical threshold access architectures used by trusted custodians suffer from the honeypot problem, wherein the more assets the custodian holds, the greater the incentive to compromise it. Implementing threshold signatures over permissionless blockchains poses a few challenges. First, existing networks typically work over an asynchronous reliable broadcast communication channel. Accordingly, our protocol is implemented over such a channel. As a result, it also benefits from identifiable abort, public verifiability, and guaranteed output delivery, and the client benefits from censorship resistance of blockchain systems. Second, upon signing each block, the participating quorum may dynamically change and is post-determined. Therefore, we design a fluid protocol that supports a post-determined dynamic quorum in each communication round, thereby complying with existing broadcast channel implementations. Third, in permissionless networks, parties may join, leave, and change their stake. Therefore, we offer protocols for network reconfiguration, with complexity independent of the number of clients in the system, and our protocol efficiently supports a weighted threshold access structure for the network. Specifically, the complexity of distributed key generation and presign only depends on the number of parties and not on the overall weight, and the amortized cost of sign only depends on the individual weight. Furthermore, our protocol introduces key improvements, including the removal of zero-knowledge proofs towards the client, and presigns with a non-interactive client.
For Schnorr, the presigns are client-independent, and can be collected by the blockchain in a common pool, available for all clients in the system. These optimizations reduce communication overhead and improve the system's ability to handle traffic spikes during high-demand periods. Our protocol is UC-secure, and is therefore natively designed for multiple clients to use the system in parallel. Notably, we propose a novel assumption, Slightly-Enhanced ECDSA Unforgeability, offering concrete security for 256-bit elliptic curves for threshold ECDSA with support for parallel execution of presigns. In addition to securing cryptocurrency wallets, we demonstrate how our protocol enables various cross-chain applications, such as decentralized bridges, future transactions, and wallet transfer. Our system is designed for interoperability across multiple blockchains, enhancing security, scalability, and flexibility for decentralized finance (DeFi) ecosystems.
Last updated:  2025-05-22
Lightning Fast Secure Comparison for 3PC PPML
Tianpei Lu, Bingsheng Zhang, Lichun Li, Yuzhou Zhao, and Kui Ren
Privacy-preserving machine learning (PPML) techniques have gained significant popularity in the past years. These protocols have been widely adopted in many real-world security-sensitive machine-learning scenarios. Secure comparison is one of the most important non-linear operations in PPML. In this work, we focus on maliciously secure comparison in the 3-party MPC over ring setting. In particular, we propose a novel constant-round sign-bit extraction protocol in the preprocessing model. The communication of its semi-honest version is only 12.5% of that of the state-of-the-art (SOTA) constant-round semi-honest comparison protocol by Zhou et al. (Bicoptor, S&P 2023); the communication complexity of its malicious version is approximately 25% of the SOTA by Patra and Suresh (BLAZE, NDSS 2020), for . Finally, the resulting ReLU protocol outperforms the SOTA secure ReLU evaluation solution (Bicoptor, S&P 2023) by in the semi-honest setting and in the malicious setting, respectively.
Last updated:  2025-05-22
On Gaussian Sampling for -ary Lattices and Linear Codes with Lee Weight
Maiara F. Bollauf, Maja Lie, and Cong Ling
We show that discrete Gaussian sampling for a -ary lattice is equivalent to codeword sampling for a linear code over with the Lee weight. This insight allows us to derive the theta series of a -ary lattice from the Lee weight distribution of the associated code. We design a novel Gaussian sampler for -ary lattices assuming an oracle that computes the symmetrized weight enumerator of the associated code. We apply this sampler to well-known lattices, such as the , Barnes-Wall, and Leech lattice, highlighting both its advantages and limitations, which depend on the underlying code properties. For certain root lattices, we show that the sampler is indeed efficient, forgoing the need to assume an oracle. We also discuss applications of our results in digital signature schemes and the Lattice Isomorphism Problem. In many cases, our sampler achieves a significant speed-up compared to state-of-the-art sampling algorithms in cryptographic applications.
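One-dimensional discrete Gaussian sampling over the integers is the basic building block that higher-dimensional lattice samplers reduce to coordinate-by-coordinate. A minimal rejection sampler, as a sketch (the tail cut τ and parameters are illustrative, not the paper's construction):

```python
import math, random

def sample_dgauss_z(sigma, tau=10, rng=random):
    """Rejection-sample the discrete Gaussian D_{Z,sigma}: output x in Z
    with probability proportional to exp(-x^2 / (2 sigma^2)), restricted
    to the tail-cut interval [-tau*sigma, tau*sigma]."""
    bound = int(math.ceil(tau * sigma))
    while True:
        x = rng.randint(-bound, bound)           # uniform proposal
        if rng.random() < math.exp(-x * x / (2 * sigma * sigma)):
            return x

random.seed(0)                                   # reproducible demo
samples = [sample_dgauss_z(3.0) for _ in range(2000)]
```

Rejection sampling is simple but wasteful for small σ; the speed-ups discussed in the paper come from exploiting code structure rather than improving this one-dimensional step.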
Last updated:  2025-05-22
Towards Building Scalable Constant-Round MPC from Minimal Assumptions via Round Collapsing
Vipul Goyal, Junru Li, Rafail Ostrovsky, and Yifan Song
In this work, we study the communication complexity of constant-round secure multiparty computation (MPC) against a fully malicious adversary and consider both the honest majority setting and the dishonest majority setting. In the (strong) honest majority setting (where for a constant ), the best-known result without relying on FHE is given by Beck et al. (CCS 2023) based on the LPN assumption that achieves communication, where is the security parameter and the achieved communication complexity is independent of the number of participants. In the dishonest majority setting, the best-known result is achieved by Goyal et al. (ASIACRYPT 2024), which requires bits of communication and is based on the DDH and LPN assumptions. In this work, we achieve the following results: (1) For any constant , we give the first constant-round MPC in the dishonest majority setting for corruption threshold with communication assuming random oracles and oblivious transfers, where is the circuit depth. (2) We give the first constant-round MPC in the standard honest majority setting (where ) with communication only assuming random oracles. Unlike most of the previous constructions of constant-round MPCs that are based on multiparty garbling, we achieve our result by letting each party garble his local computation in a non-constant-round MPC that meets certain requirements. We first design a constant-round MPC that achieves communication assuming random oracles in the strong honest majority setting of . Then, we combine the party virtualization technique and the idea of MPC-in-the-head to boost the corruption threshold to for any constant assuming oblivious transfers to achieve our first result. Finally, our second result is obtained by instantiating oblivious transfers using a general honest-majority MPC and the OT extension technique built on random oracles.
Last updated:  2025-05-22
Multivalued Broadcast with Optimal Length
Gabriel Dettling, Martin Hirt, and Chen-Da Liu-Zhang
A multi-valued broadcast protocol allows a sender to broadcast an -bit message to recipients. For all relevant models, multi-valued broadcast protocols with asymptotically optimal communication complexity have been published. Despite their very low communication complexity, these protocols perform poorly in modern networks. Even if the network allows all parties to send messages at the same time, the execution time of the protocols is proportional to (instead of ). Even if the network allows to use all bilateral channels at the same time, the execution time is still proportional to (instead of ). We ask the natural question whether multi-valued broadcast protocols exist which take time proportional to if parties can simultaneously send messages, and even take time proportional to if the bilateral channels can be used simultaneously. We provide a complete characterization of multi-valued broadcast with a two-fold answer: On the negative side, we prove that for , multi-valued broadcast requires time proportional to , if parties can simultaneously send messages, respectively take time proportional to if bilateral channels can be used simultaneously. On the positive side, we prove that for (for any fixed ), multi-valued broadcast in time proportional to (when parties can send messages simultaneously), respectively proportional to (if bilateral channels can be used simultaneously) is possible. We provide such protocols both with cryptographic security as well as with statistical security.
Last updated:  2025-05-22
SEEC: Memory Safety Meets Efficiency in Secure Two-Party Computation
Henri Dohmen, Robin Hundt, Nora Khayata, and Thomas Schneider
Secure Multi-Party Computation (MPC) allows multiple parties to perform privacy-preserving computation on their secret data. MPC protocols based on secret sharing have high throughput which makes them well-suited for batch processing, where multiple instances are evaluated in parallel. So far, practical implementations of secret sharing-based MPC protocols mainly focus on runtime and communication efficiency, so the memory overhead of protocol implementations is often overlooked. Established techniques to reduce the memory overhead for constant-round garbled circuit protocols cannot be directly applied to secret sharing-based protocols because they would increase the round complexity. Additionally, state-of-the-art implementations of secret sharing-based MPC protocols are implemented in C/C++ and may exhibit memory unsafety and memory leaks, which could lead to undefined behavior. In this paper, we present SEEC: SEEC Executes Enormous Circuits, a framework for secret sharing-based MPC with a novel approach to address memory efficiency and safety without compromising on runtime and communication efficiency. We realize SEEC in Rust, a language known for memory-safety at close-to-native speed. To reduce the memory footprint, we develop an in-memory representation for sub-circuits. Thus, we never inline sub-circuit calls during circuit evaluation, a common issue that blows up memory usage in MPC implementations. We compare SEEC with the state-of-the-art secret sharing-based MPC frameworks ABY (NDSS'15), MP-SPDZ (CCS'20), and MOTION (TOPS'22) w.r.t. runtime, memory, and communication efficiency. Our results show that our reliable and memory-safe implementation has competitive or even better performance.
Last updated:  2025-05-22
The DROP Protocol: Dispute Resolution via Observation in Public for Verifiable, In-Person Voting
Josh Benaloh, Michael Naehrig, and Olivier Pereira
Dispute resolution has been a significant challenge in verifiable election protocols since such protocols were first proposed more than forty years ago. This work explores the problem from a new perspective and offers strong dispute resolution for in-person voting by depending on observers. It proposes a simple definition of dispute resolution as a property of a voting protocol---a definition that is independent of any other security goal. It also presents the DROP protocol, a verifiable, in-person voting protocol that runs in the presence of observers who will always reach a correct conclusion in the case of a dispute without ever being able to compromise privacy or facilitate coercion.
Last updated:  2025-05-22
REACTIVE: Rethinking Effective Approaches Concerning Trustees in Verifiable Elections
Josh Benaloh, Michael Naehrig, and Olivier Pereira
For more than forty years, two principal questions have been asked when designing verifiable election systems: how will the integrity of the results be demonstrated and how will the privacy of votes be preserved? Many approaches have been taken towards answering the first question such as use of mixnets and homomorphic tallying. But, in the case of large-scale elections, the second question has always been answered in the same way: decryption capabilities are divided amongst multiple independent "trustees" so that a collusion is required to compromise privacy. In practice, however, this approach can be fairly challenging to deploy. Even if multiple human trustees are chosen, they typically use software and often also hardware provided by a single voting system supplier, with few options to verify its quality even when they have the technical expertise to do so. As a result, we observe that trustees are rarely in a position to exercise independent judgment to maintain privacy. This paper looks at several aspects of the trustee experience. It begins by surveying and discussing various cryptographic protocols that have been used for key generation in elections, explores their impact on the role of trustees, and notes that even the theory of proper use of trustees is more challenging than it might seem. This is illustrated by showing that one of the few references defining a full threshold distributed key generation (DKG) for elections defines an insecure protocol. Belenios, a broadly used open-source voting system, claims to rely on that reference for its DKG and security proof. Fortunately, it does not inherit the same vulnerability, and we offer a security proof for the Belenios DKG. The paper then discusses various practical contexts, in terms of humans, software, and hardware, and their impact on the practical deployment of a trustee-based privacy model.
Last updated:  2025-05-22
HAWK: Having Automorphisms Weakens Key
Daniël M. H. van Gent and Ludo N. Pulles
The search rank-2 module Lattice Isomorphism Problem (smLIP), over a cyclotomic ring of degree a power of two, can be reduced to an instance of the Lattice Isomorphism Problem (LIP) of at most half the rank if an adversary knows a nontrivial automorphism of the underlying integer lattice. Knowledge of such a nontrivial automorphism speeds up the key recovery attack on HAWK at least quadratically, which would halve the number of security bits. Luo et al. (ASIACRYPT 2024) recently found an automorphism that breaks omSVP, the initial underlying hardness assumption of HAWK. The team of HAWK amended the definition of omSVP to include this so-called symplectic automorphism in their submission to the second round of NIST's standardization of additional signatures. This work provides confidence in the soundness of this updated definition, assuming smLIP is hard, since there are plausibly no more trivial automorphisms that allow winning the omSVP game easily. Although this work does not affect the security of HAWK, it opens up a new attack avenue involving the automorphism group that may be theoretically interesting on its own.
Last updated:  2025-05-22
Enhancing Meme Token Market Transparency: A Multi-Dimensional Entity-Linked Address Analysis for Liquidity Risk Evaluation
Qiangqiang Liu, Qian Huang, Frank Fan, Haishan Wu, and Xueyan Tang
Meme tokens represent a distinctive asset class within the cryptocurrency ecosystem, characterized by high community engagement, significant market volatility, and heightened vulnerability to market manipulation. This paper introduces an innovative approach to assessing liquidity risk in meme token markets using entity-linked address identification techniques. We propose a multi-dimensional method integrating fund flow analysis, behavioral similarity, and anomalous transaction detection to identify related addresses. We develop a comprehensive set of liquidity risk indicators tailored for meme tokens, covering token distribution, trading activity, and liquidity metrics. Empirical analysis of tokens like BabyBonk, NMT, and BonkFork validates our approach, revealing significant disparities between apparent and actual liquidity in meme token markets. The findings of this study provide significant empirical evidence for market participants and regulatory authorities, laying a theoretical foundation for building a more transparent and robust meme token ecosystem.
Last updated:  2025-05-22
Fast Fuzzy PSI from Symmetric-Key Techniques
Cong Zhang, Yu Chen, Yang Cao, Yujie Bai, Shuaishuai Li, Juntong Lin, Anyu Wang, and Xiaoyun Wang
Private set intersection (PSI) enables a sender and a receiver, each holding a set, to securely compute the intersection of their sets. Fuzzy PSI (FPSI) is a PSI variant where the receiver learns the items for which there exists a sufficiently close item in the sender's set with respect to some distance metric. Recently, Gao et al. (ASIACRYPT 2024) proposed the first FPSI protocols for the infinity and Minkowski distances with linear complexity. They decomposed their FPSI construction into two steps: fuzzy mapping and fuzzy matching. However, their realizations of the two steps heavily rely on public-key operations, namely DH key exchange and additively homomorphic encryption, resulting in low efficiency. In this work, we propose new FPSI protocols for the infinity and Minkowski distances, primarily leveraging symmetric-key primitives. We revisit the definition of fuzzy mapping and rigorously redefine it as a cryptographic scheme. We further introduce a consistency property for fuzzy mapping schemes, which simplifies the fuzzy matching step into plain PSI. We then demonstrate how to execute the fuzzy mapping step while satisfying consistency. We also propose several new techniques that avoid the extensive use of computationally expensive public-key operations inherent in existing solutions. We implement our FPSI protocols and compare them with the state-of-the-art FPSI protocols. Experiments show that our protocols outperform the state of the art under all the parameters we tested, achieving speedups in running time and reductions in communication cost whose magnitudes depend on set sizes, dimension, and distance threshold.
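As a concrete reference point, the ideal functionality that FPSI protocols realize can be stated in a few lines. The sketch below uses the infinity distance and toy points; the names (`linf`, `fuzzy_intersection`) are illustrative and not from the paper, and it computes only the output, it is not a protocol.

```python
# Ideal fuzzy-PSI functionality under the L-infinity metric (illustrative
# names and toy points; this computes the *output*, it is not a protocol).

def linf(p, q):
    """L-infinity distance between two integer points."""
    return max(abs(a - b) for a, b in zip(p, q))

def fuzzy_intersection(receiver_pts, sender_pts, delta):
    """Receiver points lying within distance delta of some sender point."""
    return [w for w in receiver_pts
            if any(linf(w, q) <= delta for q in sender_pts)]

receiver = [(0, 0), (10, 10), (50, 50)]
sender = [(1, 2), (49, 52)]
print(fuzzy_intersection(receiver, sender, delta=3))   # [(0, 0), (50, 50)]
```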
Last updated:  2025-05-22
Polocolo: A ZK-Friendly Hash Function Based on S-boxes Using Power Residues (Full Version)
Jincheol Ha, Seongha Hwang, Jooyoung Lee, Seungmin Park, and Mincheol Son
Conventional hash functions are often inefficient in zero-knowledge proof settings, which has led to the design of several ZK-friendly hash functions. On the other hand, lookup arguments have recently been incorporated into zero-knowledge protocols, allowing more efficient handling of ``ZK-unfriendly'' operations, and hence enabling ZK-friendly hash functions based on lookup tables. In this paper, we propose a new ZK-friendly hash function, dubbed Polocolo, that employs an S-box constructed using power residues. Our approach reduces the number of gates required for table lookups, in particular when combined with Plonk, allowing one to use such nonlinear layers over multiple rounds. We also propose a new MDS matrix for the linear layer of Polocolo. In this way, Polocolo requires fewer Plonk gates than the state-of-the-art ZK-friendly hash functions. For example, for certain sizes of the underlying permutation, Polocolo requires fewer Plonk gates than Anemoi, which is currently the most efficient ZK-friendly hash function, and also fewer Plonk gates than Reinforced Concrete, one of the recent lookup-based ZK-friendly hash functions.
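To illustrate the kind of nonlinear layer involved, the sketch below builds a power-map S-box over a small prime field. The parameters (p = 257, d = 3) are toy values chosen for illustration and are not Polocolo's actual design choices.

```python
from math import gcd

# Toy power-map S-box over a small prime field. Parameters p and d are
# illustrative only; gcd(d, p-1) = 1 makes x -> x^d a bijection on F_p.
p, d = 257, 3
assert gcd(d, p - 1) == 1

sbox = [pow(x, d, p) for x in range(p)]
assert len(set(sbox)) == p                     # a permutation of F_p

# Inversion uses the exponent e with d*e = 1 (mod p-1).
e = pow(d, -1, p - 1)
assert all(pow(sbox[x], e, p) == x for x in range(p))
print(sbox[:5])   # [0, 1, 8, 27, 64]
```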
Last updated:  2025-05-22
Symphony of Speeds: Harmonizing Classic McEliece Cryptography with GPU Innovation
Wen Wu, Jiankuo Dong, Zhen Xu, Zhenjiang Dong, Dung Duong, Fu Xiao, and Jingqiang Lin
The Classic McEliece key encapsulation mechanism (KEM), a candidate in the fourth-round post-quantum cryptography (PQC) standardization process by the National Institute of Standards and Technology (NIST), stands out for its conservative design and robust security guarantees. Leveraging the code-based Niederreiter cryptosystem, Classic McEliece delivers high-performance encapsulation and decapsulation, making it well-suited for various applications. However, there has not been a systematic implementation of Classic McEliece on GPU platforms. This paper presents the first high-performance implementation of Classic McEliece on NVIDIA GPUs. Firstly, we present the first GPU-based implementation of Classic McEliece, utilizing a ``CPU-GPU'' heterogeneous approach and a kernel fusion strategy. We significantly reduce global memory accesses, optimizing memory access patterns. This results in encapsulation and decapsulation performance of 28,628,195 ops/s and 3,051,701 ops/s, respectively, for McEliece348864. Secondly, core operations like Additive Fast Fourier Transforms (AFFT) and Transpose AFFT (TAFFT) are optimized. We introduce the concept of the (T)AFFT stepping chain and propose two universal schemes: Memory Access Stepping Strategy (MASS) and Layer-Fused Memory Access Stepping Strategy (LFMASS), which achieve speedups of 30.56% and 38.37%, respectively, compared to the native GPU-based McEliece6960119 implementation. Thirdly, extensive experiments on the NVIDIA RTX4090 show significant performance gains, achieving up to 344× higher encapsulation and 125× higher decapsulation throughput compared to the official CPU-based AVX implementation, decisively outperforming existing ARM Cortex-M4 and FPGA implementations.
Last updated:  2025-05-22
SCMAC and LOL2.0: An AEAD Design Framework and A New Version of LOL Stream Cipher Design Framework
Dengguo Feng, Lin Jiao, Yonglin Hao, Qunxiong Zheng, Wenling Wu, Wenfeng Qi, Lei Zhang, Liting Zhang, Siwei Sun, and Tian Tian
In this paper, we introduce SCMAC, a general framework that transforms large-memory stream ciphers into AEAD schemes. It represents an intermediate design paradigm between Encrypt-then-MAC and dedicated single-pass AEAD, partially integrating encryption and authentication mechanisms while mitigating the risk of state leakage associated with immediate absorption and squeezing. Consequently, this approach harmonizes high performance with enhanced security. Additionally, we propose LOL2.0, an enhanced version of the blockwise stream cipher design framework LOL. This new framework improves security through modifications to the FSM update and output functions, and increases flexibility in constructing LFSR components. Based on SCMAC and LOL2.0, we present two AEAD ciphers, LOL2.0-Mini and LOL2.0-Double, which support both stream cipher and AEAD modes. These ciphers are tailored to Beyond 5G/6G environments, offering 256-bit key length and resistance to known cryptanalysis methods, including differential, linear, and integral attacks. They also provide 128-bit security against forgery attacks in the nonce-respecting setting. Due to their compatibility with AES-NI and SIMD instructions, LOL2.0-Mini and LOL2.0-Double achieve software performance of 90 Gbps and 144 Gbps in stream cipher mode, respectively. In AEAD mode, they perform at 59 Gbps and 110 Gbps, significantly faster than their predecessor's Encrypt-then-MAC versions.
Last updated:  2025-05-22
Card-Based Protocol Counting Connected Components of Graphs
Koji Nuida
Card-based cryptography is a research area for realizing cryptographic functionality, such as secure multiparty computation and zero-knowledge proofs, by using a deck of physical cards and/or other non-electrical tools. Motivated by zero-knowledge proofs for solutions in pencil puzzles, there is a direction of recent studies on card-based protocols to verify connectivity of a set of cells or edges on lattice-shaped boards. In this paper, we generalize the problem to counting connected components of subsets on any graph, and propose a card-based protocol for the problem.
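For reference, the plaintext task that the protocol computes securely, counting connected components, can be done in the clear with a small union-find. The sketch below is purely illustrative of that underlying combinatorial task, not of the card-based protocol itself.

```python
# Counting connected components in the clear with union-find: the same
# quantity the card-based protocol computes without revealing the inputs.

def count_components(n, edges):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return len({find(x) for x in range(n)})

print(count_components(6, [(0, 1), (1, 2), (3, 4)]))   # 3
```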
Last updated:  2025-05-22
SPECK: Signatures from Permutation Equivalence of Codes and Kernels
Marco Baldi, Michele Battagliola, Rahmi El Mechri, Paolo Santini, Riccardo Schiavoni, and Davide De Zuane
The ongoing search for secure post-quantum cryptographic primitives has led to the development of numerous novel digital signature schemes. In this paper we introduce SPECK, a new signature protocol based on the similarities between the Permuted Kernel Problem (PKP) and the Permutation Code Equivalence Problem (PCE). At its core, SPECK is built on the permutation version of LESS, but introduces a key modification to the commitment step. Indeed, instead of committing to an entire permuted code, the prover commits to a random relaxed PECK (Permutation Equivalence of Codes and Kernel) instance by randomly choosing a codeword from a random permutation of the initial code. In this sense, the secret key is used as a trapdoor to solve the committed instance. The new approach allows for faster verification that does not involve Gaussian elimination, while maintaining roughly the same signature size as LESS. We present the protocol in detail and provide a deep analysis of the security of the newly introduced assumptions.
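To make the Permuted Kernel Problem concrete, here is a toy instance solved by brute force: given a matrix A and a vector v, find a permutation of v's entries lying in A's kernel. The sizes and values are illustrative and far below cryptographic parameters.

```python
import itertools

# Toy Permuted Kernel Problem: find a permutation of v's entries that
# lies in the kernel of A over F_p. Sizes are illustrative only.
p = 11
A = [[1, 2, 3, 4], [5, 6, 7, 8]]
v = [9, 0, 1, 1]   # a scrambled kernel vector of A modulo p

def in_kernel(w):
    return all(sum(a * x for a, x in zip(row, w)) % p == 0 for row in A)

solutions = [w for w in set(itertools.permutations(v)) if in_kernel(w)]
print(solutions)
```

For real parameters the permutation space is far too large to enumerate, which is what makes PKP a plausible post-quantum hardness assumption.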
Last updated:  2025-05-22
: Efficient Polynomial Commitment Schemes from Lattices
Lizhen Zhang, Shang Gao, and Bin Xiao
This work introduces a Hypercube-Wise Optimized polynomial commitment scheme based on Lattices over ring Fields. The scheme achieves succinctness, with sublinear proof size and verifier time along with linear prover cost. It supports both univariate and multilinear polynomials under a unified framework. Inspired by the two-dimensional tensor structure employed in Brakedown (Golovnev et al.) to achieve sublinear efficiency, we generalize the idea to a higher-dimensional tensor (hypercube) structure and design a recursive proof protocol in which each round performs a dimensionality reduction. By choosing the number of dimensions appropriately, our scheme achieves near-optimal asymptotic performance. The scheme is fully transparent and relies only on the standard lattice assumption over rings. In terms of concrete efficiency, for large polynomials our scheme yields proof sizes smaller than the lattice-based scheme of Cini et al. (Crypto 2024), and smaller still than SLAP (Eurocrypt 2024). Compared to Greyhound (Crypto 2024), our proof size is of similar magnitude, while achieving significantly lower verifier time.
Last updated:  2025-05-22
SQIsign2D: New SQIsign2D Variant by Leveraging Power Smooth Isogenies in Dimension One
Zheng Xu, Kaizhan Lin, Chang-An Zhao, and Yi Ouyang
In this paper, we propose SQIsign2D, a novel digital signature scheme within the SQIsign2D family. Unlike other SQIsign2D variants, SQIsign2D employs the prime as the field characteristic, where , and . By leveraging accessible -isogenies, SQIsign2D significantly reduces the degree requirements for two-dimensional isogeny computations, thereby lowering the overall computational overhead compared to other SQIsign2D variants. We also provide a proof-of-concept implementation of SQIsign2D, and give an efficiency comparison between SQIsign2D and other SQIsign2D variants. In particular, the experimental results demonstrate that the key generation and signing phases of SQIsign2D are each more than twice as fast as those of SQIsign2D-East at the NIST-I security level. Additionally, the verification performance in SQIsign2D exhibits marginally improved efficiency.
Last updated:  2025-05-22
Rep3 Reloaded: On the Cost of Function-Dependent Preprocessing in Semi-Honest 3PC with Honest Majority
Marcel Keller
Rep3 denotes the implementation of semi-honest three-party computation with an honest majority in MP-SPDZ (CCS'20). It uses replicated secret sharing with one message per multiplication and party as proposed by Araki et al. (CCS'16). This approach is rivaled by Astra (CCSW'19) and Trio (PETS'25), which use function-dependent preprocessing. Such preprocessing is more involved than, e.g., Beaver triples, which can be used as a commodity. In this work, we present a full implementation of Astra and Trio in MP-SPDZ, and we evaluate the costs of the different approaches. We show the equivalence of the schemes, which implies that a protocol in any of the schemes can be translated to one in another with the same overall communication cost. We also present an improvement to two important building blocks for privacy-preserving computation, namely secure comparison and probabilistic truncation used in fixed-point arithmetic. To evaluate our implementation, we have benchmarked machine learning training and inference in all three schemes, improving on Keller and Sun (ICML'22) by over 30%. Our implementation also highlights the large storage requirements of function-dependent preprocessing as it runs the two phases separately. To the best of our knowledge, this is the first implementation to do so.
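For readers unfamiliar with the sharing scheme, a minimal sketch of replicated secret sharing over Z_{2^k} (the sharing used by Rep3) follows. It shows only the sharing and reconstruction semantics, not the multiplication or preprocessing protocols; function names are illustrative.

```python
import random

# Replicated secret sharing over Z_{2^k} as used by Rep3 (Araki et al.):
# x = x0 + x1 + x2, and party i holds the pair (x_i, x_{i+1 mod 3}).
K = 32
MOD = 1 << K

def share(x):
    x0, x1 = random.randrange(MOD), random.randrange(MOD)
    x2 = (x - x0 - x1) % MOD
    return [(x0, x1), (x1, x2), (x2, x0)]   # views of parties 0, 1, 2

def reconstruct(i, view_i, j, view_j):
    """Any two parties jointly hold all three additive shares."""
    vals = {}
    for idx, (a, b) in ((i, view_i), (j, view_j)):
        vals[idx % 3] = a
        vals[(idx + 1) % 3] = b
    return sum(vals.values()) % MOD

views = share(123456789)
print(reconstruct(0, views[0], 2, views[2]))   # 123456789
```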
Last updated:  2025-05-22
The Accidental Computer: Polynomial Commitments from Data Availability
Alex Evans and Guillermo Angeris
In this paper, we show two simple variations of a data availability scheme which enable it to act as a multilinear polynomial commitment scheme over the data in a block. The first variation enables commitments over all of the block's data with zero prover overhead: the data availability construction simply serves both purposes. The second variation allows commitments over subsets of data with nonzero but still concretely small proving costs, since most work is already done during data encoding. This works especially well for blockchains with a high degree of data parallelism, as data-parallel computation is particularly amenable to efficient GKR proofs. Since, in GKR, opening the polynomial commitment contributes significantly to prover costs, our construction enables the prover to reuse work already done by the data availability scheme, reducing—or wholly removing—work associated with the polynomial commitment scheme.
Last updated:  2025-05-21
Jagged Polynomial Commitments (or: How to Stack Multilinears)
Tamir Hemo, Kevin Jue, Eugene Rabinovich, Gyumin Roh, and Ron D. Rothblum
Modern SNARK constructions, almost ubiquitously, rely on a polynomial commitment scheme (PCS) --- a method by which a prover can commit to a large polynomial and later provide evaluation proofs of the form "P(x)=y" to the verifier. In the context of zkVMs (i.e., proof-systems for general-purpose RAM computations), the common design is to represent the computation trace as a sequence of tables, one per CPU instruction, and commit to these tables, or even their individual columns, as separate polynomials. Committing separately to these polynomials has a large overhead in verification costs, especially in hash-based systems. In this work we drastically reduce this cost via a new construction which we call the jagged PCS. This PCS enables the prover to commit to the entire computation trace as a single polynomial, but then allows for the verifier to emulate access to the individual table or column polynomials, so that the arithmetization can proceed in the usual manner. The jagged PCS may be thought of as a sparse PCS for a very particular form of sparsity - namely, a "jagged" matrix in which each column has a different height. Our construction of the jagged PCS is highly performant in practice. In contrast to existing sparse PCS constructions for general sparse polynomials, the jagged PCS does not require the prover to commit to any additional oracles and the prover cost is dominated by 5 finite field multiplications per element in the trace area. Furthermore, we implement the verifier as an arithmetic circuit that depends only on the total trace area - thereby significantly reducing the problem of "combinatorial explosion" often encountered in zkVM recursion.
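The "jagged" layout itself is easy to picture: columns of different heights are stacked into one flat vector, and a (column, row) pair is mapped to a flat position via prefix sums of the heights. The sketch below illustrates only this bookkeeping, not the commitment scheme; all names are illustrative.

```python
from itertools import accumulate

# "Jagged" stacking: columns of different heights concatenated into one
# flat vector, with (column, row) mapped to a flat position via prefix
# sums of the heights. Bookkeeping only; no commitment scheme here.

def jagged_offsets(heights):
    return [0] + list(accumulate(heights))

def flat_index(offsets, col, row):
    assert 0 <= row < offsets[col + 1] - offsets[col]
    return offsets[col] + row

columns = [[1, 2, 3], [4], [5, 6]]               # heights 3, 1, 2
stacked = [v for col in columns for v in col]    # [1, 2, 3, 4, 5, 6]
off = jagged_offsets([len(c) for c in columns])
assert stacked[flat_index(off, 2, 1)] == 6       # column 2, row 1
```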
Last updated:  2025-05-21
Automated Verification of Consistency in Zero-Knowledge Proof Circuits
Jon Stephens, Shankara Pailoor, and Isil Dillig
Circuit languages like Circom and Gnark have become essential tools for programmable zero-knowledge cryptography, allowing developers to build privacy-preserving applications. These domain-specific languages (DSLs) encode both the computation to be verified (as a witness generator) and the corresponding arithmetic circuits, from which the prover and verifier can be automatically generated. However, for these programs to be correct, the witness generator and the arithmetic circuit need to be mutually consistent in a certain technical sense, and inconsistencies can result in security vulnerabilities. This paper formalizes the consistency requirement for circuit DSLs and proposes the first automated technique for verifying it. We evaluate the method on hundreds of real-world circuits, demonstrating its utility for both automated verification and uncovering errors that existing tools are unable to detect.
Last updated:  2025-05-21
LightCROSS: A Secure and Memory Optimized Post-Quantum Digital Signature CROSS
Harry Hart, Puja Mondal, Suparna Kundu, Supriya Adhikary, Angshuman Karmakar, and Chaoyun Li
CROSS is a code-based post-quantum digital signature scheme based on a zero-knowledge (ZK) framework. It is a second-round candidate of the National Institute of Standards and Technology’s additional call for standardizing post-quantum digital signatures. The memory footprint of this scheme is prohibitively large, especially for small embedded devices. In this work, we propose various techniques to reduce the memory footprint of the key generation, signature generation, and verification by as much as 50%, 52%, and 74%, respectively, on an ARM Cortex-M4 device. Moreover, our memory-optimized implementations adapt the countermeasure against the recently proposed (ASIACRYPT 2024) fault attacks on ZK-based signature schemes.
Last updated:  2025-05-21
Rock and a Hard Place: Attack Hardness in Neural Network-assisted Side Channel Analysis
Seyedmohammad Nouraniboosjin and Fatemeh Ganji
Side-channel analysis (SCA) has become a crucial area in ensuring the security of hardware implementations against potential vulnerabilities. With advancements in machine learning (ML) and artificial intelligence (AI), neural network (NN)-assisted techniques for SCA have demonstrated significant effectiveness. However, a fundamental question remains unanswered: are traces corresponding to different subkeys equally hard to attack? This paper addresses this issue by leveraging explainable AI techniques to analyze the hardness levels and, consequently, the root cause of hardness. To systematically investigate this problem, we reintroduce hardness metrics in SCA, which have been known to the ML community. These metrics include query hardness (QH), log odds (LO), and matrix-based Rényi entropy (MRE). The challenge in this regard is that (virtually all) hardness metrics in ML cannot be adopted as they are. This is because ML and SCA metrics have conflicting goals, namely boosting accuracy and rank. Through careful experimentation, we identify the shortcomings of QH and LO in SCA and recommend MRE as a suitable hardness metric for SCA. We also study how hardness has been seen in SCA, where recent work has suggested the influence of class “labels” on the attack difficulty. Employing rigorous evaluation, our paper demonstrates that no statistically significant evidence can be found to support this claim. This leads us to the question of how much traces’ time samples affect the inherent hardness in distinguishing key candidates. Our novel explainable AI (XAI) approach not only answers this, but also makes a link between hardness and rank as the common performance metric in SCA. Our findings indicate that hardness values are influenced mainly by time samples rather than specific key labels. Furthermore, we examine whether hardness captures intrinsic properties of the traces, specifically, their lack of separability in feature space due to their inherent similarities.
To this end, we introduce, for the first time in the context of SCA, the use of maximum mean discrepancy (MMD) as a principled metric. MMD effectively links trace hardness with attack difficulty by quantifying distributional differences induced by traces’ time samples. In addition to visualization techniques showing the root cause of hardness based on MMD, we employ XAI to explain the connection between MMD and attack hardness. Our proposed methodology enhances the understanding of attack difficulty in SCA and contributes to developing more robust evaluation metrics for profiling attacks.
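As a point of reference, a basic (biased, V-statistic) MMD² estimate with an RBF kernel can be computed in a few lines. The code below is a generic illustration of the metric on toy "traces", not the paper's exact estimator or kernel choice.

```python
import math

# Basic (biased, V-statistic) MMD^2 estimate with an RBF kernel between
# two sets of "traces": larger values mean more separable distributions.

def rbf(x, y, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mmd2(X, Y, gamma=0.5):
    kxx = sum(rbf(a, b, gamma) for a in X for b in X) / len(X) ** 2
    kyy = sum(rbf(a, b, gamma) for a in Y for b in Y) / len(Y) ** 2
    kxy = sum(rbf(a, b, gamma) for a in X for b in Y) / (len(X) * len(Y))
    return kxx + kyy - 2 * kxy

near = [[0.0, 0.1], [0.1, 0.0]]
far = [[5.0, 5.0], [5.1, 4.9]]
print(mmd2(near, far) > mmd2(near, near))   # True: far sets separate easily
```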
Last updated:  2025-05-21
Structure-Preserving Compressing Primitives: Vector Commitments and Accumulators and Applications
Stephan Krenn, Omid Mir, and Daniel Slamanig
Compressing primitives, such as accumulators and vector commitments, allow representing large data sets with a compact, ideally constant-size, value. Moreover, they support operations like proving membership or non-membership with minimal, ideally also constant-size, storage and communication overhead. In recent years, these primitives have found numerous practical applications, with many constructions based on various hardness assumptions. So far, however, it has been elusive to construct these primitives in a strictly structure-preserving setting, i.e., in a bilinear group in a way that messages, commitments and openings are all elements of the two source groups. Interestingly, backed by existing impossibility results, not even conventional commitments with such constraints are known in this setting. However, in many practical applications it would be convenient or even required to be structure-preserving, e.g., to commit to or accumulate group elements. In this paper we investigate whether strictly structure-preserving compressing primitives can be realized. We close this gap by presenting the first strictly structure-preserving commitment that is shrinking (and in particular constant-size). We circumvent existing impossibility results by employing a more structured message space, i.e., a variant of the Diffie-Hellman message space. Our main results are constructions of structure-preserving vector commitments as well as structure-preserving accumulators. We first discuss generic constructions and then present concrete constructions under the Diffie-Hellman Exponent assumption. To demonstrate the usefulness of our constructions, we discuss various applications. Most notably, we present the first entirely practical constant-size ring signature scheme in bilinear groups (i.e., the discrete logarithm setting), which can be concretely instantiated over the popular BLS12-381 pairing-friendly curve.
Last updated:  2025-05-21
Improved differential cryptanalysis of SPEEDY
Tim Beyne and Addie Neyt
SPEEDY is a family of lightweight block ciphers designed by Leander et al. Several differential attacks have been reported on the SPEEDY variants. However, nearly all of these attacks are based on differential characteristics with probabilities that differ from their reported values. These discrepancies arise from incorrect calculations of the (key-averaged) probability, particularly in consecutive steps within one round without intermediate key addition. In this paper, we revisit all reported differential characteristics and accurately calculate their key-averaged probabilities using quasidifferential trails. We extend this to also estimate the fixed-key probability. Our analysis reveals several characteristics with zero or significantly altered probability, invalidating several proposed attacks. We further implement a search algorithm and find a 5.5-round differential distinguisher that can be used to mount a full-round key recovery attack with a data complexity of and a time complexity of . The memory complexity varies: in the chosen-plaintext setting, it is , whereas in the chosen-ciphertext setting, it is .
Last updated:  2025-05-21
Tweakable Permutation-based Luby-Rackoff Constructions
Bishwajit Chakraborty and Abishanka Saha
Liskov, Rivest, and Wagner, in their seminal work, formulated tweakable blockciphers and proposed two blockcipher-based design paradigms, LRW1 and LRW2, where the basic design strategy is to xor a masked tweak into the input and output of a blockcipher. The 2-round cascaded LRW2 and 4-round cascaded LRW1 have been proven secure up to a bounded number of queries, but optimal security still remains elusive for these designs. Liskov et al. also posed the open challenge of embedding the tweak directly in the blockcipher; to address this, Goldenberg et al. introduced the tweakable Luby-Rackoff (LR) constructions. They showed that if the internal primitives are random functions, then for multi-block tweaks the construction needs additional rounds to be optimally CPA secure, and more still to be optimally CCA secure, with the number of extra rounds depending on the tweak length. Since blockciphers can be designed much more efficiently than pseudorandom functions, in many practical applications the internal primitives of LR ciphers are instantiated with blockciphers; this, however, would lead to a birthday-bound factor, which is not ideal for, say, lightweight cryptography. This paper addresses the following two key questions affirmatively: (1) Can Goldenberg et al.'s results be extended to LR constructions with random permutations as internal primitives without compromising optimal security? (2) Can the number of rounds required for handling long tweaks be reduced? We formally define TLR-compatible functions for processing the tweak which, when composed with 4 rounds and 5 rounds of the LR construction with random permutations as internal primitives, give us CPA- and CCA-secure tweakable permutations, respectively. For the security analysis, we prove a general Mirror Theory result for three permutations.
We also propose instantiating TLR-compatible functions with a one-round LR, where a permutation (resp. two AXU hash functions) is used to mask single-block tweaks (resp. variable-length tweaks), thus obtaining CPA-secure (resp. CCA-secure) tweakable permutation candidates using few LR rounds, which is a significant reduction from the tweak-length-dependent results of Goldenberg et al. As a corollary, we also show CPA (resp. CCA) security of the corresponding rounds of the permutation-based LR construction, which is quite an improvement over the existing security bounds proved by Guo et al.
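To fix ideas, the sketch below shows a plain 4-round Luby-Rackoff (Feistel) network in which the tweak is simply absorbed by each round function. This is a generic illustration of tweak-dependent Feistel rounds, not the TLR construction from the paper (which masks tweaks via permutations or AXU hashes); all names and parameters are illustrative.

```python
import hashlib

# Generic 4-round Feistel (Luby-Rackoff) network whose round functions
# are keyed hashes that also absorb the tweak. Illustrative only.

def F(key, rnd, tweak, half):
    h = hashlib.sha256(b"|".join([key, bytes([rnd]), tweak, half]))
    return h.digest()[:8]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(key, tweak, block, rounds=4):
    L, R = block[:8], block[8:]
    for r in range(rounds):
        L, R = R, xor(L, F(key, r, tweak, R))
    return L + R

def decrypt(key, tweak, block, rounds=4):
    L, R = block[:8], block[8:]
    for r in reversed(range(rounds)):
        L, R = xor(R, F(key, r, tweak, L)), L
    return L + R

pt = b"0123456789abcdef"
assert decrypt(b"k", b"t", encrypt(b"k", b"t", pt)) == pt
assert encrypt(b"k", b"t2", pt) != encrypt(b"k", b"t", pt)   # tweak matters
```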
Last updated:  2025-05-21
Side-Channel Analysis of Integrate-and-Fire Neurons within Spiking Neural Networks
Matthias Probst, Manuel Brosch, and Georg Sigl
Spiking neural networks are gaining attention due to their low-power properties and event-based operation, making them suitable for use in resource-constrained embedded devices. Such edge devices allow physical access, opening the door to side-channel analysis. In this work, we reverse engineer the parameters of a feed-forward spiking neural network implementation with correlation power analysis. Localized measurements of electromagnetic emanations enable our attack, despite the inherent parallelism and resulting algorithmic noise of the network. We provide a methodology to extract valuable parameters of integrate-and-fire neurons in all layers, as well as the layer sizes.
Last updated:  2025-05-21
Hidden Number Problems in Fiat-Shamir based Post-Quantum Signatures
Yi-Fu Lai, Jonas Meers, and Julian Nowakowski
Schnorr and (EC)DSA signatures famously become completely insecure once a few bits of the random nonce are revealed to an attacker. In this work, we investigate whether post-quantum signature schemes allow for similar attacks. Specifically, we consider three Fiat-Shamir based signature schemes: Dilithium (lattices), LESS (codes) and CSI-FiSh (isogenies). Analogous to the attacks on Schnorr and (EC)DSA, we assume knowledge of some bits of the commitment randomness used in the underlying ID protocol. Such attacks belong to the broader class of Hidden Number Problems. In the case of LESS, we give an efficient attack that requires knowledge of a single bit of the randomness in roughly signatures to completely recover the secret key. Our attack is based on the observation that knowledge of a single bit in the randomness is enough to distinguish secret key entries from random ones. Since for proposed parameters there are at most candidates for each entry, we can enumerate all candidates and use the distinguisher to recover the secret key one entry at a time. For Dilithium we are able to recover the whole secret key using at most 1792 signatures when either 3 or 4 least significant bits of the randomness are known (depending on the parameter set). Here the attack is based on the simple observation that knowledge of the least significant bits of the randomness immediately yields a linear relation in the secret key. For CSI-FiSh, on the other hand, we obtain only an inefficient attack that requires exponentially many signatures. However, we prove that this attack is essentially optimal. Thus, our results show that CSI-FiSh comes equipped with an inherent leakage resilience. The argument easily extends to a wider class of signature schemes, but notably does not apply to the predecessor SeaSign.
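The flavor of such attacks is easy to see on a toy Schnorr-style relation s = r + c·x (mod q): if the per-signature randomness r leaks in full, a single signature reveals the key, and partial leakage correspondingly shrinks the search space. All values below are illustrative toy parameters, far smaller than in any real scheme.

```python
# Toy Schnorr-style relation s = r + c*x (mod q): full leakage of the
# per-signature randomness r exposes the secret x from one signature.

q = 2**31 - 1          # a prime modulus (toy size)
x = 123456             # secret key
r = 987654321          # per-signature randomness ("nonce"), r < q
c = 424242             # public challenge
s = (r + c * x) % q    # response revealed in the signature

x_recovered = ((s - r) * pow(c, -1, q)) % q
assert x_recovered == x
print(x_recovered)     # 123456
```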
Last updated:  2025-05-21
GAPP: Generic Aggregation of Polynomial Protocols
Chaya Ganesh, Sikhar Patranabis, Shubh Prakash, and Nitin Singh
We construct a new bivariate polynomial commitment scheme, bPCLB, with succinct verification, O(m+n) sized public parameters, and O(m + n) cryptographic operations to generate evaluation proofs for bivariate polynomials of degree (n, m). bPCLB commits to polynomials using their Lagrange coefficients. This is in contrast to existing bivariate schemes, which either incur O(mn)-sized public parameters and O(mn) cost for evaluation proofs, or do not natively support committing to polynomials in the Lagrange basis. We present the idea of a packing gadget that allows packing a non-constant number of univariate polynomials into one bivariate polynomial. We implement the packing gadget with bPCLB to achieve the following results. We propose a generic framework called GAPP for aggregation of polynomial protocols. This allows proving n instances of a polynomial protocol using a single aggregate proof that has O(log n) size, and can be verified using O(log n) operations. We construct an information-theoretic protocol for proving the satisfiability of the bivariate polynomial identity, which is then compiled using any bivariate polynomial commitment scheme (PCS) to yield an argument of knowledge for the aggregation relation. GAPP can be applied to several popular SNARKs over bilinear groups that are modeled as polynomial protocols in a black-box way. Using bPCLB yields an efficient instantiation of GAPP for aggregating PLONK proofs where the prover only performs sublinear cryptographic operations in the evaluation proof. We illustrate the packing property of bPCLB in two applications. We first construct a new lookup argument which supports tables of m-tuples. Second, we show a generalized grand-product protocol over a non-constant (say m) number of vectors. While existing approaches for both these applications incur O(m) proof size and verification complexity, our constructions achieve O(log m) dependence. 
We leverage these applications together with our GAPP framework to design an à la carte proof system for non-uniform computations.
Last updated:  2025-05-21
High-Throughput Secure Multiparty Computation with an Honest Majority in Various Network Settings
Christopher Harth-Kitzerow, Ajith Suresh, Yongqin Wang, Hossein Yalame, Georg Carle, and Murali Annavaram
In this work, we present novel protocols over rings for semi-honest secure three-party computation (3PC) and malicious four-party computation (4PC) with one corruption. While most existing works focus on improving total communication complexity, challenges such as network heterogeneity and computational complexity, which impact MPC performance in practice, remain underexplored. Our protocols address these issues by tolerating multiple arbitrarily weak network links between parties without any substantial decrease in performance. Additionally, they significantly reduce computational complexity by requiring up to half the number of basic instructions per gate compared to related work. These improvements lead to up to twice the throughput of state-of-the-art protocols in homogeneous network settings and up to eight times higher throughput in real-world heterogeneous settings. These advantages come at no additional cost: Our protocols maintain the best-known total communication complexity per multiplication, requiring 3 elements for 3PC and 5 elements for 4PC. We implemented our protocols alongside several state-of-the-art protocols (Replicated 3PC, ASTRA, Fantastic Four, Tetrad) in a novel open-source C++ framework optimized for high throughput. Five out of six implemented 3PC and 4PC protocols achieve more than one billion 32-bit multiplications or over 32 billion AND gates per second using our implementation in a 25 Gbit/s LAN environment. This represents the highest throughput achieved in 3PC and 4PC so far, outperforming existing frameworks like MP-SPDZ, ABY3, MPyC, and MOTION by two to three orders of magnitude.
Last updated:  2025-05-21
Enforcing arbitrary constraints on Bitcoin transactions
Federico Barbacovi and Enrique Larraia
The challenge of enforcing constraints on Bitcoin transactions has recently gained a lot of attention. The current approach to solve this problem falls short in certain aspects, such as privacy and programmability. We design a new solution that leverages zkSNARKs and allows enforcing arbitrary constraints on Bitcoin transactions while keeping some information private. Our approach also bypasses the non-Turing completeness of Bitcoin Script, allowing the enforcement of unbounded constraints, namely constraints that repeat a certain operation an unbounded number of times.
Last updated:  2025-05-21
Fuzzy Private Set Intersection from VOLE
Aron van Baarsen and Sihang Pu
Private set intersection (PSI) is a well-researched cryptographic primitive that allows two parties to compute the intersection of their input sets without revealing any information about items outside of the intersection. Fuzzy private set intersection is a relatively new variant of PSI, where items are not matched exactly but "fuzzily". Most commonly, items are points in d-dimensional integer space and a point is a fuzzy match to another if it lies within a ball of a given radius centered at that point, with respect to some distance metric. Previous works either only support the infinity distance metric and standard PSI functionality, or support general Minkowski distance metrics and realize richer functionalities but rely on expensive homomorphic encryptions. Our work aims to bridge this gap by giving the first construction of a fuzzy PSI protocol for general Minkowski distance metrics relying on significantly cheaper operations during the online phase. Our main building block is a novel fuzzy matching protocol based on an oblivious pseudorandom function (OPRF), which can be realized very efficiently from vector oblivious linear evaluation (VOLE). Our protocol is able to preserve the asymptotic complexity as well as the simplicity of the fuzzy matching protocol from van Baarsen and Pu (Eurocrypt '24), while being much more concretely efficient. Additionally, we achieve several asymptotic improvements by representing intervals succinctly. Finally, we present the first fuzzy PSI protocol for the infinity distance that places no assumptions on the sets of points, while maintaining asymptotic complexities comparable to the state-of-the-art fuzzy PSI protocol.
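As a plaintext reference for the functionality being computed obliviously (the names and parameters below are the generic textbook ones, not the paper's notation), the fuzzy-match predicate is simply a distance comparison:

```python
def minkowski_dist(p, q, t):
    """Minkowski (l_t) distance between integer points p and q;
    t = inf gives the infinity (Chebyshev) metric."""
    if t == float("inf"):
        return max(abs(a - b) for a, b in zip(p, q))
    return sum(abs(a - b) ** t for a, b in zip(p, q)) ** (1 / t)

def fuzzy_match(p, q, delta, t=float("inf")):
    """p fuzzily matches q if p lies in the radius-delta ball around q."""
    return minkowski_dist(p, q, t) <= delta

assert fuzzy_match((10, 10), (12, 9), delta=2)           # infinity metric
assert not fuzzy_match((10, 10), (12, 9), delta=2, t=2)  # l_2: sqrt(5) > 2
```

The cryptographic difficulty, of course, is evaluating this predicate without either party revealing its points.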
Last updated:  2025-05-21
Efficient Modular Multiplication Hardware for Number Theoretic Transform on FPGA
Tolun Tosun, Selim Kırbıyık, Emre Koçer, and Ersin Alaybeyoğlu
In this paper, we present a comprehensive analysis of various modular multiplication methods for the Number Theoretic Transform (NTT) on FPGA. NTT is a critical and time-intensive component of Fully Homomorphic Encryption (FHE) applications, while modular multiplication consumes a significant portion of the design resources in an NTT implementation. We study the existing modular reduction approaches from the literature and implement particular methods on FPGA, specifically Word-Level Montgomery (WLM) for NTT-friendly primes [19] and K2RED [3]. For improvements, we explore the trade-offs between the number of available primes in special forms and the hardware cost of the reduction methods. We develop a DSP multiplication-optimized version of WLM, which we call WLM-Mixed. We also introduce a subclass of Proth primes, referred to as Proth-𝑙 primes, characterized by a low and fixed signed Hamming weight. This special class of primes allows us to design multiplication-free shift-add versions of K2RED and naive Montgomery reduction [20], referred to as K2RED-Shift and Montgomery-Shift. We provide in-depth evaluations of these five reduction methods in an NTT architecture on FPGA. Our results indicate that WLM-Mixed is highly resource-efficient, utilizing only 3 DSP multiplications for 64-bit coefficient moduli. On the other hand, K2RED-Shift and Montgomery-Shift offer DSP-free alternatives, which can be beneficial in specific scenarios.
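To illustrate the style of reduction involved, here is a toy sketch of the generic K-RED idea underlying K2RED for a Proth prime p = k·2^m + 1 (illustrative parameters, not the paper's FPGA design):

```python
import random

# K-RED for a Proth prime p = k*2^m + 1, using the fact k*2^m ≡ -1 (mod p).
# One step maps C = C_h*2^m + C_l to k*C_l - C_h, which is congruent to k*C
# (mod p) and much smaller; a k of low signed Hamming weight (e.g. k = 3)
# turns the multiplication by k into shifts and adds. Applying the step twice
# reduces a full double-width product, at the cost of a fixed k^2 scaling
# that NTT implementations fold into precomputed constants.

K, M = 3, 30
P = K * (1 << M) + 1          # 3221225473, a well-known NTT-friendly prime

def kred(c):
    c_l = c & ((1 << M) - 1)  # low m bits
    c_h = c >> M              # remaining high bits
    return K * c_l - c_h      # ≡ K*c (mod P); K=3 is just (c_l << 1) + c_l

a, b = random.randrange(P), random.randrange(P)
r = kred(kred(a * b)) % P     # two steps, plus a final correction for the demo
assert r == (K * K * a * b) % P
```

The hardware benefit is that no wide integer multiplier (DSP block) is needed for the reduction itself, only for the initial coefficient product.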
Last updated:  2025-05-21
OAE-RUP: A Strong Online AEAD Security Notion and its Application to SAEF
Amit Singh Bhati, Elena Andreeva, and Damian Vizar
Release of unverified plaintexts (RUP) security is an important target for robustness in AE schemes. It is also highly crucial for lightweight (LW) implementations of online AE schemes on memory-constrained devices. Surprisingly, very few online AEAD schemes come with provable guarantees against RUP integrity and not one with any well-defined RUP confidentiality. In this work, we first propose a new strong security notion for online AE schemes called OAE-RUP that captures security under blockwise processing of both encryption (which includes nonce-misuse) and decryption (which includes RUP). Formally, OAE-RUP combines the standard RUP integrity notion INT-RUP with a new RUP confidentiality notion sOPRPF (strong Online PseudoRandom Permutation followed by a pseudorandom Function). sOPRPF is based on the concept of "strong online permutations" and can be seen as an extension of the well-known CCA3 notion (Abed et al., FSE 2014) that captures arbitrary-length inputs. An OAE-RUP-secure scheme is resistant against nonce-misuse as well as leakage of unverified plaintexts where the integrity remains unaffected, and the confidentiality of any encrypted plaintext is preserved up to the leakage of the longest prefix with the leaked plaintexts and the leakage of the length of the longest prefix with the nonce-repeating ciphertexts. We then prove the OAE-RUP security of the SAEF mode. SAEF is a ForkAE mode (Asiacrypt 2019) that is optimized for authenticated encryption of short messages and processes the message blocks sequentially and in an online manner. At SAC 2020, it was shown that SAEF is also an online nonce misuse-resistant AE (OAE), offering enhanced security against adversaries that make blockwise adaptive encryption queries. It has remained an open question if SAEF also resists attacks against blockwise adaptive decryption adversaries or, more generally, when the decrypted plaintext is released before verification (RUP). 
Our proofs are conducted using the H-coefficients technique, and they show that, without any modifications, SAEF is OAE-RUP secure up to the birthday bound, i.e., up to 2^(n/2) processed data blocks, where n is the block size of the forkcipher.
Last updated:  2025-05-21
Robust Threshold ECDSA with Online-Friendly Design in Three Rounds
Guofeng Tang and Haiyang Xue
Threshold signatures, especially ECDSA, enhance key protection by addressing the single-point-of-failure issue. Threshold signing can be divided into offline and online phases, based on whether the message is required. Schemes with low-cost online phases are referred to as "online-friendly". Another critical aspect of threshold ECDSA for real-world applications is robustness, which guarantees the successful completion of each signing execution whenever a threshold number of semi-honest participants is met, even in the presence of misbehaving signatories. The state-of-the-art online-friendly threshold ECDSA without robustness was developed by Doerner et al. in S&P'24, requiring only three rounds. Recent work by Wong et al. in NDSS'23 (WMY23) and NDSS'24 (WMC24) achieves robustness but demands additional communication rounds (7 and 4, respectively) or incurs costly operations in the online phase, such as computations over a homomorphic encryption scheme. This paper presents the first three-round threshold ECDSA scheme with both robustness and an online-friendly design. The online phase of our scheme relies solely on several elliptic-curve group operations, which are 2 to 3 orders of magnitude less computationally intensive than those based on linearly homomorphic encryption schemes. We implement our protocol and conduct a comprehensive comparison with WMY23 and WMC24. Benchmark results show that the online phase of our scheme is faster than that of WMY23 and hundreds of times faster than that of WMC24. Lastly, we demonstrate that our techniques can be extended to construct an online-friendly and robust three-round threshold BBS+ scheme.
Last updated:  2025-05-21
Energy Consumption Framework and Analysis of Post-Quantum Key-Generation on Embedded Devices
J Cameron Patterson, William J Buchanan, and Callum Turino
The emergence of quantum computing and Shor's algorithm necessitates an imminent shift from current public key cryptography techniques to post-quantum robust techniques. NIST has responded by standardising Post-Quantum Cryptography (PQC) algorithms, with ML-KEM (FIPS-203) slated to replace ECDH (Elliptic Curve Diffie-Hellman) for key exchange. A key practical concern for PQC adoption is energy consumption. This paper introduces a new framework for measuring the PQC energy consumption on a Raspberry Pi when performing key generation. The framework uses both available traditional methods and the newly standardised ML-KEM algorithm via the commonly utilised OpenSSL library.
Last updated:  2025-05-21
SubLogarithmic Linear Time SNARKs from Compressed Sum-Check
Nitin Singh and Sikhar Patranabis
We leverage recently proposed multilinear polynomial commitment schemes, with a linear-time prover and constant proof size, to reduce the communication complexity of the classical sum-check protocol for multivariate polynomials. Specifically, for a degree multivariate polynomial in variables which can be decomposed into multilinear polynomials, we exhibit a sum-check protocol with prover cost and communication for . This enables us to break the barrier for this ubiquitous primitive used in multilinear SNARKs and implies the first construction of prover-efficient (with proving cost) SNARKs using multilinear polynomials with proof size. Currently, lower communication is only obtained by SNARKs based on univariate polynomials, which incur prover complexity due to inherent polynomial multiplication. Concretely, using compressed sum-check in the HyperPlonk protocol allows us to reduce the proof size from about KB to only about KB, making it competitive with univariate SNARKs like PLONK.
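For reference, the classical (uncompressed) sum-check protocol that this work compresses can be sketched for a multilinear polynomial given by its evaluation table over the Boolean hypercube. The toy honest-prover transcript below, over an illustrative prime field, is a baseline sketch, not the paper's compressed protocol:

```python
import random

PRIME = (1 << 61) - 1   # toy prime field

def fold(table, r):
    """Fix the first variable of a multilinear evaluation table to r."""
    half = len(table) // 2
    return [(table[i] * (1 - r) + table[half + i] * r) % PRIME
            for i in range(half)]

def sumcheck(table):
    """Classical sum-check: prover convinces verifier of the sum of f
    over {0,1}^v. Returns True iff the honest transcript verifies.
    Communication is one (linear) univariate polynomial per round."""
    claim = sum(table) % PRIME
    while len(table) > 1:
        half = len(table) // 2
        g0 = sum(table[:half]) % PRIME   # g(0): sum with first var = 0
        g1 = sum(table[half:]) % PRIME   # g(1): sum with first var = 1
        if (g0 + g1) % PRIME != claim:   # verifier's round check
            return False
        r = random.randrange(PRIME)      # verifier's random challenge
        claim = (g0 * (1 - r) + g1 * r) % PRIME  # new claim: g(r), g linear
        table = fold(table, r)
    # final check: a single oracle query f(r_1, ..., r_v)
    return table[0] % PRIME == claim

v = 4
f = [random.randrange(PRIME) for _ in range(1 << v)]
assert sumcheck(f)
```

The round-by-round messages are what accounts for the logarithmic communication that the compressed variant shrinks further.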
Last updated:  2025-05-21
Samaritan: Linear-time Prover SNARK from New Multilinear Polynomial Commitments
Chaya Ganesh, Sikhar Patranabis, and Nitin Singh
We study linear-time prover SNARKs and make the following contributions: We provide a framework for transforming a univariate polynomial commitment scheme into a multilinear polynomial commitment scheme. Our transformation is generic, can be instantiated with any univariate scheme and improves on prior transformations like Gemini (EUROCRYPT 2022) and Virgo (S&P 2020) in all relevant parameters: proof size, verification complexity, and prover complexity. Instantiating the above framework with the KZG univariate polynomial commitment scheme, we get SamaritanPCS – the first multilinear polynomial commitment scheme with constant proof size and linear-time prover. The proof size is just 368 bytes, which is the smallest among all multilinear polynomial commitment schemes. Our scheme also has excellent batching properties, wherein proving k evaluations over the hypercube of size n incurs O(n + k√n) cryptographic work, resulting in substantially amortized prover work over several evaluations. We construct LogSpartan – a new multilinear PIOP for R1CS based on recent techniques for lookup arguments. Compiling this PIOP using SamaritanPCS gives Samaritan – a SNARK in the universal and updatable SRS setting. Samaritan has a linear-time prover, logarithmic verification and logarithmic proof size. Concretely, its proof size is one of the smallest among other known linear-time prover SNARKs without relying on concretely expensive proof recursion techniques. For an R1CS instance with 1 million constraints, Samaritan (over the BLS12-381 curve) has a proof size of 6.2KB. We compare Samaritan with other linear-time prover SNARKs in the updatable setting. We asymptotically improve on the log^2 n proof size of Spartan. Unlike Libra (CRYPTO 2019), the argument size of Samaritan is independent of the circuit depth. Compared to Gemini (EUROCRYPT 2022), Samaritan achieves 3X smaller argument size at 1 million constraints.
We are competitive with the very recently proposed MicroSpartan (S&P 2025) and linear-time SNARKs for the Plonkish constraint system such as HyperPlonk (EUROCRYPT 2023).
Last updated:  2025-05-21
New Framework for Structure-Aware PSI From Distributed Function Secret Sharing
Dung Bui, Gayathri Garimella, Peihan Miao, and Phuoc Van Long Pham
Private set intersection (PSI) allows two parties to jointly compute the intersection of their private sets without revealing any additional information. Structure-aware PSI (sa-PSI), introduced by Garimella et al. (Crypto'22), is a variant where Alice's input set has a publicly known structure and Bob's input set remains unstructured, enabling new applications like fuzzy PSI. Their construction relies solely on lightweight cryptographic primitives such as symmetric-key primitives and oblivious transfer (OT) extension. Since then, there has been active research on sa-PSI based on lightweight cryptography. Notably, recent work by Garimella et al. (Crypto'24) achieves sa-PSI with both communication and computation costs only scaling with the description size of Alice's set, rather than its potentially large cardinality. However, this line of work remains largely theoretical, lacking efficient concrete implementations. In this work, we close this gap by presenting a new framework for sa-PSI that achieves practical efficiency. We identify and eliminate a hidden multiplicative overhead proportional to the security parameter (e.g., 128) in prior symmetric-key-based sa-PSI constructions. A key building block of our new framework is a distributed Function Secret Sharing (dFSS) key generation protocol tailored to the structure of Alice's set, which may be of independent interest. To demonstrate the practicality of our framework, we extend our dFSS protocol to support incremental evaluation, introduce new techniques for spatial hashing, and develop several new optimization techniques, including reducing the exponential dependence on dimension and enabling load balancing between the two parties. We instantiate our framework for structured sets defined by unions of -dimensional balls, and implement our protocols using only lightweight symmetric-key primitives and OT extension. 
Our experiments show concrete speedups in computation and reductions in communication in low-dimensional, large-radius settings compared to the existing public-key-based fuzzy PSI protocols by van Baarsen & Pu (Eurocrypt'24) and Gao et al. (Asiacrypt'24).
Last updated:  2025-05-21
Covert Attacks on Machine Learning Training in Passively Secure MPC
Matthew Jagielski, Rahul Rachuri, Daniel Escudero, and Peter Scholl
Secure multiparty computation (MPC) allows data owners to train machine learning models on combined data while keeping the underlying training data private. The MPC threat model either considers an adversary who passively corrupts some parties without affecting their overall behavior, or an adversary who actively modifies the behavior of corrupt parties. It has been argued that in some settings, active security is not a major concern, partly because of the potential risk of reputation loss if a party is detected cheating. In this work we show explicit, simple, and effective attacks that an active adversary can run on existing passively secure MPC training protocols, while keeping essentially zero risk of the attack being detected. The attacks we show can compromise both the integrity and privacy of the model, including attacks reconstructing exact training data. Our results challenge the belief that a threat model that does not include malicious behavior by the involved parties may be reasonable in the context of PPML, motivating the use of actively secure protocols for training.
Last updated:  2025-05-21
Authenticated Key Exchange Protocol with Remote Randomness
John C. W. Chan
A conventional Authenticated Key Exchange (AKE) protocol consumes fresh random coins from the local random source. However, recent studies of bad randomness expose the vulnerability of some AKE protocols to small subgroup attacks when the random coins are manipulated or repeated. It is important to ensure that bad randomness from one random source will not affect the security of the AKE protocol as a whole. Thus, we introduce the notion of remote randomness, in which additional ephemeral keys generated by a fresh remote random source are incorporated into the AKE protocol. In this paper, we argue that the rise of cloud computing encourages high-speed internal data transfer within server clusters or virtual machines, including the entropy pool data used in our remote randomness AKE protocols. We present a remote randomness modification to the HMQV protocol to demonstrate its resilience under the manipulation of local random sources. We then provide a new security model that takes remote randomness into consideration and show that the modified AKE protocol is secure under our new model.
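The underlying intuition, that session secrets should remain unpredictable as long as either the local or the remote source is good, can be illustrated with a simple hash-based combiner. This is a sketch of the principle only: the actual protocol incorporates remotely generated ephemeral keys into HMQV rather than hashing nonces together.

```python
import hashlib
import secrets

def combined_randomness(local: bytes, remote: bytes) -> bytes:
    """Hash combiner: the output is unpredictable to an attacker as long
    as EITHER input carries full entropy (hypothetical helper, for
    illustration only)."""
    return hashlib.sha256(b"combine|" + local + b"|" + remote).digest()

weak_local = b"\x00" * 32           # adversarially fixed local coins
remote = secrets.token_bytes(32)    # fresh entropy from a remote source
session_seed = combined_randomness(weak_local, remote)
assert len(session_seed) == 32
```

Even with the local coins fully controlled, predicting `session_seed` requires predicting the remote contribution.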
Last updated:  2025-05-20
The Security of ML-DSA against Fault-Injection Attacks
Haruhisa Kosuge and Keita Xagawa
Deterministic signatures are often used to mitigate the risks associated with poor-quality randomness, where the randomness in the signing process is generated by a pseudorandom function that takes a message as input. However, some studies have shown that such signatures are vulnerable to fault-injection attacks. To strike a balance, recent signature schemes often adopt "hedged" randomness generation, where the pseudorandom function takes both a message and a nonce as input. Aranha et al. (EUROCRYPT 2020) investigated the security of hedged Fiat-Shamir signatures against 1-bit faults and demonstrated security for certain types of bit-tampering faults. Grilo et al. (ASIACRYPT 2021) extended this proof to the quantum random oracle model. Last year, NIST standardized the lattice-based signature scheme ML-DSA, which adopts the hedged Fiat-Shamir with aborts. However, existing security proofs against bit-tampering faults do not directly apply, as Aranha et al. left this as an open problem. To address this gap, we analyze the security of ML-DSA against multi-bit fault-injection attacks. We provide a formal proof of security for a specific class of intermediate values, showing that faults at these points cannot be exploited. Furthermore, to highlight the infeasibility of stronger fault resilience, we present key-recovery attacks that exploit signatures generated under fault injection at the other intermediate values.
Last updated:  2025-05-20
Dyna-hinTS: Silent Threshold Signatures for Dynamic Committees
Aniket Kate, Pratyay Mukherjee, Samipa Samanta, and Pratik Sarkar
The works of Garg et al. [S&P'24] (aka hinTS) and Das et al. [CCS'23] introduced the notion of silent threshold signatures (STS), where a set of signers silently perform local computation to generate a public verification key. To sign a message, any set of signers sign the message non-interactively, and the signatures are aggregated into a constant-sized signature. This paradigm avoids performing an expensive Distributed Key Generation procedure for each set of signers while keeping the public verification key constant-sized. In this work, we propose the notion of a committee-based silent threshold signature (c-STS) scheme. In a c-STS scheme, a set of signers initially perform a one-time setup to generate the verification key, and then a subset of signers is randomly chosen for an epoch to perform the threshold signing, while the other signers are not authorized to sign during that epoch. This captures existing systems like Ethereum Altair and Dfinity where only a specific committee is authorized to sign in a designated epoch. The existing STS schemes cannot be extended to the committee setting because the signature verification only attests to the number of signing parties, not which committee they belong to. So, we upgrade hinTS to the committee setting by proposing Dyna-hinTS. It is a c-STS scheme that requires a one-time silent setup and generates a one-time public verification key that does not vary with the committee. Assuming a set of 1024 signers (with 682 corrupt signers), hinTS generates an aggregated signature in 1.7s whereas Dyna-hinTS generates it in s within a committee of signers. This yields an improvement over hinTS for signature generation at the cost of increasing signature verification time by over hinTS. Dyna-hinTS supports general access structures, weighted signatures, and improves existing multiverse threshold signatures.
Last updated:  2025-05-20
Homomorphic Signature-based Witness Encryption and Applications
Alireza Kavousi and István András Seres
Signature-based witness encryption (SWE) schemes recently emerged as a viable alternative to instantiate timed-release cryptography in the honest majority setting. In particular, assuming threshold trust in a set of parties that release signatures at a specified time, one can ``encrypt to the future'' using an SWE scheme. Applications of SWE schemes include voting, auctions, distributed randomness beacons, and more. However, the lack of homomorphism in existing schemes reduces efficiency and hinders deployment. In this work, we introduce the notion of homomorphic SWE (HSWE) to improve the practicality of timed-release encryption schemes. We show one can build HSWE using a pair of encryption and signature schemes where the uniqueness of the signature is required when the encryption relies on injective one-way functions. We then build three HSWE schemes in various settings using BLS, RSA, and Rabin signatures and show how to achieve a privacy-preserving variant that only allows extracting the homomorphically aggregated result while keeping the individual plaintexts confidential.
Last updated:  2025-05-20
MicroCrypt Assumptions with Quantum Input Sampling and Pseudodeterminism: Constructions and Separations
Mohammed Barhoush, Ryo Nishimaki, and Takashi Yamakawa
We investigate two natural relaxations of quantum cryptographic primitives. The first involves quantum input sampling, where inputs are generated by a quantum algorithm rather than sampled uniformly at random. Applying this to pseudorandom generators (PRGs) and pseudorandom states (PRSs) leads to the notions denoted as and , respectively. The second relaxation, -pseudodeterminism, relaxes the determinism requirement by allowing the output to be a special symbol on an inverse-polynomial fraction of inputs. We demonstrate an equivalence between bounded-query logarithmic-size , logarithmic-size , and . Moreover, we establish that can be constructed from -s, which in turn were built from logarithmic-size . Interestingly, these relations remain unknown in the uniform key setting. To further justify these relaxed models, we present black-box separations. Our results suggest that -pseudodeterministic primitives may be weaker than their deterministic counterparts, and that primitives based on quantum input sampling may be inherently weaker than those using uniform sampling. Together, these results provide numerous new insights into the structure and hierarchy of primitives within MicroCrypt.
Last updated:  2025-05-20
Exclusive Ownership of Fiat-Shamir Signatures: ML-DSA, SQIsign, LESS, and More
Michael Meyer, Patrick Struck, and Maximiliane Weishäupl
Exclusive ownership (EO) security is a feature of signature schemes that prevents adversaries from "stealing" an honestly generated signature by finding a new public key which verifies said signature. It is one of the beyond-unforgeability features (BUFF) which were declared to be desirable features by NIST. The BUFF transform allows one to generically achieve exclusive ownership (and other properties) at the cost of an increased signature size. In this work, we study the EO security of (different variants of) Fiat-Shamir signatures. As our main result, we show that the commonly used variant of Fiat-Shamir signatures (where signatures consist of challenge-response tuples) with λ-bit challenges can achieve about λ-bit EO security through its implicit usage of the BUFF transform; this presents a significant improvement over existing results that only provide λ/2 bits of EO security. This benefit of our result comes without an increase in signature size. For other variants of Fiat-Shamir signatures, we show worse bounds, which nevertheless improve upon existing results. Finally, we apply our results to several signature schemes: SQIsign and LESS (both round-2 NIST candidates); ML-DSA (NIST standard); CSI-FiSh; and Schnorr signatures. This shows that all these schemes achieve significantly better bounds regarding their EO security compared to existing results.
Last updated:  2025-05-20
A Study of Blockchain Consensus Protocols
Shymaa M. Arafat
When Nakamoto invented Bitcoin, the first generation of cryptocurrencies followed it in applying the POW (Proof of Work) consensus mechanism; due to its excessive energy consumption and heavy carbon footprint, new alternatives evolved, like Proof of Space and POS (Proof of Stake), with many variants of each. Furthermore, the emergence of more blockchain applications and kinds beyond just cryptocurrencies required consensus mechanisms optimized to fit the requirements of each application or blockchain kind; examples range from IoT (Internet of Things) blockchains for sustainability applications, which often use variants of the BFT (Byzantine Fault Tolerance) algorithm, to the consensus needed to relay transactions and/or assets between different blockchains in interoperability solutions. Previous studies concentrated on surveying and/or proposing different blockchain consensus rules, on a specific consensus issue like attacks or randomization, or on deriving theoretical results. Starting from a discussion of the most important theoretical results, this paper tries to gather and organize all significant existing material about consensus in the blockchain world, explaining design challenges, tradeoffs, and research areas. We realize that the topic could fill a complete textbook, so we summarize the basic concepts and support them with tables and appendices. Then we highlight some case examples from interoperability solutions to show how flexible and wide the design space is to fit both general and special-purpose systems. The aim is to provide researchers with a comprehensive overview of the topic, along with links to go deeper into every detail.
Last updated:  2025-05-20
Improved Noise Bound in BFV Homomorphic Encryption and Its Application to Multiplication
Akshit Aggarwal, Yang Li, and Srinibas Swain
Fully Homomorphic Encryption (FHE) enables computations on encrypted data without requiring decryption. However, each computation increases the noise level, which can eventually cause decryption failures once a certain threshold is reached. In particular, homomorphic multiplication significantly amplifies noise in the ciphertext. In this work, we revisit the Ring Learning With Errors (RLWE) based encryption proposed by Fan et al. and present an optimized noise-growth approach by swapping the sample spaces for the secret key and the error distribution. Thereafter, we revisit the BFV homomorphic multiplication proposed by Kim et al. (ASIACRYPT'21) and present an optimized noise bound. Finally, we empirically check the hardness of the proposed scheme using the lattice estimator. Our analysis demonstrates that the proposed method achieves more than 128-bit security and a lower noise bound than existing techniques.
Last updated:  2025-05-20
MAESTRO: Multi-party AES using Lookup Tables
Hiraku Morita, Erik Pohle, Kunihiko Sadakane, Peter Scholl, Kazunari Tozawa, and Daniel Tschudi
Secure multi-party computation (MPC) enables multiple distrusting parties to jointly compute a function while keeping their inputs private. Computing the AES block cipher in MPC, where the key and/or the input are secret-shared among the parties, is important for various applications, particularly threshold cryptography. In this work, we propose a family of dedicated, high-performance MPC protocols to compute the non-linear S-box part of AES in the honest majority setting. Our protocols come in both semi-honest and maliciously secure variants. The core technique is a combination of lookup table protocols based on random one-hot vectors and the decomposition of finite field inversion in GF(2^8) into multiplications and inversion in the smaller field GF(2^4), taking inspiration from ideas used for hardware implementations of AES. We also apply and improve the analysis of a batch verification technique for checking inner products with logarithmic communication. This allows us to obtain malicious security with almost no communication overhead, and we use it to obtain new, secure table lookup protocols with only communication for a table of size , which may be useful in other applications. Our protocols have different trade-offs, such as having a similar round complexity as previous state-of-the-art by Chida et al. [WAHC'18] but lower bandwidth costs, or having fewer rounds and lower bandwidth costs. An experimental evaluation in various network conditions using three-party replicated secret sharing shows improvements in throughput between and in the semi-honest setting. For malicious security, we improve throughput by to in LAN and by in WAN due to sublinear batch verification.
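For reference, the non-linear core of the AES S-box is inversion in GF(2^8); the protocols here follow hardware designs in decomposing it into GF(2^4) operations, but the inverse itself is easy to compute directly as a sanity-check baseline:

```python
# Toy GF(2^8) arithmetic with the AES reduction polynomial
# x^8 + x^4 + x^3 + x + 1 (0x11B), as specified in FIPS 197.
AES_POLY = 0x11B

def gf_mul(a, b):
    """Carry-less multiply-and-reduce in GF(2^8)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= AES_POLY
        b >>= 1
    return r

def gf_inv(a):
    """a^254 = a^(-1) in GF(2^8) by Fermat's little theorem for fields;
    0 maps to 0, as in the AES S-box convention."""
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

assert gf_mul(0x57, 0x83) == 0xC1   # worked example from FIPS 197
assert gf_mul(0x03, gf_inv(0x03)) == 1
assert gf_inv(0) == 0
```

The MPC cost comes from evaluating this inversion on secret-shared values, which is why the subfield decomposition and table-lookup tricks matter.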
Last updated:  2025-05-20
Permutation-Based Hashing Beyond the Birthday Bound
Charlotte Lefevre and Bart Mennink
It is known that the sponge construction is tightly indifferentiable from a random oracle up to around 2^(c/2) queries, where c is the capacity. In particular, it cannot provide generic security better than half of the underlying permutation size. In this paper, we aim to achieve hash function security beating this barrier. We present a hashing mode based on two b-bit permutations named the double sponge. The double sponge can be seen as the sponge embedded within the double block length hashing paradigm, making two permutation calls in parallel interleaved with an efficient mixing function. Similarly to the sponge, the permutation size is split as b = r + c, and the underlying compression function absorbs bits at a time. We prove that the double sponge is indifferentiable from a random oracle up to around queries. This means that the double sponge achieves security beyond the birthday bound in the capacity. In addition, if , the double sponge beats the birthday bound in the primitive size, to our knowledge being the first hashing mode based on a permutation that accomplishes this feature.
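For orientation, the plain sponge baseline that the double sponge improves on can be sketched as follows. SHA-256 stands in for the b-bit permutation, and the byte-level rate/capacity split is illustrative only (not a real hash design):

```python
import hashlib

B, R = 32, 16        # toy: 32-byte permutation width, rate r = 16, capacity c = 16
C = B - R

def perm(state: bytes) -> bytes:
    # Stand-in for the public permutation (illustration, not a permutation
    # design with any security claim).
    return hashlib.sha256(state).digest()

def sponge_hash(msg: bytes, outlen: int = 32) -> bytes:
    # pad10*1 padding to a multiple of the rate
    msg += b"\x80" + b"\x00" * ((-len(msg) - 2) % R) + b"\x01"
    state = bytes(B)
    for i in range(0, len(msg), R):           # absorb r bytes per call
        block = msg[i:i + R] + bytes(C)       # capacity bytes never XORed
        state = perm(bytes(x ^ y for x, y in zip(state, block)))
    out = b""
    while len(out) < outlen:                  # squeeze r bytes per call
        out += state[:R]
        state = perm(state)
    return out[:outlen]

assert sponge_hash(b"abc") != sponge_hash(b"abd")
```

Generic security of this mode is capped by the capacity (2^(c/2) queries), which is precisely the barrier the double sponge is designed to break.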
Last updated:  2025-05-20
Sonikku: Gotta Speed, Keed! A Family of Fast and Secure MACs
Amit Singh Bhati, Elena Andreeva, Simon Müller, and Damian Vizar
Message authentication codes (MACs) are fundamental symmetric key cryptographic functions used to generate a short, secret-key-dependent tag for a given message. This tag ensures both message authenticity and integrity, as computing a valid tag without the secret key is computationally infeasible, thereby revealing any unauthorized modification. Existing MACs often rely on block ciphers (BCs) and tweakable block ciphers (TBCs). The design of these MACs involves various trade-offs regarding properties such as data processing rate, the number of secret keys, achievable security definitions and concrete margins, the necessity for pre- or post-processing, parallelization capabilities, internal state size, and performance optimization for diverse message lengths. This work introduces Sonikku, a new family of MACs based on expanding primitives, comprising three distinct instances: , , and . The Sonikku MACs offer a compelling combination of advantages: 1) superior speed compared to state-of-the-art TBC-based MACs; 2) security beyond the birthday bound related to the input block size; 3) a smaller internal state than comparable contemporary MACs; and 4) design flexibility considering diverse trade-offs, including pre/post-processing-free operation, parallel processing, a small resource footprint, and suitability for both short and long messages. These characteristics make them highly attractive for widespread applications, including resource-constrained environments like IoT and embedded devices. Performance evaluations on a Cortex-M4 32-bit microcontroller demonstrate that instantiated with achieves a significant speed-up of at least 2.11x (and up to 4.36x) compared to the state-of-the-art ZMAC instantiated with for 128-bit block sizes and messages up to 95 bytes.
Similarly, the other two Sonikku instances exhibit speed improvements of at least 1.93x for short messages (up to 95 bytes) and 1.48x for larger messages (up to 64 KB), respectively, when benchmarked against ZMAC for both 64- and 128-bit block sizes. Building upon the approach of ZMAC and PMAC2x, we further illustrate the potential of the Sonikku family by constructing SonicAE, a highly efficient, beyond-birthday-secure, stateless, and deterministic authenticated encryption scheme.
Last updated:  2025-05-20
A New Approach for LPN-based Pseudorandom Functions: Low-Depth and Key-Homomorphic
Youlong Ding, Aayush Jain, and Ilan Komargodski
We give new constructions of low-depth pseudorandom functions (PRFs) from (variants of) the Learning Parity with Noise (LPN) assumption. Prior to our work, the only low-depth PRF from LPN-style assumptions was due to Boyle et al. (FOCS 2020), who constructed a weak PRF from a new heuristic variant of LPN called variable-density LPN. We give the following three results: (1) A low-depth weak PRF from standard LPN. (2) A (strong) low-depth encoded-input PRF (EI-PRF) from sparse LPN. An EI-PRF is a PRF whose input domain is restricted to an efficiently sampleable and recognizable set; the input encoding itself can also be computed in low depth, implying a low-depth strong PRF. (3) A (strong) low-depth PRF from a (new, heuristic) seeded LPN assumption, in which the columns of the public LPN matrix are generated by a k-wise independent distribution for a suitable parameter k. Supporting evidence for the security of this assumption is given by showing resilience to linear tests. As a bonus, all of our PRF constructions are key-homomorphic, an algebraic property that is useful in many symmetric-cryptography applications. No previously known LPN-based PRF is key-homomorphic, even if depth-efficiency is ignored entirely. In fact, our constructions support key homomorphism for linear functions (and not only additive ones), a property that no previously known PRF satisfies, including ones from LWE. Additionally, all of our PRF constructions fit naturally into the substitution-permutation network (SPN) design framework used in modern block ciphers (e.g., AES). No prior PRF construction with a reduction to a standard cryptographic assumption (let alone LPN) has an SPN-like structure. Technically, all of our constructions leverage a new recursive derandomization technique for LPN instances, which allows us to generate LPN error terms deterministically.
This technique is inspired by a related idea from the LWE literature (Kim, EUROCRYPT 2020) for which devising an LPN analogue has been an outstanding open problem.
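The bounded-independence distribution in result (3) can be illustrated with the textbook construction of k-wise independent values: evaluations of a uniformly random polynomial of degree below k over a prime field. This sketch only illustrates the notion; the paper's seeded LPN distribution is its own construction:

```python
import random

def kwise_columns(p: int, k: int, num_points: int, rng=random.Random(1)):
    """Evaluate a uniformly random polynomial of degree < k over GF(p)
    at the distinct points 0..num_points-1. Any k of the outputs are
    jointly uniform (k-wise independent) -- a textbook construction,
    shown only to illustrate bounded independence."""
    assert num_points <= p
    coeffs = [rng.randrange(p) for _ in range(k)]
    def horner(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % p
        return acc
    return [horner(x) for x in range(num_points)]
```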
Last updated:  2025-05-20
Generalizing the Augot-Finiasz PKE to Other Code Classes
Anmoal Porwal, Anna Baumeister, Violetta Weger, Antonia Wachter-Zeh, and Pierre Loidreau
The Augot-Finiasz system is a public-key encryption (PKE) scheme based on Reed-Solomon codes and was later followed by analogous versions in the rank metric. Although these schemes were eventually broken, their fundamental idea remains exciting. Notably, these schemes differ significantly from the McEliece system: there is no need to hide the code, and as such they promise much better parameters. Further, they admit a simple description where both the public key and ciphertext are just corrupted codewords of a public code. An interesting question is whether the general idea can be made to work, i.e., resist all known attacks, by using other code classes. This paper shows how to generalize the Augot-Finiasz system to other code families. We reduce the correctness and security of this framework to simple assertions about the code class with which it is instantiated. Specifically, its correctness is equivalent to the existence of an efficient error-erasure decoder, and its security reduces to an easily understood hardness assumption, called "supercode decoding", close to the syndrome decoding problem. We provide a negative answer for various code families by showing that solving the supercode decoding problem is easy for them. It remains an open question whether a secure choice exists.
Last updated:  2025-05-20
Row Reduction Techniques for n-Party Garbling
Kelong Cong, Emmanuela Orsini, Erik Pohle, and Oliver Zajonc
Recent advancements in maliciously secure garbling have significantly improved the efficiency of constant-round multi-party computation. Research in the field has primarily focused on reducing communication complexity through row reduction techniques and improvements to the preprocessing phase with the use of simpler correlations. In this work, we present two contributions to reduce the communication complexity of state-of-the-art multi-party garbling with an arbitrary number of corruptions. First, we show how to achieve full row reduction for n-party garbled circuits in HSS17-style protocols (Hazay et al., Asiacrypt'17 & JC'20) and in authenticated garbling (Yang et al., CCS'20), reducing the size of the garbled circuit by 25% in both cases. Achieving row reduction in multi-party garbling has been an open problem that was partly addressed by the work of Yang et al. for authenticated garbling. In our work, we show a full row reduction for both garbling approaches, thus resolving this open problem completely. Second, drawing inspiration from the work of Dittmer et al. (Crypto 2022), we propose a new preprocessing protocol that obtains the material required for the garbling phase from large-field triples, which can be generated with sublinear communication. The new preprocessing significantly reduces the communication overhead of garbled circuits. Our optimizations yield substantial reductions in communication compared to HSS17 and to the state-of-the-art authenticated garbling of Yang et al. for 3 parties in a circuit with 10 million AND gates.
Last updated:  2025-05-20
Two Is All It Takes: Asymptotic and Concrete Improvements for Solving Code Equivalence
Alessandro Budroni, Andre Esser, Ermes Franch, and Andrea Natale
The Linear Code Equivalence (LCE) problem asks, for two given linear codes C1 and C2, to find a monomial mapping taking C1 into C2. Algorithms solving LCE crucially rely on a (heuristic) subroutine which recovers the secret monomial from pairs of codewords mapped to one another. We greatly improve on the previously required number of such pairs by giving a constructive (heuristic) algorithm that recovers the secret monomial from any two pairs of such codewords for fields of sufficiently large prime order. We then show that this reduction in the number of required pairs enables the design of a more efficient algorithm for solving the LCE problem. Our asymptotic analysis shows that this algorithm outperforms previous approaches for a wide range of parameters, including all parameters proposed across the literature. Furthermore, our concrete analysis reveals significant bit security reductions for suggested parameters. Most notably, in the context of the LESS signature scheme, a second-round contender in the ongoing NIST standardization effort for post-quantum secure digital signatures, we obtain bit security reductions of up to 24 bits.
Last updated:  2025-05-20
MacaKey: Full-State Keyed Sponge Meets the Summation-Truncation Hybrid
Charlotte Lefevre and Mario Marhuenda Beltrán
The keyed sponge construction has benefited from various efficiency advancements over time, most notably leading to the possibility of absorbing over the entire state, as in the full-state keyed sponge. However, squeezing has always remained limited to blocks smaller than the permutation size, as security is determined by the capacity c, the size of the non-squeezed state. In this work, we present Macakey, an improved version of the full-state keyed sponge that not only absorbs over the entire state but also squeezes over the entire state. The scheme combines ideas of the full-state keyed sponge with those of the summation-truncation hybrid of Gunsing and Mennink. We demonstrate that, with no sacrifice in generic security and using only c bits of extra storage, Macakey can significantly boost performance, particularly in scenarios requiring large amounts of output. For example, using the 320-bit Ascon permutation with a 256-bit capacity, Macakey outputs five times as many bits as the full-state keyed sponge.
Last updated:  2025-05-20
Practical cryptanalysis of pseudorandom correlation generators based on quasi-Abelian syndrome decoding
Charles Bouillaguet, Claire Delaplace, Mickaël Hamdad, and Damien Vergnaud
Quasi-Abelian Syndrome Decoding (QA-SD) is a recently introduced generalization of Ring-LPN that uses multivariate polynomial rings. As opposed to Ring-LPN, it enables the use of small finite fields such as GF(3) and GF(4). It was introduced by Bombar et al. (Crypto 2023) in order to obtain pseudorandom correlation generators for Beaver triples over small fields. This theoretical work was turned into a concrete and efficient protocol called F4OLEage by Bombar et al. (Asiacrypt 2024) that allows several parties to generate Beaver triples over GF(2). We propose efficient algorithms to solve the decoding problem underlying the QA-SD assumption. We observe that it reduces to a sparse multivariate polynomial interpolation problem over a small finite field in which the adversary only has access to random evaluation points, a blind spot in the otherwise rich landscape of sparse multivariate interpolation. We develop new algorithms for this problem: using simple techniques, we interpolate polynomials with up to two monomials. By lifting the problem to the field of complex numbers and using convex optimization techniques inspired by the field of "compressed sensing", we can interpolate polynomials with more terms. This enables us to break in practice parameters proposed by Bombar et al. at Crypto'23 and Asiacrypt'24 as well as by Li et al. at Eurocrypt'25 (IACR flagship conferences Grand Slam). In the case of the F4OLEage protocol, our implementation recovers all the secrets in a few hours with probability 60%. This not only invalidates the security proofs, but also yields real-life privacy attacks against multiparty protocols using the Beaver triples generated by the broken pseudorandom correlation generators.
Last updated:  2025-05-20
Adaptively Secure IBE from Lattices with Asymptotically Better Efficiency
Weidan Ji, Zhedong Wang, Lin Lyu, and Dawu Gu
Current adaptively secure identity-based encryption (IBE) constructions from lattices are unable to achieve a good balance among the master public key size, secret key size, modulus and reduction loss. All existing lattice-based IBE schemes share a common restriction: the modulus is quadratic in the trapdoor norm. In this work, we remove this restriction and present a new adaptively secure IBE scheme from lattices in the standard model, which improves the state-of-the-art construction proposed by Abla et al. (TCC 2021) and achieves asymptotically better efficiency. More precisely, we achieve the asymptotically minimal number of public vectors among all the existing schemes, along with a significantly smaller modulus compared to the scheme by Abla et al. (TCC 2021). Furthermore, our scheme enjoys the smallest Gaussian width of the secret key among all existing schemes and has the same tightness as Abla et al.'s scheme. We propose a novel cross-multiplication design for our IBE scheme, along with several novel tools and techniques, including: (a) a homomorphic computation algorithm that outputs BGG+-style encoding with two distinct-norm trapdoors; (b) a sampling algorithm with hybrid Gaussian outputs; and (c) a partial rerandomization algorithm. These new tools and techniques are general and could find rich applications in lattice-based cryptography.
Last updated:  2025-05-20
Adaptively Secure Blockchain-Aided Decentralized Storage Networks: Formalization and Generic Construction
Xiangyu Su, Yuma Tamagawa, Mario Larangeira, and Keisuke Tanaka
This work revisits the current Decentralized Storage Network (DSN) definition to propose a novel general construction based on a UTxO-based ledger. To the best of our knowledge, this is the first adaptively secure UTxO-blockchain-aided DSN. More concretely, we revisit the currently existing designs to thoroughly formalize the DSN definition and its security. Moreover, we present a general construction in which a client delegates data to a DSN that keeps custody of it during a jointly agreed period. Our newly proposed approach, leveraging the Extended UTxO (EUTxO) model, neatly allows the storage network to offer automatic verifiability, i.e., without any interaction of the data owner, via proofs published on the blockchain. In summary, this work presents a redesign of the DSN with the aid of an EUTxO-based blockchain, by (1) putting forth a formal and rigorous description of a blockchain-aided DSN protocol, (2) offering a thorough description of a practical EUTxO-based DSN, and (3) detailing a security analysis showing that our protocol is adaptively secure, providing (rational) security guarantees.
Last updated:  2025-05-20
SQIsign2DPush: Faster Signature Scheme Using 2-Dimensional Isogenies
Kohei Nakagawa and Hiroshi Onuki
Isogeny-based cryptography comprises cryptographic schemes whose security rests on the hardness of a mathematical problem called the isogeny problem, and it is attracting attention as one of the candidates for post-quantum cryptography. A representative isogeny-based scheme is the signature scheme SQIsign, which was submitted to the NIST PQC standardization process for additional signatures. SQIsign has attracted much attention because of its very short signature and key sizes among the candidates for NIST PQC standardization. Recently, many new signature schemes using high-dimensional isogenies have been proposed, such as SQIsignHD, SQIsign2D-West, SQIsign2D-East, and SQIPrime. Last year, SQIsign advanced to Round 2 of the NIST competition and was updated to version 2.0 (we call it SQIsign-v2.0), which is based on SQIsign2D-West. SQIsign-v2.0 achieves smaller signature sizes and faster verification. However, its signing cost is relatively high. In this paper, we propose a new signature scheme, 'SQIsign2DPush', which has a smaller signing cost than SQIsign-v2.0 while the signature size and the verification cost are almost the same.
Last updated:  2025-05-20
Efficient Pseudorandom Correlation Generators for Any Finite Field
Zhe Li, Chaoping Xing, Yizhou Yao, and Chen Yuan
Correlated randomness lies at the core of efficient modern secure multi-party computation (MPC) protocols. The cost of generating the correlated randomness required for the MPC online phase often constitutes a bottleneck in the overall protocol. The recent paradigm of pseudorandom correlation generators (PCGs), initiated by Boyle et al. (CCS'18, Crypto'19), offers an appealing solution to this issue. In sketch, each party is given a short PCG seed, which can be locally expanded into long correlated strings satisfying the target correlation. Among the various types of correlations is oblivious linear evaluation (OLE), a fundamental and useful primitive for typical MPC protocols on arithmetic circuits. Towards efficiently generating a large amount of OLE, and applications to MPC protocols, we establish the following results: (i) We propose a novel programmable PCG construction for OLE over any finite field, with sublinear communication and quasi-linear computation. Previous works either have quadratic computation (Boyle et al. Crypto'19) or can only support fields of sufficiently large size (Bombar et al. Crypto'23). (ii) We extend the above OLE construction to provide various types of correlations for any finite field. One of the fascinating applications is an efficient PCG for two-party authenticated Boolean multiplication triples, with sublinear seed size. To the best of our knowledge, such a correlation has never before been realized with sublinear communication and quasi-linear computation. (iii) In addition, the programmability admits efficient PCGs for multi-party Boolean triples, and thus the first efficient MPC protocol for Boolean circuits with silent preprocessing. In particular, we show that multi-party Boolean multiplication triples can be generated without the broadcast channel required by the state-of-the-art FOLEAGE (Asiacrypt'24). (iv) Finally, we present efficient PCGs for circuit-dependent preprocessing, matrix multiplication triples, string OTs, etc., each with its own merits compared to previous works.
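The target correlation, a Beaver multiplication triple, and the way it enables multiplication on secret-shared values can be sketched as follows (a textbook two-party demo over an illustrative prime field, with a trusted dealer standing in for the PCG):

```python
import random

P = 2**61 - 1  # Mersenne prime modulus (illustrative)
rng = random.Random(0)

def share(x):
    """Additive 2-party secret sharing of x mod P."""
    s0 = rng.randrange(P)
    return s0, (x - s0) % P

def beaver_triple():
    """Dealer generates a correlated triple (a, b, c) with c = a*b;
    a PCG lets the parties expand such triples from short seeds."""
    a, b = rng.randrange(P), rng.randrange(P)
    return share(a), share(b), share((a * b) % P)

def beaver_multiply(x_sh, y_sh):
    """Multiply secret-shared x and y, consuming one Beaver triple."""
    (a0, a1), (b0, b1), (c0, c1) = beaver_triple()
    # The parties open d = x - a and e = y - b; these look uniform
    # and reveal nothing about x or y.
    d = (x_sh[0] - a0 + x_sh[1] - a1) % P
    e = (y_sh[0] - b0 + y_sh[1] - b1) % P
    # z = c + d*b + e*a + d*e equals x*y; split the public d*e to one party.
    z0 = (c0 + d * b0 + e * a0 + d * e) % P
    z1 = (c1 + d * b1 + e * a1) % P
    return z0, z1
```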
Last updated:  2025-05-19
Blockchain-Based Carbon Footprint Management
Umut Pekel and Oguz Yayla
This paper introduces a novel approach to managing carbon footprints using blockchain technology to integrate these footprints intrinsically into the attributes of products, akin to their price. In contrast to conventional methods that treat carbon footprints as distinct, tradeable units, our model incorporates them directly into the product life cycle, thus maintaining the connection between environmental impact and product consumption. By closely examining blockchain's functionality, this study illustrates how it can improve transparency and security in transactions while influencing market dynamics by making carbon accountability an integral part of product ownership. This method may aid in establishing a more sustainable economic model by better incorporating environmental costs into transactions, potentially fostering progress in environmental finance and sustainability initiatives.
Last updated:  2025-05-19
InstaRand: Instantly Available and Instantly Verifiable On-chain Randomness
Jacob Gorman, Lucjan Hanzlik, Aniket Kate, Pratyay Mukherjee, Pratik Sarkar, and Sri AravindaKrishnan Thyagarajan
Web3 applications, such as on-chain gaming, require unbiased and publicly verifiable randomness that can be obtained quickly and cost-effectively whenever needed. Existing services, such as those based on Verifiable Random Functions (VRF), incur network delays and high fees due to their highly interactive nature. FlexiRand [CCS 2023] addressed these problems by hiding the output of the VRF and using it as a seed to derive many random values locally. These values are instantly available for usage. However, they cannot be verified independently (or instantly) without disclosing the seed, leaving scope for malicious actors to cheat. To solve this problem, we introduce a new notion, called instantly-verifiable VRF (iVRF), which enables the generation of many random values from one VRF output seed such that each of them is verifiable independently; the generated values are thus both instantly available and instantly verifiable. To instantiate an iVRF we propose a generic construction called InstaRand: it combines any (possibly distributed) VRF at the server's end with another VRF at the client's end. Our specific instantiation uses the BLS-based GLOW-DVRF [Euro S&P 2021] at the server's end and the DDH-based VRF of Goldberg et al. [RFC 2023] at the client's end. We use the universal composability framework to analyze the security. Moreover, due to its generality, InstaRand can be instantiated with any post-quantum secure VRF to yield a post-quantum secure iVRF. Our experiments demonstrate that our instantiation of InstaRand is practical. The client incurs a one-time cost to generate the seed (the server's VRF output) by querying the GLOW-DVRF servers once. Once the seed is set up, the client locally generates each pseudorandom value on demand, avoiding the client-server round-trip delay, and each value can be independently verified. This yields substantial improvements in output generation and verification cost over existing solutions.
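The seed-then-derive pattern described above can be sketched generically. Here HMAC-SHA256 is a hypothetical stand-in for the client-side PRF; InstaRand itself uses a client-side VRF so that every derived value carries its own proof of correctness:

```python
import hmac
import hashlib

def derive_randomness(seed: bytes, index: int) -> bytes:
    """Derive the index-th pseudorandom value from a single server-side
    VRF output used as a seed. One server round-trip fixes the seed;
    every subsequent value is computed locally, with no network delay."""
    return hmac.new(seed, index.to_bytes(8, "big"), hashlib.sha256).digest()

# One (illustrative) server VRF output serves as the seed...
seed = hashlib.sha256(b"server VRF output (illustrative)").digest()
# ...from which many values are derived on demand.
values = [derive_randomness(seed, i) for i in range(5)]
```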
Last updated:  2025-05-19
Blinding Post-Quantum Hash-and-Sign Signatures
Charles Bouillaguet, Thibauld Feneuil, Jules Maire, Matthieu Rivain, Julia Sauvage, and Damien Vergnaud
Blind signature schemes are essential for privacy-preserving applications such as electronic voting, digital currencies or anonymous credentials. In this paper, we revisit Fischlin's framework for round-optimal blind signature schemes and its recent efficient lattice-based instantiations. Our proposed framework compiles any post-quantum hash-and-sign signature scheme into a blind signature scheme. The resulting scheme ensures blindness by design and achieves one-more unforgeability, relying solely on the unforgeability of the underlying signature scheme and the random oracle model. To achieve this, we introduce the notion of commit-append-and-prove (CAP) systems, which generalizes traditional commit-and-prove systems by making their commitments updatable before proving. This building block allows us to overcome the technical challenges encountered when generalizing previous variants of Fischlin's framework to any hash-and-sign signature scheme. We provide efficient CAP system instantiations based on recent MPC-in-the-Head techniques. We showcase our framework by constructing blind versions of UOV and Wave, thereby introducing the first practical blind signatures based on multivariate cryptography and code-based cryptography. Our blind UOV signatures range from 3.8 KB to 11 KB, significantly outperforming previous post-quantum blind signatures, such as the 22 KB lattice-based blind signatures, which were the most compact until now.
Last updated:  2025-05-19
Shadowfax: A Deniability-Preserving AKEM Combiner
Phillip Gajland, Vincent Hwang, and Jonas Janneck
As cryptographic protocols transition to post-quantum security, most adopt hybrid solutions combining pre-quantum and post-quantum assumptions. However, this shift often introduces trade-offs in terms of efficiency, compactness, and in some cases, even security. One such example is deniability, which enables users, such as journalists or activists, to deny authorship of potentially incriminating messages. While deniability was once mainly of theoretical interest, protocols like X3DH, used in Signal and WhatsApp, provide it to billions of users. In the post-quantum setting, however, protocols like PQXDH, as well as others such as Apple’s iMessage with PQ3, do not support deniability. This work investigates how to efficiently preserve deniability in the post-quantum setting. To this end, we introduce two combiners for authenticated KEMs (AKEMs) at different levels of abstraction. First, at the highest level, we propose a black-box construction that combines two AKEMs, showing that deniability is preserved when both constituent schemes are deniable. Second, we present Shadowfax, a non-black-box combiner that integrates a pre-quantum NIKE, a post-quantum KEM, and a post-quantum ring signature. We demonstrate that Shadowfax ensures deniability in both dishonest and honest receiver settings. When instantiated, we rely on statistical security for the former, and on a pre- or post-quantum assumption in the latter. Finally, we provide an optimised, yet portable, implementation of a specific instantiation of Shadowfax yielding ciphertexts of 1 781 bytes and public keys of 1 449 bytes. Our implementation achieves competitive performance: encapsulation takes 1.9 million cycles and decapsulation takes 800 000 cycles on a Firestorm core running at 3GHz on an Apple M1 Pro.
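A common baseline for hybrid key derivation, which Shadowfax goes beyond by also preserving deniability, is to hash both shared secrets together with both ciphertexts. The label and field layout below are illustrative, not the Shadowfax construction:

```python
import hashlib

def _lp(x: bytes) -> bytes:
    # Length-prefix each field so distinct inputs cannot collide
    # under plain concatenation.
    return len(x).to_bytes(4, "big") + x

def combine_keys(k1: bytes, k2: bytes, ct1: bytes, ct2: bytes) -> bytes:
    """Hash-of-everything hybrid combiner: the derived session key
    remains secure as long as either ingredient key (e.g. the
    pre-quantum k1 or the post-quantum k2) is secure."""
    transcript = b"hybrid-kem-v1" + _lp(k1) + _lp(k2) + _lp(ct1) + _lp(ct2)
    return hashlib.sha3_256(transcript).digest()
```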
Last updated:  2025-05-19
An Efficient Variant of F4 Algorithm for Solving MQ Problem
Kosuke Sakata and Tsuyoshi Takagi
In this paper, we propose an enhanced variant of the F4 algorithm specifically designed for efficiently solving multivariate quadratic (MQ) problems, which are central to many post-quantum cryptographic schemes. Our approach overcomes a major inefficiency of conventional F4 by integrating a Hilbert-driven strategy that determines the optimal number of S-polynomials to generate at each degree, thereby reducing unnecessary zero reductions. We further introduce refined pair selection techniques that prioritize candidates yielding S-polynomials with smaller leading terms, which in turn minimizes the dimensions of intermediate matrices used during reduction. Experimental results show that our implementation outperforms state-of-the-art systems such as M4GB and Magma's F4 in both single-core and multi-core environments. Notably, our method sets new records in the Fukuoka MQ Challenge for Type VI problems, demonstrating the robustness and practical impact of our approach on highly challenging MQ instances.
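The Hilbert-driven idea can be illustrated with the classical degree-of-regularity estimate for semi-regular quadratic systems: the first non-positive coefficient of the series (1 - t^2)^m / (1 - t)^n. This is the standard heuristic from the Groebner-basis literature, not necessarily the authors' exact bookkeeping:

```python
from math import comb

def degree_of_regularity(n: int, m: int, max_deg: int = 100) -> int:
    """Index of the first non-positive coefficient of
    (1 - t^2)^m / (1 - t)^n, the classical estimate of the highest
    degree an F4/F5-style computation must reach for a semi-regular
    system of m quadratic equations in n variables."""
    # Coefficients of (1 - t^2)^m, truncated at max_deg.
    series = [0] * (max_deg + 1)
    for i in range(min(m, max_deg // 2) + 1):
        series[2 * i] = (-1) ** i * comb(m, i)
    # Dividing by (1 - t)^n is n successive prefix sums.
    for _ in range(n):
        for d in range(1, max_deg + 1):
            series[d] += series[d - 1]
    for d, c in enumerate(series):
        if c <= 0:
            return d
    return max_deg + 1
```

For m = n the series collapses to (1 + t)^n, whose first non-positive coefficient sits at degree n + 1.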
Last updated:  2025-05-19
On the Estonian Internet Voting System, IVXV, SoK and Suggestions
Shymaa M. Arafat
The Estonian i-voting experience is probably the richest to analyze: a country considered a pioneer in digitizing both the government and the private sector since 2001, and hence in digital voting since 2005, yet there are still complaints, critiques, and remarks to consider about the IVXV system. In this paper, we introduce a Systematization of Knowledge of the Estonian IVXV i-voting system and propose some added security enhancements. The presented SoK includes applications implemented by election observers in the 2023 and 2024 elections, which, to our knowledge, have never been mentioned or analyzed in academia before. The paper also updates the general knowledge about an extra right given to auditors (but not observers) in the June 2024 European election, recent improvements, and recent complaints. Finally, we discuss the current system status in the 2024 EP elections, propose our own suggestions for some remaining vulnerabilities, and then raise the inevitable question of the approaching quantum threat.
Last updated:  2025-05-19
PaCo: Bootstrapping for CKKS via Partial CoeffToSlot
Jean-Sébastien Coron and Tim Seuré
We introduce PaCo, a novel and efficient bootstrapping procedure for the CKKS homomorphic encryption scheme, where PaCo stands for “(Bootstrapping via) Partial CoeffToSlot”. At a high level, PaCo reformulates the CKKS decryption equation in terms of blind rotations and modular additions. This reformulated decryption circuit is then evaluated homomorphically within the CKKS framework. Our approach makes use of the circle group in the complex plane to simulate modular additions via complex multiplication, and utilizes alternative polynomial ring structures to support blind rotations. These ring structures are enabled by a variant of the CoeffToSlot operation, which we call a partial CoeffToSlot. This yields a new bootstrapping approach within CKKS, achieving a computational complexity which is logarithmic in the number of complex slots. We further introduce a parallelized variant that enables bootstrapping over all CKKS slots with enhanced throughput, highlighting PaCo’s suitability for practical and large-scale homomorphic applications. In addition to the bootstrapping technique itself, we develop several supporting tools — particularly in the context of bit-reversing and alternative ring structures for CKKS — which can be of independent interest to the community. Finally, a proof-of-concept implementation confirms that PaCo achieves performance competitive with state-of-the-art methods for CKKS bootstrapping.
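The circle-group trick mentioned above rests on the identity exp(2*pi*i*a/q) * exp(2*pi*i*b/q) = exp(2*pi*i*((a+b) mod q)/q), so modular addition becomes complex multiplication. A minimal numeric sketch with plain floats (no encryption involved; PaCo evaluates this homomorphically inside CKKS):

```python
import cmath

q = 17  # illustrative modulus

def encode(a: int) -> complex:
    """Embed a mod q as a point on the unit circle."""
    return cmath.exp(2j * cmath.pi * a / q)

def decode(z: complex) -> int:
    """Recover a mod q from its point on the circle."""
    return round(cmath.phase(z) / (2 * cmath.pi) * q) % q

# Modular addition becomes complex multiplication on the circle:
a, b = 11, 13
assert decode(encode(a) * encode(b)) == (a + b) % q
```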
Last updated:  2025-05-19
Threshold Computation in the Head: Improved Framework for Post-Quantum Signatures and Zero-Knowledge Arguments
Thibauld Feneuil and Matthieu Rivain
The MPC-in-the-Head paradigm is instrumental in building zero-knowledge proof systems and post-quantum signatures using techniques from secure multi-party computation. In this work, we extend and improve the recently proposed framework of MPC-in-the-Head based on threshold secret sharing, here called Threshold Computation in the Head. We first address some limitations of this framework, namely its overhead in the communication cost, its constraint on the number of parties and its degradation of the soundness. Our tweak of this framework makes it applicable to the previous MPCitH schemes (and in particular post-quantum signature candidates recently submitted to NIST) for which we obtain up to 50% timing improvements without degrading the signature size. Then we extend the TCitH framework to support quadratic (or higher degree) MPC round functions as well as packed secret sharing. We show the benefits of our extended framework for several applications. First we provide post-quantum zero-knowledge arguments for arithmetic circuits which improve the state of the art in the "small to medium size" regime. Then we apply our extended framework to derive improved variants of the MPCitH candidates submitted to NIST. For most of them, we save between 5% and 37% of the signature size. We further propose a generic way to build efficient post-quantum ring signatures from any one-way function. When applying our TCitH framework to this design to concrete one-way functions, the obtained scheme outperforms all the previous proposals in the state of the art. For instance, our scheme instantiated with MQ achieves sizes below 6 KB and timings around 10 ms for a ring of 4000 users. Finally, we provide exact arguments for lattice problems. Our scheme is competitive with state-of-the-art zero-knowledge lattice techniques and achieves proofs around 15 KB for LWE instances with similar security as Kyber512. 
We conclude our work by exhibiting strong connections between the TCitH framework and other proof systems (namely VOLE-in-the-Head and Ligero) which thus unifies different MPCitH-like proof systems under the same umbrella.
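The threshold secret sharing at the heart of the TCitH framework is Shamir's scheme. A minimal sketch over an illustrative prime field (the textbook construction, not the protocol itself):

```python
import random

P = 2**31 - 1  # prime field (illustrative)

def shamir_share(secret, t, n, rng=random.Random(42)):
    """Split `secret` into n shares; any t of them reconstruct it,
    while fewer than t reveal nothing."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def shamir_reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```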
Last updated:  2025-05-19
Distinguishing Full-Round AES-256 in a Ciphertext-Only Setting via Hybrid Statistical Learning
Gopal Singh
The security of block ciphers such as AES-128, AES-192, and AES-256 relies on the assumption that their ciphertext outputs are computationally indistinguishable from random permutations. While distinguishers have been proposed for reduced-round variants or under non-standard models such as known-key or chosen-key settings, no effective distinguisher has been demonstrated for the full-round AES ciphers in the standard secret-key model. This work introduces FESLA (Feature Enhanced Statistical Learning Attack), a hybrid statistical learning framework that integrates outputs from a suite of classical statistical tests with machine learning and deep learning classifiers to construct ciphertext-only distinguishers for AES-128, AES-192, and AES-256. In contrast to existing approaches based on handcrafted or bitwise features, FESLA aggregates intermediate statistical metrics as features, enabling the capture of persistent structural biases in ciphertext distributions. Experimental evaluation across multiple datasets demonstrates consistent 100% classification accuracy using Support Vector Machines, Random Forests, Multi-Layer Perceptron, Logistic Regression, and Naïve Bayes classifiers. Generalization and robustness are confirmed through k-fold cross-validation, including on previously unseen ciphertext samples. These results establish the first ciphertext-only distinguishers for full-round AES-128, AES-192, and AES-256 under the secret-key model, and underscore the potential of machine learning–augmented cryptanalysis based on principled statistical feature engineering.
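The general pattern of aggregating statistical-test outputs as features can be sketched as follows (a generic illustration with three simple statistics; whether any such features can actually separate full-round AES ciphertexts from random, as the paper claims, is exactly the kind of result that requires careful independent validation):

```python
import math
from collections import Counter

def stat_features(ciphertext: bytes) -> list:
    """Aggregate simple statistical-test outputs into a feature vector:
    (1) monobit bias, (2) chi-squared statistic of the byte histogram,
    and (3) Shannon entropy per byte. Such vectors would then be fed
    to a classifier (SVM, random forest, etc.)."""
    n = len(ciphertext)
    ones = sum(bin(b).count("1") for b in ciphertext)
    monobit_bias = ones / (8 * n) - 0.5
    counts = Counter(ciphertext)
    expected = n / 256
    chi2 = sum((counts.get(v, 0) - expected) ** 2 / expected
               for v in range(256))
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return [monobit_bias, chi2, entropy]
```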
Last updated:  2025-05-19
Relaxed Vector Commitment for Shorter Signatures
Seongkwang Kim, Byeonghak Lee, and Mincheol Son
MPC-in-the-Head (MPCitH) has recently gained traction as a foundation for post-quantum signature schemes, offering robust security without trapdoors. Despite its strong security profile, MPCitH-based schemes suffer from high computational overhead and large signature sizes, limiting their practical application. This work addresses these inefficiencies by relaxing vector commitments within MPCitH-based schemes. We introduce the concept of vector semi-commitment, which relaxes the binding property of traditional vector commitment. Vector semi-commitment schemes may allow an adversary to find more than one preimage of a commitment. We instantiate vector semi-commitment schemes in both the random oracle model and the ideal cipher model, leveraging recent optimizations of the GGM tree, such as the correlated GGM tree. We apply the ideal-cipher-based vector semi-commitment scheme to the BN++ signature scheme and prove it almost fully secure in the ideal cipher model. Implementing these improvements in the AIMer v2.0 signature scheme, we achieve up to 18% shorter signatures and up to 112% faster signing and verification speeds, setting new benchmarks for MPCitH-based schemes.
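The GGM tree mentioned above expands one root seed into many leaf seeds with a length-doubling PRG. A minimal sketch (SHA-256 as a stand-in PRG; deployed MPCitH schemes typically use fixed-key AES, and the correlated-GGM optimization differs in the details):

```python
import hashlib

def prg(seed: bytes) -> tuple:
    """Length-doubling PRG: one 16-byte seed -> two 16-byte children."""
    left = hashlib.sha256(b"L" + seed).digest()[:16]
    right = hashlib.sha256(b"R" + seed).digest()[:16]
    return left, right

def ggm_expand(root: bytes, depth: int) -> list:
    """Expand a root seed into 2^depth leaf seeds via the GGM tree.
    In MPCitH commitments each leaf seeds one simulated party; revealing
    the sibling seeds along one root-to-leaf path opens every leaf
    except the single unopened one."""
    level = [root]
    for _ in range(depth):
        nxt = []
        for s in level:
            nxt.extend(prg(s))
        level = nxt
    return level
```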
Last updated:  2025-05-19
Obfuscation of Unitary Quantum Programs
Mi-Ying (Miryam) Huang and Er-Cheng Tang
Program obfuscation aims to hide the inner workings of a program while preserving its functionality. In the quantum setting, recent works have obtained obfuscation schemes for specialized classes of quantum circuits. For instance, Bartusek, Brakerski, and Vaikuntanathan (STOC 2024) constructed a quantum state obfuscation scheme, which supports the obfuscation of quantum programs represented as quantum states for pseudo-deterministic quantum programs with classical inputs and outputs in the classical oracle model. In this work, we improve upon existing results by constructing the first quantum state obfuscation scheme for unitary (or approximately unitary) quantum programs supporting quantum inputs and outputs in the classical oracle model. At the core of our obfuscation scheme are two novel ingredients: a functional quantum authentication scheme that allows key holders to learn specific functions of the authenticated quantum state with simulation-based security, and a compiler that represents an arbitrary quantum circuit as a projective linear-plus-measurement quantum program described by a sequence of non-adaptive Clifford gates interleaved with adaptive and compatible measurements.
Last updated:  2025-05-19
SPEEDY: Caught at Last
Christina Boura, Patrick Derbez, Baptiste Germon, Rachelle Heim Boissier, and María Naya-Plasencia
SPEEDY is a family of ultra-low-latency block ciphers designed by Leander et al. in 2021. In 2023, Boura et al. proposed a differential attack on the full 7-round variant, SPEEDY-7-192. However, shortly thereafter, Beyne and Neyt demonstrated that this attack was invalid, as the dominant differential characteristic it relied upon had probability zero. A similar issue affects another differential attack proposed the same year by Wang et al., which also targets SPEEDY-7-192 and suffers from the same flaw. As a result, although SPEEDY-7-192 was initially believed to be broken, it remained unbroken in practice, and the question of finding a valid attack on this cipher remained an open problem. In this work, we resolve this problem by presenting the first valid differential attack on SPEEDY-7-192. We verify the validity of our distinguisher using the quasidifferential framework. Moreover, our search for the differential distinguisher is significantly more rigorous than in the previous works, allowing us to explore a larger portion of the search space. We also fully exploit probabilistic extensions of the distinguisher to identify optimal parameters for the key recovery step. Our attack on SPEEDY-7-192 has data and time complexities of 2^{186.36} encryption calls and a memory complexity of 2^{84} 192-bit states. In addition, we present differential attacks on 4-round SPEEDY-5-192 and 5-round SPEEDY-6-192 which currently represent the best attacks against these smaller variants.
Last updated:  2025-05-19
At the Top of the Hypercube -- Better Size-Time Tradeoffs for Hash-Based Signatures
Dmitry Khovratovich, Mikhail Kudinov, and Benedikt Wagner
Hash-based signatures have been studied for decades and have recently gained renewed attention due to their post-quantum security. At the core of the most prominent hash-based signature schemes, XMSS and SPHINCS+, lies a one-time signature scheme based on hash chains due to Winternitz. In this scheme, messages are encoded into vectors of positions (i.e., vertices in a hypercube) in the hash chains, and the signature contains the respective chain elements. The encoding process is crucial for the efficiency and security of this construction. In particular, it determines a tradeoff between signature size and computational costs. Researchers have been trying to improve this size-time tradeoff curve for decades, but all improvements have been arguably marginal. In this work, we revisit the encoding process with the goal of minimizing verification costs and signature sizes. As our first result, we present a novel lower bound for the verification cost given a fixed signature size. Our lower bound is the first to directly apply to general encodings including randomized, non-uniform, and non-injective ones. Then, we present new encodings and prove their security. Inspired by our lower bound, these encodings follow a counterintuitive approach: we map messages non-uniformly into the top layers of a much bigger hypercube than needed but the encoding itself has (hard to find) collisions. With this, we get a 20 % to 40 % improvement in the verification cost of the signature while keeping the same security level and the same size. Our constructions can be directly plugged into any signature scheme based on hash chains, which includes SPHINCS+ and XMSS.
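The hash-chain construction the abstract builds on can be conveyed by a toy Winternitz-style one-time signature. The parameters W and CHAINS, the use of SHA-256, and the omission of checksum chains (which real schemes need for security) are simplifications for illustration, not the paper's encoding:

```python
import hashlib

W = 16       # chain length (Winternitz parameter), chosen for this sketch
CHAINS = 8   # number of chains; real schemes add checksum chains (omitted)

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(x: bytes, steps: int) -> bytes:
    """Apply the hash `steps` times, walking up one chain."""
    for _ in range(steps):
        x = H(x)
    return x

def keygen(seed: bytes):
    sk = [H(seed + bytes([i])) for i in range(CHAINS)]
    pk = [chain(s, W - 1) for s in sk]   # public key = tops of the chains
    return sk, pk

def encode(msg: bytes):
    # Map the message to one chain position (hypercube vertex) per chain
    d = hashlib.sha256(msg).digest()
    return [d[i] % W for i in range(CHAINS)]

def sign(sk, msg):
    return [chain(sk[i], p) for i, p in enumerate(encode(msg))]

def verify(pk, msg, sig):
    pos = encode(msg)
    # Completing each chain from the signature must reach the public key
    return all(chain(sig[i], W - 1 - pos[i]) == pk[i]
               for i in range(CHAINS))
```

The encoding step (`encode`) is exactly the component the paper redesigns to improve the size-time tradeoff.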
Last updated:  2025-05-19
Bootstrapping GBFV with CKKS
Jaehyung Kim
The Generalized BFV [Geelen and Vercauteren; Eurocrypt'25] is an efficient fully homomorphic encryption scheme that supports integer computations over large cyclotomic moduli. However, the only known bootstrapping approach cannot support large precision as it uses BFV linear transformation as a subroutine. In this work, we introduce a GBFV bootstrapping that relies on CKKS bootstrapping as in the BFV bootstrapping from CKKS [Kim et al.; CCS'24]. The new bootstrapping can handle arbitrary precision, notably bootstrapping the CLPX scheme [Chen et al.; CT-RSA'18] for the first time, bootstrapping up to bits of plaintext modulus in less than seconds. In addition, we introduce conversions between GBFV and CKKS and discuss their impact.
Last updated:  2025-05-19
Helix: Scalable Multi-Party Machine Learning Inference against Malicious Adversaries
Yansong Zhang, Xiaojun Chen, Qinghui Zhang, Xudong Chen, and Ye Dong
With the growing emphasis on data privacy, secure multi-party computation has garnered significant attention for its strong security guarantees in developing privacy-preserving machine learning (PPML) schemes. However, only a few works address scenarios with a large number of participants. The state of the art by Liu et al. (LXY24, USENIX Security'24) first achieves a practical PPML protocol for up to 63 parties but is constrained to semi-honest security. Although naive extensions to the malicious setting are feasible, they would introduce significant overhead in verifying the correctness of multiplications. In this paper, we propose Helix, a scalable framework for maliciously secure PPML in the honest majority setting, aiming to enhance both the scalability and practicality of maliciously secure protocols. In particular, we first report a privacy leakage issue in LXY24 during prefix OR operations and introduce a round-optimized alternative based on a single-round vectorized four-input multiplication protocol. To mitigate the verification burden, we propose a set of lightweight compression protocols by exploiting reusability properties within the computation process, and seamlessly integrate them into existing verification techniques. Building on these enhancements, we further construct a practically efficient and general n-party computation protocol that serves as the cryptographic foundation for advanced PPML schemes. As a result, Helix achieves efficiency comparable to semi-honest frameworks. For instance, in 63-party neural network inference, Helix is only 1.9× (1.1×) slower in the online phase and 1.2× (1.1×) slower in preprocessing under LAN (WAN), compared to LXY24, in the best case.
Last updated:  2025-05-18
IO-Optimized Design-Time Configurable Negacyclic Seven-Step NTT Architecture for FHE Applications
Emre Koçer, Selim Kırbıyık, Tolun Tosun, Ersin Alaybeyoğlu, and Erkay Savaş
FHE enables computations on encrypted data, proving itself to be an essential building block for privacy-preserving applications. However, it involves computationally demanding operations such as polynomial multiplication, with the NTT being the state-of-the-art solution to perform it. Considering that most FHE schemes operate over the negacyclic ring of polynomials, we introduce a novel formulation of the hierarchical Four-Step NTT approach for the negacyclic ring, eliminating the need for pre- and post-processing steps found in the existing methods. To accelerate NTT operations, FPGA devices offer flexible and powerful computing platforms. We propose an FPGA-based, high-speed, parametric and fully pipelined architecture that implements the improved Seven-Step NTT algorithm, which builds upon the four-step algorithm. Our design supports a wide range of parameters, including ring sizes up to and modulus sizes up to -bit. We focus on achieving configurable throughput, as constrained by the bandwidth of HBM, which is an additional in-package memory common in high-end FPGA devices such as the Alveo U280. We aim to maximize throughput through an IO-parametric design on the Alveo U280 FPGA. The implementation results demonstrate that the average latency of our design for batch NTT operation is for the ring size and -bit width; a speed-up of compared to the current state-of-the-art designs.
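As a reference for what any negacyclic NTT pipeline must compute, here is schoolbook multiplication in Z_q[x]/(x^n + 1), where the wrap-around picks up a sign. This is only a correctness oracle against which an NTT-based implementation can be checked; the paper's Seven-Step hardware architecture is far more involved:

```python
def negacyclic_mul(a, b, q):
    """Schoolbook product in Z_q[x]/(x^n + 1): x^n wraps to -1."""
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                c[k] = (c[k] + a[i] * b[j]) % q
            else:
                # Reduction by x^n + 1 negates the wrapped coefficient
                c[k - n] = (c[k - n] - a[i] * b[j]) % q
    return c
```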
Last updated:  2025-05-18
Fast amortized KZG proofs
Dankrad Feist and Dmitry Khovratovich
In this note we explain how to compute n KZG proofs for a polynomial of degree n in time superlinear in n. Our technique is used in lookup arguments and vector commitment schemes.
Last updated:  2025-05-17
The Jacobi Factoring Circuit: Quantum Factoring with Near-Linear Gates and Sublinear Space and Depth
Gregory D. Kahanamoku-Meyer, Seyoon Ragavan, Vinod Vaikuntanathan, and Katherine Van Kirk
We present a compact quantum circuit for factoring a large class of integers, including some whose classical hardness is expected to be equivalent to RSA (but not including RSA integers themselves). Most notably, we factor n-bit integers of the form P^2 Q, with log Q = Θ(n^ε) for ε > 0, in space and depth sublinear in n (specifically, Õ(n^{2/3})) using Õ(n) quantum gates; for these integers, no known classical algorithms exploit the relatively small size of Q to run asymptotically faster than general-purpose factoring algorithms. To our knowledge, this is the first polynomial-time circuit to achieve sublinear qubit count for a classically-hard factoring problem. We thus believe that factoring such numbers has potential to be the most concretely efficient classically-verifiable proof of quantumness currently known. Our circuit builds on the quantum algorithm for squarefree decomposition discovered by Li, Peng, Du, and Suter (Nature Scientific Reports 2012), which relies on computing the Jacobi symbol in quantum superposition. The technical core of our contribution is a new space-efficient quantum algorithm to compute the Jacobi symbol of a mod b, in the regime where b is classical and much larger than a. Our circuit for computing the Jacobi symbol generalizes to related problems such as computing the greatest common divisor and modular inverses, and thus could be of independent interest.
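The classical operation that the circuit evaluates in superposition, the Jacobi symbol, can be computed with the standard quadratic-reciprocity algorithm. This is a plain classical reference implementation, not the paper's space-efficient quantum circuit:

```python
def jacobi(a: int, b: int) -> int:
    """Jacobi symbol (a/b) for odd b > 0, via quadratic reciprocity."""
    assert b > 0 and b % 2 == 1
    a %= b
    result = 1
    while a != 0:
        while a % 2 == 0:           # extract factors of two: (2/b) rule
            a //= 2
            if b % 8 in (3, 5):
                result = -result
        a, b = b, a                 # quadratic reciprocity swap
        if a % 4 == 3 and b % 4 == 3:
            result = -result
        a %= b
    return result if b == 1 else 0  # 0 when gcd(a, b) > 1
```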
Last updated:  2025-05-17
Learning With Quantization: A Ciphertext Efficient Lattice Problem with Tight Security Reduction from LWE
Shanxiang Lyu, Ling Liu, and Cong Ling
In this paper, we introduce Learning With Quantization (LWQ), a new problem related to the Learning With Errors (LWE) and Learning With Rounding (LWR) problems. LWQ provides a tight security reduction from LWE while enabling efficient ciphertext compression comparable to that of LWR. We adopt polar lattices to instantiate the quantizer of LWQ. Polar lattices are a specific instance of the classical Construction D, which utilizes a set of nested polar codes as component codes. Due to the polarization phenomenon of polar codes, the distribution of the quantization error converges to a discrete Gaussian. Moreover, the quantization algorithm can be executed in polynomial time. Our main result establishes a security reduction from LWE to LWQ, ensuring that the LWQ distribution remains computationally indistinguishable from the uniform distribution. The technical novelty lies in bypassing the noise merging principle often seen in the security reduction of LWR, instead employing a more efficient noise matching principle. We show that the compression rate is ultimately determined by the capacity of the "LWE channel," which can be adjusted flexibly. Additionally, we propose a full information-rate encryption framework based on LWQ, demonstrating its advantage over constructions based on LWE and quantized LWE. Our result answers affirmatively a question left open by Micciancio and Schultz (CRYPTO 2023).
Last updated:  2025-05-17
k-out-of-n Proofs and Application to Privacy-Preserving Cryptocurrencies
Min Zhang, Yu Chen, Xiyuan Fu, and Zhiying Cui
Cryptocurrencies enable transactions among mutually distrustful users, necessitating strong privacy, namely, concealing both transfer amounts and participants' identities, while maintaining practical efficiency. While UTXO-based cryptocurrencies offer mature solutions achieving strong privacy and supporting multi-receiver transfers, account-based cryptocurrencies currently lack practical solutions that simultaneously guarantee these properties. Aiming to close this gap, we propose a generic framework for account-based cryptocurrencies that achieves strong privacy and supports multi-receiver transfers, and then give a practical instantiation called Anonymous PGC. Experimental results demonstrate that, for a 64-sized anonymity set and 8 receivers, Anonymous PGC outperforms Anonymous Zether (IEEE S&P 2021), which offers limited anonymity and no multi-receiver support, achieving 2.6× faster transaction generation, 5.1× faster verification, and a 2.1× reduction in transaction size. Along the way of building Anonymous PGC, we present two novel k-out-of-n proofs. First, we generalize the Groth-Kohlweiss (GK) 1-out-of-n proof (EUROCRYPT 2015) to the k-out-of-n case, resolving an open problem concerning its natural generalization. In particular, the obtained k-out-of-n proof lends itself to seamless integration with range proofs, yielding an efficient k-out-of-n range proof, which demonstrates that the k witnesses among n instances lie in specific ranges. Second, we extend the Attema-Cramer-Fehr (ACF) k-out-of-n proof (CRYPTO 2021) to support distinct group homomorphisms, improving its expressiveness while reducing both prover and verifier complexities from quadratic to linear. We believe these two k-out-of-n proofs are of independent interest and will find more applications in privacy-preserving scenarios.
Last updated:  2025-05-17
Perfect 2-Party Computation from Additive Somewhat Homomorphic Encryption
Jonathan Trostle
Two-party computation has been an active area of research since Yao's breakthrough results on garbled circuits. We present secret key somewhat homomorphic additive schemes where the client has perfect privacy (server can be computationally unbounded). Our basic scheme is somewhat homomorphic additive and we extend it to support multiplication. The server handles circuit multiplication gates by returning the multiplicands to the client which updates the decryption key so that the original ciphertext vector includes the encrypted multiplication gate outputs. We give a 2-party computation (2PC) protocol that also incorporates server inputs where the client has perfect privacy. Server privacy holds against a computationally bounded adversary since it depends on the hardness of the subset sum problem. The 2PC protocol maintains circuit privacy except for leaking the number of multiplication gates to the client. Scaling the 2PC protocol via separate encryption parameters for smaller subcircuits allows the ciphertext size to remain constant as circuit size grows.
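The flavor of a secret-key additive scheme with perfect client privacy can be conveyed by a one-time-pad style toy: each key element is used once, ciphertexts add, and the client adjusts the key to decrypt sums. This illustrates only the additive-homomorphic interface; it is not the paper's subset-sum based construction, and the modulus M is an arbitrary assumption:

```python
import secrets

M = 2**32  # plaintext/ciphertext modulus, arbitrary for this sketch

def keygen(n):
    # One fresh pad element per message slot (perfect secrecy, one-time use)
    return [secrets.randbelow(M) for _ in range(n)]

def enc(key, msgs):
    return [(m + k) % M for m, k in zip(msgs, key)]

def eval_sum(cts):
    # Server-side homomorphic addition over ciphertexts
    return sum(cts) % M

def dec_sum(key, ct_sum):
    # Client subtracts the summed pad to recover the summed plaintext
    return (ct_sum - sum(key)) % M
```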
Last updated:  2025-05-17
A Fast, Efficient, Platform-Adaptive, and AIS-20/31 Compliant PLL-Based True Random Number Generator on an SoC FPGA
Oğuz Yayla and Yunus Emre Yılmaz
Phase-locked loops (PLLs) embedded within field-programmable gate arrays (FPGAs) or system-on-chip FPGAs (SoCs) present a promising methodology for the generation of random numbers. Their widespread integration across these platforms, combined with their isolated operational characteristics and robust entropy generation, as substantiated by prior research, positions PLL-based true random number generators (PLL-TRNGs) as highly effective solutions for this purpose. The present study focuses on the implementation of PLL-TRNGs utilizing the ZC702 Rev1.1 Evaluation Board, which incorporates the Zynq-7020 SoC from Xilinx. For experimental validation, a configuration involving three such boards is employed. The parameters governing the PLL-TRNG are optimized through a backtracking algorithm. Additionally, a novel, platform-adaptive technique is proposed to enhance the rate of random data bit generation while preserving entropy characteristics. The system's performance is rigorously evaluated against the criteria established by the German Federal Office for Information Security (BSI) AIS-20/31 Tests, with a detailed account of the implementation process provided. Furthermore, the study demonstrates the minimal resource utilization of the PLL-TRNG design within a SoC, thereby underscoring its suitability for Internet-of-Things (IoT) applications, where logic resources are often highly constrained.
Last updated:  2025-05-17
Design and Implementation of a Fast, Platform-Adaptive, AIS-20/31 Compliant PLL-Based True Random Number Generator on a Zynq 7020 SoC FPGA
Oğuz Yayla and Yunus Emre Yılmaz
Phase-locked loops (PLLs) integrated within field-programmable gate arrays (FPGAs) or System-on-Chip FPGAs (SoCs) represent a promising approach for generating random numbers. Their widespread deployment, isolated functionality within these devices, and robust entropy, as demonstrated in prior studies, position PLL-based true random number generators (PLL-TRNGs) as highly viable solutions for this purpose. This study explicitly examines PLL-TRNG implementations using the ZC702 Rev1.1 evaluation board featuring the Zynq 7020 SoC from Xilinx, utilizing a configuration involving three such boards for experimental validation. Parameters governing the PLL-TRNG are optimized using a backtracking algorithm. Additionally, a novel methodology is proposed to enhance the rate of random data bit generation while preserving entropy characteristics. Performance metrics are rigorously evaluated against the criteria set by the German Federal Office for Information Security (BSI) AIS-20/31 Tests, accompanied by detailed descriptions of the implementation process.
Last updated:  2025-05-17
Leveled Homomorphic Encryption over Composite Groups
Mahdi Mahdavi, Ehsan Meamari, Emad Heydari Beni, and Maryam Sheikhi
Homomorphic encryption is a powerful tool that enables computation on encrypted data without requiring decryption. While many Fully Homomorphic Encryption schemes, supporting arbitrary computations on encrypted data, have been developed using lattice-based and AGCD-based approaches, progress in composite groups has been limited to Partial Homomorphic Encryption schemes, which only support either addition or multiplication. This paper introduces the first leveled homomorphic encryption schemes over composite groups, based on the factoring problem, that achieve both multiplicative and additive homomorphic properties. Our novel design eliminates the need for the relinearization operation, which is common in LWE-based HE schemes, and removes the requirement for the circular security assumption. For applications where the traffic must be indistinguishable as encrypted, a scrambled scheme is designed using a labeling technique. While the initial schemes offer an expanded message space, the introduction of the double-sized message technique enables the encryption of messages up to twice the size without increasing the ciphertext size. Implementation results show that our schemes significantly outperform existing solutions, particularly in multiplication operations, achieving speeds up to 1000 times faster than well-known schemes such as BFV, BGV, CKKS, and TFHE.
Last updated:  2025-05-17
One-Way Homomorphic Encryption: A Composite Group Approach
Mahdi Mahdavi and Helena Rifà-Pous
Homomorphic Encryption (HE) is a fundamental Privacy-Enhancing Technology (PET) that enables computations on encrypted data without decryption. Despite its utility, designing an efficient and secure HE scheme is highly complex, requiring sophisticated cryptographic techniques. This paper introduces a novel approach to achieving homomorphic properties—supporting either one addition or one multiplication—within composite groups. While the proposed technique exhibits one-wayness, it has a good potential to serve as a foundational building block for constructing indistinguishable cryptosystems. This work contributes to the advancement of homomorphic encryption by providing a new perspective on secure computation within structured algebraic settings.
Last updated:  2025-05-17
Post-Quantum Privacy for Traceable Receipt-Free Encryption
Paola de Perthuis and Thomas Peters
Traceable Receipt-free Encryption (TREnc) has recently been introduced (Asiacrypt'22) as a verifiable public-key encryption primitive allowing to randomize ciphertexts in transit in order to remove any subliminal information up to a public trace which prevents the malleability of the underlying plaintexts. This unique feature generically enables the construction of voting systems by allowing voters to encrypt their votes, tracing whether a published ballot takes their choices into account, and preventing them from proving how they voted. While being a very promising primitive, the few existing TREnc mechanisms solely rely on discrete-logarithm related assumptions, making them vulnerable to the well-known record-now/decrypt-later attack in anticipation of quantum computers. In this article, we address this limitation by building the first TREnc which can be safely used today until the advent of quantum adversaries. More precisely, based on the observation that security must hold at the time the primitive is used while only privacy must persist into the post-quantum era, our solution relies on a mix of pre-quantum and post-quantum cryptography. As a first contribution, we generalize the original TREnc primitive, which is too restrictive to be easily compatible with built-in lattice-based semantically-secure encryption. Our more flexible model keeps all the ingredients generically implying receipt-free voting. Next, we design our construction with the following essential properties for trustworthy elections: (i) it is provably secure in the standard model; (ii) it relies on standard assumptions, namely Ring Learning With Errors (RLWE) coupled with pairing-based statistical zero-knowledge simulation-sound SXDH-based proofs; and (iii) it further enjoys a public-coin common reference string, removing the need of a trusted setup.