All papers in 2025 (971 results)

Last updated:  2025-05-28
Generalized BGV, BFV, and CKKS for Homomorphic Encryption over Matrix Rings
Bence Mali
Some of the most valuable applications of homomorphic encryption, such as encrypted machine learning inference, require efficient large-scale plaintext-ciphertext and ciphertext-ciphertext matrix multiplications. Current state-of-the-art techniques for matrix multiplication all build on the ability to pack many plaintexts into a single ciphertext and compute on them in a Single Instruction, Multiple Data (SIMD) manner. However, to fit matrix multiplication into this computational model, a large number of additional costly operations need to be performed, such as the rotation of elements between the plaintext slots. In this work, we propose an orthogonal approach to performing encrypted matrix operations with BGV-like encryption schemes, where the plaintext and ciphertext spaces are generalized to a matrix ring of arbitrary dimension. To deal with the inherent problem of noncommutativity in the case of matrix rings, we present a new superoperator technique to better represent linear and quadratic expressions in the secret key, which allows for the relinearization of ciphertexts after multiplication. The security of the modified encryption schemes is based on Module-LWE with module rank equal to the dimension of the matrices. With this construction, we demonstrate that Ring-LWE, Module-LWE, and LWE are potentially equally efficient for homomorphic encryption, both in terms of useful information density and noise growth, albeit for different matrix sizes.
Last updated:  2025-05-27
Sabot: Efficient and Strongly Anonymous Bootstrapping of Communication Channels
Christoph Coijanovic, Laura Hetz, Kenneth G. Paterson, and Thorsten Strufe
Anonymous communication is vital for enabling individuals to participate in social discourse without fear of marginalization or persecution. An important but often overlooked part of anonymous communication is the bootstrapping of new communication channels, generally assumed to occur out-of-band. However, if the bootstrapping discloses metadata, communication partners are revealed even if the channel itself is fully anonymized. We propose Sabot, the first anonymous bootstrapping protocol that achieves both strong cryptographic privacy guarantees and bandwidth-efficient communication. In Sabot, clients cooperatively generate a private relationship matrix, which encodes who wants to contact whom. Clients communicate with k ≥ 2 servers to obtain “their” part of the matrix and augment the received information using Private Information Retrieval (PIR) to learn about their prospective communication partners. Compared to previous solutions, Sabot achieves stronger privacy guarantees and reduces the bandwidth overhead by an order of magnitude.
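The multi-server PIR ingredient that Sabot builds on can be illustrated with the classical two-server XOR scheme for a bit database — a generic textbook sketch, not Sabot's actual protocol (all function names here are illustrative):

```python
import secrets

def pir_query(n, idx):
    """Client: additively share (over GF(2)) the indicator vector of idx."""
    q0 = [secrets.randbits(1) for _ in range(n)]
    q1 = q0.copy()
    q1[idx] ^= 1  # the two shares differ only at the target index
    return q0, q1

def pir_answer(db, q):
    """Server: XOR of the database bits selected by its query share."""
    ans = 0
    for bit, sel in zip(db, q):
        if sel:
            ans ^= bit
    return ans

def pir_reconstruct(a0, a1):
    # XORing the answers cancels everything except db[idx]
    return a0 ^ a1
```

Each server sees only a uniformly random selection vector, so neither learns the queried index; privacy holds as long as the two servers do not collude.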
Last updated:  2025-05-27
How to Verify that a Small Device is Quantum, Unconditionally
Giulio Malavolta and Tamer Mour
A proof of quantumness (PoQ) allows a classical verifier to efficiently test if a quantum machine is performing a computation that is infeasible for any classical machine. In this work, we propose a new approach for constructing PoQ protocols where soundness holds unconditionally assuming a bound on the memory of the prover, but otherwise no restrictions on its runtime. In this model, we propose two protocols: 1. A simple protocol with a quadratic gap between the memory required by the honest parties and the memory bound of the adversary. The soundness of this protocol relies on Raz's (classical) memory lower bound for matrix inversion (Raz, FOCS 2016). 2. A protocol that achieves an exponential gap, building on techniques from the literature on the bounded storage model (Dodis et al., Eurocrypt 2023). Both protocols are also efficiently verifiable. Despite having worse asymptotics, our first protocol is conceptually simple and relies only on arithmetic modulo 2, which can be implemented with one-qubit Hadamard and CNOT gates, plus a single one-qubit non-Clifford gate.
Last updated:  2025-05-27
Decentralized Data Archival: New Definitions and Constructions
Elaine Shi, Rose Silver, and Changrui Mu
We initiate the study of a new abstraction called incremental decentralized data archival. Specifically, imagine that there is an ever-growing, massive database such as a blockchain, a comprehensive human knowledge base like Wikipedia, or the Internet Archive. We want to build a decentralized archival of such datasets to ensure long-term robustness and sustainability. We identify several important properties that such a scheme should satisfy. First, to promote heterogeneity and decentralization, we want to encourage even weak nodes with limited space (e.g., users' home computers) to contribute. The minimum space requirement to contribute should be approximately independent of the data size. Second, if a collection of nodes together receive rewards commensurate with contributing some total number of blocks of space, then we want the following reassurances: 1) if this total is at least the database size, we should be able to reconstruct the entire dataset; and 2) these nodes should actually be committing roughly that much space in aggregate --- even when the total is much larger than the data size, the nodes should be storing redundant copies of the database, rather than storing just one copy and impersonating arbitrarily many pseudonyms to collect unbounded rewards. We propose new definitions that mathematically formalize the aforementioned requirements. We also devise an efficient construction in the random oracle model which satisfies the desired security requirements. Our scheme incurs only small audit cost, as well as small update cost for both the publisher and each node, up to polylogarithmic factors. Further, the minimum space provisioning required to contribute is as small as polylogarithmic. Our construction exposes several interesting technical challenges. Specifically, we show that a straightforward application of the standard hierarchical data structure fails, since both our security definition and the underlying cryptographic primitives we employ lack the desired compositional guarantees.
We devise novel techniques to overcome these compositional issues, resulting in a construction with provable security while still retaining efficiency. Finally, our new definitions also make a conceptual contribution and lay the theoretical groundwork for the study of incremental decentralized data archival. We raise several interesting open problems along this direction.
Last updated:  2025-05-27
Learning with Alternating Moduli, Arora-Ge over Composite Moduli, and Weak PRFs
Yilei Chen, Liheng Ji, and Wenjie Li
In TCC 2018, Boneh, Ishai, Passelègue, Sahai, and Wu proposed candidate weak and strong PRFs built by alternately evaluating linear functions over coprime moduli. Such PRFs can be evaluated by low-depth circuits and are MPC-friendly. However, the security of these PRFs has not been based on well-formed assumptions other than assuming that the PRF constructions themselves are secure. In this paper, we formalize a new assumption called Learning with Alternating Moduli (LAM). We show that over certain large moduli, the LAM assumption is as hard as the Learning with Errors (LWE) assumption. For LAM over constant moduli, we do not know how to base its hardness on the LWE assumption. Instead, we provide (i) polynomial-time attacks on LAM with constant prime-power moduli and certain constant non-prime-power moduli, and (ii) evidence of the sub-exponential hardness of LAM with other moduli by analyzing the effect of typical attacks. More specifically, we put forward two new attacks. The first attack is a recursive algorithm that solves LWE with certain constant composite moduli and error distributions. The algorithm extends the Arora-Ge algorithm for LWE from prime moduli to composite moduli, and it also solves LAM for certain parameters. The second attack is a polynomial-time attack that rules out the existence of weak PRFs of a related form for any prime modulus. Based on our studies, we propose candidate weak PRFs, for some distinct primes, based on LAM over constant moduli or on the Learning with Rounding (LWR) assumption over constant moduli. Compared to the weak PRF candidates by Boneh et al., our weak PRF candidates live in the same complexity class while having the advantage of being based on well-formed assumptions.
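The alternating-moduli structure underlying such candidates can be sketched as follows — a toy illustration in the style of the Boneh et al. candidates, where a linear map over one modulus is followed by a linear map over a coprime modulus (the matrix sizes and the specific mod-2/mod-3 layering here are illustrative, with parameters far too small to be secure):

```python
import secrets

def keygen(m, n):
    # Secret key: a random m x n binary matrix (toy dimensions only).
    return [[secrets.randbits(1) for _ in range(n)] for _ in range(m)]

def weak_prf(K, x):
    """Alternating moduli: a linear layer over Z_2, then a linear layer over Z_3.

    Each layer on its own is linear (and thus trivially learnable); the
    conjectured hardness comes from composing maps over coprime moduli.
    """
    y = [sum(k * xi for k, xi in zip(row, x)) % 2 for row in K]  # mod-2 layer
    return sum(y) % 3                                            # mod-3 layer
```

The output lives in {0, 1, 2}; evaluating the function costs only two shallow linear passes, which is what makes such candidates attractive for MPC.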
Last updated:  2025-05-27
Registered Functional Encryption for Pseudorandom Functionalities from Lattices: Registered ABE for Unbounded Depth Circuits and Turing Machines, and More
Tapas Pal, Robert Schädlich, and Erkan Tairi
Registered functional encryption (RFE) is a generalization of public-key encryption that enables computation on encrypted data (like classical FE), but without needing a central trusted authority. Concretely, the users choose their own public keys and register their keys together with a function with an (untrusted) key curator. The key curator aggregates all of the individual public keys into a short master public key, which serves as the public key of the FE scheme. Currently, we only know RFE constructions for restricted functionalities using standard assumptions, or for all circuits using powerful tools such as indistinguishability obfuscation, and only in the non-uniform model. In this work, we make progress on this front by providing the first lattice-based constructions of RFE for pseudorandom functionalities, where the model of computation is either non-uniform (unbounded depth circuits) or uniform (Turing machines). Intuitively, we call a functionality pseudorandom if the output of the circuit is indistinguishable from uniform for every input seen by the adversary. Security relies on LWE and a recently introduced primitive called pseudorandom FE (prFE), which currently can be instantiated from evasive LWE. We illustrate the versatility of these new functionalities for RFE by leveraging them to achieve key-policy and ciphertext-policy registered attribute-based encryption and registered predicate encryption schemes (KP-RABE, CP-RABE and RPE) for both unbounded depth circuits and Turing machines. Existing RABE constructions support only bounded depth circuits, and prior to our work there neither existed RABE for uniform models of computation nor RPE. As an appealing feature, all our constructions enjoy asymptotic optimality in the sense that their parameters depend neither on the length of public attributes nor the size of policies. 
Along the way, we also improve on the state of the art for classical attribute-based encryption (ABE) and predicate encryption (PE). Specifically, we obtain new constructions of KP-ABE, CP-ABE and PE for Turing machines with optimal asymptotic parameters. For KP-ABE, this is an improvement in terms of efficiency, whereas for CP-ABE and PE we are not aware of any prior purely lattice-based construction supporting Turing machines.
Last updated:  2025-05-27
Multiparty Homomorphic Secret Sharing and More from LPN and MQ
Geoffroy Couteau, Naman Kumar, and Xiaxi Ye
We give the first constructions of multiparty pseudorandom correlation generators, distributed point functions, and (negligible-error) homomorphic secret sharing for constant-degree polynomials for any number of parties without using LWE or iO. Our constructions are proven secure under the combination of LPN (with appropriate dimension, number of samples, and noise rate) and MQ (with appropriate numbers of variables and equations). As applications of our results, we obtain from the same assumptions secure multiparty computation protocols with sublinear communication and silent preprocessing, as well as multi-server private information retrieval with optimal download rate and low client-to-server communication.
Last updated:  2025-05-27
Multiparty FHE Redefined: A Framework for Unlimited Participants
Robin Jadoul, Barry van Leeuwen, and Oliver Zajonc
Multiparty fully homomorphic encryption (MPFHE) is a generalization of (multi-key) fully homomorphic encryption ((MK)FHE) that lives on the cusp between multiparty computation (MPC) and FHE, enabling computation over encrypted data using multiple keys. However, contrary to MKFHE, it seeks to reduce the noise inflation caused by the number of parties by allowing the parties to first compute shared data in MPC before executing the computation in FHE. Generally, MPFHE protocols have required ad-hoc constructions and adaptations of already existing protocols. In this work we present a new framework that standardizes the approach of MPFHE to allow the use of a broad spectrum of MPC and FHE protocols, while eliminating the noise inflation caused by the number of participating parties. This yields the first multiparty FHE protocol that allows an arbitrary number of participants. We then show a case study using the FINAL scheme: we reduce the required key material by 40-99.9% compared to FINALLY, the MKFHE variant of FINAL, by 8-71% compared to the AKÖ scheme, and by 65-70% compared to the Park-Rovira scheme. Moreover, we reduce the bootstrapping time for the AKÖ, Park-Rovira, and KMS schemes by 75-99.7%.
Last updated:  2025-05-28
TOOP: A transfer of ownership protocol over Bitcoin
Ariel Futoransky, Fadi Barbara, Ramses Fernandez, Gabriel Larotonda, and Sergio Lerner
The Transfer of Ownership Protocol (TOOP) enables a secure transfer of assets from Bitcoin to other blockchains and back. This is achieved through a committee-based validation protocol that requires only 1-out-of-n honest security. The protocol operates in distinct phases: the lock phase, where the initial setup takes place and individual assets are locked on Bitcoin, and the unlocking with ownership transfer phase, where the asset is transferred to a possibly different legitimate owner. This protocol solves a limitation of all existing BitVM-like protocols that restricts unlocking transfers to addresses known and preregistered during lock and setup. Accordingly, our protocol avoids the financially costly, regulatorily problematic, and congestion-prone front-and-reimburse paradigm. TOOP has been implemented for the first time in Cardinal, a protocol for wrapping Bitcoin Unspent Transaction Outputs (UTxOs) onto the Cardano blockchain, with Bitcoin Ordinals represented as Cardano Non-Fungible Tokens (NFTs).
Last updated:  2025-05-26
Permutation-Based Hashing with Stronger (Second) Preimage Resistance - Application to Hash-Based Signature Schemes
Siwei Sun, Shun Li, Zhiyu Zhang, Charlotte Lefevre, Bart Mennink, Zhen Qin, and Dengguo Feng
The sponge is a popular construction for hash function design. It operates with a b-bit permutation on a b-bit state, which is split into a c-bit inner part and an r-bit outer part. However, the security bounds of the sponge are most often dominated by the capacity c: if the length of the digest is n bits, the construction achieves min{c/2, n/2}-bit collision resistance and min{c/2, n}-bit second preimage resistance (and a slightly more complex but similar bound for preimage resistance). In certain settings, these bounds are too restrictive. For example, the recently announced Chinese call for a new generation of cryptographic algorithms expects hash functions with 1024-bit digests and 1024-bit preimage and second preimage resistance, rendering the classical sponge design basically unusable, except with an excessively large permutation. We present the SPONGE-DM construction to salvage the sponge in these settings. This construction differs from the sponge by evaluating the permutation during absorption in a Davies-Meyer mode. We also present SPONGE-EDM, which evaluates potentially round-reduced permutations during absorption in Encrypted Davies-Meyer mode, and a variant of SPONGE-EDM that optimizes the amount of feed-forward data. We prove that these constructions generically achieve the same collision resistance as the sponge, while their preimage and second preimage resistance approach the full capacity c, with the second preimage bound degrading in the maximum size of the first preimage in blocks. With such constructions, one could improve the security (resp., efficiency) without sacrificing the efficiency (resp., security) of hash-based signature schemes whose security relies solely on the (second) preimage resistance of the underlying hash functions. Also, one could use the 1600-bit Keccak permutation with suitable capacity and rate to achieve the required collision, preimage, and second preimage resistance without making extra permutation calls. 
To encourage further cryptanalysis, we propose two concrete families of instances of SPONGE-EDM (expected to be weaker than SPONGE-DM), using SHA3 and Ascon. Moreover, we concretely demonstrate the security and performance advantages of these instances in the context of hashing and hash-based signing.
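To illustrate the idea of a Davies-Meyer feed-forward during absorption, here is a toy sponge sketch; the permutation, state sizes, and padding below are hypothetical stand-ins chosen for readability and are not the paper's SPONGE-DM specification:

```python
RATE, CAP = 2, 4   # r and c in bytes; toy sizes only
B = RATE + CAP     # b = r + c

def perm(state: bytes) -> bytes:
    # Toy b-bit permutation: rounds of add-rotate-xor on the state viewed
    # as an integer mod 2**(8*B). Invertible, but NOT cryptographic.
    x = int.from_bytes(state, "big")
    mask = (1 << (8 * B)) - 1
    for r in range(3):
        x = (x + 0x9E3779B97F4A7C15) & mask
        x = ((x << 7) | (x >> (8 * B - 7))) & mask
        x ^= r
    return x.to_bytes(B, "big")

def sponge_dm_hash(msg: bytes, outlen: int) -> bytes:
    # Toy 10*-style padding to a multiple of the rate.
    msg = msg + b"\x80" + b"\x00" * (-(len(msg) + 1) % RATE)
    state = bytes(B)
    for i in range(0, len(msg), RATE):
        block = msg[i:i + RATE] + bytes(CAP)
        state = bytes(a ^ b for a, b in zip(state, block))  # absorb into outer part
        fwd = state                                          # feed-forward input
        state = bytes(a ^ b for a, b in zip(perm(state), fwd))  # Davies-Meyer: P(s) ^ s
    out = b""
    while len(out) < outlen:
        out += state[:RATE]
        state = perm(state)  # squeezing is unchanged from the plain sponge
    return out[:outlen]
```

The only difference from a plain sponge is the `perm(state) ^ state` feed-forward in the absorbing loop, which makes each absorption call non-invertible even though the underlying permutation is invertible.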
Last updated:  2025-05-26
An almost key-homomorphic post-quantum block cipher with key rotation and security update for long-term secret storage
Thomas Prévost, Bruno Martin, and Olivier Alibart
In this paper, we propose a new block cipher primitive, based on ring-LWE, which allows key rotation with a possible security update. This makes it possible to double the security of the ciphertext with each key rotation. Our scheme could therefore be used for long-term secret storage, allowing the security of the ciphertext to be adapted to the attacker's computing power, without the need for decryption. We propose an implementation of our cryptographic scheme and prove its security.
Last updated:  2025-05-26
Addendum to How Small Can S-boxes Be?
Yu Sun, Lixuan Wu, Chenhao Jia, Tingting Cui, Kai Hu, and Meiqin Wang
In ToSC 2025(1), Jia et al. proposed a SAT-aided automatic search tool for S-box design. Part of this tool's functionality is to search for implementations of an S-box with good area and gate-depth complexity. However, it is well known that gate-depth complexity cannot precisely reflect the latency of an implementation. To overcome this problem, Rasoolzadeh introduced the concept of latency complexity, a more precise metric for the real-world latency cost of implementing an S-box than gate-depth complexity. In this addendum, we adapt Jia et al.'s tool to prioritize latency as the primary metric and area as the secondary metric when searching for good implementations of existing S-boxes. The results show that the combination of Jia et al.'s tool and Rasoolzadeh's latency complexity can lead to lower-latency S-box implementations. For the S-boxes used in LBlock, Piccolo, SKINNY-64, RECTANGLE, PRESENT and TWINE, which are popular targets in this research line, we find new implementations with lower latency. We conducted synthesis comparisons of area and latency under multiple standard libraries, where our results consistently outperformed prior work in terms of latency. For example, for LBlock-S0, our solution reduces latency by around 50.0% ∼73.8% compared to previous implementations in the TSMC 90nm library with the latency-optimized synthesis option.
Last updated:  2025-05-26
A Framework for Advanced Signature Notions
Patrick Struck and Maximiliane Weishäupl
The beyond unforgeability features formalize additional security properties for signature schemes. We develop a general framework of binding properties for signature schemes that encompasses existing beyond unforgeability features and reveals new notions. Furthermore, we give new results regarding various transforms: We show that the transform by Cremers et al. (SP'21) achieves all of our security notions and provide requirements such that this is also the case for the transform by Pornin and Stern (ACNS'05). Finally, we connect our framework to unforgeability notions.
Last updated:  2025-05-26
Zero-Trust Post-quantum Cryptography Implementation Using Category Theory
Ilias Cherkaoui, Ciaran Clarke, Jerry Horgan, and Indrakshi Dey
This paper blends post-quantum cryptography (PQC) and zero trust architecture (ZTA) to secure access to AI models, formalized through the abstract mathematical lens of category theory. In this work, lattice-based PQC primitives are assigned ZTA components, including microsegmentation and context-aware authentication, leading to a visual compositional framework that describes cryptographic workflows as morphisms and trust policies as functors, showing how category theory allows for fine-grained policies and adaptive trust. This quantum-resistant algorithmic perspective eases protection against adversarial AI threats. The paper uses a concrete implementation to attest to the effectiveness of the theoretical contribution, rendering it a crypto-agile transition backed by categorical proofs for AI security.
Last updated:  2025-05-26
Efficient Pairings Final Exponentiation Using Cyclotomic Cubing for Odd Embedding Degrees Curves
Walid Haddaji, Loubna Ghammam, Nadia El Mrabet, and Leila Ben Abdelghani
In pairing-based cryptographic applications, final exponentiation with a large fixed exponent ensures distinct outputs for the Tate pairing and its derivatives. Despite notable advancements in optimizing elliptic curves with even embedding degrees, improvements for those with odd embedding degrees, particularly those divisible by 3, remain underexplored. This paper introduces three methods for applying cyclotomic cubing in the final exponentiation to enhance computational efficiency. The first allows for the execution of one cyclotomic cubing based on the final exponentiation structure. The second leverages the structure of some existing seeds to enable the use of cyclotomic cubing, and extends this strategy to generate new seeds. The third generates seeds with sparse ternary representations so that cyclotomic cubing can be applied as an alternative to squaring. These optimizations improve the performance of the final exponentiation for the optimal Ate pairing on the target elliptic curves of this study.
Last updated:  2025-05-26
Laurent Polynomial-Based Linear Transformations for Improved Functional Bootstrapping
San Ling, Benjamin Hong Meng Tan, Huaxiong Wang, and Allen Siwei Yang
Following Gentry's seminal work (STOC 2009), Fully Homomorphic Encryption (FHE) has made significant advancements and can even evaluate functions during the bootstrapping process, called functional bootstrapping. Recently, Liu and Wang (ASIACRYPT 2023) proposed a new approach to functional bootstrapping, which bootstrapped ciphertexts in 7ms amortized time. Their method packs the secret key of the TFHE cryptosystem into a ciphertext of the BFV cryptosystem and then performs the functional bootstrapping of TFHE within BFV. However, while this yields high amortized efficiency, it suffers from high latency and computational complexity of ciphertext-ciphertext multiplications due to the use of large BFV plaintext primes, which serve as the TFHE ciphertext modulus, to maximize the number of SIMD slots. In this work, we adapt their techniques to achieve lower-latency functional bootstrapping by relaxing the requirement on the BFV plaintext modulus from primes to prime powers. We first introduce an improved linear transformation stage, multiplying the Laurent-polynomial-packed secret key and ciphertexts to evaluate a linear map. With this, we reduce the number of operations needed to evaluate the linear phase of bootstrapping. We then generalize their functional bootstrapping procedure from prime plaintext spaces to prime-power plaintext spaces by leveraging the digit extraction algorithm, reducing the theoretical complexity in terms of ciphertext-ciphertext multiplications. Additionally, we enable a multi-valued bootstrapping scheme that permits the evaluation of multiple functions over a shared input. To the best of our knowledge, this is the first demonstration of such a method for TFHE ciphertexts that relies predominantly on BFV-based techniques. In our experiments, we achieve overall runtimes as low as 49.873s, representing a substantial latency reduction, while noting a slowdown in amortized performance.
Last updated:  2025-05-26
LEAF: A Low-Latency Evaluation Architecture for Feedforward Block in Privacy-Preserving Transformer Inference
Linru Zhang, Xiangning Wang, Xianhui Lu, Huaxiong Wang, and Kwok Yan Lam
Fully homomorphic encryption (FHE) is an appealing and promising solution for privacy-preserving transformer inference to protect users' privacy. However, the huge computational overhead makes it unrealistic to apply FHE in real-world transformers for large language models (LLM). Current FHE-based approaches to secure transformer inference face significant performance challenges, with total latency exceeding 5 hours for 32-input batches. The feedforward block, comprising a large-scale matrix multiplication followed by a GELU evaluation, is widely recognized as one of the most computationally intensive components in privacy-preserving transformer inference. In the state-of-the-art system NEXUS, evaluating the feedforward block incurs a total latency of 5,378 seconds, processing up to 32 inputs per batch. We aim to reduce the latency and propose LEAF, a low-latency evaluation architecture for the feedforward block. LEAF introduces a novel combination of fast matrix multiplication and an asymptotically efficient algorithm for computing non-polynomial activations. When evaluated on the BERT-base model, LEAF reduces total latency to 53.4 seconds, offering a speedup over the state-of-the-art method in the same environment. Our implementations are available.
Last updated:  2025-05-26
Towards Better Integral Distinguishers over F_p Based on Exact Coefficients of Monomials
Muzhou Li, Jiamin Cui, Longzheng Cui, Kai Hu, Chao Niu, and Meiqin Wang
Symmetric primitives used in multi-party computation, fully homomorphic encryption, and zero-knowledge proofs are often defined over a finite field F_p with p = 2 or an odd prime. The integral attack is one of the most effective methods against such primitives due to the common use of low-degree non-linear layers. This in turn highlights the importance of a deeper understanding of degree growth. For ciphers defined over F_2, numerous works have explored the growth of the algebraic degree. However, these methods cannot be directly applied to F_p. At CRYPTO 2020, Beyne et al. extended integral cryptanalysis to F_p by comparing the algebraic degree with the bound implied by the amount of data used. However, given that precise degree evaluation remains fundamentally challenging and often computationally infeasible, one may miss better integral distinguishers. In this paper, we present the first automatic search model over F_p based on the exact coefficients of the monomials contained in the algebraic representation. This model is constructed following the Computation-Traceback-Determine framework, where each coefficient is represented by several sums of multinomial coefficients under specific conditions. The existence of integral properties is then transformed into a determination of whether these sums consistently vanish. This determination is facilitated by four newly developed propositions based on Lucas' theorem. To demonstrate the effectiveness of our framework, we apply it to all variants of GMiMC. As a result, we achieve the best integral distinguishers for GMiMC-erf/-crf using large primes when they are used as block ciphers. For GMiMC-nyb/-mrf using 32/64-bit primes, our integral distinguishers cover more rounds than all other attacks. Meanwhile, all distinguishers we identified are no worse than the trivial ones predicted by considering only the maximal degree. This shows the necessity of considering exact coefficients when searching for integral distinguishers over F_p. 
Our framework is further employed to assess the security of two HADES designs: HadesMiMC and Poseidon2. The results reveal that the full rounds at the beginning and end of HADES provide sufficient resistance against integral cryptanalysis.
Last updated:  2025-05-26
Poseidon and Neptune: Gröbner Basis Cryptanalysis Exploiting Subspace Trails
Lorenzo Grassi, Katharina Koschatko, and Christian Rechberger
At the current state of the art, algebraic attacks are the most efficient method for finding preimages and collisions for arithmetization-oriented hash functions, such as the closely related primitives Poseidon/Poseidon2 and Neptune. In this paper, we revisit Gröbner basis (GB) attacks that exploit subspace trails to linearize some partial rounds, considering both sponge and compression modes. Starting from Poseidon's original security evaluation, we identified some inaccuracies in the model description that may lead to misestimated round requirements. Consequently, we reevaluate and improve the proposed attack strategy. We find that depending on the concrete instantiation, the original security analysis of Poseidon under- or overestimates the number of rounds needed for security. Moreover, we demonstrate that GB attacks leveraging subspace trails can outperform basic GB attacks for Poseidon/Poseidon2 and Neptune. We propose a variant of the previous attack strategy that exploits a crucial difference between Poseidon/Poseidon2 and Neptune: while Poseidon's inverse round functions have a high degree, Neptune's inverse external rounds maintain the same degree as the forward rounds. Using this new model, we demonstrate that Neptune's security in compression mode cannot be reduced to its security against the Constrained-Input-Constrained-Output (CICO) problem. To the best of our knowledge, this is the first time a concrete example has been provided where finding preimages is easier than solving the corresponding CICO problem. Our results emphasize the importance of considering the mode of operation in security analysis while confirming the overall security of Poseidon/Poseidon2 and Neptune against the presented algebraic attacks.
Last updated:  2025-05-29
Tight Multi-User Security of CCM and Enhancement by Tag-Based Key Derivation Applied to GCM and CCM
Yusuke Naito, Yu Sasaki, and Takeshi Sugawara
GCM and CCM are block cipher (BC) based authenticated encryption modes. In multi-user (mu) security, the total number of BC invocations by all users and the maximum number of BC invocations per user are crucial factors. For GCM, a tight mu-security bound has been identified in terms of the key and block sizes, the number of users, and the number of offline queries. In contrast, CCM's mu-security bound is still unclear. Two bounds have been derived by Luykx et al. (Asiacrypt 2017) and Zhang et al. (CCS 2024), respectively, but both are not tight and are worse than GCM's bound. Moreover, methods to enhance mu security without disruptive changes to the scheme have been considered for GCM, namely nonce randomization to improve offline security and nonce-based key derivation to improve online security, but their applicability to CCM has never been discussed. In this paper, we prove an improved mu-security bound for CCM that is tight and matches GCM's bound. We then prove that nonce randomization and nonce-based key derivation applied to CCM yield the same bounds as in the GCM case. An important takeaway is that CCM is now proved to be as secure as GCM. Moreover, we argue that these enhancements can be insufficient for some applications with massive data, and propose a new enhancement method called nonce-based and tag-based key derivation, applied to both GCM and CCM. We prove that the resulting schemes meet such real-world needs.
Last updated:  2025-05-25
A Provably Secure W-OTS based on MQ Problem
Zijun Zhuang, Yingjie Zhang, and Jintai Ding
In 2022, Antonov showed that SHA-256 does not satisfy a security property that SPHINCS+ needs, and a forgery attack based on this observation reduces its concrete classical security by approximately 40 bits. This illustrates a more general concern: the provable security of some hash-based signature schemes can be compromised when implemented with certain real-world hash functions, which motivates the need to design new functions with rigorous, provable security guarantees. Besides, it has been shown that, in moving from W-OTS to W-OTS+, the security requirement on the hash function can be relaxed from collision resistance to second-preimage resistance (SPR), which means that it is possible to use functions with the SPR property to instantiate the underlying function family in W-OTS+ and obtain a provably secure scheme. In this paper, we use multivariate quadratic functions (MQ functions) to instantiate the function family in W-OTS+, which yields the first provably secure MQ-based W-OTS+. To prove its security, we need to derive the SPR property of MQ functions; the key is to show the hardness of finding second preimages. Furthermore, we prove the multi-function, multi-target one-wayness (MM-OW) and the multi-function, multi-target second-preimage resistance (MM-SPR) of MQ functions, which implies the provable security of MQ-based W-OTS+ in the multi-user setting, on the condition that the number of users is suitably bounded in terms of the security parameter.
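For context, the Winternitz chaining that W-OTS-style schemes are built on can be sketched as follows — a generic textbook sketch that uses SHA-256 as a stand-in for the paper's MQ-based function family, and omits the checksum digits a real scheme requires to prevent digit-increment forgeries:

```python
import hashlib

W = 16  # Winternitz parameter: each chain signs one base-W message digit

def chain(x, steps):
    # Iterate the one-way function `steps` times.
    for _ in range(steps):
        x = hashlib.sha256(x).digest()
    return x

def keygen(seed, n_digits):
    # One secret chain start per message digit; public key is the chain end.
    sks = [hashlib.sha256(seed + bytes([i])).digest() for i in range(n_digits)]
    pks = [chain(sk, W - 1) for sk in sks]
    return sks, pks

def sign(sks, digits):
    # Reveal the d-th intermediate chain value for each digit d in [0, W-1].
    return [chain(sk, d) for sk, d in zip(sks, digits)]

def verify(pks, digits, sig):
    # Walk each revealed value the remaining W-1-d steps; it must hit the pk.
    return all(chain(s, W - 1 - d) == pk
               for s, d, pk in zip(sig, digits, pks))
```

Security rests only on the one-wayness and second-preimage resistance of the chaining function, which is exactly why instantiations with provably SPR functions (such as the MQ functions proposed here) are attractive.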
Last updated:  2025-05-25
Enhancing Provable Security and Efficiency of Permutation-based DRBGs
Woohyuk Chung, Seongha Hwang, Hwigyeom Kim, and Jooyoung Lee
We revisit the security analysis of the permutation-based deterministic random bit generator~(DRBG) discussed by Coretti et al. at CRYPTO 2019. Specifically, we prove that their construction, based on the sponge construction, and hence called Sponge-DRBG in this paper, is secure up to queries in the seedless robustness model, where is the required min-entropy and is the sponge capacity. This significantly improves the provable security bound from the existing to the birthday bound. We also show that our bound is tight by giving matching attacks. As the Multi-Extraction game-based reduction proposed by Chung et al. at Asiacrypt 2024 is not applicable to Sponge-DRBG in a straightforward manner, we further refine and generalize the proof technique so that it can be applied to a broader class of DRBGs to improve their provable security. We also propose a new permutation-based DRBG, dubbed POSDRBG, with almost the optimal output rate , outperforming the output rate of Sponge-DRBG, where is the output size of the underlying permutation and . We prove that POSDRBG is tightly secure up to queries. Thus, to the best of our knowledge, POSDRBG is the first permutation-based DRBG that achieves the optimal output rate of 1, while maintaining the same level of provable security as Sponge-DRBG in the seedless robustness model.
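A sponge-based DRBG absorbs seed material into a state and then squeezes output blocks from the rate part. The following toy sketch, with SHA-256 standing in for the public permutation and made-up rate/capacity parameters, illustrates the absorb/squeeze structure only; it is not Sponge-DRBG or POSDRBG and is not secure:

```python
import hashlib

RATE, CAPACITY = 16, 16  # toy parameters in bytes; state = rate + capacity

def toy_permutation(state: bytes) -> bytes:
    # Stand-in for a public permutation (NOT actually a permutation):
    # SHA-256 truncated to the state size, used only to show the data flow.
    return hashlib.sha256(state).digest()[:RATE + CAPACITY]

def sponge_drbg(seed: bytes, n_bytes: int) -> bytes:
    state = bytes(RATE + CAPACITY)
    # Absorb the seed into the rate part, block by block.
    for i in range(0, len(seed), RATE):
        block = seed[i:i + RATE].ljust(RATE, b"\x00")
        state = bytes(a ^ b for a, b in zip(state[:RATE], block)) + state[RATE:]
        state = toy_permutation(state)
    # Squeeze: output the rate part, permute, repeat.
    out = b""
    while len(out) < n_bytes:
        out += state[:RATE]
        state = toy_permutation(state)
    return out[:n_bytes]
```

The output rate of such a construction is rate/(rate+capacity) of the state per permutation call; POSDRBG's contribution, per the abstract, is pushing this rate to (almost) 1 while keeping the same provable security.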
Last updated:  2025-05-25
Breaking Poseidon Challenges with Graeffe Transforms and Complexity Analysis by FFT Lower Bounds
Ziyu Zhao and Jintai Ding
Poseidon and Poseidon2 are cryptographic hash functions designed for efficient zero-knowledge proof protocols and have been widely adopted in Ethereum applications. To encourage security research, the Ethereum Foundation announced a bounty program in November 2024 for breaking the Poseidon challenges, i.e., solving the CICO (Constrained Input, Constrained Output) problems for round-reduced Poseidon constructions. In this paper, we explain how to apply the Graeffe transform to univariate polynomial solving, enabling efficient interpolation attacks against Poseidon. We provide open-source code and detail our approach for solving several challenges valued at $20,000 in total. Compared to existing attacks, ours improves wall time and memory usage by factors of 2^{13} and 2^{4.5}, respectively. For all challenges we solved, the cost of memory access turns out to be an essential barrier, which makes the security margin much larger than expected. We prove that the memory access cost for FFT grows as the 4/3-power of the input size, up to a logarithmic factor. This indicates that the commonly used pseudo-linear estimate may be overly conservative. This is very different from multivariate equation solving, whose main bottleneck is linear algebra over finite fields. Thus, it might be preferable to choose parameters such that the best known attack is interpolation, as it presents more inherent hardness.
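One Graeffe step maps a polynomial to one whose roots are the squares of the original roots, which is the core tool the paper applies to univariate solving. A minimal sketch over a toy prime field (the modulus and the naive quadratic-time multiplication are illustrative assumptions; real attacks would use FFT-based multiplication over an NTT-friendly field):

```python
P = 2**31 - 1  # toy Mersenne prime field, for illustration only

def poly_mul(a, b):
    # Naive multiplication of coefficient lists (constant term first).
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % P
    return res

def graeffe(p):
    # One Graeffe step: returns q whose roots are {a^2 : p(a) = 0},
    # via q(x^2) = (-1)^deg(p) * p(x) * p(-x); odd coefficients cancel.
    n = len(p) - 1
    pm = [(c if i % 2 == 0 else -c) % P for i, c in enumerate(p)]  # p(-x)
    prod = poly_mul(p, pm)
    sign = P - 1 if n % 2 else 1
    return [(sign * c) % P for c in prod[::2]]

# (x - 3)(x - 5) = x^2 - 8x + 15 has roots 3 and 5;
# after one Graeffe step the roots become 9 and 25.
p = [15, (-8) % P, 1]
q = graeffe(p)
```

Iterating this step raises the roots to high powers, which is what makes Graeffe-based root-finding effective for the high-degree univariate polynomials arising from CICO instances.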
Last updated:  2025-05-24
Almost-Total Puzzles and Their Applications
Xiao Liang, Omkant Pandey, Yuhao Tang, and Takashi Yamakawa
Public-coin protocols are cryptographic protocols in which all messages sent by a specific party (typically the receiver or verifier) consist solely of random bits. These protocols have been extensively studied due to their advantageous properties in several scenarios, such as the parallel repetition of interactive arguments and the design of secure multi-party computation with low round complexity, among others. Curiously, constructions of public-coin protocols remain limited, particularly when optimization is sought in additional factors like round complexity or hardness assumptions. We introduce the concept of almost-total puzzles, a novel cryptographic primitive characterized by two key properties: (i) hardness against any efficient adversary, and (ii) an "almost-total" guarantee of the existence of solutions, even when the puzzle generator is malicious. We demonstrate that this primitive can be derived from one-way functions in a public-coin manner, requiring only two rounds. By leveraging this primitive, we obtain a family of new results in both the classical and post-quantum settings, based on the of (post-quantum) one-way functions, including: - five-round post-quantum extractable commitments and witness-indistinguishable arguments of knowledge, where the (knowledge) extractors achieve the () simulation proposed by Lombardi, Ma, and Spooner [FOCS'22]; - five-round classical extractable commitments that ; - five-round classical delayed-input strong witness-indistinguishable arguments of knowledge, and delayed-input witness-hiding arguments of knowledge; - the five-round post-quantum analogue of the last item, but with the difference that (1) the input can be delayed until the third round, and (2) post-quantum arguments of knowledge are again defined w.r.t. -simulation; - -round post-quantum non-malleable commitments.
Last updated:  2025-05-24
Resolving the Efficiency-Utility Dilemma of Threshold Linearly Homomorphic Encryption via Message-Space Adapter
Yijia Chang, Rongmao Chen, Chao Lin, Songze Li, and Xinyi Huang
Threshold linearly homomorphic encryption (ThLHE) is a useful cryptographic tool for secure computation in multi-party settings, with applications in electronic voting, secure multiparty computation (MPC), and beyond. Although ThLHE offers significant advantages such as low communication overhead, its adoption in modern systems is hindered by a critical dilemma between efficiency and utility. Precisely, existing ThLHE schemes either suffer from high decryption complexity—typically or worse for parties—or impose extra restrictions on the message space or participant set, limiting their practicality in large-scale and dynamic settings. In this work, we resolve this efficiency-utility dilemma for ThLHE by introducing a novel primitive termed message-space adapter (MeSA). By developing a lattice-based MeSA for exponential ElGamal (Exp-ElGamal), we mitigate the small-message restriction of Exp-ElGamal while preserving its efficient threshold decryption. This leads to the design of the first ThLHE scheme that achieves quasi-linear decryption complexity without restrictions on the message space or participant set. We implement a prototype of this new ThLHE scheme and validate the quasi-linear growth of its running time with respect to . Beyond resolving this dilemma, we further extend the applications of our new ThLHE scheme. Specifically, we apply it to construct the first multi-party fully homomorphic encryption scheme with quasi-linear computation complexity and constant communication complexity, while supporting arbitrary threshold and dynamic participant set. This demonstrates the extra benefits of our ThLHE scheme with broader applicability.
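The small-message restriction of exponential ElGamal mentioned above stems from decryption having to solve a discrete logarithm: the message sits in the exponent, which enables additive homomorphism but forces a brute-force (or baby-step giant-step) search at the end. A toy Python sketch with insecure, illustrative parameters (the modulus, base, and function names are our assumptions, not the paper's MeSA construction):

```python
import random

p = 2**31 - 1            # toy Mersenne prime modulus; NOT secure parameters
g = 7                    # fixed base, treated as a generator for illustration

sk = random.randrange(2, p - 1)
pk = pow(g, sk, p)

def encrypt(m, r=None):
    # Exp-ElGamal: the message is encoded as g^m, so ciphertexts add
    # under componentwise multiplication.
    r = r if r is not None else random.randrange(2, p - 1)
    return (pow(g, r, p), (pow(g, m, p) * pow(pk, r, p)) % p)

def add(c1, c2):
    # Homomorphic addition of the underlying plaintexts.
    return ((c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p)

def decrypt_small(c, bound=10000):
    # Decryption recovers g^m, then must solve a discrete log by brute
    # force -- hence the small-message restriction the paper addresses.
    gm = (c[1] * pow(c[0], p - 1 - sk, p)) % p
    acc = 1
    for m in range(bound):
        if acc == gm:
            return m
        acc = (acc * g) % p
    return None

c = add(encrypt(12), encrypt(30))
assert decrypt_small(c) == 42
```

The paper's MeSA primitive, per the abstract, lifts this message-space limitation while keeping the efficient threshold decryption that Exp-ElGamal admits; the sketch above only shows why the limitation exists.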
Last updated:  2025-05-29
Quantum Rewinding for IOP-Based Succinct Arguments
Alessandro Chiesa, Marcel Dall'Agnol, Zijing Di, Ziyi Guan, and Nicholas Spooner
We analyze the post-quantum security of succinct interactive arguments constructed from interactive oracle proofs (IOPs) and vector commitment schemes. Specifically, we prove that an interactive variant of the *BCS transformation* is secure in the standard model against quantum adversaries when the vector commitment scheme is collapse binding. Prior work established the post-quantum security of Kilian's succinct interactive argument, a special case of the BCS transformation for one-message IOPs (i.e., PCPs). That analysis is inherently limited to one message because the reduction, like all prior quantum rewinding reductions, aims to extract classical information (a PCP string) from the quantum argument adversary. Our reduction overcomes this limitation by instead extracting a *quantum algorithm* that implements an IOP adversary; representing such an adversary classically may in general require exponential complexity. Along the way we define *collapse position binding*, which we propose as the "correct" definition of collapse binding for vector commitment schemes, eliminating shortcomings of prior definitions. As an application of our results, we obtain post-quantum secure succinct arguments, in the standard model (no oracles), with the *best asymptotic complexity known*.
Last updated:  2025-05-24
Logup*: faster, cheaper logup argument for small-table indexed lookups
Lev Soukhanov
The logup argument (in its modern GKR version, as described in eprint:2023/1284) is a logarithmic-derivative-based unindexed lookup argument. An indexed lookup argument can be constructed from an unindexed one using a standard trick. In this short informal note, we explain a different way of obtaining an indexed lookup from logup, which does not commit to any additional arrays of the size of the indexing array. That makes it particularly amenable to lookups in small tables (giving, to our knowledge, the first argument with this property). Additionally, this argument is not subject to the numerator overflow issue that requires the additional mitigation described in eprint:2024/2067. Improvements to the SPARK / Lasso protocols are also discussed.
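The logarithmic-derivative identity underlying logup can be sketched in a few lines: a witness column lies in a table iff the sum of inverses 1/(x - a_i) over the witness equals the multiplicity-weighted sum over the table, with overwhelming probability at a random point x. This is an illustrative check only (the field and names are our assumptions), not the note's protocol:

```python
from collections import Counter
import random

P = 2**61 - 1  # illustrative prime field (Mersenne prime)

def logup_check(witness, table, x):
    # Logarithmic-derivative identity: sum_i 1/(x - a_i) over the witness
    # equals sum_j m_j/(x - t_j) over the table, where m_j is the
    # multiplicity of t_j in the witness. At a random x this holds
    # (w.h.p.) iff every witness value is in the table.
    inv = lambda v: pow(v % P, P - 2, P)  # Fermat inverse
    lhs = sum(inv(x - a) for a in witness) % P
    m = Counter(witness)
    rhs = sum(m[t] * inv(x - t) for t in table) % P
    return lhs == rhs

table = [1, 2, 3, 4]
x = random.randrange(5, P)
assert logup_check([2, 2, 4], table, x)
assert not logup_check([2, 7], table, x)
```

In a real protocol the sums are computed over committed columns and the identity is enforced inside the proof system; the indexed variant in this note additionally ties each lookup to its index without committing an extra index-sized array.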
Last updated:  2025-05-23
Quantum Security Analysis of the Key-Alternating Ciphers
Chen Bai, Mehdi Esmaili, and Atul Mantri
In this work, we study the quantum security of key-alternating ciphers (KAC), a natural multi-round generalization of the Even–Mansour (EM) cipher underlying many block cipher constructions, including AES. While the classical security of KAC and the quantum security of the -round KAC (i.e. Even-Mansour) cipher are well understood, the quantum resistance of multi-round KAC remains largely unexplored. We focus on the -round KAC construction, defined using public -bit permutations and keys , , and . Our main contributions are as follows: 1. Quantum Lower Bounds. We provide the first formal analysis showing that a -round KAC is quantum-secure in both the and models. Specifically, in the model, a (non-adaptive) adversary must make at least quantum queries to the public permutations and at least classical queries to the cipher in order to distinguish it from a random permutation (in contrast to the classical lower bound of queries). As a corollary, we show that in the model, a (non-adaptive) adversary requires quantum queries. To achieve this result, we employ the quantum hybrid method along with recently proposed lifting theorems in the ideal cipher and random permutation oracle models. 2. Quantum Key-Recovery Attack. We give the first nontrivial quantum key-recovery attack on multi-round KAC in the model where the adversary has quantum access to all of the public permutations. Our quantum attack applies to any -round KAC and achieves quantum query complexity , where , improving over the best known classical bound of , where , from Bogdanov et al. (EUROCRYPT 2012). The attack leverages a novel application of quantum walk algorithms specifically adapted to the KAC structure. 3. The Model. To bridge the classical and settings, we introduce the , in which the adversary has quantum superposition access to at most one permutation. This model is crucial for our lower bound and supports key-recovery attacks similar to those in Q1, using fewer quantum resources. We believe this model is of independent interest.
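The t-round KAC construction discussed above can be sketched concretely with toy parameters (a 16-bit block and seeded random permutations, both illustrative assumptions; with t = 1 this is exactly the Even-Mansour cipher):

```python
import random

N = 16  # toy block size in bits, for illustration only

def random_permutation(seed):
    # A public random permutation of {0, ..., 2^N - 1}, as a lookup table.
    rng = random.Random(seed)
    table = list(range(2**N))
    rng.shuffle(table)
    return table

def kac_encrypt(perms, keys, x):
    # t-round key-alternating cipher:
    #   y = k_t XOR P_t( ... P_2( k_1 XOR P_1( k_0 XOR x ) ) ... )
    y = x ^ keys[0]
    for P, k in zip(perms, keys[1:]):
        y = P[y] ^ k
    return y

def kac_decrypt(perms, keys, y):
    # Invert each permutation table, then undo the rounds in reverse.
    invs = []
    for P in perms:
        inv = [0] * (2**N)
        for i, v in enumerate(P):
            inv[v] = i
        invs.append(inv)
    x = y
    for inv, k in zip(reversed(invs), reversed(keys[1:])):
        x = inv[x ^ k]
    return x ^ keys[0]

t = 2
perms = [random_permutation(s) for s in range(t)]
keys = [random.Random(99 + i).randrange(2**N) for i in range(t + 1)]
c = kac_encrypt(perms, keys, 0x1234)
assert kac_decrypt(perms, keys, c) == 0x1234
```

The security questions in the paper concern how many (quantum or classical) queries to the public permutations and to the keyed cipher are needed to distinguish this construction from a random permutation or to recover the keys.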
Last updated:  2025-05-23
Succinct Witness Encryption for Batch Languages and Applications
Lalita Devadas, Abhishek Jain, Brent Waters, and David J. Wu
Witness encryption allows one to encrypt a message to an relation and a statement . The corresponding decryption key is any valid witness . In a succinct witness encryption scheme, we require that the size of the ciphertext be sublinear in the size of the relation. Currently, all realizations of succinct witness encryption for rely on strong assumptions such as pseudorandom obfuscation, extractable witness encryption, or differing-inputs obfuscation. Notably, none of these notions are known from standard assumptions. In this work, we consider a relaxation of succinct witness encryption for to the setting of batch . In this setting, one encrypts to an relation together with statements . In the basic version, one can decrypt if they have a witness for all statements. The succinctness requirement is that the size of the ciphertext should be sublinear in the number of instances , but is allowed to grow with the size of the relation (i.e., the size of a single instance). More generally, we can also impose a (monotone) policy over the instances. In this case, decryption is possible only if there exists such that . In this work, we initiate a systematic study of succinct witness encryption for batch languages. We start with two simple constructions that support CNF and DNF policies based on plain witness encryption in conjunction with a somewhere statistically sound batch argument for or a function-binding hash function. Then, using indistinguishability obfuscation, we show how to support policies that can be computed by read-once bounded-space Turing machines. The latter construction is in fact a unique witness map for monotone-policy batch , and as such, also gives a SNARG for monotone-policy batch where the size of the common reference string is sublinear in the number of instances. Finally, we demonstrate some immediate applications of succinct witness encryption for batch languages. 
We construct new succinct computational secret sharing schemes for CNFs, DNFs, and weighted threshold policies. We also show how to build distributed monotone-policy encryption, a notion that generalizes recent trustless primitives like distributed broadcast encryption and threshold encryption with silent setup.
Last updated:  2025-05-28
On the Adaptive Security of Key-Unique Threshold Signatures
Elizabeth Crites, Chelsea Komlo, and Mary Maller
In this work, we investigate the security assumptions required to prove the adaptive security of threshold signatures. Adaptive security is a strong notion of security that allows an adversary to corrupt parties at any point during the execution of the protocol, and is of practical interest due to recent standardization efforts for threshold schemes. Towards this end, we give two different impossibility results. We begin by formalizing the notion of a key-unique threshold signature scheme, where public keys have a unique correspondence to secret keys and there is an efficient algorithm for checking that public keys are well-formed. Key-uniqueness occurs in many threshold schemes that are compatible with standard, single-party signatures used in practice, such as BLS, ECDSA, and Schnorr signatures. Our first impossibility result demonstrates that it is impossible to prove the adaptive security of any key-unique threshold signature scheme under any non-interactive computational assumption for a broad class of reductions, in the range , where is the total number of parties, is the number of corrupted parties, and is the threshold. We begin by ruling out full adaptive security (i.e., corruptions) for key-unique threshold signatures under non-interactive computational assumptions, including, but not limited to, the discrete logarithm (DL), computational Diffie-Hellman (CDH), and q-Strong Diffie-Hellman (q-SDH) assumptions. We then generalize this impossibility result for all such that . Our second impossibility result applies specifically to key-unique threshold Schnorr signatures, currently an active area of research. We demonstrate that, even under the interactive computational assumptions One-More Discrete Logarithm (OMDL) and Algebraic OMDL (AOMDL), it is impossible to prove adaptive security for in the programmable ROM with rewinding. Taken together, our results underscore the difficulty of achieving adaptive security for key-unique threshold signatures. 
However, we believe this work can open a new line of research, by indicating assumptions and properties to aim for when constructing adaptively secure threshold schemes.
Last updated:  2025-05-23
On the (in)security of Proofs-of-Space based Longest-Chain Blockchains
Mirza Ahad Baig and Krzysztof Pietrzak
The Nakamoto consensus protocol underlying the Bitcoin blockchain uses proof of work as a voting mechanism. Honest miners who contribute hashing power towards securing the chain try to extend the longest chain they are aware of. Despite its simplicity, Nakamoto consensus achieves meaningful security guarantees assuming that at any point in time, a majority of the hashing power is controlled by honest parties. This also holds under "resource variability", i.e., if the total hashing power varies greatly over time. Proofs of space (PoSpace) have been suggested as a more sustainable replacement for proofs of work. Unfortunately, no construction of a "longest-chain" blockchain based on PoSpace that is secure under dynamic availability is known. In this work, we prove that without additional assumptions no such protocol exists. We quantify this impossibility result exactly by proving a bound on the length of the fork required for double spending as a function of the adversarial capabilities. This bound holds for any chain selection rule, and we also show a chain selection rule (albeit a very strange one) that almost matches this bound. Concretely, we consider a security game in which the honest parties at any point control times more space than the adversary. The adversary can change the honest space by a factor with every block (dynamic availability), and "replotting" the space (which allows answering two challenges using the same space) takes as much time as blocks. We prove that no matter what chain selection rule is used, in this game the adversary can create a fork of length that will be picked as the winner by the chain selection rule. We also provide an upper bound that matches the lower bound up to a factor . There exists a chain selection rule (albeit a very strange one) which in the above game requires forks of length at least . Our results show the necessity of additional assumptions to create a secure PoSpace-based longest-chain blockchain. The Chia network uses a verifiable delay function in addition to PoSpace. Our bounds show that such an additional primitive is necessary.
Last updated:  2025-05-24
Proof of Exponentiation: Enhanced Prover Efficiency for Algebraic Statements
Zhuo Wu, Shi Qi, Xinxuan Zhang, Yi Deng, Kun Lai, and Hailong Wang
Recent years have seen the widespread adoption of zkSNARKs constructed over small fields, including but not limited to, the Goldilocks field, small Mersenne prime fields, and tower of binary fields. Their appeal stems primarily from their efficacy in proving computations with small bit widths, which facilitates efficient proving of general computations and offers significant advantages, notably yielding remarkably fast proving efficiency for tasks such as proof of knowledge of hash preimages. Nevertheless, employing these SNARKs to prove algebraic statements (e.g., RSA, ECDSA signature verification) presents efficiency challenges. To address this problem, we first define a new circuit model: arithmetic circuits with additional \textit{exponentiation gates}. These gates serve as fundamental building blocks for establishing more intricate algebraic relations. Then we present a \textit{Hash-committed Commit-and-Prove (HCP)} framework to construct Non-interactive Zero-knowledge (NIZK) proofs for the satisfiability of these circuits. Specifically, when proving knowledge of group exponentiations in discrete logarithm hard groups and RSA groups, compared to verifying complex group exponentiations within SNARK circuits, our approach requires proving only more lightweight computations within the SNARK, such as zk-friendly hash functions (e.g., Poseidon hash function). The number of these lightweight computations depends solely on the security parameter. This differentiation leads to substantial speedups for the prover relative to direct SNARK methods, while maintaining competitive proof size and verification cost.
Last updated:  2025-05-23
Special Genera of Hermitian Lattices and Applications to HAWK
Guilhem Mureau
In its decisional form, the module-Lattice Isomorphism Problem (decision module-LIP) first received attention in a paper by Ling, Liu and Mendelsohn. The authors gave a polynomial-time algorithm to distinguish spinor genera within the genus of a quadratic binary -lattice, assuming that is a principal ideal domain. However, this algorithm would not impact cryptographic schemes based on decision module-LIP for lattices such as those employed in HAWK, i.e., for binary -lattices equipped with a Hermitian form (with a cyclotomic number field). Motivated by HAWK's framework, we investigate a concept that serves as an analogue of the spinor genus for Hermitian lattices, called the special genus. This notion was studied by Shimura, who provided a complete set of invariants for describing special genera. Building on this result, we propose an algorithm to determine whether two Hermitian lattices belong to the same special genus. Specifically, for HAWK's lattice and its siblings, our algorithm runs in classical polynomial time. Nevertheless, we provide numerical evidence suggesting that the ability to distinguish special genera does not, in practice, constitute a significant advantage for solving decision module-LIP.
Last updated:  2025-05-23
On the security of one certificateless aggregate signature scheme with dynamic revocation in vehicular ad-hoc networks
Zhengjun Cao and Lihua Liu
We show that the certificateless signature scheme [Veh. Commun. 47: 100763 (2024)] is insecure, because an adversary can launch a forgery attack for any message. The signer's certificateless public key is not tightly bound to the system public key. This inherent flaw allows the adversary to find an efficient signing algorithm functionally equivalent to the valid signing algorithm. The findings in this note could be helpful for newcomers who are unfamiliar with the design techniques for certificateless signatures.
Last updated:  2025-05-23
PSYLOCKE: Provably Secure Logic Locking with Practical Efficiency
Yohei Watanabe, Kyoichi Asano, Haruka Hirata, Tomoki Ono, Mingyu Yang, Mitsugu Iwamoto, Yang Li, and Yuko Hara
Logic locking is an obfuscation technique designed to protect the intellectual property of hardware designs and has attracted considerable attention for over a decade. However, most logic locking schemes have been developed heuristically, leading the field into a cat-and-mouse game between attackers and defenders. Indeed, several proposed schemes have already been broken. While recent works have introduced provably secure logic locking, they often incur impractical overhead or fail to support the ASIC design paradigm while offering strong theoretical security guarantees. In this work, we propose PSYLOCKE, a provably secure and practically efficient logic locking scheme that balances formal security guarantees with implementation feasibility. We introduce a new security paradigm that formalizes logic locking under predetermined allowable leakage, such as circuit topology, and we provide refined definitions of resilience against specific attacks. Our analysis bridges general security definitions and attack resilience, quantifying how leakage impacts the success of real-world attacks. PSYLOCKE is provably secure under topology leakage and achieves significant efficiency improvement compared to existing provably secure logic locking schemes. Alongside our theoretical analysis, we demonstrate through quantitative evaluations using widely-used benchmark circuits that PSYLOCKE strikes a favorable balance between practical performance and security. Concretely, PSYLOCKE reduced the Area-Power-Delay overhead by an order of magnitude while achieving the same security level, compared to the existing provably secure logic locking scheme.
Last updated:  2025-05-23
Attacking Poseidon via Graeffe-Based Root-Finding over NTT-Friendly Fields
Antonio Sanso and Giuseppe Vitto
This paper explores the algebraic structure of the Poseidon and Poseidon2 permutations over NTT-friendly finite fields, with a focus on preimage recovery via root-finding techniques. We introduce an algorithm for efficiently identifying single roots of high-degree univariate polynomials that emerge from these constructions, based on the Graeffe transform and the tangent Graeffe method. Our approach is evaluated on reduced-round bounty instances of these permutations at various security levels, as proposed by the Ethereum Foundation, demonstrating practical effectiveness. These results yield new insights into the security of permutation-based cryptographic primitives instantiated over NTT-friendly prime fields.
Last updated:  2025-05-23
Justvengers: Batched VOLE ZK Disjunctions in Communication
Yibin Yang
Recent progress on zero-knowledge proofs (ZKPs) based on vector oblivious linear evaluation (VOLE) offers a promising paradigm for scaling ZKPs over extremely large statements. In particular, VOLE-based ZK is currently the best choice in terms of end-to-end execution time. However, VOLE-based ZK incurs high communication overhead — it usually scales linearly with the circuit size. To mitigate this, existing literature considers VOLE-based ZK over structured statements. In this work, we focus on the batched disjunctive statement — and agree on fan-in circuits over a field ; each circuit is of size with inputs. 's goal is to demonstrate the knowledge of witnesses , for each s.t. where neither nor is revealed. Batched disjunctive statements are effective, e.g., in emulating the CPU execution inside ZK. Note, the naïve solution results in a circuit of size . To prove such a statement using VOLE-based ZK, the prior state-of-the-art protocol (Weng et al., CCS'22) incurred communication by additionally relying on AHE, whereas (Yang et al., CCS'23) achieved communication using only VOLE. In this work, we combine these two protocols non-trivially and present a novel protocol — targeting the batched disjunctive statement — that incurs only communication and computation for prover, using AHE and VOLE.
Last updated:  2025-05-23
Side-channel safe conditional moves and swaps
David Santos and Michael Scott
Constant-time implementations are a cornerstone of secure cryptographic systems, particularly in the context of key exchange protocols and digital signature schemes. These implementations are designed to eliminate timing side-channel vulnerabilities by ensuring that the program’s execution time is independent of secret data. A fundamental building block for achieving constant-time behavior is the conditional move operation. Unlike traditional branching constructs (such as if statements), which may introduce data-dependent timing variations, conditional moves allow developers to write logic that behaves identically at the hardware level regardless of input values. As a result, they are widely used in cryptographic libraries and standards to ensure both functional correctness and resistance to timing attacks. In this work, we describe our efforts to implement elliptic curve cryptography with some immunity against certain power leakage side-channel attacks, using standard C and Rust code.
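The branch-free conditional-move and conditional-swap idioms described above can be sketched as follows. Python integers carry no timing guarantees, so this only mirrors the C/Rust masking pattern (`mask = -(uint64_t)bit; r = (a & mask) | (b & ~mask);`) for illustration:

```python
MASK64 = (1 << 64) - 1  # model 64-bit unsigned words

def ct_cmov(bit: int, a: int, b: int) -> int:
    # Branch-free select: returns a if bit == 1, else b, using the same
    # sequence of operations either way (no data-dependent branch).
    mask = (-bit) & MASK64          # all-ones if bit == 1, all-zeros if bit == 0
    return (a & mask) | (b & ~mask & MASK64)

def ct_cswap(bit: int, a: int, b: int):
    # Conditional swap via XOR masking, as used in Montgomery ladders
    # for elliptic curve scalar multiplication.
    t = (a ^ b) & ((-bit) & MASK64)
    return a ^ t, b ^ t

assert ct_cmov(1, 0xAAAA, 0x5555) == 0xAAAA
assert ct_cswap(1, 1, 2) == (2, 1)
```

In C or Rust the same pattern compiles to straight-line masking instructions (or a hardware `cmov`), so the observable execution time does not depend on the secret selection bit; guarding against power-analysis leakage, the paper's focus, requires further countermeasures beyond this idiom.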
Last updated:  2025-05-27
Diving Deep Into UC: Uncovering and Resolving Issues in Universal Composability
Céline Chevalier and Éric Sageloli
Introduced by Canetti in 2001, Universal Composability (UC) is a widely adopted security model that enables the specification and proof of security for a broad range of protocols, offering strong security guarantees. At its core lies the universal composition theorem (UC theorem), which ensures that protocols proven secure within the framework remain secure even when deployed in real-world environments with multiple instances of them. In this work, we present two key contributions. First, we identify several problems with the UC framework, in particular the UC Theorem. They include counterexamples, limitations that make it unusable for important classes of protocols, and weaknesses in its proof. These problems reveal flaws in nearly all the fundamental concepts of UC. Secondly, we update the main concepts of UC to address these problems. Although these revisions are nontrivial, our updated definitions are intended to stay as closely aligned with the original model as possible, while providing greater simplicity overall. To ensure the validity of these updates, we present a proof of the updated UC theorem, which is more detailed and modular than the original.
Last updated:  2025-05-22
Fast elliptic curve scalar multiplications in SN(T)ARK circuits
Liam Eagen, Youssef El Housni, Simon Masson, and Thomas Piellard
Proof systems for arbitrary computations have found many applications in recent years. However, the proving algorithm has a substantial complexity tied to the size of the computation being proved. Thus, proving large computations is quite inefficient. One such large computation is scalar multiplication over an elliptic curve. In this work, we provide new techniques for reducing the time needed to prove a scalar multiplication, using integer lattice reduction or a (half) extended Euclidean algorithm in a ring of integers. We investigate optimizations in the case of curves with small (complex multiplication) discriminant, and their generalization to multi-scalar multiplications as used in signature verification. We provide an optimized Golang implementation for different elliptic curves in different proof-system settings. The speed-up in proving time is between 22% and 53% compared to the previous state of the art.
Last updated:  2025-05-22
Integral cryptanalysis in characteristic
Tim Beyne and Michiel Verbauwhede
Integral and ultrametric integral cryptanalysis are generalized to finite rings of prime characteristic that are isomorphic to a product of fields. This extends, for instance, the complete state of the art in integral cryptanalysis from to , for all prime powers . A compact representation of transition matrices, based on convex polyhedra, is introduced to ensure that the proposed methods are computationally efficient even for large . Automated tools are developed and applied to a few generic and several concrete primitives. The analysis shows that previous degree estimates for Feistel-GMiMC, HadesMiMC, AES-Prime, small-pSquare and mid-pSquare are overly optimistic. Furthermore, except for AES-Prime, these primitives do not meet their design criteria unless their number of rounds is substantially increased.
Last updated:  2025-05-22
Multivalued Broadcast with Optimal Length
Gabriel Dettling, Martin Hirt, and Chen-Da Liu-Zhang
A multi-valued broadcast protocol allows a sender to broadcast an -bit message to recipients. For all relevant models, multi-valued broadcast protocols with asymptotically optimal communication complexity have been published. Despite their very low communication complexity, these protocols perform poorly in modern networks. Even if the network allows all parties to send messages at the same time, the execution time of the protocols is proportional to (instead of ). Even if the network allows to use all bilateral channels at the same time, the execution time is still proportional to (instead of ). We ask the natural question whether multi-valued broadcast protocols exist which take time proportional to if parties can simultaneously send messages, and even take time proportional to if the bilateral channels can be used simultaneously. We provide a complete characterization of multi-valued broadcast with a two-fold answer: On the negative side, we prove that for , multi-valued broadcast requires time proportional to , if parties can simultaneously send messages, respectively take time proportional to if bilateral channels can be used simultaneously. On the positive side, we prove that for (for any fixed ), multi-valued broadcast in time proportional to (when parties can send messages simultaneously), respectively proportional to (if bilateral channels can be used simultaneously) is possible. We provide such protocols both with cryptographic security as well as with statistical security.
Last updated:  2025-05-22
SEEC: Memory Safety Meets Efficiency in Secure Two-Party Computation
Henri Dohmen, Robin Hundt, Nora Khayata, and Thomas Schneider
Secure Multi-Party Computation (MPC) allows multiple parties to perform privacy-preserving computation on their secret data. MPC protocols based on secret sharing have high throughput which makes them well-suited for batch processing, where multiple instances are evaluated in parallel. So far, practical implementations of secret sharing-based MPC protocols mainly focus on runtime and communication efficiency, so the memory overhead of protocol implementations is often overlooked. Established techniques to reduce the memory overhead for constant-round garbled circuit protocols cannot be directly applied to secret sharing-based protocols because they would increase the round complexity. Additionally, state-of-the-art implementations of secret sharing-based MPC protocols are implemented in C/C++ and may exhibit memory unsafety and memory leaks, which could lead to undefined behavior. In this paper, we present SEEC: SEEC Executes Enormous Circuits, a framework for secret sharing-based MPC with a novel approach to address memory efficiency and safety without compromising on runtime and communication efficiency. We realize SEEC in Rust, a language known for memory-safety at close-to-native speed. To reduce the memory footprint, we develop an in-memory representation for sub-circuits. Thus, we never inline sub-circuit calls during circuit evaluation, a common issue that blows up memory usage in MPC implementations. We compare SEEC with the state-of-the-art secret sharing-based MPC frameworks ABY (NDSS'15), MP-SPDZ (CCS'20), and MOTION (TOPS'22) w.r.t. runtime, memory, and communication efficiency. Our results show that our reliable and memory-safe implementation has competitive or even better performance.
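The memory problem SEEC targets can be pictured in miniature: a sub-circuit is stored once as a gate list and interpreted on every call, instead of inlining a fresh copy of its gates per call. The toy interpreter below is purely illustrative (the gate encoding is hypothetical, not SEEC's actual in-memory representation):

```python
def eval_subcircuit(gates, inputs):
    """gates: list of (op, a, b) tuples; each appends one new wire value."""
    wires = list(inputs)
    for op, a, b in gates:
        wires.append(wires[a] ^ wires[b] if op == "xor"
                     else wires[a] & wires[b])
    return wires[-1]

# One stored sub-circuit (AND of two XORs), evaluated for many calls
# without ever duplicating its gate list in memory.
sub = [("xor", 0, 1), ("xor", 2, 3), ("and", 4, 5)]
calls = [(1, 0, 1, 1), (1, 0, 1, 0)]
print([eval_subcircuit(sub, c) for c in calls])  # [0, 1]
```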
Last updated:  2025-05-22
The DROP Protocol: Dispute Resolution via Observation in Public for Verifiable, In-Person Voting
Josh Benaloh, Michael Naehrig, and Olivier Pereira
Dispute resolution has been a significant challenge in verifiable election protocols since such protocols were first proposed more than forty years ago. This work explores the problem from a new perspective and offers strong dispute resolution for in-person voting by depending on observers. It proposes a simple definition of dispute resolution as a property of a voting protocol---a definition that is independent of any other security goal. It also presents the DROP protocol, a verifiable, in-person voting protocol that runs in the presence of observers who will always reach a correct conclusion in the case of a dispute without ever being able to compromise privacy or facilitate coercion.
Last updated:  2025-05-22
HAWK: Having Automorphisms Weakens Key
Daniël M. H. van Gent and Ludo N. Pulles
The search rank-2 module Lattice Isomorphism Problem (smLIP), over a cyclotomic ring of degree a power of two, can be reduced to an instance of the Lattice Isomorphism Problem (LIP) of at most half the rank if an adversary knows a nontrivial automorphism of the underlying integer lattice. Knowledge of such a nontrivial automorphism speeds up the key recovery attack on HAWK at least quadratically, which would halve the number of security bits. Luo et al. (ASIACRYPT 2024) recently found an automorphism that breaks omSVP, the initial underlying hardness assumption of HAWK. The team of HAWK amended the definition of omSVP to include this so-called symplectic automorphism in their submission to the second round of NIST's standardization of additional signatures. This work provides confidence in the soundness of this updated definition, assuming smLIP is hard, since there are plausibly no more trivial automorphisms that allow winning the omSVP game easily. Although this work does not affect the security of HAWK, it opens up a new attack avenue involving the automorphism group that may be theoretically interesting on its own.
Last updated:  2025-05-22
Enhancing Meme Token Market Transparency: A Multi-Dimensional Entity-Linked Address Analysis for Liquidity Risk Evaluation
Qiangqiang Liu, Qian Huang, Frank Fan, Haishan Wu, and Xueyan Tang
Meme tokens represent a distinctive asset class within the cryptocurrency ecosystem, characterized by high community engagement, significant market volatility, and heightened vulnerability to market manipulation. This paper introduces an innovative approach to assessing liquidity risk in meme token markets using entity-linked address identification techniques. We propose a multi-dimensional method integrating fund flow analysis, behavioral similarity, and anomalous transaction detection to identify related addresses. We develop a comprehensive set of liquidity risk indicators tailored for meme tokens, covering token distribution, trading activity, and liquidity metrics. Empirical analysis of tokens like BabyBonk, NMT, and BonkFork validates our approach, revealing significant disparities between apparent and actual liquidity in meme token markets. The findings of this study provide significant empirical evidence for market participants and regulatory authorities, laying a theoretical foundation for building a more transparent and robust meme token ecosystem.
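As a rough illustration of the fund-flow component of such entity linking, addresses that transact with one another can be grouped with a union-find structure. The sketch below uses hypothetical addresses and is far simpler than the paper's multi-dimensional method:

```python
# Minimal union-find sketch of fund-flow-based address clustering.
# Addresses and transfers are illustrative toy data.

def find(parent, a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]  # path halving
        a = parent[a]
    return a

def cluster_addresses(transfers):
    """Assume any two addresses linked by a transfer belong to one entity."""
    parent = {}
    for src, dst in transfers:
        parent.setdefault(src, src)
        parent.setdefault(dst, dst)
        parent[find(parent, src)] = find(parent, dst)
    clusters = {}
    for addr in parent:
        clusters.setdefault(find(parent, addr), set()).add(addr)
    return list(clusters.values())

transfers = [("a1", "a2"), ("a2", "a3"), ("b1", "b2")]
print(sorted(len(c) for c in cluster_addresses(transfers)))  # [2, 3]
```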
Last updated:  2025-05-22
Polocolo: A ZK-Friendly Hash Function Based on S-boxes Using Power Residues (Full Version)
Jincheol Ha, Seongha Hwang, Jooyoung Lee, Seungmin Park, and Mincheol Son
Conventional hash functions are often inefficient in zero-knowledge proof settings, leading to the design of several ZK-friendly hash functions. On the other hand, lookup arguments have recently been incorporated into zero-knowledge protocols, allowing for more efficient handling of "ZK-unfriendly" operations, and hence enabling ZK-friendly hash functions based on lookup tables. In this paper, we propose a new ZK-friendly hash function, dubbed , that employs an S-box constructed using power residues. Our approach reduces the number of gates required for table lookups, in particular, when combined with Plonk, allowing one to use such nonlinear layers over multiple rounds. We also propose a new MDS matrix for the linear layer of . In this way, requires fewer Plonk gates compared to the state-of-the-art ZK-friendly hash functions. For example, when , requires fewer Plonk gates compared to Anemoi, which is currently the most efficient ZK-friendly hash function, where denotes the size of the underlying permutation in blocks of . For , requires fewer Plonk gates than Reinforced Concrete, which is one of the recent lookup-based ZK-friendly hash functions.
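As background, power maps over a prime field give invertible nonlinear layers, the general family this S-box belongs to. The sketch below shows a generic power-map S-box and is not Polocolo's actual power-residue construction:

```python
# Illustrative power-map S-box over a prime field: x -> x^d mod p is a
# bijection on F_p whenever gcd(d, p-1) = 1, so it can serve as an
# S-box with an explicit inverse. Parameters are toy values.
from math import gcd

p, d = 101, 3
assert gcd(d, p - 1) == 1
sbox = [pow(x, d, p) for x in range(p)]
assert len(set(sbox)) == p             # bijective, hence an S-box
e = pow(d, -1, p - 1)                  # inverse exponent
assert all(pow(sbox[x], e, p) == x for x in range(p))
print(sbox[:5])  # [0, 1, 8, 27, 64]
```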
Last updated:  2025-05-22
SCMAC and LOL2.0: An AEAD Design Framework and A New Version of LOL Stream Cipher Design Framework
Dengguo Feng, Lin Jiao, Yonglin Hao, Qunxiong Zheng, Wenling Wu, Wenfeng Qi, Lei Zhang, Liting Zhang, Siwei Sun, and Tian Tian
In this paper, we introduce SCMAC, a general framework that transforms large-memory stream ciphers into AEAD schemes. It represents an intermediate design paradigm between Encrypt-then-MAC and dedicated single-pass AEAD, partially integrating encryption and authentication mechanisms while mitigating the risk of state leakage associated with immediate absorption and squeezing. Consequently, this approach harmonizes high performance with enhanced security. Additionally, we propose LOL2.0, an enhanced version of the blockwise stream cipher design framework LOL. This new framework improves security through modifications to the FSM update and output functions, and increases flexibility in constructing LFSR components. Based on SCMAC and LOL2.0, we present two AEAD ciphers, LOL2.0-Mini and LOL2.0-Double, which support both stream cipher and AEAD modes. These ciphers are tailored to Beyond 5G/6G environments, offering 256-bit key length and resistance to known cryptanalysis methods, including differential, linear, and integral attacks. They also provide 128-bit security against forgery attacks in the nonce-respecting setting. Due to their compatibility with AES-NI and SIMD instructions, LOL2.0-Mini and LOL2.0-Double achieve software performance of 90 Gbps and 144 Gbps in stream cipher mode, respectively. In AEAD mode, they perform at 59 Gbps and 110 Gbps, significantly faster than their predecessor's Encrypt-then-MAC versions.
Last updated:  2025-05-22
Card-Based Protocol Counting Connected Components of Graphs
Koji Nuida
Card-based cryptography is a research area for realizing cryptographic functionality, such as secure multiparty computation and zero-knowledge proofs, by using a deck of physical cards and/or other non-electrical tools. Motivated by zero-knowledge proofs for solutions in pencil puzzles, there is a direction of recent studies on card-based protocols to verify connectivity of a set of cells or edges on lattice-shaped boards. In this paper, we generalize the problem to counting connected components of subsets on any graph, and propose a card-based protocol for the problem.
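The functionality the protocol computes can be stated in the clear as ordinary connected-component counting over the chosen subset of vertices. A plaintext sketch (graph data illustrative):

```python
# Count connected components of a vertex subset S of a graph, keeping
# only edges whose both endpoints lie in S. This is the in-the-clear
# functionality; the card-based protocol computes it privately.
from collections import defaultdict, deque

def count_components(vertices, edges):
    adj = defaultdict(list)
    for u, v in edges:
        if u in vertices and v in vertices:
            adj[u].append(v)
            adj[v].append(u)
    seen, components = set(), 0
    for start in vertices:
        if start in seen:
            continue
        components += 1
        queue, _ = deque([start]), seen.add(start)
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
    return components

# Path 1-2-3 plus isolated vertex 5 (edge (3,4) leaves S): 2 components.
print(count_components({1, 2, 3, 5}, [(1, 2), (2, 3), (3, 4)]))  # 2
```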
Last updated:  2025-05-22
SPECK: Signatures from Permutation Equivalence of Codes and Kernels
Marco Baldi, Michele Battagliola, Rahmi El Mechri, Paolo Santini, Riccardo Schiavoni, and Davide De Zuane
The ongoing search for secure post-quantum cryptographic primitives has led to the development of numerous novel digital signature schemes. In this paper we introduce , a new signature protocol based on the similarities between the Permuted Kernel Problem () and the Permutation Code Equivalence Problem (). At its core, is built on the permutation version of LESS, but introduces a key modification to the commitment step. Indeed, instead of committing to an entire permuted code, the prover commits to a random relaxed (that we call , Permutation Equivalence of Codes and Kernel) instance by randomly choosing a codeword from a random permutation of the initial code. In this sense, the secret key is used as a trapdoor to solve the committed instance. The new approach allows for faster verification that does not involve Gaussian elimination, while maintaining roughly the same signature size as LESS. We present the protocol in detail and provide a deep analysis of the security of the newly introduced assumptions.
Last updated:  2025-05-22
: Efficient Polynomial Commitment Schemes from Lattices
Lizhen Zhang, Shang Gao, and Bin Xiao
This work introduces , a Hypercube-Wise Optimized polynomial commitment scheme based on Lattices over ring Fields. The scheme achieves succinctness with proof size and verifier time, along with linear prover cost. It supports both univariate and multilinear polynomials under a unified framework. Inspired by the two-dimensional tensor structure employed in \cite{golovnev2021brakedown} to achieve sublinear efficiency, we generalize the idea to a -dimensional tensor (hypercube) structure and design a -round recursive proof protocol, where each round performs a dimensionality reduction, which results in an overall efficiency of . By setting , our scheme achieves near-optimal asymptotic performance. is fully transparent and relies only on the standard lattice assumption over rings. In terms of concrete efficiency, for polynomials with coefficients, our scheme yields proof sizes that are smaller than the lattice-based scheme of \cite{cini2024polynomial} (Crypto24), and over smaller than \cite{albrecht2024slap} (Eurocrypt24). Compared to \cite{nguyen2024greyhound} (Crypto24), our proof size is of similar magnitude, while achieving significantly lower verifier time.
Last updated:  2025-05-23
Zero-knowledge Authenticator for Blockchain: Policy-private and Obliviously Updateable
Kostas Kryptos Chalkias, Deepak Maram, Arnab Roy, Joy Wang, and Aayush Yadav
Transaction details and participant identities on the blockchain are often publicly exposed. In this work, we posit that blockchain's transparency should not come at the cost of privacy. To that end, we introduce zero-knowledge authenticators (zkAt), a new cryptographic primitive for privacy-preserving authentication on public blockchains. zkAt utilizes zero-knowledge proofs to enable users to authenticate transactions, while keeping the underlying authentication policies private. Prior solutions for such policy-private authentication required the use of threshold signatures, which can only hide the threshold access structure itself. In comparison, zkAt provides privacy for arbitrarily complex authentication policies, and offers a richer interface even within the threshold access structure by, for instance, allowing for the combination of signatures under distinct signature schemes. In order to construct zkAt, we design a compiler that transforms the popular Groth16 non-interactive zero-knowledge (NIZK) proof system into a NIZK with equivocable verification keys, a property that we define in this work. Then, for any zkAt constructed using proof systems with this new property, we show that all public information must be independent of the policy, thereby achieving policy-privacy. Next, we give an extension of zkAt, called zkAt+, wherein, assuming a trusted authority, policies can be updated obliviously in the sense that a third party learns no new information when a policy is updated by the policy issuer. We also give a theoretical construction for zkAt+ using recursive NIZKs, and explore the integration of zkAt into modern blockchains. Finally, to evaluate their feasibility, we implement both our schemes for a specific threshold access structure. Our findings show that zkAt achieves comparable performance to traditional threshold signatures, while also attaining privacy for significantly more complex policies with very little overhead.
Last updated:  2025-05-22
SQIsign2D: New SQIsign2D Variant by Leveraging Power Smooth Isogenies in Dimension One
Zheng Xu, Kaizhan Lin, Chang-An Zhao, and Yi Ouyang
In this paper, we propose SQIsign2D, a novel digital signature scheme within the SQIsign2D family. Unlike other SQIsign2D variants, SQIsign2D employs the prime as the field characteristic, where , and . By leveraging accessible -isogenies, SQIsign2D significantly reduces the degree requirements for two-dimensional isogeny computations, thereby lowering the overall computational overhead compared to other SQIsign2D variants. We also provide a proof-of-concept implementation of SQIsign2D, and give an efficiency comparison between SQIsign2D and other SQIsign2D variants. In particular, the experimental results demonstrate that the key generation and signing phases of SQIsign2D are each more than twice as fast as those of SQIsign2D-East at the NIST-I security level. Additionally, verification in SQIsign2D is marginally more efficient.
Last updated:  2025-05-22
Rep3 Reloaded: On the Cost of Function-Dependent Preprocessing in Semi-Honest 3PC with Honest Majority
Marcel Keller
Rep3 denotes the implementation of semi-honest three-party computation with an honest majority in MP-SPDZ (CCS'20). It uses replicated secret sharing with one message per multiplication and party as proposed by Araki et al. (CCS'16). This approach is rivaled by Astra (CCSW'19) and Trio (PETS'25), which use function-dependent preprocessing. The latter is more involved than, e.g., Beaver triples which can be used as a commodity. In this work, we present a full implementation of Astra and Trio in MP-SPDZ, and we evaluate the costs of the different approaches. We show the equivalence of the schemes, which implies that a protocol in any of the schemes can be translated to one in another with the same overall communication cost. We also present an improvement to two important building blocks for privacy-preserving computation, namely secure comparison and probabilistic truncation used in fixed-point arithmetic. To evaluate our implementation, we have benchmarked machine learning training and inference in all three schemes, improving on Keller and Sun (ICML'22) by over 30%. Our implementation also highlights the large storage requirements of function-dependent preprocessing as it runs the two phases separately. To the best of our knowledge, this is the first implementation to do so.
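For context, replicated secret sharing in the style used by Rep3 splits a value into three additive shares, with each party holding two of them so that addition is purely local. A minimal sketch over Z_{2^64} (multiplication and all communication are omitted):

```python
# Replicated secret sharing sketch: x = x1 + x2 + x3 (mod 2^64),
# party i holds the pair (x_i, x_{i+1}). Illustrative only.
import secrets

MOD = 2**64

def share(x):
    x1, x2 = secrets.randbelow(MOD), secrets.randbelow(MOD)
    x3 = (x - x1 - x2) % MOD
    return [(x1, x2), (x2, x3), (x3, x1)]  # one pair per party

def reconstruct(shares):
    return (shares[0][0] + shares[1][0] + shares[2][0]) % MOD

def add(sa, sb):
    # Addition needs no communication: each party adds its pairs locally.
    return [((a0 + b0) % MOD, (a1 + b1) % MOD)
            for (a0, a1), (b0, b1) in zip(sa, sb)]

a, b = share(20), share(22)
print(reconstruct(add(a, b)))  # 42
```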
Last updated:  2025-05-22
The Accidental Computer: Polynomial Commitments from Data Availability
Alex Evans and Guillermo Angeris
In this paper, we show two simple variations of a data availability scheme which enable it to act as a multilinear polynomial commitment scheme over the data in a block. The first variation enables commitments over all of the block's data with zero prover overhead: the data availability construction simply serves both purposes. The second variation allows commitments over subsets of data with nonzero but still concretely small proving costs, since most work is already done during data encoding. This works especially well for blockchains with a high degree of data parallelism, as data-parallel computation is particularly amenable to efficient GKR proofs. Since, in GKR, opening the polynomial commitment contributes significantly to prover costs, our construction enables the prover to reuse work already done by the data availability scheme, reducing—or wholly removing—work associated with the polynomial commitment scheme.
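The data-availability side of such constructions rests on Reed-Solomon-style erasure encoding: interpret the block's data as evaluations of a low-degree polynomial and extend it to more evaluation points. A minimal Lagrange-interpolation sketch over a small prime field (field size and data are illustrative, not the paper's parameters):

```python
# Systematic Reed-Solomon extension: k data symbols are read as
# f(0), ..., f(k-1) for a degree-(k-1) polynomial f, then f is
# evaluated at n > k points via Lagrange interpolation mod P.
P = 65537  # illustrative prime field

def rs_extend(data, n):
    k = len(data)
    out = []
    for x in range(n):
        acc = 0
        for i in range(k):
            num, den = 1, 1
            for j in range(k):
                if j != i:
                    num = num * (x - j) % P
                    den = den * (i - j) % P
            acc = (acc + data[i] * num * pow(den, -1, P)) % P
        out.append(acc)
    return out

code = rs_extend([7, 11, 17], 6)  # degree-2 polynomial, 6 evaluations
assert code[:3] == [7, 11, 17]    # systematic: the data is a prefix
print(len(code))  # 6
```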
Last updated:  2025-05-21
Jagged Polynomial Commitments (or: How to Stack Multilinears)
Tamir Hemo, Kevin Jue, Eugene Rabinovich, Gyumin Roh, and Ron D. Rothblum
Modern SNARK constructions, almost ubiquitously, rely on a polynomial commitment scheme (PCS) --- a method by which a prover can commit to a large polynomial and later provide evaluation proofs of the form "P(x)=y" to the verifier. In the context of zkVMs (i.e., proof-systems for general-purpose RAM computations), the common design is to represent the computation trace as a sequence of tables, one per CPU instruction, and commit to these tables, or even their individual columns, as separate polynomials. Committing separately to these polynomials has a large overhead in verification costs, especially in hash-based systems. In this work we drastically reduce this cost via a new construction which we call the jagged PCS. This PCS enables the prover to commit to the entire computation trace as a single polynomial, but then allows for the verifier to emulate access to the individual table or column polynomials, so that the arithmetization can proceed in the usual manner. The jagged PCS may be thought of as a sparse PCS for a very particular form of sparsity - namely, a "jagged" matrix in which each column has a different height. Our construction of the jagged PCS is highly performant in practice. In contrast to existing sparse PCS constructions for general sparse polynomials, the jagged PCS does not require the prover to commit to any additional oracles and the prover cost is dominated by 5 finite field multiplications per element in the trace area. Furthermore, we implement the verifier as an arithmetic circuit that depends only on the total trace area - thereby significantly reducing the problem of "combinatorial explosion" often encountered in zkVM recursion.
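The "jagged matrix" layout can be pictured concretely: columns of different heights are stacked into one flat vector, and simple index arithmetic recovers any (column, row) entry. A toy sketch of that index translation (illustrative only, not the PCS itself):

```python
# Stack jagged columns into one flat vector plus per-column offsets;
# lookup maps (column, row) back into the flat vector.

def stack(columns):
    flat, offsets, off = [], [], 0
    for col in columns:
        offsets.append(off)
        flat.extend(col)
        off += len(col)
    return flat, offsets

def lookup(flat, offsets, col, row):
    return flat[offsets[col] + row]

cols = [[1, 2, 3], [4], [5, 6]]  # heights 3, 1, 2
flat, offsets = stack(cols)
assert flat == [1, 2, 3, 4, 5, 6]
print(lookup(flat, offsets, 2, 1))  # 6
```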
Last updated:  2025-05-21
Automated Verification of Consistency in Zero-Knowledge Proof Circuits
Jon Stephens, Shankara Pailoor, and Isil Dillig
Circuit languages like Circom and Gnark have become essential tools for programmable zero-knowledge cryptography, allowing developers to build privacy-preserving applications. These domain-specific languages (DSLs) encode both the computation to be verified (as a witness generator) and the corresponding arithmetic circuits, from which the prover and verifier can be automatically generated. However, for these programs to be correct, the witness generator and the arithmetic circuit need to be mutually consistent in a certain technical sense, and inconsistencies can result in security vulnerabilities. This paper formalizes the consistency requirement for circuit DSLs and proposes the first automated technique for verifying it. We evaluate the method on hundreds of real-world circuits, demonstrating its utility for both automated verification and uncovering errors that existing tools are unable to detect.
Last updated:  2025-05-21
Improved differential cryptanalysis of SPEEDY
Tim Beyne and Addie Neyt
SPEEDY is a family of lightweight block ciphers designed by Leander et al. Several differential attacks have been reported on the SPEEDY variants. However, nearly all of these attacks are based on differential characteristics with probabilities that differ from their reported values. These discrepancies arise from incorrect calculations of the (key-averaged) probability, particularly in consecutive steps within one round without intermediate key addition. In this paper, we revisit all reported differential characteristics and accurately calculate their key-averaged probabilities using quasidifferential trails. We extend this to also estimate the fixed-key probability. Our analysis reveals several characteristics with zero or significantly altered probability, invalidating several proposed attacks. We further implement a search algorithm and find a 5.5-round differential distinguisher that can be used to mount a full-round key recovery attack with a data complexity of and a time complexity of . The memory complexity varies: in the chosen-plaintext setting, it is , whereas in the chosen-ciphertext setting, it is .
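The basic object behind such differential characteristics is the difference distribution table (DDT) of the S-box. The sketch below computes one for PRESENT's 4-bit S-box as a small stand-in; SPEEDY's 6-bit S-box would be handled identically:

```python
# DDT[din][dout] counts inputs x with S(x) ^ S(x ^ din) == dout; the
# key-averaged probability of a one-round differential is DDT/2^n.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # PRESENT S-box

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for din in range(n):
        for x in range(n):
            table[din][sbox[x] ^ sbox[x ^ din]] += 1
    return table

T = ddt(SBOX)
assert T[0][0] == 16                    # zero difference stays zero
assert all(sum(row) == 16 for row in T)
# Best nontrivial one-round differential probability:
print(max(T[d][o] for d in range(1, 16) for o in range(16)) / 16)
```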
Last updated:  2025-05-21
Tweakable Permutation-based Luby-Rackoff Constructions
Bishwajit Chakraborty and Abishanka Saha
Liskov, Rivest, and Wagner, in their seminal work, formulated tweakable blockciphers and proposed two blockcipher-based design paradigms, LRW1 and LRW2, where the basic design strategy is to xor the masked tweak to the input and output of a blockcipher. The 2-round cascaded LRW2 and 4-round cascaded LRW1 have been proven to be secure up to queries, but -bit optimal security still remains elusive for these designs. In their paper, Liskov et al. also posed the open challenge of embedding the tweak directly in the blockcipher, and to address this, Goldenberg et al. introduced the tweakable Luby-Rackoff (LR) constructions. They showed that if the internal primitives are random functions, then for tweaks with blocks, the construction needs rounds to be optimally -bit CPA secure and rounds to be optimally -bit CCA secure, where respectively and rounds were required to process the tweaks. Since blockciphers can be designed much more efficiently than pseudorandom functions, in many practical applications the internal primitives of LR ciphers are instantiated as blockciphers, which, however, would lead to a birthday-bound factor, which is not ideal for, say, lightweight cryptography. This paper addresses the following two key questions affirmatively: (1) Can Goldenberg et al.'s results be extended to LR constructions with random permutations as internal primitives without compromising the optimal -bit security? (2) Can the number of rounds required for handling long tweaks be reduced? We formally define TLR-compatible functions for processing the tweak, which, when composed with 4 rounds and 5 rounds of the LR construction with random permutations as internal primitives, give us respectively -bit CPA and CCA secure tweakable permutations. For the security analysis, we prove a general Mirror Theory result for three permutations.
We also propose instantiating TLR-compatible functions with one round LR where a permutation (resp, two AXU hash functions) is used to mask single-block tweaks (resp., variable-length tweaks), thus proposing the -bit CPA (resp., CCA) secure tweakable permutation candidates, and (resp., and ), using (resp., ) LR rounds, which is a significant reduction from the tweak-length-dependent results of Goldenberg et al. As a corollary, we also show -bit CPA (resp., CCA) security of -rounds (resp. -rounds) permutation-based LR construction, which is quite an improvement over the existing -bit security proved by Guo et al.
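For readers unfamiliar with the underlying structure, a plain (untweaked) Luby-Rackoff network composes Feistel rounds into an invertible permutation. The toy sketch below uses hash-based round functions purely for illustration; the paper's constructions instead use random permutations as internal primitives and inject tweaks:

```python
# Toy Feistel / Luby-Rackoff network on 2n-byte blocks: each round
# maps (L, R) -> (R, L ^ F_k(R)), and the whole cascade is invertible
# regardless of the round function F. Round keys are placeholders.
import hashlib

def F(key, x, n):
    h = hashlib.sha256(key + x.to_bytes(n, "big")).digest()
    return int.from_bytes(h[:n], "big")

def feistel(l, r, keys, n):
    for k in keys:
        l, r = r, l ^ F(k, r, n)
    return l, r

def feistel_inv(l, r, keys, n):
    for k in reversed(keys):
        l, r = r ^ F(k, l, n), l
    return l, r

n, keys = 4, [b"k1", b"k2", b"k3", b"k4"]
ct = feistel(0x12345678, 0x9ABCDEF0, keys, n)
print(feistel_inv(*ct, keys, n) == (0x12345678, 0x9ABCDEF0))  # True
```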
Last updated:  2025-05-21
Hidden Number Problems in Fiat-Shamir based Post-Quantum Signatures
Yi-Fu Lai, Jonas Meers, and Julian Nowakowski
Schnorr and (EC)DSA signatures famously become completely insecure once a few bits of the random nonce are revealed to an attacker. In this work, we investigate whether post-quantum signature schemes allow for similar attacks. Specifically, we consider three Fiat-Shamir based signature schemes: Dilithium (lattices), LESS (codes) and CSI-FiSh (isogenies). Analogous to the attacks on Schnorr and (EC)DSA, we assume knowledge of some bits of the commitment randomness used in the underlying ID protocol. Such attacks belong to the broader class of Hidden Number Problems. In the case of LESS, we give an efficient attack that requires knowledge of a single bit of the randomness in roughly signatures to completely recover the secret key. Our attack is based on the observation that knowledge of a single bit in the randomness is enough to distinguish secret key entries from random ones. Since for proposed parameters there are at most candidates for each entry, we can enumerate all candidates and use the distinguisher to recover the secret key one entry at a time. For Dilithium we are able to recover the whole secret key using at most 1792 signatures when either 3 or 4 least significant bits of the randomness are known (depending on the parameter set). Here the attack is based on the simple observation that knowledge of the least significant bits of the randomness immediately yields a linear relation in the secret key. For CSI-FiSh, on the other hand, we obtain only an inefficient attack that requires exponentially many signatures. However, we prove that this attack is essentially optimal. Thus, our results show that CSI-FiSh comes equipped with an inherent leakage resilience. The argument easily extends to a wider class of signature schemes, but notably does not apply to the predecessor SeaSign.
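The Schnorr-style starting point is one line of algebra: a signature satisfies s = r + c·x (mod q), which is linear in the secret x, so full knowledge of the nonce r from a single signature already recovers the key (partial leakage of a few bits requires lattice techniques instead). A toy sketch with purely illustrative parameters:

```python
# Why nonce leakage is fatal for Schnorr-type signatures:
# given s = r + c*x (mod q) and a fully leaked nonce r,
# the secret is x = (s - r) * c^{-1} mod q.
q = 2**61 - 1            # prime modulus, illustrative only
x = 1234567              # secret key
r, c = 987654321, 55555  # leaked nonce and public challenge
s = (r + c * x) % q      # signature response
recovered = ((s - r) * pow(c, -1, q)) % q
print(recovered == x)  # True
```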
Last updated:  2025-05-21
Enforcing arbitrary constraints on Bitcoin transactions
Federico Barbacovi and Enrique Larraia
The challenge of enforcing constraints on Bitcoin transactions has recently gained a lot of attention. The current approach to this problem falls short in certain aspects, such as privacy and programmability. We design a new solution that leverages zkSNARKs and allows enforcing arbitrary constraints on Bitcoin transactions while keeping some information private. Our approach also bypasses the non-Turing-completeness of Bitcoin Script, allowing the enforcement of unbounded constraints, namely constraints that repeat a certain operation an unbounded number of times.
Last updated:  2025-05-21
Fuzzy Private Set Intersection from VOLE
Aron van Baarsen and Sihang Pu
Private set intersection (PSI) is a well-researched cryptographic primitive that allows two parties to compute the intersection of their input sets without revealing any information about items outside of the intersection. Fuzzy private set intersection is a relatively new variant of PSI, where items are not matched exactly but "fuzzily". Most commonly, items are points in -dimensional integer space and a point is a fuzzy match to another if it lies within a ball of radius centered at this point, with respect to some distance metric. Previous works either only support the infinity distance metric and standard PSI functionality, or support general Minkowski distance metrics and realize richer functionalities but rely on expensive homomorphic encryptions. Our work aims to bridge this gap by giving the first construction of a fuzzy PSI protocol for general Minkowski distance metrics relying on significantly cheaper operations during the online phase. Our main building block is a novel fuzzy matching protocol based on an oblivious pseudorandom function (OPRF), which can be realized very efficiently from vector oblivious linear evaluation (VOLE). Our protocol is able to preserve the asymptotic complexity as well as the simplicity of the fuzzy matching protocol from van Baarsen and Pu (Eurocrypt '24), while being much more concretely efficient. Additionally, we achieve several asymptotic improvements by representing intervals succinctly. Finally, we present the first fuzzy PSI protocol for infinity distance that places no assumptions on the sets of points, while maintaining asymptotic complexities comparable to the state-of-the-art fuzzy PSI protocol.
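The plaintext predicate that fuzzy PSI evaluates obliviously is simply a distance threshold test. A sketch for Minkowski metrics (p = inf giving the infinity metric); the points and radii are illustrative:

```python
# Fuzzy match predicate: is b within distance delta of a under the
# Minkowski L_p metric? This is the in-the-clear functionality that
# the protocol computes without revealing the points.

def minkowski(a, b, p):
    diffs = [abs(x - y) for x, y in zip(a, b)]
    if p == float("inf"):
        return max(diffs)
    return sum(d**p for d in diffs) ** (1 / p)

def fuzzy_match(a, b, delta, p):
    return minkowski(a, b, p) <= delta

print(fuzzy_match((0, 0), (3, 4), 5, 2))             # True: L2 distance is 5
print(fuzzy_match((0, 0), (3, 4), 3, float("inf")))  # False: L_inf is 4
```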
Last updated:  2025-05-21
Robust Threshold ECDSA with Online-Friendly Design in Three Rounds
Guofeng Tang and Haiyang Xue
Threshold signatures, especially ECDSA, enhance key protection by addressing the single-point-of-failure issue. Threshold signing can be divided into offline and online phases, based on whether the message is required. Schemes with low-cost online phases are referred to as "online-friendly". Another critical aspect of threshold ECDSA for real-world applications is robustness, which guarantees the successful completion of each signing execution whenever a threshold number of semi-honest participants is met, even in the presence of misbehaving signatories. The state-of-the-art online-friendly threshold ECDSA without robustness was developed by Doerner et al. in S&P'24, requiring only three rounds. Recent work by Wong et al. in NDSS'23 (WMY23) and NDSS'24 (WMC24) achieves robustness but demands additional communication rounds (7 and 4, respectively) or incurs costly operations in the online phase, such as computations over a homomorphic encryption scheme. This paper presents the first three-round threshold ECDSA scheme with both robustness and an online-friendly design. The online phase of our scheme relies solely on several elliptic-curve group operations, which are 2 to 3 orders of magnitude less computationally intensive than those based on linearly homomorphic encryption schemes. We implement our protocol and conduct a comprehensive comparison with WMY23 and WMC24. Benchmark results show that the online phase of our scheme is faster than that of WMY23 and hundreds of times faster than that of WMC24. Lastly, we demonstrate that our techniques can be extended to construct an online-friendly and robust three-round threshold BBS+ scheme.
Last updated:  2025-05-21
Energy Consumption Framework and Analysis of Post-Quantum Key-Generation on Embedded Devices
J Cameron Patterson, William J Buchanan, and Callum Turino
The emergence of quantum computing and Shor's algorithm necessitates an imminent shift from current public key cryptography techniques to post-quantum robust techniques. NIST has responded by standardising Post-Quantum Cryptography (PQC) algorithms, with ML-KEM (FIPS-203) slated to replace ECDH (Elliptic Curve Diffie-Hellman) for key exchange. A key practical concern for PQC adoption is energy consumption. This paper introduces a new framework for measuring the PQC energy consumption on a Raspberry Pi when performing key generation. The framework uses both available traditional methods and the newly standardised ML-KEM algorithm via the commonly utilised OpenSSL library.
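A measurement harness in this spirit can be sketched as timing repeated runs of a workload and converting to energy via an externally measured average power draw. The workload and power figure below are placeholders, not the paper's framework or its numbers:

```python
# Toy energy-estimation harness: joules = elapsed seconds * watts.
# Real frameworks measure power with hardware; the average draw here
# is an assumed constant, and the workload is a stand-in for key
# generation (e.g. via OpenSSL), not an actual ML-KEM call.
import time

def estimate_energy(workload, runs, avg_power_watts):
    start = time.perf_counter()
    for _ in range(runs):
        workload()
    elapsed = time.perf_counter() - start
    return elapsed * avg_power_watts

dummy_keygen = lambda: sum(i * i for i in range(1000))
joules = estimate_energy(dummy_keygen, runs=100, avg_power_watts=3.5)
print(joules > 0.0)  # True
```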
Last updated:  2025-05-21
SubLogarithmic Linear Time SNARKs from Compressed Sum-Check
Nitin Singh and Sikhar Patranabis
We leverage recently proposed multilinear polynomial commitment schemes, with linear time prover and constant proof size, to reduce the communication complexity of the classical sum-check protocol for multivariate polynomials. Specifically, for a degree multivariate polynomial in variables which can be decomposed into multilinear polynomials, we exhibit a sum-check protocol with prover cost and communication for . This enables us to break the barrier for this ubiquitous primitive used in multilinear SNARKs and implies the first construction of prover-efficient (with proving cost) SNARKs using multilinear polynomials with proof-size. Currently, lower communication is only obtained by SNARKs based on univariate polynomials, which incur prover complexity due to inherent polynomial multiplication. Concretely, using compressed sum-check in the HyperPlonk protocol allows us to reduce the proof size from about KB KB to only about KB, making it competitive with univariate SNARKs like PLONK.
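For reference, the classical sum-check protocol that is being compressed here proceeds in one round per variable, with a degree-1 univariate message per round when the polynomial is multilinear. A compact sketch over a prime field (prover and verifier merged for brevity, modulus illustrative):

```python
# Classical sum-check for a multilinear polynomial f over F_P, given
# by its evaluation table on the Boolean hypercube (first variable =
# high bit). Each round: prover sends g(0), g(1) of a linear g; the
# verifier checks g(0)+g(1) against the running claim, samples r, and
# both sides fold the table at r.
import random

P = 2**31 - 1  # prime field modulus, illustrative

def fold(evals, r):
    """Fix the first variable of a multilinear evaluation table to r."""
    half = len(evals) // 2
    return [(evals[i] * (1 - r) + evals[half + i] * r) % P
            for i in range(half)]

def sumcheck(evals):
    claimed = sum(evals) % P
    current, total = list(evals), claimed
    while len(current) > 1:
        half = len(current) // 2
        g0, g1 = sum(current[:half]) % P, sum(current[half:]) % P
        assert (g0 + g1) % P == total        # verifier's round check
        r = random.randrange(P)
        total = (g0 * (1 - r) + g1 * r) % P  # g(r) by linear interpolation
        current = fold(current, r)
    assert current[0] % P == total           # final oracle check
    return claimed

# f on {0,1}^2 with f(0,0)=3, f(0,1)=1, f(1,0)=4, f(1,1)=1
print(sumcheck([3, 1, 4, 1]))  # 9
```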
Last updated:  2025-05-21
New Framework for Structure-Aware PSI From Distributed Function Secret Sharing
Dung Bui, Gayathri Garimella, Peihan Miao, and Phuoc Van Long Pham
Private set intersection (PSI) allows two parties to jointly compute the intersection of their private sets without revealing any additional information. Structure-aware PSI (sa-PSI), introduced by Garimella et al. (Crypto'22), is a variant where Alice's input set has a publicly known structure and Bob's input set remains unstructured, enabling new applications like fuzzy PSI. Their construction relies solely on lightweight cryptographic primitives such as symmetric-key primitives and oblivious transfer (OT) extension. Since then, there has been active research on sa-PSI based on lightweight cryptography. Notably, recent work by Garimella et al. (Crypto'24) achieves sa-PSI with both communication and computation costs only scaling with the description size of Alice's set, rather than its potentially large cardinality. However, this line of work remains largely theoretical, lacking efficient concrete implementations. In this work, we close this gap by presenting a new framework for sa-PSI that achieves practical efficiency. We identify and eliminate a hidden multiplicative overhead proportional to the security parameter (e.g., 128) in prior symmetric-key-based sa-PSI constructions. A key building block of our new framework is a distributed Function Secret Sharing (dFSS) key generation protocol tailored to the structure of Alice's set, which may be of independent interest. To demonstrate the practicality of our framework, we extend our dFSS protocol to support incremental evaluation, introduce new techniques for spatial hashing, and develop several new optimization techniques, including reducing the exponential dependence on dimension and enabling load balancing between the two parties. We instantiate our framework for structured sets defined by unions of -dimensional balls, and implement our protocols using only lightweight symmetric-key primitives and OT extension. 
Our experiments show concrete performance improvements of up to speedup in computation and reduction in communication in low-dimensional, large-radius settings compared to existing public-key-based fuzzy PSI protocols by van Baarsen & Pu (Eurocrypt'24) and Gao et al. (Asiacrypt'24).
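As an editorial illustration of the spatial-hashing idea for union-of-balls sets (parameters and helper names are ours, not the paper's construction): with cell width equal to the ball diameter, a query point can only match a ball whose center lies in the point's own grid cell or an adjacent one.

```python
# Minimal spatial-hashing sketch for a union of d-dimensional balls.
# Illustrative only; the paper's sa-PSI framework adds cryptography
# (dFSS, OT extension) on top of this kind of structure.
from itertools import product

def cell_of(point, width):
    return tuple(int(c // width) for c in point)

def build_index(centers, radius):
    width = 2 * radius          # cell width = ball diameter
    index = {}
    for c in centers:
        index.setdefault(cell_of(c, width), []).append(c)
    return index, width

def query(point, index, width, radius):
    # only the point's cell and its 3^d neighbors can contain a match
    base = cell_of(point, width)
    for offs in product((-1, 0, 1), repeat=len(point)):
        cell = tuple(b + o for b, o in zip(base, offs))
        for c in index.get(cell, []):
            if sum((p - q) ** 2 for p, q in zip(point, c)) <= radius ** 2:
                return True
    return False

index, width = build_index([(0.0, 0.0), (10.0, 10.0)], radius=1.0)
assert query((0.5, 0.5), index, width, 1.0)
assert not query((5.0, 5.0), index, width, 1.0)
```

Note the 3^d neighbor enumeration: this is the exponential dependence on dimension that the paper's optimizations target.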
Last updated:  2025-05-21
Covert Attacks on Machine Learning Training in Passively Secure MPC
Matthew Jagielski, Rahul Rachuri, Daniel Escudero, and Peter Scholl
Secure multiparty computation (MPC) allows data owners to train machine learning models on combined data while keeping the underlying training data private. The MPC threat model either considers an adversary who passively corrupts some parties without affecting their overall behavior, or an adversary who actively modifies the behavior of corrupt parties. It has been argued that in some settings, active security is not a major concern, partly because of the potential risk of reputation loss if a party is detected cheating. In this work we show explicit, simple, and effective attacks that an active adversary can run on existing passively secure MPC training protocols, while keeping essentially zero risk of the attack being detected. The attacks we show can compromise both the integrity and privacy of the model, including attacks reconstructing exact training data. Our results challenge the belief that a threat model that does not include malicious behavior by the involved parties may be reasonable in the context of PPML, motivating the use of actively secure protocols for training.
Last updated:  2025-05-21
Authenticated Key Exchange Protocol with Remote Randomness
John C. W. Chan
A conventional Authenticated Key Exchange (AKE) protocol consumes fresh random coins from the local random source. However, recent studies of bad randomness expose the vulnerability of some AKE protocols to small subgroup attacks when the random coins are manipulated or repeated. It is important to ensure that bad randomness from one random source does not affect the security of the AKE protocol as a whole. Thus, we introduce the notion of remote randomness, adding ephemeral keys generated by a fresh remote random source to the AKE protocol. In this paper, we argue that the rise of cloud computing encourages high-speed internal data transfer within server clusters and virtual machines, including the entropy pool data used in our remote-randomness AKE protocols. We present a remote-randomness modification to the HMQV protocol to demonstrate its resilience under the manipulation of local random sources. We then provide a new security model that takes remote randomness into account and show that the modified AKE protocol is secure under our new model.
Last updated:  2025-05-20
The Security of ML-DSA against Fault-Injection Attacks
Haruhisa Kosuge and Keita Xagawa
Deterministic signatures are often used to mitigate the risks associated with poor-quality randomness, where the randomness in the signing process is generated by a pseudorandom function that takes a message as input. However, some studies have shown that such signatures are vulnerable to fault-injection attacks. To strike a balance, recent signature schemes often adopt "hedged" randomness generation, where the pseudorandom function takes both a message and a nonce as input. Aranha et al. (EUROCRYPT 2020) investigated the security of hedged Fiat-Shamir signatures against 1-bit faults and demonstrated security for certain types of bit-tampering faults. Grilo et al. (ASIACRYPT 2021) extended this proof to the quantum random oracle model. Last year, NIST standardized the lattice-based signature scheme ML-DSA, which adopts the hedged Fiat-Shamir with aborts. However, existing security proofs against bit-tampering faults do not directly apply, as Aranha et al. left this as an open problem. To address this gap, we analyze the security of ML-DSA against multi-bit fault-injection attacks. We provide a formal proof of security for a specific class of intermediate values, showing that faults at these points cannot be exploited. Furthermore, to highlight the infeasibility of stronger fault resilience, we present key-recovery attacks that exploit signatures generated under fault injection at the other intermediate values.
Last updated:  2025-05-21
Rock and a Hard Place: Attack Hardness in Neural Network-assisted Side Channel Analysis
Seyedmohammad Nouraniboosjin and Fatemeh Ganji
Side-channel analysis (SCA) has become a crucial area in ensuring the security of hardware implementations against potential vulnerabilities. With advancements in machine learning (ML) and artificial intelligence (AI), neural network (NN)-assisted techniques for SCA have demonstrated significant effectiveness. However, a fundamental question remains unanswered: are traces corresponding to different subkeys equally hard to attack? This paper addresses this issue by leveraging explainable AI techniques to analyze the hardness levels and, consequently, the root cause of hardness. To systematically investigate this problem, we reintroduce hardness metrics in SCA that are known in the ML community. These metrics include query hardness (QH), log odds (LO), and matrix-based Rényi entropy (MRE). The challenge in this regard is that (virtually all) hardness metrics in ML cannot be adopted as they are. This is because ML and SCA metrics have conflicting goals, namely boosting accuracy and rank. Through careful experimentation, we identify the shortcomings of QH and LO in SCA and recommend MRE as a suitable hardness metric for SCA. We also study how hardness has been seen in SCA, where recent work has suggested the influence of class “labels” on the attack difficulty. Employing rigorous evaluation, our paper demonstrates that no statistically significant evidence can be found to support this claim. This leads us to the question of how much traces’ time samples affect the inherent hardness in distinguishing key candidates. Our novel explainable AI (XAI) approach not only answers this, but also makes a link between hardness and rank as the common performance metric in SCA. Our findings indicate that hardness values are influenced mainly by time samples rather than specific key labels. Furthermore, we examine whether hardness captures intrinsic properties of the traces, specifically, their lack of separability in feature space due to their inherent similarities.
To this end, we introduce, for the first time in the context of SCA, the use of maximum mean discrepancy (MMD) as a principled metric. MMD effectively links trace hardness with attack difficulty by quantifying distributional differences induced by traces’ time samples. In addition to visualization techniques showing the root cause of hardness based on MMD, we employ XAI to explain the connection between MMD and attack hardness. Our proposed methodology enhances the understanding of attack difficulty in SCA and contributes to developing more robust evaluation metrics for profiling attacks.
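As an editorial illustration of the metric in question, here is a minimal (biased) squared-MMD estimator with a Gaussian kernel on toy "traces"; real SCA traces are long vectors and a careful evaluation would use an unbiased estimator.

```python
# Biased squared-MMD estimate with a Gaussian kernel. Toy sketch:
# the paper uses MMD to quantify distributional differences between
# sets of side-channel traces; this is not its exact estimator.
import math

def gauss(x, y, sigma=1.0):
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    kxx = sum(gauss(a, b, sigma) for a in X for b in X) / len(X) ** 2
    kyy = sum(gauss(a, b, sigma) for a in Y for b in Y) / len(Y) ** 2
    kxy = sum(gauss(a, b, sigma) for a in X for b in Y) / (len(X) * len(Y))
    return kxx + kyy - 2 * kxy

same = [(0.0, 0.1), (0.1, 0.0), (0.05, 0.05)]
far  = [(3.0, 3.1), (3.1, 3.0), (3.05, 3.05)]
assert mmd2(same, same) < 1e-9             # identical distributions
assert mmd2(same, far) > mmd2(same, same)  # separated ones score higher
```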
Last updated:  2025-05-27
On the Fiat–Shamir Security of Succinct Arguments from Functional Commitments
Alessandro Chiesa, Ziyi Guan, Christian Knabenhans, and Zihan Yu
We study the security of a popular paradigm for constructing SNARGs, closing a key security gap left open by prior work. The paradigm consists of two steps: first, construct a public-coin succinct interactive argument by combining a functional interactive oracle proof (FIOP) and a functional commitment scheme (FC scheme); second, apply the Fiat–Shamir transformation in the random oracle model. Prior work did not consider this generalized setting nor prove the security of this second step (even in special cases). We prove that the succinct argument obtained in the first step satisfies state-restoration security, thereby ensuring that the second step does in fact yield a succinct non-interactive argument. This is provided the FIOP satisfies state-restoration security and the FC scheme satisfies a natural state-restoration variant of function binding (a generalization of position binding for vector commitment schemes). Moreover, we prove that notable FC schemes satisfy state-restoration function binding, allowing us to establish, via our main result, the security of several SNARGs of interest (in the random oracle model). This includes a security proof of Plonk, in the ROM, based on ARSDH (a falsifiable assumption).
Last updated:  2025-05-28
A Generic Framework for Practical Lattice-Based Non-interactive Publicly Verifiable Secret Sharing
Behzad Abdolmaleki, John Clark, Mohammad Foroutani, Shahram Khazaei, and Sajjad Nasirzadeh
Non-interactive publicly verifiable secret sharing (PVSS) schemes enable the decentralized (re-)sharing of secrets in adversarial environments, allowing anyone to verify the correctness of distributed shares. Such schemes are essential for large-scale decentralized applications, including committee-based systems that require both transparency and robustness. However, existing PVSS schemes rely on group-based cryptography, rendering them vulnerable to quantum attacks and limiting their suitability for post-quantum applications. In this work, we propose the first practical, fully lattice-based, non-interactive PVSS scheme, grounded in standard lattice assumptions for post-quantum security. At the heart of our design is a generic framework that transforms vector commitments and linear encryption schemes into efficient PVSS protocols. We enhance vector commitments by incorporating functional hiding and proof of smallness, ensuring that encrypted shares are both verifiable and privacy-preserving. Our construction introduces two tailored lattice-based encryption schemes, each supporting efficient proofs of decryption correctness. This framework provides strong verifiability guarantees while maintaining low proof sizes and computational efficiency, making it suitable for systems with large numbers of participants.
Last updated:  2025-05-20
Exclusive Ownership of Fiat-Shamir Signatures: ML-DSA, SQIsign, LESS, and More
Michael Meyer, Patrick Struck, and Maximiliane Weishäupl
Exclusive ownership (EO) security is a feature of signature schemes that prevents adversaries from "stealing" an honestly generated signature by finding a new public key which verifies said signature. It is one of the beyond unforgeability features (BUFF) which were declared to be desirable features by NIST. The BUFF transform allows to generically achieve exclusive ownership (and other properties) at the cost of an increased signature size. In this work, we study the EO security of (different variants of) Fiat-Shamir signatures. As our main result, we show that the commonly used variant of Fiat-Shamir signatures (where signatures consist of challenge-response tuples) with λ-bit challenges, can achieve about λ-bit EO security through its implicit usage of the BUFF transform—this presents a significant improvement to existing results that only provide λ/2-bit of EO security. This benefit of our result comes without an increase in signature size. For other variants of Fiat-Shamir signatures, we show worse bounds, which nevertheless improve upon existing results. Finally, we apply our results to several signature schemes: SQIsign and LESS (both round-2 NIST candidates); ML-DSA (NIST standard); CSI-FiSh; and Schnorr signatures. This shows that all these schemes achieve significantly better bounds regarding their EO security compared to existing results.
Last updated:  2025-05-20
Improved Noise Bound in BFV Homomorphic Encryption and Its Application to Multiplication
Akshit Aggarwal, Yang Li, and Srinibas Swain
Fully Homomorphic Encryption (FHE) enables computations on encrypted data without requiring decryption. However, each computation increases the noise level, which can eventually cause decryption failures once a certain threshold is reached. In particular, homomorphic multiplication significantly amplifies noise in the ciphertext. In this work, we revisit the Ring Learning With Errors (RLWE)-based encryption proposed by Fan et al. and present an optimized noise growth approach by swapping the sample spaces of the secret key and the error distribution. Thereafter, we revisit the BFV homomorphic multiplication proposed by Kim et al. (ASIACRYPT'21) and present an optimized noise bound. We then empirically check the hardness of the proposed scheme using the lattice estimator. Our analysis demonstrates that the proposed method achieves more than 128-bit security and a lower noise bound than existing techniques.
Last updated:  2025-05-20
A New Approach for LPN-based Pseudorandom Functions: Low-Depth and Key-Homomorphic
Youlong Ding, Aayush Jain, and Ilan Komargodski
We give new constructions of pseudorandom functions (PRFs) computable in from (variants of the) Learning Parity with Noise (LPN) assumption. Prior to our work, the only -computable PRF from LPN-style assumptions was due to Boyle et al. (FOCS 2020), who constructed a weak PRF from a new heuristic variant of LPN called variable-density LPN. We give the following three results: (1) A weak PRF computable in from standard LPN. (2) A (strong) encoded-input PRF (EI-PRF) computable in from sparse LPN. An EI-PRF is a PRF whose input domain is restricted to an efficiently sampleable and recognizable set. The input encoding can be computed in for any constant , implying a strong PRF computable in . (3) A (strong) PRF computable in from a (new, heuristic) seeded LPN assumption. In our assumption, columns of the public LPN matrix are generated by an -wise independent distribution. Supporting evidence for the security of the assumption is given by showing resilience to linear tests. As a bonus, all of our PRF constructions are key-homomorphic, an algebraic property that is useful in many symmetric-cryptography applications. No previously known LPN-based PRFs are key-homomorphic, even if we completely ignore depth-efficiency. In fact, our constructions support key homomorphism for linear functions (and not only additive), a property that no previously known PRF satisfies, including ones from LWE. Additionally, all of our PRF constructions nicely fit into the substitution-permutation network (SPN) design framework used in modern block ciphers (e.g. AES). No prior PRF construction that has a reduction to a standard cryptographic assumption (let alone LPN) has an SPN-like structure. Technically, all of our constructions leverage a new recursive derandomization technique for LPN instances, which allows us to generate LPN error terms deterministically.
This technique is inspired by a related idea from the LWE literature (Kim, EUROCRYPT 2020) for which devising an LPN analogue has been an outstanding open problem.
Last updated:  2025-05-20
SQIsign2DPush: Faster Signature Scheme Using 2-Dimensional Isogenies
Kohei Nakagawa and Hiroshi Onuki
Isogeny-based cryptography comprises cryptographic schemes whose security is based on the hardness of a mathematical problem called the isogeny problem, and it is attracting attention as a candidate for post-quantum cryptography. A representative isogeny-based scheme is the signature scheme SQIsign, which was submitted to the NIST PQC standardization competition for additional signatures. SQIsign has attracted much attention because of its very short signature and key sizes among the candidates for NIST PQC standardization. Recently, many new signature schemes using high-dimensional isogenies have been proposed, such as SQIsignHD, SQIsign2D-West, SQIsign2D-East, and SQIPrime. Last year, SQIsign advanced to Round 2 of the NIST competition and was updated to version 2.0 (which we call SQIsign-v2.0), based on SQIsign2D-West. SQIsign-v2.0 achieves smaller signature sizes and faster verification. However, its signing costs are relatively high. In this paper, we propose a new signature scheme, 'SQIsign2DPush', which has a smaller signing cost than SQIsign-v2.0 while the signature size and the verification cost are almost the same.
Last updated:  2025-05-19
InstaRand: Instantly Available and Instantly Verifiable On-chain Randomness
Jacob Gorman, Lucjan Hanzlik, Aniket Kate, Pratyay Mukherjee, Pratik Sarkar, and Sri AravindaKrishnan Thyagarajan
Web3 applications, such as on-chain gaming, require unbiased and publicly verifiable randomness that can be obtained quickly and cost-effectively whenever needed. Existing services, such as those based on Verifiable Random Functions (VRF), incur network delays and high fees due to their highly interactive nature. FlexiRand [CCS 2023] addressed these problems by hiding the output of the VRF and using it as a seed to derive many random values locally. These values are instantly available for use. However, they cannot be verified independently (or instantly) without disclosing the seed, leaving scope for malicious actors to cheat. To solve this problem, we introduce a new notion, called instantly-verifiable VRF (iVRF), which enables the generation of many random values from one VRF output seed, such that each of them is verifiable independently: the generated values are thus instantly available and also instantly verifiable. To instantiate this notion, we propose a generic construction called InstaRand: it combines any (possibly distributed) VRF at the server's end with another VRF at the client's end to construct an iVRF. Our specific instantiation uses the BLS-based GLOW-DVRF [Euro S&P 2021] at the server's end and the DDH-based VRF of Goldberg et al. [RFC 2023] at the client's end. We use the universal composability framework to analyze the security. Moreover, due to its generality, InstaRand can be instantiated with any post-quantum secure VRF to yield a post-quantum secure iVRF. Our experiments demonstrate the practicality of our instantiation of InstaRand. The client incurs a cost to generate the seed (the server's VRF output) by querying the GLOW-dVRF servers once. Once the seed is set up, the client locally generates each pseudorandom value on demand, avoiding the client-server round-trip delay. Each value can be independently verified. This yields an improvement in output generation and in verification cost over existing solutions.
Last updated:  2025-05-19
Blinding Post-Quantum Hash-and-Sign Signatures
Charles Bouillaguet, Thibauld Feneuil, Jules Maire, Matthieu Rivain, Julia Sauvage, and Damien Vergnaud
Blind signature schemes are essential for privacy-preserving applications such as electronic voting, digital currencies or anonymous credentials. In this paper, we revisit Fischlin's framework for round-optimal blind signature schemes and its recent efficient lattice-based instantiations. Our proposed framework compiles any post-quantum hash-and-sign signature scheme into a blind signature scheme. The resulting scheme ensures blindness by design and achieves one-more unforgeability, relying solely on the unforgeability of the underlying signature scheme and the random oracle model. To achieve this we introduce the notion of commit-append-and-prove (CAP) systems, which generalizes traditional commit-and-prove systems by making their commitments updatable before proving. This building block allows us to overcome the technical challenges encountered when generalizing previous variants of Fischlin's framework to any hash-and-sign signature scheme. We provide efficient CAP system instantiations based on recent MPC-in-the-Head techniques. We showcase our framework by constructing blind versions of UOV and Wave, thereby introducing the first practical blind signatures based on multivariate cryptography and code-based cryptography. Our blind UOV signatures range from 3.8 KB to 11 KB, significantly outperforming previous post-quantum blind signatures, such as the 22 KB lattice-based blind signatures, which were the most compact until now.
Last updated:  2025-05-23
Achieving "beyond CCA1" security for linearly homomorphic encryption, without SNARKs?
Marina Checri, Pierre-Emmanuel Clet, Marc Renard, and Renaud Sirdey
In the wake of Manulis and Nguyen's Eurocrypt'24 paper, new CCA security notions, vCCA and vCCAD, and associated construction blueprints have been proposed to leverage either CPA or CPAD secure FHE beyond the CCA1 security barrier. These two notions are the strongest CCA security notions so far achievable, respectively, by correct and approximate homomorphic schemes. However, the only known construction strategies intimately require advanced SNARK machinery, undermining their practicality. In this context, this paper is an attempt to achieve these advanced CCA security notions in the restricted case of linearly homomorphic encryption, without resorting to SNARKs. To do so, we investigate the relationship between the Linear-Only Homomorphism (LOH) assumption, an assumption that has been used for more than a decade at the core of several proof-of-knowledge constructions, and these two recent security notions (vCCA and vCCAD). On the bright side, when working under the correctness assumption, we establish that the LOH property is sufficient to achieve vCCA security in both the private and public key settings. In the public key setting, we further show that a surprisingly simple and previously known Paillier-based construction also achieves this level of security, at only twice the cost of the baseline scheme. We then turn our attention to LWE-based schemes, for which the Pandora's box of decryption errors opens up. In the private key setting, we are able to achieve CPAD and vCCAD security but only in a fairly restrictive non-adaptive setting, in which vCCAD collapses onto a weak relaxation of CCA1. Finally, we eventually achieve adaptive vCCAD security provided that the number of ciphertexts given to the adversary is suitably restricted. While bridging the gap towards credible practicality requires further work, this is a first step towards obtaining linear homomorphic schemes achieving these recent CCA security notions by means only of relatively lightweight machinery.
Last updated:  2025-05-20
MacaKey: Full-State Keyed Sponge Meets the Summation-Truncation Hybrid
Charlotte Lefevre and Mario Marhuenda Beltrán
The keyed sponge construction has benefited from various efficiency advancements over time, most notably leading to the possibility to absorb over the entire state, as in the full-state keyed sponge. However, squeezing has always remained limited to blocks smaller than the permutation size, as security is determined by the capacity c, the size of the non-squeezed state. In this work, we present Macakey, an improved version of the full-state keyed sponge that not only absorbs over the entire state but also squeezes over the entire state. The scheme combines ideas of the full-state keyed sponge with those of the summation-truncation hybrid of Gunsing and Mennink. We demonstrate that, with no sacrifice in generic security and using only c bits of extra storage, Macakey can significantly boost performance, particularly in scenarios requiring large amounts of output. For example, using the 320-bit Ascon permutation with a 256-bit capacity, Macakey outputs five times as many bits as the full-state keyed sponge.
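As an editorial illustration of why squeeze bandwidth matters: with a b-bit state and c-bit capacity, a classical sponge emits only r = b - c bits per permutation call, whereas squeezing the full state emits b bits per call. The toy sponge below (with a throwaway hash-based stand-in permutation, not Ascon, and not the Macakey construction itself) simply counts permutation calls.

```python
# Toy keyed sponge: count permutation calls for rate-limited vs
# full-state squeezing. Illustrative only; Macakey achieves full-state
# squeezing securely via the summation-truncation hybrid.
import hashlib

B, C = 40, 32          # toy state/capacity in bytes; rate r = 8 bytes
R = B - C

def perm(state):
    # stand-in b-byte "permutation" built from SHA-256 (not a real one)
    out, ctr = b"", 0
    while len(out) < B:
        out += hashlib.sha256(state + bytes([ctr])).digest()
        ctr += 1
    return out[:B]

def squeeze(key, n, rate):
    state = perm(key.ljust(B, b"\x00"))
    out, calls = b"", 0
    while len(out) < n:
        out += state[:rate]
        state = perm(state)
        calls += 1
    return out[:n], calls

_, calls_sponge = squeeze(b"k", 80, R)   # classical: r bytes per call
_, calls_full   = squeeze(b"k", 80, B)   # full-state: b bytes per call
assert calls_sponge == 10 and calls_full == 2
```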
Last updated:  2025-05-20
Practical cryptanalysis of pseudorandom correlation generators based on quasi-Abelian syndrome decoding
Charles Bouillaguet, Claire Delaplace, Mickaël Hamdad, and Damien Vergnaud
Quasi-Abelian Syndrome Decoding (QA-SD) is a recently introduced generalization of Ring-LPN that uses multivariate polynomial rings. As opposed to Ring-LPN, it enables the use of small finite fields such as GF(3) and GF(4). It was introduced by Bombar et al. (Crypto 2023) in order to obtain pseudorandom correlation generators for Beaver triples over small fields. This theoretical work was turned into a concrete and efficient protocol called F4OLEage by Bombar et al. (Asiacrypt 2024) that allows several parties to generate Beaver triples over GF(2). We propose efficient algorithms to solve the decoding problem underlying the QA-SD assumption. We observe that it reduces to a sparse multivariate polynomial interpolation problem over a small finite field where the adversary only has access to random evaluation points, a blind spot in the otherwise rich landscape of sparse multivariate interpolation. We develop new algorithms for this problem: using simple techniques we interpolate polynomials with up to two monomials. By sending the problem to the field of complex numbers and using convex optimization techniques inspired by the field of “compressed sensing”, we can interpolate polynomials with more terms. This enables us to break in practice the parameters proposed by Bombar et al. at Crypto’23 and Asiacrypt’24 as well as Li et al. at Eurocrypt’25 (IACR flagship conferences Grand Slam). In the case of the F4OLEage protocol, our implementation recovers all the secrets in a few hours with probability 60%. This not only invalidates the security proofs, but also yields real-life privacy attacks against multiparty protocols using the Beaver triples generated by the broken pseudorandom correlation generators.
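As an editorial toy version of the interpolation task (not the paper's algorithm): recover a single monomial c*x^e over GF(p) from evaluation points. Here we simply brute-force the exponent; the paper handles multiple monomials with far more sophisticated techniques.

```python
# Toy sparse interpolation: recover (c, e) with y = c * x^e mod p from
# sample points, by brute-forcing the exponent. Illustrative only.
p = 101

def interpolate_monomial(samples, max_deg):
    """Find (c, e) with y == c * x^e mod p for all samples."""
    for e in range(max_deg + 1):
        x0, y0 = samples[0]
        # candidate coefficient from the first sample
        c = y0 * pow(pow(x0, e, p), p - 2, p) % p
        if all(y == c * pow(x, e, p) % p for x, y in samples):
            return c, e
    return None

c_true, e_true = 7, 5
samples = [(x, c_true * pow(x, e_true, p) % p) for x in (2, 3, 5, 11, 17, 23)]
assert interpolate_monomial(samples, max_deg=20) == (7, 5)
```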
Last updated:  2025-05-19
Obfuscation of Unitary Quantum Programs
Mi-Ying (Miryam) Huang and Er-Cheng Tang
Program obfuscation aims to hide the inner workings of a program while preserving its functionality. In the quantum setting, recent works have obtained obfuscation schemes for specialized classes of quantum circuits. For instance, Bartusek, Brakerski, and Vaikuntanathan (STOC 2024) constructed a quantum state obfuscation scheme, which supports the obfuscation of quantum programs represented as quantum states for pseudo-deterministic quantum programs with classical inputs and outputs in the classical oracle model. In this work, we improve upon existing results by constructing the first quantum state obfuscation scheme for unitary (or approximately unitary) quantum programs supporting quantum inputs and outputs in the classical oracle model. At the core of our obfuscation scheme are two novel ingredients: a functional quantum authentication scheme that allows key holders to learn specific functions of the authenticated quantum state with simulation-based security, and a compiler that represents an arbitrary quantum circuit as a projective linear-plus-measurement quantum program described by a sequence of non-adaptive Clifford gates interleaved with adaptive and compatible measurements.
Last updated:  2025-05-19
SPEEDY: Caught at Last
Christina Boura, Patrick Derbez, Baptiste Germon, Rachelle Heim Boissier, and María Naya-Plasencia
SPEEDY is a family of ultra-low-latency block ciphers designed by Leander et al. in 2021. In 2023, Boura et al. proposed a differential attack on the full 7-round variant, SPEEDY-7-192. However, shortly thereafter, Beyne and Neyt demonstrated that this attack was invalid, as the dominant differential characteristic it relied upon had probability zero. A similar issue affects another differential attack proposed the same year by Wang et al., which also targets SPEEDY-7-192 and suffers from the same flaw. As a result, although SPEEDY-7-192 was initially believed to be broken, it remained unbroken in practice, and the question of finding a valid attack on this cipher remained an open problem. In this work, we resolve this problem by presenting the first valid differential attack on SPEEDY-7-192. We verify the validity of our distinguisher using the quasidifferential framework. Moreover, our search for the differential distinguisher is significantly more rigorous than in the previous works, allowing us to explore a larger portion of the search space. We also fully exploit probabilistic extensions of the distinguisher to identify optimal parameters for the key recovery step. Our attack on SPEEDY-7-192 has data and time complexities of 2^{186.36} encryption calls and a memory complexity of 2^{84} 192-bit states. In addition, we present differential attacks on 4-round SPEEDY-5-192 and 5-round SPEEDY-6-192 which currently represent the best attacks against these smaller variants.
Last updated:  2025-05-19
At the Top of the Hypercube -- Better Size-Time Tradeoffs for Hash-Based Signatures
Dmitry Khovratovich, Mikhail Kudinov, and Benedikt Wagner
Hash-based signatures have been studied for decades and have recently gained renewed attention due to their post-quantum security. At the core of the most prominent hash-based signature schemes, XMSS and SPHINCS+, lies a one-time signature scheme based on hash chains due to Winternitz. In this scheme, messages are encoded into vectors of positions (i.e., vertices in a hypercube) in the hash chains, and the signature contains the respective chain elements. The encoding process is crucial for the efficiency and security of this construction. In particular, it determines a tradeoff between signature size and computational costs. Researchers have been trying to improve this size-time tradeoff curve for decades, but all improvements have been arguably marginal. In this work, we revisit the encoding process with the goal of minimizing verification costs and signature sizes. As our first result, we present a novel lower bound for the verification cost given a fixed signature size. Our lower bound is the first to directly apply to general encodings including randomized, non-uniform, and non-injective ones. Then, we present new encodings and prove their security. Inspired by our lower bound, these encodings follow a counterintuitive approach: we map messages non-uniformly into the top layers of a much bigger hypercube than needed but the encoding itself has (hard to find) collisions. With this, we get a 20 % to 40 % improvement in the verification cost of the signature while keeping the same security level and the same size. Our constructions can be directly plugged into any signature scheme based on hash chains, which includes SPHINCS+ and XMSS.
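As an editorial illustration of the Winternitz idea the paper builds on (toy parameters, no checksum; real schemes, and the paper's encodings, constrain the digit vector to prevent forgeries by "advancing" digits):

```python
# Minimal Winternitz-style one-time signature sketch: each message
# digit selects a position in a hash chain; the signature releases
# intermediate chain values and the verifier hashes forward to the
# public key. Toy only -- omits the checksum that real schemes need.
import hashlib

W, CHAINS = 16, 8        # digit alphabet size and number of chains

def H(x):
    return hashlib.sha256(x).digest()

def chain(x, steps):
    for _ in range(steps):
        x = H(x)
    return x

def keygen(seed):
    sk = [H(seed + bytes([i])) for i in range(CHAINS)]
    pk = [chain(s, W - 1) for s in sk]
    return sk, pk

def sign(sk, digits):
    return [chain(s, d) for s, d in zip(sk, digits)]

def verify(pk, digits, sig):
    return all(chain(s, W - 1 - d) == p
               for s, d, p in zip(sig, digits, pk))

sk, pk = keygen(b"seed")
msg = [3, 7, 0, 15, 9, 2, 11, 5]     # 8 base-16 message digits
assert verify(pk, msg, sign(sk, msg))
```

The encoding step the paper optimizes is precisely the map from messages to such digit vectors, which governs the size-time tradeoff.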
Last updated:  2025-05-19
Bootstrapping GBFV with CKKS
Jaehyung Kim
The Generalized BFV [Geelen and Vercauteren; Eurocrypt'25] is an efficient fully homomorphic encryption scheme that supports integer computations over large cyclotomic moduli. However, the only known bootstrapping approach cannot support large precision, as it uses the BFV linear transformation as a subroutine. In this work, we introduce a GBFV bootstrapping that relies on CKKS bootstrapping, as in the BFV bootstrapping from CKKS [Kim et al.; CCS'24]. The new bootstrapping can handle arbitrary precision, notably bootstrapping the CLPX scheme [Chen et al.; CT-RSA'18] for the first time, bootstrapping up to bits of plaintext modulus in less than seconds. In addition, we introduce conversions between GBFV and CKKS and discuss their impact.
Last updated:  2025-05-20
Adaptively Secure Blockchain-Aided Decentralized Storage Networks: Formalization and Generic Construction
Xiangyu Su, Yuma Tamagawa, Mario Larangeira, and Keisuke Tanaka
This work revisits the current Decentralized Storage Network (DSN) definition to propose a novel general construction based on a UTxO-based ledger. To the best of our knowledge, this is the first adaptively secure UTxO blockchain-aided DSN. More concretely, we revisit the currently existing designs to thoroughly formalize the DSN definition and its security. Moreover, we present a general construction in which a client delegates data to a DSN that keeps custody of it during a jointly agreed period. Our newly proposed approach, leveraging the Extended UTxO (EUTxO) Model, neatly allows the storage network to offer automatic verifiability, i.e., without any interaction of the data owner, via proofs published in the blockchain. In summary, this work presents a redesign of the DSN with the aid of an EUTxO-based blockchain, by (1) putting forth a formal and rigorous description of a blockchain-aided DSN protocol, (2) offering a thorough description of a practical EUTxO-based DSN, and (3) detailing a security analysis showing that our protocol is adaptively secure by providing (rational) security guarantees.
Last updated:  2025-05-19
PaCo: Bootstrapping for CKKS via Partial CoeffToSlot
Jean-Sébastien Coron and Tim Seuré
We introduce PaCo, a novel and efficient bootstrapping procedure for the CKKS homomorphic encryption scheme, where PaCo stands for “(Bootstrapping via) Partial CoeffToSlot”. At a high level, PaCo reformulates the CKKS decryption equation in terms of blind rotations and modular additions. This reformulated decryption circuit is then evaluated homomorphically within the CKKS framework. Our approach makes use of the circle group in the complex plane to simulate modular additions via complex multiplication, and utilizes alternative polynomial ring structures to support blind rotations. These ring structures are enabled by a variant of the CoeffToSlot operation, which we call a partial CoeffToSlot. This yields a new bootstrapping approach within CKKS, achieving a computational complexity which is logarithmic in the number of complex slots. We further introduce a parallelized variant that enables bootstrapping over all CKKS slots with enhanced throughput, highlighting PaCo’s suitability for practical and large-scale homomorphic applications. In addition to the bootstrapping technique itself, we develop several supporting tools — particularly in the context of bit-reversing and alternative ring structures for CKKS — which can be of independent interest to the community. Finally, a proof-of-concept implementation confirms that PaCo achieves performance competitive with state-of-the-art methods for CKKS bootstrapping.
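The circle-group trick mentioned above can be sketched in plain arithmetic (a toy illustration of the encoding, not the homomorphic evaluation itself): a residue x mod q is embedded as the unit-circle point e^(2πix/q), so complex multiplication of embeddings adds the residues modulo q.

```python
import cmath
import math

q = 64  # toy modulus

def embed(x: int) -> complex:
    # x mod q  ->  point on the unit circle
    return cmath.exp(2j * math.pi * x / q)

def extract(z: complex) -> int:
    # angle of z  ->  residue mod q
    return round(cmath.phase(z) * q / (2 * math.pi)) % q

a, b = 50, 37
z = embed(a) * embed(b)      # complex multiplication = modular addition
assert extract(z) == (a + b) % q
```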
Last updated:  2025-05-22
Fast Fuzzy PSI from Symmetric-Key Techniques
Cong Zhang, Yu Chen, Yang Cao, Yujie Bai, Shuaishuai Li, Juntong Lin, Anyu Wang, and Xiaoyun Wang
Private set intersection (PSI) enables a sender holding a set X and a receiver holding a set Y to securely compute the intersection X ∩ Y. Fuzzy PSI (FPSI) is a PSI variant where the receiver learns the items y ∈ Y for which there exists x ∈ X such that dist(x, y) ≤ δ with respect to some distance metric dist. Recently, Gao et al. (ASIACRYPT 2024) proposed the first FPSI protocols for L∞ and Lp distance with linear complexity. They summarized their FPSI construction into two steps: fuzzy mapping and fuzzy matching. However, their realizations of the two steps rely heavily on public-key operations, namely DH key exchange and additively homomorphic encryption, resulting in low efficiency. In this work, we propose new FPSI protocols for L∞ and Lp distances, primarily leveraging symmetric-key primitives. We revisit the definition of fuzzy mapping and rigorously redefine it as a cryptographic scheme. We further introduce consistency for the fuzzy mapping scheme, which simplifies the fuzzy matching step into plain PSI. We then demonstrate how to execute the fuzzy mapping step while satisfying consistency. We also propose several new techniques to avoid the extensive use of computationally expensive public-key operations inherent in existing solutions. We implement our FPSI protocols and compare them with the state-of-the-art FPSI protocols. Experiments show that our protocols outperform the state of the art under all the parameters we tested. Specifically, our protocols achieve a speedup in running time and a reduction in communication cost, depending on set sizes, dimension, and distance threshold.
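The "fuzzy mapping, then plain PSI" idea can be caricatured in the clear (a lossy toy under assumed grid parameters, not the authors' consistent fuzzy-mapping scheme): points are bucketed into grid cells of side equal to the threshold, and a receiver item is a candidate match when any of its neighbouring cells appears in the sender's hashed cell set, reducing fuzzy matching to exact set intersection on cell identifiers.

```python
import hashlib
from itertools import product

DELTA = 10   # L-infinity distance threshold (also the cell width)
DIM = 2

def cell(p):
    return tuple(c // DELTA for c in p)

def cell_id(c):
    return hashlib.sha256(repr(c).encode()).hexdigest()

def sender_map(points):
    # the "fuzzy mapping": each point contributes its (hashed) cell
    return {cell_id(cell(p)) for p in points}

def receiver_candidates(p):
    # any point within L_inf distance DELTA lies in the same cell
    # or in one of the 3^DIM - 1 neighbouring cells
    c = cell(p)
    return {cell_id(tuple(ci + o for ci, o in zip(c, off)))
            for off in product((-1, 0, 1), repeat=DIM)}

sender = [(100, 200), (500, 500)]
receiver = [(104, 195), (900, 900)]
table = sender_map(sender)
matches = [p for p in receiver if table & receiver_candidates(p)]
assert matches == [(104, 195)]
```

Note this toy over-approximates (neighbouring cells can be up to twice the threshold apart); real FPSI protocols refine such candidates and, of course, never reveal the sets in the clear.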
Last updated:  2025-05-17
k-out-of-n Proofs and Application to Privacy-Preserving Cryptocurrencies
Min Zhang, Yu Chen, Xiyuan Fu, and Zhiying Cui
Cryptocurrencies enable transactions among mutually distrustful users, necessitating strong privacy, namely, concealing both transfer amounts and participants' identities, while maintaining practical efficiency. While UTXO-based cryptocurrencies offer mature solutions achieving strong privacy and supporting multi-receiver transfers, account-based cryptocurrencies currently lack practical solutions that simultaneously guarantee these properties. With the aim of closing this gap, we propose a generic framework for account-based cryptocurrencies that achieves strong privacy and supports multi-receiver transfers, and then give a practical instantiation called Anonymous PGC. Experimental results demonstrate that, for a 64-sized anonymity set and 8 receivers, Anonymous PGC outperforms Anonymous Zether (IEEE S&P 2021), which offers limited anonymity and no multi-receiver support, achieving 2.6× faster transaction generation, 5.1× faster verification, and a 2.1× reduction in transaction size. Along the way of building Anonymous PGC, we present two novel k-out-of-n proofs. First, we generalize the Groth-Kohlweiss (GK) 1-out-of-n proof (EUROCRYPT 2015) to the k-out-of-n case, resolving the open problem of its natural generalization. In particular, the obtained k-out-of-n proof lends itself to seamless integration with range proofs, yielding an efficient k-out-of-n range proof, which demonstrates that the witnesses of k among n instances lie in specific ranges. Second, we extend the Attema-Cramer-Fehr (ACF) k-out-of-n proof (CRYPTO 2021) to support distinct group homomorphisms, improving its expressiveness while reducing both prover and verifier complexities from quadratic to linear. We believe these two k-out-of-n proofs are of independent interest and will find more applications in privacy-preserving scenarios.
Last updated:  2025-05-17
A Fast, Efficient, Platform-Adaptive, and AIS-20/31 Compliant PLL-Based True Random Number Generator on an SoC FPGA
Oğuz Yayla and Yunus Emre Yılmaz
Phase-locked loops (PLLs) embedded within field-programmable gate arrays (FPGAs) or system-on-chip FPGAs (SoCs) present a promising methodology for the generation of random numbers. Their widespread integration across these platforms, combined with their isolated operational characteristics and robust entropy generation, as substantiated by prior research, positions PLL-based true random number generators (PLL-TRNGs) as highly effective solutions for this purpose. The present study focuses on the implementation of PLL-TRNGs utilizing the ZC702 Rev1.1 Evaluation Board, which incorporates the Zynq-7020 SoC from Xilinx. For experimental validation, a configuration involving three such boards is employed. The parameters governing the PLL-TRNG are optimized through a backtracking algorithm. Additionally, a novel, platform-adaptive technique is proposed to enhance the rate of random data bit generation while preserving entropy characteristics. The system's performance is rigorously evaluated against the criteria established by the German Federal Office for Information Security (BSI) AIS-20/31 Tests, with a detailed account of the implementation process provided. Furthermore, the study demonstrates the minimal resource utilization of the PLL-TRNG design within a SoC, thereby underscoring its suitability for Internet-of-Things (IoT) applications, where logic resources are often highly constrained.
Last updated:  2025-05-17
Leveled Homomorphic Encryption over Composite Groups
Mahdi Mahdavi, Ehsan Meamari, Emad Heydari Beni, and Maryam Sheikhi
Homomorphic encryption is a powerful tool that enables computation on encrypted data without requiring decryption. While many Fully Homomorphic Encryption schemes, supporting arbitrary computations on encrypted data, have been developed using lattice-based and AGCD-based approaches, progress in composite groups has been limited to Partial Homomorphic Encryption schemes, which only support either addition or multiplication. This paper introduces the first ℓ-leveled homomorphic encryption schemes over composite groups, based on the factoring problem, that achieve both multiplicative and additive homomorphic properties. Additionally, a modified version of Rothblum's transformation technique is developed to provide public-key variants of the symmetric schemes. Our novel design eliminates the need for the relinearization operation, which is common in LWE-based HE schemes, and removes the requirement for the circular security assumption. For applications where the traffic must be indistinguishable as encrypted, a scrambled scheme is designed using a labeling technique. While the initial schemes offer an expanded message space, the double-sized message technique enables the encryption of messages up to twice the size without increasing the ciphertext size. Implementation results show that our schemes significantly outperform existing solutions, particularly in multiplication operations, achieving speeds up to 1000 times faster than well-known schemes such as BFV, BGV, CKKS, and TFHE.
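For background, the best-known additively homomorphic scheme over a composite group is Paillier; a toy instantiation with tiny primes (for illustration only, not the schemes proposed here) shows the additive property that such partial schemes provide:

```python
import math
import random

p, q = 61, 53                  # toy primes; real parameters are far larger
N = p * q
N2 = N * N
lam = (p - 1) * (q - 1)        # using lambda = phi(N) for simplicity
g = N + 1

def enc(m):
    while True:
        r = random.randrange(1, N)
        if math.gcd(r, N) == 1:
            break
    return pow(g, m, N2) * pow(r, N, N2) % N2

def L(u):
    return (u - 1) // N

mu = pow(L(pow(g, lam, N2)), -1, N)

def dec(c):
    return L(pow(c, lam, N2)) * mu % N

# multiplying ciphertexts adds plaintexts modulo N
m1, m2 = 123, 456
assert dec(enc(m1) * enc(m2) % N2) == (m1 + m2) % N
```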
Last updated:  2025-05-17
One-Way Homomorphic Encryption: A Composite Group Approach
Mahdi Mahdavi and Helena Rifà-Pous
Homomorphic Encryption (HE) is a fundamental Privacy-Enhancing Technology (PET) that enables computations on encrypted data without decryption. Despite its utility, designing an efficient and secure HE scheme is highly complex, requiring sophisticated cryptographic techniques. This paper introduces a novel approach to achieving homomorphic properties—supporting either one addition or one multiplication—within composite groups. While the proposed technique exhibits one-wayness, it has good potential to serve as a foundational building block for constructing indistinguishable cryptosystems. This work contributes to the advancement of homomorphic encryption by providing a new perspective on secure computation within structured algebraic settings.
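A classic example of a multiplicative homomorphism in a composite group (not the construction of this paper) is textbook, unpadded RSA over Z_N*: the product of two ciphertexts decrypts to the product of the plaintexts.

```python
# textbook (unpadded) RSA: Enc(m1) * Enc(m2) = Enc(m1 * m2 mod N)
p, q = 61, 53                      # toy primes, for illustration only
N, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def enc(m):
    return pow(m, e, N)

def dec(c):
    return pow(c, d, N)

m1, m2 = 42, 55
assert dec(enc(m1) * enc(m2) % N) == (m1 * m2) % N
```

Textbook RSA is deterministic and only one-way (not semantically secure), which mirrors the abstract's distinction between one-wayness and indistinguishability.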
Last updated:  2025-05-17
Optimistic Asynchronous Dynamic-committee Proactive Secret Sharing
Bin Hu, Jianwei Liu, Zhenliang Lu, Qiang Tang, Zhuolun Xiang, and Zongyang Zhang
Dynamic-committee Proactive Secret Sharing (DPSS) has gained increased attention for its ability to dynamically update shareholder committees and refresh secret shares, even under adversaries that gradually corrupt all nodes. However, existing state-of-the-art asynchronous DPSS protocols suffer from significant message complexity and communication complexity, where λ denotes the security parameter and n is the committee size. In this paper, under the trusted setup assumption, we propose the first asynchronous DPSS protocol with message complexity in all scenarios. Additionally, our protocol achieves communication complexity in the optimistic case, where all nodes are honest and the network is synchronous, and communication complexity in the worst case. Without a trusted setup, in the optimistic case, the message complexity is , and the communication complexity is . In the worst case, our protocol preserves state-of-the-art message and communication complexities, i.e., and , respectively. Besides, our protocol supports batch amortization and accommodates high thresholds. For committee sizes of 4 to 400, the estimated concrete communication cost of our DPSS is 19–100× (resp., 8–14×) smaller in the optimistic case (resp., worst case) compared to the state of the art. Experiments on AWS show that our DPSS achieves a latency of 1.9–8 seconds for committee sizes from 4 to 64. Single-machine benchmarks reveal a (computational) runtime reduction of up to 44%.
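For background, the share-refresh step at the heart of any DPSS can be sketched with plain Shamir resharing (a generic textbook technique, not this paper's protocol): each old shareholder reshares its share to the new committee with a fresh polynomial, and each new holder combines the received sub-shares with Lagrange coefficients, preserving the secret while making old shares useless.

```python
import random

P = 2**61 - 1   # prime field

def poly_eval(coeffs, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def share(secret, t, ids):
    # degree-t polynomial with constant term = secret
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return {i: poly_eval(coeffs, i) for i in ids}

def lagrange_at_zero(points):
    # interpolate the polynomial through `points` and evaluate at 0
    total = 0
    for xi, yi in points.items():
        num = den = 1
        for xj in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total += yi * num * pow(den, -1, P)
    return total % P

t = 1                      # any t+1 shares reconstruct
old_ids, new_ids = [1, 2, 3], [4, 5, 6]
secret = 123456789
old = share(secret, t, old_ids)

# each old holder reshares its own share to the new committee
sub = {i: share(old[i], t, new_ids) for i in old_ids}
# new holder j interpolates the sub-shares it received at x = 0
new = {j: lagrange_at_zero({i: sub[i][j] for i in old_ids}) for j in new_ids}

assert lagrange_at_zero({j: new[j] for j in (4, 5)}) == secret
```

The hard part, which protocols like this paper's address, is doing this verifiably and asynchronously against Byzantine resharers; the arithmetic above assumes everyone is honest.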
Last updated:  2025-05-17
Papercraft: Lattice-based Verifiable Delay Function Implemented
Michał Osadnik, Darya Kaviani, Valerio Cini, Russell W. F. Lai, and Giulio Malavolta
A verifiable delay function (VDF) requires a specified number of sequential steps to compute, yet the validity of its output can be verified efficiently, much faster than recomputing the function from scratch. VDFs are a versatile cryptographic tool, with many industrial applications, such as blockchain consensus protocols, lotteries, and verifiable randomness. Unfortunately, without exception, all known practical VDF constructions are broken by quantum algorithms. In this work, we investigate the practicality of VDFs with plausible post-quantum security. We propose Papercraft, a working implementation of a VDF based entirely on lattice techniques and thus plausibly post-quantum secure. Our VDF is based on new observations on lattice-based succinct argument systems with many low-level optimisations, yielding the first lattice-based VDF that is implementable on today's hardware. As an example, our Papercraft implementation can verify a computation of almost 6 minutes in just 7 seconds. Overall, our work demonstrates that lattice-based VDFs are not just a theoretical construct, paving the way for their practical deployment.
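The pre-quantum baseline alluded to above is sequential squaring in an RSA group (the RSW time-lock idea behind the Wesolowski and Pietrzak VDFs); a toy sketch with small primes shows why evaluation is inherently sequential while a party holding the factorization verifies fast. Real VDFs additionally make verification public rather than trapdoor-based.

```python
p, q = 1000003, 1000033     # toy primes; real schemes use 2048-bit moduli
N = p * q
phi = (p - 1) * (q - 1)
T = 100000                  # number of sequential squarings (the "delay")
x = 5

# evaluation: T sequential modular squarings; no known shortcut
# without the factorization of N
y = x
for _ in range(T):
    y = y * y % N

# verification with the trapdoor: reduce the exponent 2^T modulo phi(N),
# turning T squarings into one fast modular exponentiation
e = pow(2, T, phi)
assert y == pow(x, e, N)
```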
Last updated:  2025-05-17
Blockcipher-Based Key Derivation without PRP/PRF Switching
Fabrice Benhamouda, Shai Halevi, Panos Kampanakis, and Hugo Krawczyk
We examine the use of blockcipher-based key derivation beyond the birthday bound, arguing that the analysis step of PRP/PRF switching can be eliminated in many cases. To support this, we consider a modified "ideal model" for keying cryptographic applications in the multi-instance setting, where keys are chosen to be random but distinct (rather than completely independent). Our analysis shows that typical cryptographic applications remain secure in this model. One consequence is that it is typically safe to derive close to 2^n keys using an n-bit blockcipher in counter mode. In particular, considering the practice of nonce-derived keys for authenticated encryption, our results imply that modes such as XAES-256-GCM that use CMAC-based key derivation are safe to use with more than distinct nonces.
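The "random but distinct" point can be illustrated with a toy PRP built as a Feistel network over SHA-256 (a stand-in for a real blockcipher such as AES, with hypothetical parameters): because a PRP is a permutation, keys derived at distinct counters can never collide, unlike PRF outputs, which start colliding after about 2^(n/2) queries.

```python
import hashlib

def F(key: bytes, data: bytes) -> int:
    # toy round function (stand-in for a real blockcipher component)
    return int.from_bytes(hashlib.sha256(key + data).digest()[:4], "big")

def prp(key: bytes, block: int, rounds: int = 4) -> int:
    # Feistel network on 64-bit blocks: a permutation by construction,
    # so distinct inputs always give distinct outputs
    left, right = block >> 32, block & 0xFFFFFFFF
    for r in range(rounds):
        left, right = right, left ^ F(key, bytes([r]) + right.to_bytes(4, "big"))
    return (left << 32) | right

# counter-mode key derivation: key_i = PRP(master, i)
master = b"master key"
keys = [prp(master, counter) for counter in range(1000)]
assert len(set(keys)) == len(keys)   # a PRP in counter mode never collides
```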
Last updated:  2025-05-17
Towards Improving Throughput and Scalability of DAG-based BFT SMR
Nibesh Shrestha and Aniket Kate
Directed Acyclic Graph (DAG)-based BFT consensus protocols often suffer from limited throughput and scalability due to bandwidth-intensive data replication to all participants. However, it is sufficient to replicate data to a smaller sub-committee of parties that holds an honest majority with high probability. In this work, we introduce tribe-assisted reliable broadcast, a novel primitive that ensures reliable broadcast (RBC) properties within a smaller honest-majority sub-committee—referred to as a clan—drawn from the entire network, called the tribe. Leveraging this primitive, we develop two efficient DAG-based BFT consensus protocols. First, we present a single-clan protocol, in which a single clan is elected from the tribe, and data is disseminated exclusively to this designated clan using tribe-assisted RBC. We then extend this design to a multi-clan setting, where multiple clans are elected and data is distributed within each respective clan via the same mechanism. Experimental results demonstrate that both protocols offer substantial improvements in throughput and latency over existing DAG-based BFT protocols, even at moderately large scales.
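The premise that a small sub-committee (clan) holds an honest majority with high probability is a hypergeometric tail computation; a short sketch (illustrative parameters, not the paper's concrete settings) makes the failure probability explicit.

```python
from math import comb

def p_dishonest_majority(n: int, f: int, k: int) -> float:
    # probability that a uniformly sampled clan of size k contains
    # at least ceil(k/2) of the f faulty parties (hypergeometric tail)
    bad = sum(comb(f, i) * comb(n - f, k - i)
              for i in range((k + 1) // 2, k + 1))
    return bad / comb(n, k)

# n = 1000 parties, roughly a third faulty:
# the failure probability drops sharply as the clan grows
for k in (51, 101, 201):
    print(k, p_dishonest_majority(1000, 333, k))
```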
Last updated:  2025-05-16
Lower Bounds for Garbled Circuits from Shannon-Type Information Inequalities
Jake Januzelli, Mike Rosulek, and Lawrence Roy
We establish new lower bounds on the size of practical garbled circuits, which hold against any scheme satisfying the following simple properties: (1) Its security is based on symmetric-key cryptography only. More formally, security holds in Minicrypt, a model in which a random oracle is the only available source of cryptography. (2) The evaluation algorithm makes non-adaptive queries to the random oracle. (3) The evaluation algorithm "knows" which of its oracle queries are made by which other input combinations. These restrictions are reasonable for garbling single gates. In particular, unlike prior attempts at lower bounds, we make no assumptions about the internal behavior of the garbling algorithm, i.e., how it uses random oracle outputs and wire labels to compute the garbled gate, etc. We prove separate lower bounds depending on whether the scheme uses the free-XOR technique (Kolesnikov & Schneider, ICALP 2008). In the free-XOR case, we prove that a garbled AND-gate requires 1.5λ bits; thus, the garbling scheme of Rosulek & Roy (Crypto 2022) is optimal. In the non-free-XOR case, we prove that a garbled AND-gate requires 2λ bits and a garbled XOR-gate requires λ bits; thus, the garbling scheme of Gueron, Lindell, Nof, and Pinkas (CCS 2015) is optimal. We prove our lower bounds using tools from information theory. A garbling scheme can be characterized as a joint distribution over various quantities: wire labels, garbled gate information, random oracle responses. We show that different properties of a garbling scheme imply certain Shannon-type information inequalities about this distribution. We then use an automated theorem prover for Shannon-type inequalities to prove that our inequalities imply lower bounds on the entropy, and hence the size, of the garbled gate information.
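The Shannon-type machinery can be illustrated with a small numeric check (a toy, unrelated to the actual garbling distributions): compute marginal entropies of a joint distribution and verify a basic Shannon-type inequality, here submodularity, which is the kind of fact such automated provers chain together.

```python
import itertools
import math
import random

random.seed(1)
# random joint distribution over three binary variables (X, Y, Z)
outcomes = list(itertools.product((0, 1), repeat=3))
weights = [random.random() for _ in outcomes]
total = sum(weights)
p = {o: w / total for o, w in zip(outcomes, weights)}

def H(vars_):
    # Shannon entropy (in bits) of the marginal on the given coordinates
    marg = {}
    for o, prob in p.items():
        key = tuple(o[v] for v in vars_)
        marg[key] = marg.get(key, 0.0) + prob
    return -sum(q * math.log2(q) for q in marg.values() if q > 0)

# submodularity, a basic Shannon-type inequality:
# H(X,Y) + H(Y,Z) >= H(X,Y,Z) + H(Y)
assert H((0, 1)) + H((1, 2)) >= H((0, 1, 2)) + H((1,)) - 1e-12
```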
Last updated:  2025-05-16
Improved Cryptanalysis of an RSA Variant Based on Cubic Pell Curve
Mohammed Rahmani and Abderrahmane Nitaj
In 2024, based on the cubic Pell curve, Nitaj and Seck proposed a variant of the RSA cryptosystem where the modulus is of the form N = pq, and the public key e and private key d satisfy the equation ed ≡ 1 (mod (p^2 + p + 1)(q^2 + q + 1)). They showed that N can be factored when d is less than a certain bound that depends on e and N in one parameter regime, a result that does not extend to the complementary regime. In this paper, we propose a cryptanalysis of this scheme in that remaining regime and give an explicit bound on d that makes the scheme insecure. The method is based on Coppersmith's method and lattice reduction.
Last updated:  2025-05-16
Decentralized Multi-Authority Attribute-Based Inner-Product Functional Encryption: Noisy and Evasive Constructions from Lattices
Jiaqi Liu, Yan Wang, and Fang-Wei Fu
We initiate the study of multi-authority attribute-based functional encryption for noisy inner-product functionality, and propose two new primitives: (1) multi-authority attribute-based (noisy) inner-product functional encryption (MA-AB(N)IPFE), and (2) multi-authority attribute-based evasive inner-product functional encryption (MA-ABevIPFE). The MA-AB(N)IPFE primitive generalizes the existing multi-authority attribute-based inner-product functional encryption schemes by Agrawal et al. [AGT21], by enabling approximate inner-product computation under decentralized attribute-based control. This newly proposed notion combines the approximate function evaluation of noisy inner-product functional encryption (IPFE) with the decentralized key-distribution structure of multi-authority attribute-based encryption. To better capture noisy functionalities within a flexible security framework, we formulate the MA-ABevIPFE primitive under a generic-model view, inspired by the evasive IPFE framework by Hsieh et al. [HLL24]. It shifts the focus from pairwise ciphertext indistinguishability to a more relaxed pseudorandomness-based game. To support the above notions, we introduce two variants of lattice-based computational assumptions: - The evasive IPFE assumption (evIPFE): it generalizes the assumption introduced in [HLL24] to the multi-authority setting and admits a reduction from the evasive LWE assumption proposed by Waters et al. [WWW22]; - The indistinguishability-based evasive IPFE assumption (IND-evIPFE): it is an indistinguishability-based variant of the evasive IPFE assumption designed to capture the stronger security guarantees required by our MA-AB(N)IPFE scheme. We present concrete lattice-based constructions for both primitives supporting subset policies, building upon the framework of [WWW22]. Our schemes are proven to be statically secure in the random oracle model under the standard LWE assumption and the newly introduced assumptions. 
Additionally, we demonstrate that our MA-AB(N)IPFE scheme can be transformed, via standard modulus switching, into a noiseless MA-ABIPFE scheme that supports exact inner-product functionality consistent with the MA-IPFE syntax in [AGT21,DP23]. This yields the first lattice-based construction of such a primitive. All our schemes support arbitrary polynomial-size attribute policies and are secure in the random oracle model under lattice assumptions with a sub-exponential modulus-to-noise ratio, making them practical candidates for noise-tolerant, fine-grained access control in multi-authority settings.
Last updated:  2025-05-16
Improvement of Side-Channel Attacks on Mitaka
Vladimir Sarde and Nicolas Debande
Mitaka is a variant of Falcon, one of the three post-quantum signature schemes selected by NIST for standardization. Introduced in 2021, Mitaka offers a simpler, parallelizable, and maskable version of Falcon. Our article focuses on its physical security, and our results are threefold. Firstly, we enhance a known profiled side-channel attack on an unprotected Mitaka implementation by a factor of 512. Secondly, we analyze the masked implementation of Mitaka described in the reference article, which incorporates a different sampler and a protective gadget. We expose the first side-channel flaw in this sampler. This flaw makes it possible to break the masked implementation with a first-order side-channel attack. In this scenario, the key can be recovered using only three times more traces than the attack on the unprotected implementation. Finally, we discuss and recommend new countermeasures to mitigate these attacks.