Papers updated in last 14 days (Page 2 of 174 results)

Last updated:  2025-04-16
Eva: Efficient Privacy-Preserving Proof of Authenticity for Lossily Encoded Videos
Chengru Zhang, Xiao Yang, David Oswald, Mark Ryan, and Philipp Jovanovic
With the increasing usage of fake videos in misinformation campaigns, proving the provenance of an edited video becomes critical, in particular, without revealing the original footage. We formalize the notion and security model of proofs of video authenticity and give the first cryptographic video authentication protocol, Eva, which supports lossy codecs and arbitrary edits and is proven secure under well-established cryptographic assumptions. Compared to previous cryptographic methods for image authentication, Eva is not only capable of handling significantly larger amounts of data originating from the complex lossy video encoding but also achieves linear prover time, constant RAM usage, and constant proof size with respect to video size. These improvements achieve theoretically optimal complexity and are enabled by two new theoretical advances, which may be of independent interest: integrating lookup arguments with folding-based incrementally verifiable computation (IVC), and compressing IVC proofs efficiently. For our implementation of Eva, we instantiate these techniques with the Nova folding scheme, yielding a scheme we call Lova. As for concrete performance, we additionally utilize various optimizations such as tailored circuit design and GPU acceleration to make Eva highly practical: for a 2-minute HD (1280×720) video encoded in H.264 at 30 frames per second, Eva generates a 448 B proof in about 2.4 hours on consumer-grade hardware at 2.6 μs per pixel, surpassing state-of-the-art cryptographic image authentication schemes by more than an order of magnitude in terms of prover time and proof size.
Last updated:  2025-04-16
SASTA: Ambushing Hybrid Homomorphic Encryption Schemes with a Single Fault
Aikata Aikata, Ahaan Dabholkar, Dhiman Saha, and Sujoy Sinha Roy
Fully Homomorphic Encryption offers an effective solution for privacy-preserving computation, but its adoption is hindered by substantial computational and communication overheads. To address these, the Hybrid Homomorphic Encryption (HHE) protocol was developed, where the client encrypts data using a symmetric encryption scheme (SE), and the server homomorphically evaluates its decryption. Previous studies have demonstrated that the HHE protocol has no impact on the correctness of applications; however, in this work, we shift the focus to its security resilience when subjected to Differential Fault Analysis (DFA). While DFA has proven effective against standalone symmetric-key primitives, no DFA study has been proposed that exploits the HHE protocol as a whole. Furthermore, previous DFA approaches on SE rely on strong assumptions such as nonce reuse, which limits their applicability in real-world protocols or practical applications. In this work, we show that the structure of the HHE protocol itself exposes new avenues for fault exploitation. We introduce Sasta-DFA, which, to our knowledge, is the first DFA targeting the HHE protocol in its entirety. Our study demonstrates that an attacker can achieve complete key recovery with a single fault injection. A key feature of this attack is that it does not require nonce reuse, thus adhering to nonce-related specifications. We adapt the IND-CPA^D threat model proposed by Li and Micciancio at Eurocrypt'21 for HHE in the context of fault attacks. We conduct the first DFA study on the emerging HHE-specific integer-based SE schemes: Rubato, Hera, Pasta, and Masta. Notably, our attack methodology is generalizable and applicable to a broader class of HHE-friendly SE schemes, including boolean schemes like Rasta and even the standard scheme AES. We also present the first experimental validation of fault analysis on these new HHE-enabling schemes. Our attack, mounted on an ATXmega128D4-AU microcontroller, successfully demonstrates full key recovery. Finally, we also extend Sasta-DFA to Authenticated Transciphering protocols under a weaker threat model that removes any functional dependency.
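Sasta-DFA itself targets the full HHE pipeline; for intuition only, the sketch below shows the textbook DFA principle it builds on, reduced to a toy last-round S-box layer c = S[u] ^ k under a single-bit fault model. The S-box, fault model, and loop here are illustrative assumptions, not the paper's attack.

```python
# Toy differential fault analysis (DFA): the attacker sees correct and
# faulty outputs of c = S[u] ^ k and keeps only key guesses for which the
# implied input difference lies in the assumed fault set.
import random
random.seed(0)

S = list(range(256)); random.shuffle(S)      # toy 8-bit bijective S-box
S_inv = [0] * 256
for x, y in enumerate(S):
    S_inv[y] = x

k = random.randrange(256)                    # secret last-round key byte
DELTAS = {1 << i for i in range(8)}          # single-bit fault model

def leak_pair():
    u = random.randrange(256)                # unknown intermediate value
    fault = 1 << random.randrange(8)         # fault flips one bit of u
    return S[u] ^ k, S[u ^ fault] ^ k        # (correct, faulty) output bytes

cands = set(range(256))
while len(cands) > 1:                        # intersect candidates over pairs
    c, c_f = leak_pair()
    cands = {kg for kg in cands
             if S_inv[c ^ kg] ^ S_inv[c_f ^ kg] in DELTAS}
print("recovered key byte:", cands, "true key:", k)
```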
Last updated:  2025-04-16
Zero-Knowledge Protocol for Knowledge of Known Discrete Logarithms: Applications to Ring Confidential Transactions and Anonymous Zether
Li Lin, Tian Qiu, Xin Wang, Hailong Wang, Changzheng Wei, Ying Yan, Wei Wang, and Wenbiao Zhao
The security of a large fraction of zero-knowledge arguments of knowledge schemes relies on the discrete logarithm (DL) assumption or the discrete logarithm relation assumption, such as Bulletproofs (S&P 18) and compressed Σ-protocol (CRYPTO 20). At the heart of these protocols is an interactive proof of knowledge between a prover and a verifier showing that a Pedersen vector commitment to a vector satisfies multi-variate equations, where the DL relations among the vector of generators are unknown. However, in some circumstances, the prover may know the DL relations among the generators, and the DL relation assumption no longer holds, such as ring signatures, ring confidential transactions (RingCT) and K-out-of-N proofs, which makes the soundness proofs of these protocols infeasible. This paper is concerned with a problem called knowledge of known discrete logarithms (KKDL) that appears but has not been clearly delineated in the literature. Namely, it asks to prove a set of multi-exponent equalities, starting with the fact that the prover may know the DL relations among the generators of these equalities. Our contributions are three-fold: (1) We propose a special honest-verifier zero-knowledge protocol for the problem. Using the Fiat-Shamir heuristic and the improved inner-product argument of Bulletproofs, the proof size of our protocol is logarithmic in the dimension of the vector. (2) As applications, our protocol can be utilized to construct a secure logarithmic-size RingCT, which fixes the issues of Omniring (CCS 19), ring signatures (with signature size for ring size ), and K-out-of-N proofs of knowledge (with proof size ), achieving the most succinct proof size and improving on previous results. Meanwhile, we propose the first account-based multi-receiver privacy scheme considering the sender's privacy with logarithmic proof size (to the best of our knowledge). (3) We describe an attack on RingCT-3.0 (FC 20) where an attacker can spend a coin of an arbitrary amount that never existed on the blockchain.
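For intuition, a minimal Python sketch (toy Schnorr group, not the paper's protocol) of the setting the KKDL problem addresses: a committer who knows the discrete logs among the Pedersen generators can open one commitment to two different vectors, so soundness arguments resting on the DL-relation assumption collapse.

```python
# Toy Pedersen vector commitment; knowing the DLs among generators breaks binding.
import random
random.seed(1)

q, p = 83, 167                        # toy Schnorr group: p = 2q + 1, both prime
g0 = 4                                # generator of the order-q subgroup of Z_p*

# The "known DL" setting: the committer knows a_i and b with g_i = g0^a_i, h = g0^b.
a = [random.randrange(1, q) for _ in range(3)]
b = random.randrange(1, q)
g = [pow(g0, ai, p) for ai in a]
h = pow(g0, b, p)

def commit(v, r):
    C = pow(h, r, p)
    for gi, vi in zip(g, v):
        C = C * pow(gi, vi, p) % p
    return C

v, r = [5, 7, 9], 11
C = commit(v, r)

# Knowing a and b, open the SAME commitment to a different vector:
v2 = [v[0] + 1, v[1], v[2]]           # change one coordinate...
r2 = (r - a[0] * pow(b, -1, q)) % q   # ...and compensate in the exponent mod q
assert commit(v2, r2) == C            # binding fails when the DLs are known
print("one commitment, two openings:", (v, r), (v2, r2))
```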
Last updated:  2025-04-16
Neural network design options for RNG's verification
José Luis Crespo, Jaime Gutierrez, and Angel Valle
In this work, we explore neural network design options for discriminating Random Number Generators (RNG), as a complement to existing statistical test suites, continuing a recent paper by the authors. Specifically, we consider variations in architecture and data preprocessing. We test their impact on the network's ability to discriminate sequences from a low-quality RNG versus a high-quality one—that is, to discriminate between "optimal" sequence sets and those from the generator under test. When the network fails to distinguish them, the test is passed. For this test to be useful, the network must have real discrimination capabilities. We review several network design possibilities, showing significant differences in the obtained results. The best option presented here is convolutional networks working on 5120-byte sequences.
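As a rough illustration of the approach (the layer sizes and kernel widths below are assumptions, not the paper's architecture), a 1D convolutional discriminator over 5120-byte sequences might look like this in PyTorch:

```python
# Sketch of a 1D-CNN discriminator for RNG output sequences.
import torch
import torch.nn as nn

class RNGDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=8, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=8, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, 2)   # reference RNG vs. RNG under test

    def forward(self, x):                    # x: (batch, 1, 5120), bytes scaled to [0, 1]
        return self.classifier(self.features(x).squeeze(-1))

# If the trained network cannot beat chance on held-out sequences,
# the generator under test passes this discrimination test.
model = RNGDiscriminator()
batch = torch.rand(4, 1, 5120)               # stand-in for byte sequences / 255
print(model(batch).shape)                    # torch.Size([4, 2])
```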
Last updated:  2025-04-16
An NVMe-based Secure Computing Platform with FPGA-based TFHE Accelerator
Yoshihiro Ohba, Tomoya Sanuki, Claude Gravel, Kentaro Mihara, Asuka Wakasugi, and Kenta Adachi
In this study, we introduce a new approach to secure computing by implementing a platform that utilizes a non-volatile memory express (NVMe)-based system with an FPGA-based Torus fully homomorphic encryption (TFHE) accelerator, solid state drive (SSD), and middleware on the host side. Our platform is the first to offer completely secure computing capabilities for TFHE by using an FPGA-based accelerator. We defined secure computing instructions to evaluate 14-bit to 14-bit functions using TFHE. Our middleware allows for the communication of ciphertexts, keys, and secure computing programs while invoking secure computing programs through NVMe commands with metadata. Our performance evaluation demonstrates that our secure computing platform outperforms CPU-based and GPU-based platforms by 15 to 120 times and by 2.5 to 3 times, respectively, in gate bootstrapping execution time. Additionally, during gate bootstrapping, our platform consumes 7 to 12 times less energy than CPU-based platforms and 4.95 times less than a GPU-based platform. The performance of a machine learning application running on our platform shows that bootstrapping accounts for more than 80% of ciphertext learning time.
Last updated:  2025-04-16
Trilithium: Efficient and Universally Composable Distributed ML-DSA Signing
Antonín Dufka, Semjon Kravtšenko, Peeter Laud, and Nikita Snetkov
In this paper, we present Trilithium: a protocol for distributed key generation and signing compliant with FIPS 204 (ML-DSA). Our protocol allows two parties, a "server" and a "phone", with the assistance of a correlated randomness provider (CRP), to produce a standard ML-DSA signature. We prove our protocol secure against a malicious server or phone in the universal composability (UC) model, introducing some novel techniques to argue the security of two-party secure computation protocols with active security against one party, but only active privacy against the other. We provide an implementation of our protocol in Rust and benchmark it, showing the practicality of the protocol.
Last updated:  2025-04-16
Tree-based Quantum Carry-Save Adder
Hyunjun Kim, Sejin Lim, Kyungbae Jang, Siyi Wang, Anubhab Baksi, Anupam Chattopadhyay, and Hwajeong Seo
Quantum computing is regarded as one of the most significant upcoming advancements in computer science. Although fully operational quantum computers have yet to be realized, they are expected to solve specific problems that are difficult to solve using classical computers. Given the limitations of quantum computing resources, it is crucial to design compact quantum circuits for core operations, such as quantum arithmetic. In this paper, we focus on optimizing the circuit depth of quantum multi-operand addition, which is a fundamental component in quantum implementations (for example, SHA-2). Building on the foundational quantum carry-save approach by Phil Gossett, we introduce a tree-based quantum carry-save adder. Our design integrates the Wallace and Dadda trees to optimize carry handling during multi-operand additions. To further reduce circuit depth, we utilize additional ancilla qubits for parallel operations and introduce an efficient technique for reusing these ancilla qubits. Our tree-based carry-save adder achieves the lowest circuit depth (-depth) and provides an improvement of over 82% (up to 99%) in the qubit count–circuit depth product for multi-operand addition. Furthermore, we apply our method to multiplication, achieving the lowest circuit depth and an improvement of up to 87% in the qubit count–circuit depth product.
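For background, the classical operation being translated to the quantum setting is the 3:2 carry-save compressor, which Wallace/Dadda trees apply in parallel layers; a purely classical Python sketch (not a quantum circuit):

```python
# Classical 3:2 carry-save compressor: reduces three operands to two
# (sum word + carry word) with no carry propagation. Tree adders apply it
# in parallel layers; the paper's adder is its quantum analogue.
def csa(a, b, c):
    s = a ^ b ^ c                                 # bitwise sum
    carry = ((a & b) | (a & c) | (b & c)) << 1    # majority bits, shifted
    return s, carry

def multi_operand_add(operands):
    while len(operands) > 2:                      # reduce layer by layer
        nxt = []
        while len(operands) >= 3:
            nxt.extend(csa(operands.pop(), operands.pop(), operands.pop()))
        nxt.extend(operands)
        operands = nxt
    return sum(operands)                          # one final carry-propagating add

vals = [13, 250, 97, 4, 181, 66]
assert multi_operand_add(list(vals)) == sum(vals)
print(multi_operand_add(list(vals)))
```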
Last updated:  2025-04-16
Uncertainty Estimation in Neural Network-enabled Side-channel Analysis and Links to Explainability
Seyedmohammad Nouraniboosjin and Fatemeh Ganji
Side-channel analysis (SCA) has emerged as a critical field in securing hardware implementations against potential vulnerabilities. With the advent of artificial intelligence (AI), neural network-based approaches have proven to be among the most useful techniques for profiled SCA. Despite the success of NN-assisted SCA, a critical challenge remains, namely understanding predictive uncertainty. NNs are often uncertain in their predictions, leading to incorrect key guesses with high probabilities, corresponding to a higher rank associated with the correct key. This uncertainty stems from multiple factors, including measurement errors, randomness in physical quantities, and variability in NN training. Understanding whether this uncertainty arises from inherent data characteristics or can be mitigated through better training is crucial. Additionally, if data uncertainty dominates, identifying specific trace features responsible for misclassification becomes essential. We propose a novel approach to estimating uncertainty in NN-based SCA by leveraging Renyi entropy, which offers a generalized framework for capturing various forms of uncertainty. This metric allows us to quantify uncertainty in NN predictions and explain its impact on key recovery. We decompose uncertainty into epistemic (model-related) and aleatoric (data-related) components. Given the challenge of estimating probability distributions in high-dimensional spaces, we use matrix-based Renyi α-entropy and α-divergence to better approximate leakage distributions, addressing the limitations of KL divergence in SCA. We also explore the sources of uncertainty, e.g., resynchronization and randomized keys, as well as hyperparameters related to NN training. To identify which time instances (features in traces) contribute most to uncertainty, we also integrate SHAP explanations with our framework, overcoming the limitations of conventional sensitivity analysis. Lastly, we show that predictive uncertainty strongly correlates with standard SCA metrics like rank, offering a complementary measure for evaluating attack complexity. Our theoretical findings are backed by extensive experiments on available datasets and NN models.
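A small numpy sketch of the matrix-based Renyi α-entropy underlying the approach (the RBF kernel and its bandwidth are illustrative choices, not the paper's exact pipeline):

```python
# Matrix-based Renyi alpha-entropy of a batch of traces or prediction vectors:
# build a normalized Gram matrix and take 1/(1-alpha) * log2(sum of eigenvalues^alpha).
import numpy as np

def renyi_entropy(X, alpha=2.0, sigma=1.0):             # alpha != 1
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T        # pairwise squared distances
    K = np.exp(-d2 / (2 * sigma**2))                    # RBF Gram matrix, K_ii = 1
    A = K / len(X)                                      # normalize so tr(A) = 1
    lam = np.clip(np.linalg.eigvalsh(A), 0.0, None)
    return np.log2(np.sum(lam**alpha)) / (1.0 - alpha)

rng = np.random.default_rng(0)
print(renyi_entropy(rng.normal(size=(64, 10))))                    # diffuse: high entropy
print(renyi_entropy(np.tile(rng.normal(size=(1, 10)), (64, 1))))   # collapsed: ~0
```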
Last updated:  2025-04-15
Post-quantum Cryptographic Analysis of SSH
Benjamin Benčina, Benjamin Dowling, Varun Maram, and Keita Xagawa
The Secure Shell (SSH) protocol is one of the first security protocols on the Internet to upgrade itself to resist attacks against future quantum computers, with the default adoption of the "quantum (otherwise, classically)" secure hybrid key exchange in OpenSSH from April 2022. However, there is a lack of a comprehensive security analysis of this quantum-resistant version of SSH in the literature: related works either focus on the hybrid key exchange in isolation and do not consider security of the overall protocol, or analyze the protocol in security models which are not appropriate for SSH, especially in the "post-quantum" setting. In this paper, we remedy the state of affairs by providing a thorough post-quantum cryptographic analysis of SSH. We follow a "top-down" approach wherein we first prove security of SSH in a more appropriate model, namely, our post-quantum extension of the so-called authenticated and confidential channel establishment (ACCE) protocol security model; our extension which captures "harvest now, decrypt later" attacks could be of independent interest. Then we establish the cryptographic properties of SSH's underlying primitives, as concretely instantiated in practice, based on our protocol-level ACCE security analysis: for example, we prove relevant cryptographic properties of "Streamlined NTRU Prime", a key encapsulation mechanism (KEM) which is used in recent versions of OpenSSH and TinySSH, in the quantum random oracle model, and address open problems related to its analysis in the literature. Notably, our ACCE security analysis of post-quantum SSH relies on the weaker notion of IND-CPA security of the ephemeral KEMs used in the hybrid key exchange. This is in contrast to prior works which rely on the stronger assumption of IND-CCA secure ephemeral KEMs. Hence we conclude the paper with a discussion on potentially replacing IND-CCA secure KEMs in current post-quantum implementations of SSH with simpler and faster IND-CPA secure counterparts, and also provide the corresponding benchmarks.
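The shape of such a hybrid exchange is easy to sketch: derive the session secret from both a post-quantum KEM secret and a classical ECDH secret, so the session stays secure as long as either assumption survives. The stub secrets below stand in for real sntrup761/X25519 outputs; OpenSSH's exact wire encoding is not reproduced.

```python
# Shape of a hybrid key-exchange combiner: hash BOTH shared secrets together.
import hashlib, os

def hybrid_combine(kem_shared: bytes, ecdh_shared: bytes) -> bytes:
    # "Harvest now, decrypt later" resistance: an attacker who later breaks
    # the ECDH secret still faces the KEM secret inside the hash.
    return hashlib.sha512(kem_shared + ecdh_shared).digest()

kem_ss = os.urandom(32)    # stand-in for the post-quantum KEM shared secret
ecdh_ss = os.urandom(32)   # stand-in for the X25519 shared secret
print(hybrid_combine(kem_ss, ecdh_ss).hex()[:32])
```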
Last updated:  2025-04-15
Hedging Public-Key Encryption in the Real World
Alexandra Boldyreva, Christopher Patton, and Thomas Shrimpton
Hedged PKE schemes are designed to provide useful security when the per-message randomness fails to be uniform, say, due to faulty implementations or adversarial actions. A simple and elegant theoretical approach to building such schemes works like this: Synthesize fresh random bits by hashing all of the encryption inputs, and use the resulting hash output as randomness for an underlying PKE scheme. The idea actually goes back to the Fujisaki-Okamoto transform for turning CPA-secure encryption into CCA-secure encryption, and is also used to build deterministic PKE schemes. In practice, implementing this simple construction is surprisingly difficult, as the high- and mid-level APIs presented by the most commonly used crypto libraries (e.g. OpenSSL and forks thereof) do not permit one to specify the per-encryption randomness. Thus application developers are forced to piece together low-level functionalities and attend to any associated, security-critical algorithmic choices. Other approaches to hedged PKE present similar problems in practice. We reconsider the matter of building hedged PKE schemes, and the security notions they aim to achieve. We lift the current best-possible security notion for hedged PKE (IND-CDA) from the CPA setting to the CCA setting, and then show how to achieve it using primitives that are readily available from high-level APIs. We also propose a new security notion, MM-CCA, which generalizes traditional IND-CCA to admit imperfect randomness. Like IND-CCA, and unlike IND-CDA, our notion gives the adversary the public key. We show that MM-CCA is achieved by RSA-OAEP in the random-oracle model; this is significant in practice because RSA-OAEP is directly available from high-level APIs across all libraries we surveyed. We sort out relationships among the various notions, and also develop new results for existing hedged PKE constructions.
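The "hash all encryption inputs to synthesize randomness" construction is short enough to sketch end to end; here over a toy ElGamal PKE (toy group parameters, illustration only, not any library's API):

```python
# Hedged encryption: derive per-message coins as H(pk, message, randomness)
# and feed them to an otherwise ordinary randomized PKE.
import hashlib

q, p, g = 83, 167, 4                         # toy group: order-q subgroup of Z_p*

def keygen(sk):
    return pow(g, sk, p)

def elgamal_enc(pk, m, rho):                 # m encoded in the subgroup
    r = rho % q
    return pow(g, r, p), m * pow(pk, r, p) % p

def hedged_enc(pk, m, coins: bytes):
    # Even if `coins` is biased or repeated, rho looks fresh as long as
    # the full tuple (pk, m, coins) does not repeat.
    h = hashlib.sha256(str(pk).encode() + str(m).encode() + coins).digest()
    rho = int.from_bytes(h, "big")
    return elgamal_enc(pk, m, rho)

pk = keygen(sk=29)
print(hedged_enc(pk, m=16, coins=b"\x00"))   # bad randomness, still usable
print(hedged_enc(pk, m=25, coins=b"\x00"))   # different message -> fresh rho
```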
Last updated:  2025-04-15
On the Definition of Malicious Private Information Retrieval
Bar Alon and Amos Beimel
A multi-server private information retrieval (PIR) protocol allows a client to obtain an entry of its choice from a database, held by one or more servers, while hiding the identity of the entry from small enough coalitions of servers. In this paper, we study PIR protocols in which some of the servers are malicious and may not send messages according to the prescribed protocol. In previous papers, such protocols were defined by requiring that they are correct, private, and robust to malicious servers, i.e., by listing 3 properties that they should satisfy. However, 40 years of experience in studying secure multiparty protocols taught us that defining the security of protocols by a list of required properties is problematic. In this paper, we rectify this situation and define the security of PIR protocols with malicious servers using the real vs. ideal paradigm. We study the relationship between the property-based definition of PIR protocols and the real vs. ideal definition, showing the following results: - We prove that if we require full security from PIR protocols, e.g., the client outputs the correct value of the database entry with high probability even if a minority of the servers are malicious, then the two definitions are equivalent. This implies that constructions of such protocols that were proven secure using the property-based definition are actually secure under the "correct" definition of security. - We show that if we require security-with-abort from PIR protocols (called PIR protocols with error-detection in previous papers), i.e., protocols in which the user either outputs the correct value or an abort symbol, then there are protocols that are secure under the property-based definition; however, they do not satisfy the real vs. ideal definition, that is, they can be attacked allowing selective abort. This shows that the property-based definition of PIR protocols with security-with-abort is problematic. - We consider the compiler of Eriguchi et al. (TCC 22) that starts with a PIR protocol that is secure against semi-honest servers and constructs a PIR protocol with security-with-abort; this compiler implies the best-known PIR protocols with security-with-abort. We show that applying this compiler does not result in PIR protocols that are secure according to the real vs. ideal definition. However, we prove that a simple modification of this compiler results in PIR protocols that are secure according to the real vs. ideal definition.
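For readers new to the object being defined, a textbook two-server PIR (not one of the paper's protocols) makes the definitional issue concrete: correctness hinges on both servers answering honestly, which is exactly the behavior a malicious server can violate.

```python
# Minimal two-server PIR over a bit database (semi-honest servers). Each
# server sees a uniformly random subset, so neither learns the index i.
import random
random.seed(7)

db = [random.randrange(2) for _ in range(16)]       # public database bits
i = 9                                               # index the client wants

S1 = {j for j in range(16) if random.randrange(2)}  # uniformly random subset
S2 = S1 ^ {i}                                       # differs exactly at i

def server_answer(subset):                          # XOR of selected bits
    ans = 0
    for j in subset:
        ans ^= db[j]
    return ans

# The two answers differ exactly by db[i]. A malicious server could flip
# its answer, the kind of misbehavior the paper's definitions must capture.
assert server_answer(S1) ^ server_answer(S2) == db[i]
print("recovered bit:", server_answer(S1) ^ server_answer(S2))
```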
Last updated:  2025-04-15
SUMAC: an Efficient Administrated-CGKA Using Multicast Key Agreement
Nicolas Bon, Céline Chevalier, Guirec Lebrun, and Ange Martinelli
Since the standardization of the Secure Group Messaging protocol Messaging Layer Security (MLS) [4], whose core subprotocol is a Continuous Group Key Agreement (CGKA) mechanism named TreeKEM, CGKAs have become the norm for group key exchange protocols. However, in order to alleviate the security issue originating from the fact that all users in a CGKA are able to carry out sensitive operations on the member group, an augmented protocol called Administrated-CGKA (A-CGKA) has recently been created [2]. An A-CGKA includes in the cryptographic protocol the management of the administration rights that restrict the set of privileged users, giving strong security guarantees for the group administration. The protocol designed in [2] is a plugin added to a regular (black-box) CGKA, which consequently adds some complexity to the underlying CGKA and curtails its performance. Yet, leaving the fully decentralized paradigm of a CGKA offers the perspective of new protocol designs, potentially more efficient. We propose in this paper an A-CGKA called SUMAC, which offers strongly enhanced communication and storage performance compared to other A-CGKAs and even to TreeKEM. Our protocol is based on a novel design that modularly combines a regular CGKA used by the administrators of the group and a Tree-structured Multicast Key Agreement (TMKA) [9] – which is a centralized group key exchange mechanism administrated by a single group manager – between each administrator and all the standard users. That TMKA gives SUMAC an asymptotic communication cost logarithmic in the number of users, similarly to a CGKA. However, the concrete performance of our protocol is much better than the latter's, especially in the post-quantum framework, due to the intensive use of secret-key cryptography, which is lighter on bandwidth than the public-key encryption schemes of a CGKA. In practice, SUMAC improves the communication cost of TreeKEM by a factor of 1.4 to 2.4 for admin operations and a factor of 2 to 38 for user operations. Similarly, its storage cost divides that of TreeKEM by a factor of 1.3 to 23 for an administrator and 3.9 to 1,070 for a standard user. Our analysis of SUMAC is provided along with a ready-to-use open-source Rust implementation that confirms the feasibility and performance of our protocol.
Last updated:  2025-04-15
Quantum Periodic Distinguisher Construction: Symbolization Method and Automated Tool
Qun Liu, Haoyang Wang, Jinliang Wang, Boyun Li, and Meiqin Wang
As one of the famous quantum algorithms, Simon's algorithm enables the efficient derivation of the period of periodic functions in polynomial time. However, the complexity of constructing periodic functions has hindered the widespread application of Simon's algorithm in symmetric-key cryptanalysis. Currently, aside from the exhaustive search-based testing method introduced by Canale et al. at CRYPTO 2022, there is no unified model for effectively searching for periodic distinguishers. Although Xiang et al. established a link between periodic functions and truncated differential theory at ToSC 2024, their approach lacks the ability to construct periods using unknown differentials and does not provide automated tools. This limitation underscores the inadequacy of existing methods in identifying periodic distinguishers for complex structures. In this paper, we address the challenge of advancing periodic distinguishers for symmetric-key ciphers. First, we propose a more generalized theory for constructing periodic distinguishers, addressing the limitations of Xiang et al.'s theory in handling unknown differences. We further extend our theory to probabilistic periodic distinguishers, thereby extending the separability property proposed by Hodžić et al. in 2020. As a result, our theory can cover a wider range of periodic distinguishers. Second, we introduce a novel symbolic representation to simplify the search for periodic distinguishers. Based upon this representation, we propose the first fully automated SMT-based search model, which efficiently addresses the challenges of manual searching in complex structures. Finally, we extend the model to SPN structures based on our new theory. Our model is broadly applicable and yields significant advances in analyzing generalized Feistel structures (GFSs) and SPN-based ciphers. As a general model, we have achieved new quantum distinguishers with the following round configurations: 10 rounds for GFS-4F, 10 rounds for LBlock, 10 rounds for TWINE, and 16 rounds for Skipjack-B, improving the previous best results by 2, 2, 2, and 3 rounds, respectively. In the domain of SPN-based ciphers, our model has enabled the identification of novel periodic distinguishers, including the first 9-round distinguisher for SKINNY and the first 12-round distinguisher for CRAFT. These achievements lay the foundation for quantum cryptanalysis of SPN-based ciphers using Simon's algorithm.
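The classic example of such a periodic function, built from the Even-Mansour construction (and not one of the paper's new distinguishers), shows what the automated search is looking for:

```python
# Classic periodic-function construction for Simon's algorithm, on a toy
# Even-Mansour cipher E(x) = P(x ^ k1) ^ k2. The function f(x) = E(x) ^ P(x)
# satisfies f(x) = f(x ^ k1), i.e. it has hidden period k1, which Simon's
# algorithm recovers in polynomial time on a quantum computer.
import random
random.seed(3)

N = 256                                   # 8-bit toy state
P = list(range(N)); random.shuffle(P)     # public random permutation
k1, k2 = random.randrange(N), random.randrange(N)

def E(x):
    return P[x ^ k1] ^ k2                 # Even-Mansour encryption

def f(x):
    return E(x) ^ P[x]                    # the periodic function

assert all(f(x) == f(x ^ k1) for x in range(N))
print("f has period k1 =", k1)
```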
Last updated:  2025-04-15
Pirouette: Query Efficient Single-Server PIR
Jiayi Kang and Leonard Schild
Private information retrieval (PIR) allows a client to query a public database privately and serves as a key building block for privacy-enhancing applications. Minimizing query size is particularly important in many use cases, for example when clients operate on low-power or bandwidth-constrained devices. However, existing PIR protocols exhibit large query sizes: to query records, the smallest query size of 14.8KB is reported in Respire [Burton et al., CCS'24]. Respire is based on fully homomorphic encryption (FHE), where a common approach to lower the client-to-server communication cost is transciphering. When combining the state-of-the-art transciphering [Bon et al., CHES'24] with Respire, the resulting protocol (referred to as T-Respire) has a 336B query size, while incurring a 16.2x higher server computation cost than Respire. Our work presents the Pirouette protocol, which achieves a query size of just 36B without transciphering. This represents a 9.3x reduction compared to T-Respire and a 420x reduction compared to Respire. For queries over records, the single-core server computation in Pirouette is only 2x slower than Respire and 8.1x faster than T-Respire, and the server computation is highly parallelizable. Furthermore, Pirouette requires no database-specific hint for clients and naturally extends to support queries over encrypted databases.
Last updated:  2025-04-15
Efficient SPA Countermeasures using Redundant Number Representation with Application to ML-KEM
Rishub Nagpal, Vedad Hadžić, Robert Primas, and Stefan Mangard
Simple power analysis (SPA) attacks and their extensions, profiled and soft-analytical side-channel attacks (SASCA), represent a significant threat to the security of cryptographic devices and remain among the most powerful classes of passive side-channel attacks. In this work, we analyze how numeric representations of secrets can affect the amount of exploitable information leakage available to the adversary. We present an analysis of how mutual information changes as a result of the integer ring size relative to the machine word-size. Furthermore, we study the Redundant Number Representation (RNR) countermeasure and show that its application to ML-KEM can resist the most powerful SASCA attacks and provides a low-cost alternative to shuffling. We evaluate the performance of RNR-ML-KEM with both simulated and practical SASCA experiments on the ARM Cortex-M4 based on a worst-case attack methodology. We show that RNR-ML-KEM sufficiently renders these attacks ineffective. Finally, we evaluate the performance of the RNR-ML-KEM NTT and INTT and show that SPA security can be achieved with a 62.8% overhead for the NTT and 0% overhead for the INTT relative to the ARM Cortex-M4 reference implementation used.
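A minimal sketch of the RNR idea (the word size and refresh policy below are illustrative assumptions): store a residue mod q as a randomized representative x + j*q, so repeated operations on the same secret produce varying Hamming-weight leakage; q = 3329 is ML-KEM's modulus.

```python
# Redundant Number Representation (RNR) sketch: a value mod Q is kept as one
# of several equivalent representatives inside a 16-bit machine word.
import random

Q, WORD = 3329, 1 << 16

def refresh(x):
    slots = WORD // Q                        # representatives fitting in a word
    return (x % Q) + Q * random.randrange(slots)

def rnr_add(a, b):
    return refresh(a + b)                    # arithmetic stays correct mod Q

def rnr_mul(a, b):
    return refresh(a * b)

secret = 1234
reps = [refresh(secret) for _ in range(5)]
print(reps, "-> all equal mod Q:", {r % Q for r in reps})
print("Hamming weight varies:", [bin(r).count("1") for r in reps])
assert rnr_add(refresh(3), refresh(5)) % Q == 8
```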
Last updated:  2025-04-15
A Critical Analysis of Deployed Use Cases for Quantum Key Distribution and Comparison with Post-Quantum Cryptography
Nick Aquina, Bruno Cimoli, Soumya Das, Kathrin Hövelmanns, Fiona Johanna Weber, Chigo Okonkwo, Simon Rommel, Boris Škorić, Idelfonso Tafur Monroy, and Sebastian Verschoor
Quantum Key Distribution (QKD) is currently being discussed as a technology to safeguard communication in a future where quantum computers compromise traditional public-key cryptosystems. In this paper, we conduct a comprehensive security evaluation of QKD-based solutions, focusing on real-world use cases sourced from academic literature and industry reports. We analyze these use cases, assess their security and identify the possible advantages of deploying QKD-based solutions. We further compare QKD-based solutions with Post-Quantum Cryptography (PQC), the alternative approach to achieving security when quantum computers compromise traditional public-key cryptosystems, evaluating their respective suitability for each scenario. Based on this comparative analysis, we critically discuss and comment on which use cases QKD is suited for, considering factors such as implementation complexity, scalability, and long-term security. Our findings contribute to a better understanding of the role QKD could play in future cryptographic infrastructures and offer guidance to decision-makers considering the deployment of QKD.
Last updated:  2025-04-15
Ciphertext-Ciphertext Matrix Multiplication: Fast for Large Matrices
Jai Hyun Park
Matrix multiplication of two encrypted matrices (CC-MM) is a key challenge for privacy-preserving machine learning applications. As modern machine learning models focus on scalability, fast CC-MM on large datasets is increasingly in demand. In this work, we present a CC-MM algorithm for large matrices. The algorithm consists of plaintext matrix multiplications (PP-MM) and ciphertext matrix transpose algorithms (C-MT). We propose a fast C-MT algorithm, which is computationally inexpensive compared to PP-MM. By leveraging high-performance BLAS libraries to optimize PP-MM, we implement large-scale CC-MM with substantial performance improvements. Furthermore, we propose lightweight algorithms, significantly reducing the key size from MB to MB for CC-MM with comparable efficiency. In a single-thread implementation, the C-MT algorithm takes seconds to transpose a encrypted matrix. The CC-MM algorithm requires seconds to multiply two encrypted matrices. For large matrices, our algorithm outperforms the state-of-the-art CC-MM method from Jiang-Kim-Lauter-Song [CCS'18] by a factor of over .
Last updated:  2025-04-15
Multi-Party Private Set Operations from Predicative Zero-Sharing
Minglang Dong, Yu Chen, Cong Zhang, Yujie Bai, and Yang Cao
Typical protocols in the multi-party private set operations (MPSO) setting enable parties to perform certain secure computation on the intersection or union of their private sets, realizing a very limited range of MPSO functionalities. Most works in this field focus on just one or two specific functionalities, resulting in a large variety of isolated schemes and a lack of a unified framework in MPSO research. In this work, we present an MPSO framework, which allows parties, each holding a set, to securely compute any set formulas (arbitrary compositions of a finite number of binary set operations, including intersection, union and difference) on their private sets. Our framework is highly versatile and can be instantiated to accommodate a broad spectrum of MPSO functionalities. To the best of our knowledge, this is the first framework to achieve such a level of flexibility and generality in MPSO, without relying on generic secure multi-party computation (MPC) techniques. Our framework exhibits favorable theoretical and practical performance. With computation and communication complexity scaling linearly with the set size , it achieves optimal complexity that is on par with the naive solution for widely used functionalities, such as multi-party private set intersection (MPSI), MPSI with cardinality output (MPSI-card), and MPSI with cardinality and sum (MPSI-card-sum), for the first time in the standard semi-honest model. Furthermore, the instantiations of our framework, which primarily rely on symmetric-key techniques, provide efficient protocols for MPSI, MPSI-card, MPSI-card-sum, and multi-party private set union (MPSU), with online performance that either surpasses or matches the state of the art in standard semi-honest model. At the technical core of our framework is a newly introduced primitive called predicative zero-sharing. This primitive captures the universality of a number of MPC protocols and is composable. We believe it may be of independent interest.
Last updated:  2025-04-15
Efficient Multi-Party Private Set Union Without Non-Collusion Assumptions
Minglang Dong, Cong Zhang, Yujie Bai, and Yu Chen
A multi-party private set union (MPSU) protocol enables parties, each holding a set, to collectively compute the union of their sets without revealing any additional information to other parties. There are two main categories of MPSU protocols: The first category builds on public-key techniques, where existing works require a super-linear number of public-key operations, resulting in poor practical efficiency. The second category builds on oblivious transfer and symmetric-key techniques. The only work in this category, proposed by Liu and Gao (ASIACRYPT 2023), features the best concrete performance among all existing protocols, but still has super-linear computation and communication. Moreover, it does not achieve the standard semi-honest security, as it inherently relies on a non-collusion assumption, which is unlikely to hold in practice. There remain two significant open problems so far: no MPSU protocol achieves semi-honest security based on oblivious transfer and symmetric-key techniques; no MPSU protocol achieves both linear computation and linear communication complexity. In this work, we resolve both of them. - We propose the first MPSU protocol based on oblivious transfer and symmetric-key techniques in the standard semi-honest model. This protocol's online performance is faster than Liu and Gao in the LAN setting. Concretely, our protocol requires only seconds in the online phase for parties with sets of items each. - We propose the first MPSU protocol achieving linear computation and linear communication complexity, based on public-key operations. This protocol has the best total performance in the WAN setting, due to a factor of improvement in terms of overall communication compared to Liu and Gao. Concretely, on a slow network ( Mbps), their protocol takes hours to run for parties with sets of items each, whereas ours only takes minutes. We implement our protocols and conduct extensive experiments to compare the performance of our protocols and the state-of-the-art. To the best of our knowledge, our code is the first correct and secure implementation of MPSU that reports on large-size experiments.
Last updated:  2025-04-15
A Note on P ≠ NP
Ping Wang
The question of whether the complexity class P equals NP is a major unsolved problem in theoretical computer science. The key to proving that P ≠ NP is to show that there is no efficient (polynomial time) algorithm for a language in NP. For a language in NP, it is impossible to prove that it is not in P, because we can only claim that no better algorithm has been found so far, and there is no way to guarantee (or prove) that a more efficient algorithm does not exist. It is impossible to prove P ≠ NP rigorously, as any problem with size and solution space has certain structures and properties, and there is no way to prove that more efficient algorithms that take advantage of these structures or properties do not exist. In all attempts to prove P ≠ NP, all we can do is try to provide the best clues or "evidence" for P ≠ NP. In this paper, we introduce a new language, the Add/XNOR problem, which has the simplest structure and perfect randomness, by extending the subset sum problem. We conjecture that square-root time complexity is required to solve the Add/XNOR problem, making it surpass the subset sum problem as the latter has algorithms that break the square-root complexity bound, providing better evidence for P ≠ NP. Furthermore, by giving up commutative and associative properties, we design a magma equipped with a permutation and successfully achieve Conjecture 2, which is the first problem with exponential complexity that is believed to require an exhaustive search to solve, by far the strongest evidence for P ≠ NP.
Last updated:  2025-04-15
Recovering S-Box Design Structures and Quantifying Distances between S-Boxes using Deep Learning
Donggeun Kwon, Deukjo Hong, Jaechul Sung, and Seokhie Hong
At ASIACRYPT’19, Bonnetain et al. demonstrated that an S-box can be distinguished from a permutation chosen uniformly at random by quantifying the distances between their behaviors. In this study, we extend this approach by proposing a deep learning-based method to quantify distances between two different S-boxes and evaluate similarities in their design structures. First, we introduce a deep learning-based framework that trains a neural network model to recover the design structure of a given S-box based on its cryptographic table. We then interpret the decision-making process of our trained model to analyze which coefficients in the table play significant roles in identifying S-box structures. Additionally, we investigate the inference results of our model across various scenarios to evaluate its generalization capabilities. Building upon these insights, we propose a novel approach to quantify distances between structurally different S-boxes. Our method effectively assesses structural similarities by embedding S-boxes using the deep learning model and measuring the distances between their embedding vectors. Furthermore, experimental results confirm that this approach is also applicable to structures that the model has never seen during training. Our findings demonstrate that deep learning can reveal the underlying structural similarities between S-boxes, highlighting its potential as a powerful tool for S-box reverse-engineering.
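One standard "cryptographic table" such a model can consume is the difference distribution table (DDT); its computation is classical and short (shown below on the 4-bit PRESENT S-box), though the paper's exact feature set may differ.

```python
# Difference distribution table (DDT) of an S-box: entry [din][dout] counts
# inputs x with S[x] ^ S[x ^ din] == dout. A flattened DDT can serve as the
# feature vector a neural network embeds and compares.
def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for din in range(n):
        for x in range(n):
            table[din][sbox[x] ^ sbox[x ^ din]] += 1
    return table

PRESENT = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
T = ddt(PRESENT)
print(T[0][0])                                # 16: zero difference maps to zero
print(max(max(row) for row in T[1:]))         # 4: differential uniformity

features = [v for row in T for v in row]      # 256 values, ready for embedding
```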
Last updated:  2025-04-15
ToFA: Towards Fault Analysis of GIFT and GIFT-like Ciphers Leveraging Truncated Impossible Differentials
Anup Kumar Kundu, Shibam Ghosh, Aikata Aikata, and Dhiman Saha
In this work, we introduce ToFA, the first fault attack (FA) strategy that attempts to leverage the classically well-known idea of impossible differential cryptanalysis to mount practically verifiable attacks on bit-oriented ciphers like GIFT and BAKSHEESH. The idea used stems from the fact that truncated differential paths induced due to fault injection in certain intermediate rounds of the ciphers lead to active SBox-es in subsequent rounds whose inputs admit specific truncated differences. This leads to a (multi-round) impossible differential distinguisher, which can be incrementally leveraged for key-guess elimination via partial decryption. The key-space reduction further exploits the multi-round impossibility, capitalizing on the relations due to the quotient-remainder (QR) groups of the GIFT and BAKSHEESH linear layer, which increases the filtering capability of the distinguisher. Moreover, the primary observations made in this work are independent of the actual SBox. Clock glitch based fault attacks were mounted on 8-bit implementations of GIFT-64/GIFT-128 using a ChipWhisperer Lite board on an 8-bit ATXmega128D4-AU micro-controller. Unique key recovery was achieved for GIFT-128 with 3 random byte faults, while for GIFT-64, key space was reduced to , the highest achievable for GIFT-64, with a single level fault due to its key-schedule. This work also reports the highest fault injection penetration for any variant of GIFT and BAKSHEESH. Finally, this work reiterates the role of classical cryptanalysis strategies in fault vulnerability assessment while leading to the most efficient fault attacks on GIFT.
Last updated:  2025-04-15
Impossible Differential Attack on SAND-128
Nobuyuki Sugio
The impossible differential attack is one of the major cryptanalytic methods for symmetric-key block ciphers. In this paper, we evaluate the security of SAND-128 against impossible differential attacks. SAND is an AND-RX-based lightweight block cipher proposed by Chen et al. in Designs, Codes and Cryptography 2022. There are two variants of SAND, namely SAND-64 and SAND-128, due to structural differences. In this paper, we search for impossible differential distinguishers of SAND-128 using Constraint Programming (CP) and reveal 14-round impossible differential distinguishers. The number of 14-round distinguishers is . Furthermore, we demonstrate a key recovery attack on 21-round SAND-128. The complexities for the attack require data, encryptions, and bytes of memory, respectively. Although this result currently achieves the best attack on round-reduced SAND-128, this attack does not threaten the security of SAND-128 against impossible differential attacks.
Last updated:  2025-04-15
Fuzzy Extractors are Practical: Cryptographic Strength Key Derivation from the Iris
Sohaib Ahmad, Sixia Chen, Luke Demarest, Benjamin Fuller, Caleb Manicke, Alexander Russell, and Amey Shukla
Despite decades of effort, a chasm existed between the theory and practice of device-level biometric authentication. Deployed authentication algorithms rely on data that overtly leaks private information about the biometric; thus systems rely on externalized security measures such as trusted execution environments. The authentication algorithms have no cryptographic guarantees. We close this chasm. We introduce a key derivation system with 105 bits of entropy and a 91% true accept rate for the iris. Our system advances 1) the feature extraction from the iris and 2) the fuzzy extractor used to derive keys. The fuzzy extractor builds on sample-then-lock (Canetti et al., Journal of Cryptology 2021). We 1) introduce a new method of sampling that achieves a better TAR versus entropy tradeoff when features have different quality, 2) correct their security proof, showing the minimum of min-entropy of subsets is the relevant security measure, and 3) tighten their concrete analysis, nearly doubling security under reasonable assumptions. Our final feature extractor incorporates ideas from the new sampling method to produce features optimized for the sample-then-lock construction. The only statistical assumption needed to show security of our system is a necessary one: the accuracy of min-entropy estimation. Our quantitative level of security is well above prior work. Simhadri et al. (ISC, 2019) report bits on the iris, but they have a bug in their analysis (that we fix) that reduces their strength. Zhang et al.'s (ePrint 2021/1559) system achieves bits on the face but assumes independence between biometrics and the used error-correcting code, an assumption that cannot be easily verified. Other prior work assumes that bits of biometrics are i.i.d., an assumption that is demonstrably false. (Or that all correlation is pairwise between features (Hine et al., TIFS 2023).) Irises used to evaluate TAR and security are class-disjoint from those used for training and collecting statistics (the open dataset regime).
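A bare-bones sample-then-lock sketch (toy locker counts, subset sizes, and error rates, nothing like the paper's tuned parameters) shows the mechanism: the key is locked under many random subsets of biometric bits, and any subset that reproduces exactly at authentication time unlocks it.

```python
# Sample-then-lock with hash-based digital lockers: each locker XORs
# (key || zero-tag) with a hash of a random subset of the biometric bits.
import hashlib, os, random
random.seed(5)

def lock(bits, key, n_lockers=64, subset=16):
    lockers = []
    for _ in range(n_lockers):
        idx = random.sample(range(len(bits)), subset)
        sel = bytes(bits[i] for i in idx)
        nonce = os.urandom(16)
        pad = hashlib.sha256(nonce + sel).digest()
        ct = bytes(a ^ b for a, b in zip(pad, key + b"\0" * 16))
        lockers.append((idx, nonce, ct))
    return lockers

def unlock(bits, lockers):
    for idx, nonce, ct in lockers:
        sel = bytes(bits[i] for i in idx)
        pad = hashlib.sha256(nonce + sel).digest()
        pt = bytes(a ^ b for a, b in zip(pad, ct))
        if pt.endswith(b"\0" * 16):          # zero tag marks a correct subset
            return pt[:16]
    return None

reading = [random.randrange(2) for _ in range(512)]     # enrollment bits
key = os.urandom(16)
helper = lock(reading, key)

noisy = list(reading)
for i in random.sample(range(512), 10):                 # ~2% bit errors
    noisy[i] ^= 1
print(unlock(noisy, helper) == key)                     # True w.h.p.
```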
Last updated:  2025-04-15
Precision For A Qubit Operation and Failure of Achieving Quantum Supremacy
Zhengjun Cao and Zhenfu Cao
M. Dyakonov [IEEE Spectrum, 2019] asked what precision is required for manipulating a quantum gate, and what role the notion of ENERGY plays in quantum computing theory. We answer these questions by investigating the Quantum Fourier Transformation (QFT). We show that the precision for a qubit operation is no less than , in accordance with the energy of a free particle. Quantum entanglement is indispensable to quantum supremacy. The present quantum computers have failed to instantiate quantum entanglement up to our expectations, due to inexact qubit operations. Quantum transition incurred during the transition of the space-time continuum covertly changes qubits and corrupts quantum supremacy.
Last updated:  2025-04-15
VerITAS: Verifying Image Transformations at Scale
Trisha Datta, Binyi Chen, and Dan Boneh
Verifying image provenance has become an important topic, especially in the realm of news media. To address this issue, the Coalition for Content Provenance and Authenticity (C2PA) developed a standard to verify image provenance that relies on digital signatures produced by cameras. However, photos are usually edited before being published, and a signature on an original photo cannot be verified given only the published edited image. In this work, we describe VerITAS, a system that uses zero-knowledge proofs (zk-SNARKs) to prove that only certain edits have been applied to a signed photo. While past work has created image editing proofs for photos, VerITAS is the first to do so for realistically large images (30 megapixels). Our key innovation enabling this leap is the design of a new proof system that enables proving knowledge of a valid signature on a large amount of witness data. We run experiments on realistically large images that are more than an order of magnitude larger than those tested in prior work. In the case of a computationally weak signer, such as a camera, we are able to generate a proof of valid edits for a 90 MB image in just over thirteen minutes, costing about $0.54 on AWS per image. In the case of a more powerful signer, we are able to generate a proof of valid edits for a 90 MB image in just over three minutes, costing only $0.13 on AWS per image. Either way, proof verification time is less than a second. Our techniques apply broadly whenever there is a need to prove that an efficient transformation was applied correctly to a large amount of signed private data.
Last updated:  2025-04-14
Revocable Encryption, Programs, and More: The Case of Multi-Copy Security
Prabhanjan Ananth, Saachi Mutreja, and Alexander Poremba
Fundamental principles of quantum mechanics have inspired many new research directions, particularly in quantum cryptography. One such principle is quantum no-cloning which has led to the emerging field of revocable cryptography. Roughly speaking, in a revocable cryptographic primitive, a cryptographic object (such as a ciphertext or program) is represented as a quantum state in such a way that surrendering it effectively translates into losing the capability to use this cryptographic object. All of the revocable cryptographic systems studied so far have a major drawback: the recipient only receives one copy of the quantum state. Worse yet, the schemes become completely insecure if the recipient receives many identical copies of the same quantum state---a property that is clearly much more desirable in practice. While multi-copy security has been extensively studied for a number of other quantum cryptographic primitives, it has so far received only little treatment in the context of unclonable primitives. Our work, for the first time, shows the feasibility of revocable primitives, such as revocable encryption and revocable programs, which satisfy multi-copy security in oracle models. This suggests that the stronger notion of multi-copy security is within reach in unclonable cryptography more generally, and therefore could lead to a new research direction in the field.
Last updated:  2025-04-14
The Learning Stabilizers with Noise problem
Alexander Poremba, Yihui Quek, and Peter Shor
Random classical codes have good error correcting properties, and yet they are notoriously hard to decode in practice. Despite many decades of extensive study, the fastest known algorithms still run in exponential time. The Learning Parity with Noise (LPN) problem, which can be seen as the task of decoding a random linear code in the presence of noise, has thus emerged as a prominent hardness assumption with numerous applications in both cryptography and learning theory. Is there a natural quantum analog of the LPN problem? In this work, we introduce the Learning Stabilizers with Noise (LSN) problem, the task of decoding a random stabilizer code in the presence of local depolarizing noise. We give both polynomial-time and exponential-time quantum algorithms for solving LSN in various depolarizing noise regimes, ranging from extremely low noise, to low constant noise rates, and even higher noise rates up to a threshold. Next, we provide concrete evidence that LSN is hard. First, we show that LSN includes LPN as a special case, which suggests that it is at least as hard as its classical counterpart. Second, we prove a worst-case to average-case reduction for variants of LSN. We then ask: what is the computational complexity of solving LSN? Because the task features quantum inputs, its complexity cannot be characterized by traditional complexity classes. Instead, we show that the LSN problem lies in a recently introduced (distributional and oracle) unitary synthesis class. Finally, we identify several applications of our LSN assumption, ranging from the construction of quantum bit commitment schemes to the computational limitations of learning from quantum data.
Last updated:  2025-04-14
Tighter Provable Security for TreeKEM
Karen Azari and Andreas Ellison
The Messaging Layer Security (MLS) protocol, recently standardized in RFC 9420, aims to provide efficient asynchronous group key establishment with strong security guarantees. The main component of MLS, which is the source of its key efficiency and security properties, is a protocol called TreeKEM. Given that a major vision for the MLS protocol is for it to become the new standard for messaging applications like WhatsApp, Facebook Messenger, Signal, etc., it has the potential to be used by a huge number of users. Thus, it is important to better understand the security of MLS and hence also of TreeKEM. In previous work, TreeKEM was proven adaptively secure in the Random Oracle Model (ROM) with a polynomial loss in security by proving a result about the security of an arbitrary IND-CPA secure public-key encryption scheme in a public-key version of a security game called GSD. Unfortunately, the concrete security guarantees implied by this line of work are not sufficient for parameters used in practice. In this work, we prove a tighter bound for the security of TreeKEM when DHIES is used for public-key encryption; DHIES is currently the only standardized scheme in MLS. Our bound implies meaningful security guarantees even for small practical parameters. We follow the above approach and first introduce a modified version of the public-key GSD game better suited for analyzing TreeKEM. We then provide a simple and detailed proof of security for DHIES in this game in the ROM and achieve a smaller security loss compared to the previous best result. Finally, we state the result on the security of TreeKEM implied by this bound and give an interpretation of the result with protocol parameters used in practice.
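For reference, the shape of DHIES (ephemeral Diffie-Hellman, then a KDF splitting into an encryption key and a MAC key) can be sketched as follows; the toy group and one-time pad below stand in for the elliptic-curve group and symmetric primitives used in MLS.

```python
# DHIES shape: ephemeral DH -> KDF -> symmetric encryption + MAC.
import hashlib, hmac, secrets

q, p, g = 83, 167, 4                         # toy group parameters

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)

def dhies_enc(pk, msg: bytes):               # msg <= 32 bytes in this toy
    esk = secrets.randbelow(q - 1) + 1
    epk = pow(g, esk, p)                     # ephemeral public value
    shared = pow(pk, esk, p)
    k = hashlib.sha512(str((epk, shared)).encode()).digest()
    k_enc, k_mac = k[:32], k[32:]
    ct = bytes(m ^ x for m, x in zip(msg, k_enc))
    tag = hmac.new(k_mac, ct, hashlib.sha256).digest()
    return epk, ct, tag

def dhies_dec(sk, epk, ct, tag):
    shared = pow(epk, sk, p)
    k = hashlib.sha512(str((epk, shared)).encode()).digest()
    k_enc, k_mac = k[:32], k[32:]
    if not hmac.compare_digest(tag, hmac.new(k_mac, ct, hashlib.sha256).digest()):
        raise ValueError("invalid MAC")      # CCA-style rejection
    return bytes(c ^ x for c, x in zip(ct, k_enc))

sk, pk = keygen()
epk, ct, tag = dhies_enc(pk, b"TreeKEM path secret")
print(dhies_dec(sk, epk, ct, tag))
```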
Last updated:  2025-04-14
On the Security of Two IKKR-type Code-Based Cryptosystems
Kirill Vedenev
The paper analyzes the security of two recently proposed code-based cryptosystems that employ encryption of the form : the Krouk-Kabatiansky-Tavernier (KKT) cryptosystem and the Lau-Ivanov-Ariffin-Chin-Yap (LIACY) cryptosystem. We demonstrate that the KKT cryptosystem can be reduced to a variant of the McEliece scheme, where a small set of columns in the public generator matrix is replaced with random ones. This reduction implies that the KKT cryptosystem is vulnerable to existing attacks on Wieschebrink's encryption scheme, particularly when Generalized Reed-Solomon (GRS) codes are used. In addition, we present a full key-recovery attack on the LIACY cryptosystem by exploiting its linear-algebraic structure and leveraging distinguishers of subcodes of GRS codes. Our findings reveal critical vulnerabilities in both systems, effectively compromising their security despite their novel designs.
Last updated:  2025-04-14
Assessing the Impact of a Variant of the Latest Dual Attack
Kevin Carrier, Charles Meyer-Hilfiger, Yixin Shen, and Jean-Pierre Tillich
The dual attacks on the Learning With Errors (LWE) problem are currently a subject of controversy. In particular, the results of [Matzov, 2022], which claim to significantly lower the security level of CRYSTALS-Kyber, a lattice-based cryptosystem currently being standardized by NIST, are not widely accepted. The analysis behind their attack depends on a series of assumptions that, in certain scenarios, have been shown to contradict established theorems or well-tested heuristics [Ducas, Pulles, CRYPTO 2023]. In this paper, we introduce a new dual lattice attack on LWE, drawing from ideas in coding theory. Our approach revisits the dual attack proposed by [Matzov, 2022], replacing modulus switching with an efficient decoding algorithm. This decoding is achieved by generalizing polar codes over , and we confirm their strong distortion properties through benchmarks. This modification enables a reduction from small-LWE to plain-LWE, with a notable decrease in the secret dimension. Additionally, we replace the enumeration step in the attack by assuming the secret is zero for the portion being enumerated, iterating this assumption over various choices for the enumeration part. We make an analysis of our attack without using the flawed independence assumptions used in [Matzov, 2022] and we fully back up our analysis with experimental evidence. Lastly, we assess the complexity of our attack on CRYSTALS-Kyber, showing that the security levels for Kyber-512/768/1024 are 3.5/11.9/12.3 bits below the NIST requirements (143/207/272 bits) in the same nearest-neighbor cost model as in the Kyber submission. All in all, the cost of our attack matches, and in some cases even slightly beats, the complexities originally claimed by the attack of [Matzov, 2022].
Last updated:  2025-04-14
Hybrid Fingerprinting for Effective Detection of Cloned Neural Networks
Can Aknesil, Elena Dubrova, Niklas Lindskog, Jakob Sternby, and Håkan Englund
As artificial intelligence plays an increasingly important role in decision-making within critical infrastructure, ensuring the authenticity and integrity of neural networks is crucial. This paper addresses the problem of detecting cloned neural networks. We present a method for identifying clones that employs a combination of metrics from both the information and physical domains: output predictions, probability score vectors, and power traces measured from the device running the neural network during inference. We compare the effectiveness of each metric individually, as well as in combination. Our results show that the effectiveness of both the information and the physical domain metrics is excellent for a clone that is a near replica of the target neural network. Furthermore, both the physical domain metric individually and the hybrid approach outperformed the information domain metrics at detecting clones whose weights were extracted with low accuracy. The presented method offers a practical solution for verifying neural network authenticity and integrity. It is particularly useful in scenarios where neural networks are at risk of model extraction attacks, such as in cloud-based machine learning services.
Last updated:  2025-04-14
Simpler and Faster Pairings from the Montgomery Ladder
Giacomo Pope, Krijn Reijnders, Damien Robert, Alessandro Sferlazza, and Benjamin Smith
We show that Montgomery ladders compute pairings as a by-product, and explain how a small adjustment to the ladder results in simple and efficient algorithms for the Weil and Tate pairing on elliptic curves using cubical arithmetic. We demonstrate the efficiency of the resulting cubical pairings in several applications from isogeny-based cryptography. Cubical pairings are simpler and more performant than pairings computed using Miller's algorithm: we get a speed-up of over 40% for use-cases in SQIsign, and a speed-up of about 7% for use-cases in CSIDH. While these results arise from a deep connection to biextensions and cubical arithmetic, in this article we keep things as concrete (and digestible) as possible. We provide a concise and complete introduction to cubical arithmetic as an appendix.
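For context, the standard x-only Montgomery ladder the paper starts from is sketched below (here on Curve25519); the small cubical adjustment that additionally yields pairings is not reproduced in this sketch.

```python
# x-only Montgomery ladder on Curve25519 (By^2 = x^3 + A x^2 + x).
P = 2**255 - 19
A24 = (486662 + 2) // 4                        # (A + 2) / 4 for Curve25519

def xdbl(X, Z):                                # doubling in (X : Z) coordinates
    t0, t1 = (X - Z) % P, (X + Z) % P
    t0, t1 = t0 * t0 % P, t1 * t1 % P          # (X-Z)^2, (X+Z)^2
    t2 = (t1 - t0) % P                         # 4XZ
    return t0 * t1 % P, t2 * (t0 + A24 * t2) % P

def xadd(X1, Z1, X2, Z2, Xd, Zd):              # differential add, diff = (Xd : Zd)
    a = (X1 - Z1) * (X2 + Z2) % P
    b = (X1 + Z1) * (X2 - Z2) % P
    return Zd * pow(a + b, 2, P) % P, Xd * pow(a - b, 2, P) % P

def ladder(k, x):
    X0, Z0, X1, Z1 = 1, 0, x, 1                # R0 = O, R1 = base point
    for i in reversed(range(k.bit_length())):
        bit = (k >> i) & 1
        if bit:
            X0, Z0, X1, Z1 = X1, Z1, X0, Z0
        X1, Z1 = xadd(X0, Z0, X1, Z1, x, 1)    # R1 = R0 + R1
        X0, Z0 = xdbl(X0, Z0)                  # R0 = 2 * R0
        if bit:
            X0, Z0, X1, Z1 = X1, Z1, X0, Z0
    return X0 * pow(Z0, P - 2, P) % P          # affine x([k]P)

print(hex(ladder(7, 9)))                       # x-coordinate of 7 * basepoint
```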
Last updated:  2025-04-14
Switching the Top Slice of the Sandwich with Extra Filling Yields a Stronger Boomerang for NLFSR-based Block Ciphers
Amit Jana, Mostafizar Rahman, Prathamesh Ram, Dhiman Saha, and Goutam Paul
The Boomerang attack was one of the first attempts to visualize a cipher $E$ as a composition of two sub-ciphers $E = E_1 \circ E_0$ in order to devise and exploit two high-probability (say $p$ and $q$) shorter trails instead of relying on a single low-probability (say $pq$) longer trail for differential cryptanalysis. The attack generally works whenever $p^2 q^2 > 2^{-n}$ for an $n$-bit block cipher. However, it was later succeeded by the so-called ``sandwich attack'', which splits the cipher into three parts $E = E_1 \circ E_m \circ E_0$, adding an additional middle layer $E_m$ with distinguishing probability $r$, for an overall distinguishing probability of $p^2 q^2 r$. It is primarily the generalization of a body of research in this direction that investigates what is referred to as the switching activity and captures the dependencies and potential incompatibilities of the layers that the middle layer separates. This work revisits the philosophy of the sandwich attack over multiple rounds for NLFSR-based block ciphers and introduces a new method to find high-probability boomerang distinguishers. The approach formalizes boomerang attacks using only ladder and AND switches. The cipher is treated as $E = E_m \circ E_0$, a specialized form of the sandwich attack which we call the ``open-sandwich attack''. The distinguishing probability for this configuration is $p^2 r$. Using this innovative approach, the study successfully identifies a deterministic boomerang distinguisher for the keyed permutation of the TinyJambu cipher over 320 rounds. Additionally, a 640-round boomerang is presented with a 95% success rate. In the related-key setting, we unveil full-round boomerangs for all three variants, demonstrating a 99% success rate. Similarly, for Katan-32, a more effective related-key boomerang spanning 140 rounds is uncovered with a 70% success rate. Further, in the single-key setting, an 84-round boomerang is found with a success rate of 60%. This research deepens the understanding of boomerang attacks, enhancing the toolkit for cryptanalysts to develop efficient and impactful attacks on NLFSR-based block ciphers.
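As a side-by-side summary of the three configurations discussed above (standard boomerang notation, reconstructed here because the inline formulas were lost in this listing; the open-sandwich probability follows by taking $E_1$ to be the identity, i.e. $q = 1$):

```latex
\begin{align*}
E &= E_1 \circ E_0 : & \Pr[\text{boomerang}] &\approx p^2 q^2 \quad (\text{useful when } p^2 q^2 > 2^{-n}),\\
E &= E_1 \circ E_m \circ E_0 : & \Pr[\text{sandwich}] &\approx p^2 q^2 r,\\
E &= E_m \circ E_0 : & \Pr[\text{open-sandwich}] &\approx p^2 r.
\end{align*}
```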
Last updated:  2025-04-14
Reusable Fuzzy Extractors for Low-Entropy Distributions
Ran Canetti, Benjamin Fuller, Omer Paneth, Leonid Reyzin, and Adam Smith
Fuzzy extractors (Dodis et al., Eurocrypt 2004) convert repeated noisy readings of a secret into the same uniformly distributed key. To eliminate noise, they require an initial enrollment phase that takes the first noisy reading of the secret and produces a nonsecret helper string to be used in subsequent readings. Reusable fuzzy extractors (Boyen, CCS 2004) remain secure even when this initial enrollment phase is repeated multiple times with noisy versions of the same secret, producing multiple helper strings (for example, when a single person's biometric is enrolled with multiple unrelated organizations). We construct the first reusable fuzzy extractor that makes no assumptions about how multiple readings of the source are correlated (the only prior construction assumed a very specific, unrealistic class of correlations). The extractor works for binary strings with Hamming noise; it achieves computational security under assumptions on the security of hash functions or in the random oracle model. It is simple and efficient and tolerates near-linear error rates. Our reusable extractor is secure for source distributions of linear min-entropy rate. The construction is also secure for sources with much lower entropy rates (lower than those supported by prior, nonreusable constructions), assuming that the distribution has some additional structure, namely, that random subsequences of the source have sufficient min-entropy. We show that such structural assumptions are necessary to support low entropy rates. We then explore further how different structural properties of a noisy source can be used to construct fuzzy extractors when the error rates are high, providing a computationally secure and an information-theoretically secure construction for large-alphabet sources.
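As background for the functionality only (this is the classical code-offset construction of Dodis et al., not the paper's reusable, low-entropy-tolerant scheme, and a repetition code is used purely to keep the toy example short):

```python
import secrets

REP = 5  # repetition factor; each block tolerates up to 2 flipped bits

def encode(bits):
    """Repetition-code encoding."""
    return [b for b in bits for _ in range(REP)]

def decode(bits):
    """Majority vote per block."""
    return [int(sum(bits[i:i + REP]) > REP // 2) for i in range(0, len(bits), REP)]

def enroll(w):
    """First reading w -> (key, nonsecret helper string)."""
    key = [secrets.randbelow(2) for _ in range(len(w) // REP)]
    helper = [c ^ b for c, b in zip(encode(key), w)]
    return key, helper

def reproduce(w_noisy, helper):
    """Later noisy reading + helper -> the same key, if the noise is small."""
    return decode([h ^ b for h, b in zip(helper, w_noisy)])

w = [secrets.randbelow(2) for _ in range(40)]   # enrollment reading
key, helper = enroll(w)
w2 = list(w); w2[3] ^= 1; w2[17] ^= 1           # noisy re-reading
assert reproduce(w2, helper) == key
```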
Last updated:  2025-04-14
SQIAsignHD: SQIsignHD Adaptor Signature
Farzin Renan and Péter Kutas
Adaptor signatures can be viewed as a generalized form of standard digital signature schemes by linking message authentication to the disclosure of a secret value. As a recent cryptographic primitive, they have become essential for blockchain applications, including cryptocurrencies, by reducing on-chain costs, improving fungibility, and enabling off-chain payments in payment-channel networks, payment-channel hubs, and atomic swaps. However, existing adaptor signature constructions are vulnerable to quantum attacks due to Shor's algorithm. In this work, we introduce SQIAsignHD, a new quantum-resistant adaptor signature scheme based on isogenies of supersingular elliptic curves, using SQIsignHD as the underlying signature scheme and exploiting the idea of the artificial orientation on the supersingular isogeny Diffie-Hellman key exchange protocol, SIDH, to define the underlying hard relation. We furthermore provide a formal security proof for our proposed scheme.
Last updated:  2025-04-14
Aether: Approaching the Holy Grail in Asynchronous BFT
Xiaohai Dai, Chaozheng Ding, Julian Loss, and Ling Ren
State-of-the-art asynchronous Byzantine Fault Tolerance (BFT) protocols integrate a partially-synchronous optimistic path. The holy grail in this paradigm is to match the performance of a partially-synchronous protocol in favorable situations and match the performance of a purely asynchronous protocol in unfavorable situations. Several prior works have made progress toward this goal by matching the efficiency of a partially-synchronous protocol in favorable conditions. However, their performance compared to purely asynchronous protocols is reduced when network conditions are unfavorable. To address these shortcomings, a recent work, Abraxas (CCS'23), presents the first optimistic asynchronous BFT protocol that retains stable throughput in all situations. However, Abraxas still incurs very high worst-case latency in unfavorable situations because it is slow at detecting the failure of its optimistic path. Another recent work, ParBFT (CCS'23) guarantees good latency in all situations, but suffers from reduced throughput in unfavorable situations due to its use of extra Asynchronous Binary Agreement (ABA) instances. To approach our holy grail, we propose Aether, which delivers performance comparable to partially-synchronous protocols in favorable situations, and attains performance on par with purely asynchronous protocols in unfavorable situations—in both throughput and latency. Aether also runs the two paths simultaneously. It adopts two-chain HotStuff as the optimistic path, thus achieving high performance in favorable situations. As for the pessimistic path, we introduce a new primitive Dual-functional Byzantine Agreement (DBA), which packs the functionalities of biased ABA and Validated Asynchronous Byzantine Agreement (VABA). Aether runs DBA instances continuously as the pessimistic path. DBA’s ABA functionality quickly detects the optimistic path’s failure, ensuring Aether’s low latency in unfavorable situations. Meanwhile, the VABA functionality continuously produces blocks, maintaining Aether’s high throughput. Additionally, the biased property ensures that blocks committed via the optimistic path are respected by DBA instances, guaranteeing consistency across two paths. We conduct extensive experiments to demonstrate that Aether achieves high throughput and low latency in all situations.
Last updated:  2025-04-14
A Dilithium-like Multisignature in Fully Split Ring and Quantum Random Oracle Model
Shimin Pan, Tsz Hon Yuen, and Siu-Ming Yiu
Multisignature schemes are crucial for secure operations in digital wallets and escrow services within smart contract platforms, particularly in the emerging post-quantum era. Existing post-quantum multisignature constructions either do not address the stringent requirements of the Quantum Random Oracle Model (QROM) or fail to achieve practical efficiency due to suboptimal parameter choices. In this paper, we present a novel Dilithium-based multisignature scheme designed to be secure in the QROM and optimized for practical use. Our scheme operates over the polynomial ring $\mathbb{Z}_q[X]/(X^n+1)$ with $q \equiv 1 \pmod{2n}$, enabling full splitting of the ring and allowing for efficient polynomial arithmetic via the Number Theoretic Transform (NTT). This structure not only ensures post-quantum security but also bridges the gap between theoretical constructs and real-world implementation needs. We further propose a new hardness assumption extending SelfTargetMSIS (Eurocrypt 2018) to accommodate multiple challenge targets. We prove its security in the QROM and leverage it to construct a secure and efficient multisignature scheme. Our approach avoids the limitations of previous techniques, reduces security loss in the reduction, and results in a more compact and practical scheme suitable for deployment in post-quantum cryptographic systems.
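To illustrate what full splitting buys: when $q \equiv 1 \pmod{2n}$, the $n$ roots of $X^n + 1$ all live in $\mathbb{Z}_q$, so a polynomial can be represented by its evaluations there and ring multiplication becomes coordinate-wise. A naive sketch with toy parameters ($n = 4$, $q = 17$, $\psi = 2$ a primitive 8th root of unity; real implementations use a fast butterfly NTT):

```python
def negacyclic_ntt(a, q, psi):
    """Evaluate a(X) at psi^(2i+1), the n roots of X^n + 1 mod q.
    Requires q ≡ 1 (mod 2n) so that a primitive 2n-th root psi exists."""
    n = len(a)
    return [sum(c * pow(psi, (2 * i + 1) * j, q) for j, c in enumerate(a)) % q
            for i in range(n)]

q, psi, n = 17, 2, 4        # 17 ≡ 1 (mod 8); 2 has order 8 mod 17
a, b = [1, 2, 3, 4], [5, 6, 7, 8]
pointwise = [(x * y) % q for x, y in zip(negacyclic_ntt(a, q, psi),
                                         negacyclic_ntt(b, q, psi))]

# Schoolbook negacyclic product a*b mod (X^4 + 1, 17) for comparison:
c = [0] * n
for i, x in enumerate(a):
    for j, y in enumerate(b):
        if i + j < n: c[i + j] = (c[i + j] + x * y) % q
        else:         c[i + j - n] = (c[i + j - n] - x * y) % q
assert negacyclic_ntt(c, q, psi) == pointwise
```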
Last updated:  2025-04-14
On the Average Random Probing Model
Julien Béguinot and Loïc Masure
Masking is one of the main countermeasures against side-channel analysis since it relies on provable security. In this context, “provable” means that a security bound can be exhibited for the masked implementation through a theoretical analysis in a given threat model. The main goal in this line of research is therefore to provide the tightest security bound, in the most realistic model, in the most generic way. Yet, all of these objectives cannot be reached together. That is why the masking literature has introduced a large spectrum of threat models and reductions between them, depending on the desired trade-off with respect to these three goals. In this paper, we focus on three threat models, namely the noisy-leakage model (realistic yet hard to work with), the random probing model (unrealistic yet easy to work with), and more particularly a third, intermediate model called average random probing. Average random probing was introduced by Dziembowski et al. at Eurocrypt 2015 in order to exhibit a tight reduction between the noisy-leakage and random probing models, recently proven by Brian et al. at Eurocrypt 2024. This milestone has strong practical consequences, since otherwise the reduction from the noisy-leakage model to the random probing model introduces a prohibitively high constant factor in the security bound, preventing security evaluators from using it in practice. However, we exhibit a gap between the average random probing definitions of Dziembowski et al. (denoted hereafter by DFS-ARP) and Brian et al. (simply denoted by ARP). Whereas any noisy leakage can be tightly reduced to DFS-ARP, we show in this paper that it cannot be tightly reduced to ARP without extra assumptions, e.g., that the noisy leakage is deterministic. Our proof techniques do not involve more tools than the ones used so far in such reductions, namely basic probability facts and known properties of the total variation distance. As a consequence, the reduction from the noisy leakage to the random probing — without high constant factor — remains unproven. This stresses the need to clarify the practical relevance of analyzing the security of masking in the random probing model, since most of the current efforts towards improving the constructions and their security proofs in the random probing model might be hindered by a potentially unavoidable loss in the reduction from more realistic but currently less investigated leakage models.
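For intuition about the plain random probing model that these definitions refine (a textbook illustration, not the paper's DFS-ARP/ARP distinction): with an $n$-share additive masking, an adversary who learns each share independently with probability $p$ reconstructs the masked secret only when every share leaks, so the leakage probability decays as $p^n$. A minimal Monte-Carlo sketch with illustrative parameters:

```python
import random

def leak_probability(n_shares, p, trials=100_000, rng=random.Random(1)):
    """Fraction of executions in which a random-probing adversary sees all
    n shares of an additively masked secret (and hence learns it)."""
    hits = sum(all(rng.random() < p for _ in range(n_shares))
               for _ in range(trials))
    return hits / trials

for n in (1, 2, 3, 4):
    print(n, leak_probability(n, 0.25))   # decays roughly as 0.25**n
```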
Last updated:  2025-04-14
Towards More Efficient and BBB-secure AE From a Single Public Permutation
Arghya Bhattacharjee, Ritam Bhaumik, Avijit Dutta, and Eik List
Four recent trends have emerged in the evolution of authenticated encryption schemes: (1) regarding simplicity, the adoption of public permutations as primitives allows for sparing a key schedule and the need for storing round keys; (2) using sums of permutation inputs or outputs has been a well-studied means to achieve higher security beyond the birthday bound; (3) concerning robustness, schemes should provide graceful security degradation if a limited number of nonces repeats during the lifetime of a key; and (4) Andreeva et al.'s ForkCipher approach can increase the efficiency of a scheme since it can use fewer rounds per output branch compared to full-round primitives. In this work, we improve on the state of the art by combining those aspects for efficient authenticated encryption. We propose an efficient nonce-based AE scheme that employs a public permutation and one call to an XOR-universal hash function. The scheme provides beyond-birthday-bound security and high throughput by combining forked public-permutation-based constructions with an Encrypted Davies-Meyer. Thus, it can use a single, in part round-reduced, public permutation for most operations, spare a key schedule, and guarantee security beyond the birthday bound even under limited nonce reuse.
Last updated:  2025-04-14
HQC Beyond the BSC: Towards Error Structure-Aware Decoding
Marco Baldi, Sebastian Bitzer, Nicholas Lilla, and Paolo Santini
In Hamming Quasi-Cyclic (HQC), one of the finalists in the NIST competition for the standardization of post-quantum cryptography, decryption relies on decoding a noisy codeword through a public error-correcting code. The noise vector has a special form that depends on the secret key (a pair of sparse polynomials). However, the decoder currently employed in HQC is agnostic to the secret key, operating under the assumption that the error arises from a Binary Symmetric Channel (BSC). In this paper, we demonstrate that this special noise structure can instead be leveraged to develop more powerful decoding strategies. We first study the problem from a coding-theoretic perspective. The current code design, which admits a non-zero decryption failure rate, is close to optimal in the setting of a decoder that is agnostic to the error structure. We show that there are code-decoder pairs with a considerably shorter code length that can guarantee unique decoding by taking the error structure into account. This result is non-constructive, i.e., we do not provide an explicit code construction, and it remains open whether efficient decoding is possible. Nevertheless, it highlights the potential that can be tapped by taking the error structure into account. We then argue that, in practice, the matter of decoding in HQC can be related to solving an instance of the noisy syndrome decoding problem, in which the parity-check matrix is constituted by the polynomials in the secret key. We show that, using decoders for Low-Density Parity-Check (LDPC) and Moderate-Density Parity-Check (MDPC) codes, one can significantly reduce the magnitude of the noise and, de facto, also the Decoding Failure Rate (DFR) of the HQC decoder. This preliminary study leaves some open questions and problems. While it shows that decoding in HQC can be improved, the modeling of the DFR gets more complicated: even for the basic decoder we propose in this paper, we have not been able to devise a reliable DFR model. This is likely due to the fact that the decoder structure resembles the iterative nature of LDPC/MDPC decoders, for which devising a reliable DFR estimation is a well-known difficult problem.
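A minimal sketch of the kind of iterative decoder being plugged in (a naive Gallager-style bit-flipping pass; the sparse parity-check matrix H standing in for the secret-key polynomials, the stopping rule, and the parameters are all illustrative):

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Repeatedly flip the positions involved in the most unsatisfied parity
    checks until the syndrome vanishes (success) or we give up (failure)."""
    y = y.copy()
    for _ in range(max_iters):
        syndrome = H @ y % 2
        if not syndrome.any():
            return y                      # decoding success
        counts = H.T @ syndrome           # unsatisfied checks per position
        y[counts == counts.max()] ^= 1    # flip the worst offenders
    return None                           # failure: contributes to the DFR

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
noisy = np.array([0, 0, 1, 0, 0, 0])      # all-zero codeword with one error
print(bit_flip_decode(H, noisy))
```

As the abstract notes, the failure behavior of such iterative decoders is exactly what makes a closed-form DFR model hard to obtain.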
Last updated:  2025-04-14
NTWE: A Natural Combination of NTRU and LWE
Joel Gärtner
Lattice-based cryptosystems are some of the primary post-quantum secure alternatives to the asymmetric cryptography that is used today. These lattice-based cryptosystems typically rely on the hardness of some version of either the NTRU or the LWE problem. In this paper, we present the NTWE problem, a natural combination of the NTRU and LWE problems, and construct a new lattice-based cryptosystem based on the hardness of the NTWE problem. As with the NTRU and LWE problems, the NTWE problem naturally corresponds to a problem in a $q$-ary lattice. This allows the hardness of the NTWE problem to be estimated in the same way as it is estimated for the LWE and NTRU problems. We parametrize our cryptosystem from such a hardness estimate, and the resulting scheme has performance that is competitive with that of typical lattice-based schemes. In some sense, our NTWE-based cryptosystem can be seen as a less structured and more compact version of a cryptosystem based on the module-NTRU problem. Thus, parameters for our cryptosystem can be selected with the flexibility of a module-LWE-based scheme, while other properties of our system are more similar to those in an NTRU-based system.
Last updated:  2025-04-14
PipeSwap: Forcing the Timely Release of a Secret for Atomic Cross-Chain Swaps
Peifang Ni, Anqi Tian, and Jing Xu
Atomic cross-chain swaps mitigate the interoperability challenges faced by current cryptocurrencies, thereby facilitating inter-currency exchange and trading between distrusting users. Although numerous atomic swap protocols utilizing Hash Timelock Contracts have been deployed and put into practice, they are substantially far from universality due to their inherent dependence on the rich scripting languages supported by the underlying blockchains. The recently proposed Universal Atomic Swaps protocol [IEEE S&P'22] represents a significant advancement in the field of scriptless cross-chain swaps by ingeniously delegating scripting functionalities to cryptographic locking mechanisms, particularly adaptor signatures and timed commitment schemes. However, we identify a new form of attack, termed the double-claiming attack, that leverages these scriptless functionalities to undermine atomicity with high probability. This attack is inherent to the designs adopted by the existing scriptless cross-chain swap protocols as well as payment channel networks. We further quantify the severity of this attack based on real-world swap transactions processed by the most widely deployed decentralized exchange platforms, highlighting the critical challenges in designing universal atomic swaps. To address the double-claiming attack while ensuring both security and practical universality, we also present a cross-chain swap protocol called PipeSwap. Specifically, PipeSwap protects frozen coins from being double-claimed by a novel paradigm of pipelined coins flow that utilizes the techniques of two-hop swap and two-hop refund. In addition to a comprehensive security analysis in the Universal Composability framework, we develop a proof-of-concept implementation of PipeSwap with Schnorr/ECDSA signatures, and conduct extensive experiments to evaluate the overhead. The experimental results show that PipeSwap can be performed in less than 1.7 seconds while maintaining less than 7 kb of communication overhead on commodity machines.
Last updated:  2025-04-14
Biextensions in pairing-based cryptography
Jianming Lin, Damien Robert, Chang-An Zhao, and Yuhao Zheng
Bilinear pairings constitute a cornerstone of public-key cryptography, where advancements in Tate pairings and their efficient variants have emerged as a critical research domain within cryptographic science. Currently, the computation of pairings can be effectively implemented through three distinct algorithmic approaches: Miller's algorithm, the elliptic net algorithm (as developed by Stange), and cubical-based algorithms (as proposed by Damien Robert). Biextensions are the geometric object underlying the arithmetic of pairings, and all three approaches can be seen as different ways to represent biextension elements. In this paper, we revisit the biextension geometric point of view for pairing computation and investigate in more detail the cubical representation for pairing-based cryptography. Utilizing the twisting isomorphism, we derive explicit formulas and algorithmic frameworks for the ate pairing and optimal ate pairing computations. Additionally, we present detailed formulas and introduce an optimized shared cubical ladder algorithm for super-optimal ate pairings. Through concrete computational analyses, we compare the performance of our cubical-based methods with Miller's algorithm on various well-known families of pairing-friendly elliptic curves. Our results demonstrate that the cubical-based algorithm outperforms Miller's algorithm in certain specific situations, establishing its potential as an alternative for pairing computation.
Last updated:  2025-04-14
Self-Orthogonal Minimal Codes From (Vectorial) p-ary Plateaued Functions
René Rodríguez Aldama, Enes Pasalic, Fengrong Zhang, and Yongzhuang Wei
In this article, we derive the weight distribution of linear codes stemming from a subclass of (vectorial) $p$-ary plateaued functions (for a prime $p$), which includes all the explicitly known examples of weakly and non-weakly regular plateaued functions. This construction of linear codes is referred to in the literature as the first generic construction. First, we partition the class of $p$-ary plateaued functions into three classes, according to the behavior of their dual function. Using these classes, we refine the results presented in a series of articles [Mesnager2017, MesOzSi, Pelen2020, RodPasZhaWei, WeiWangFu]. Namely, we derive the full weight distributions of codes stemming from all plateaued functions in the odd case (parametrized by the weight of the dual), whereas in the even case, the weight distributions are derived for one of the three classes, parametrized using two parameters (the weight of the dual and a related parameter). Additionally, we provide more results on the different weight distributions of codes stemming from functions in subclasses of the three different classes. The exact derivation of such distributions is achieved by using some well-known equations over finite fields to count certain dual preimages. In order to improve the dimension of these codes, we then study the vectorial case, thus providing the weight distributions of a few codes associated to known vectorial plateaued functions and obtaining codes with new parameters. For the first time, we provide the full weight distributions of codes from (a subclass of) vectorial $p$-ary plateaued functions. This class includes all known explicit examples in the literature. The obtained codes are minimal and self-orthogonal virtually in all cases. Notably, we show that this is the best one can achieve: there are no $q$-ary self-dual minimal codes for any prime power $q$, except for the ternary tetracode and the binary repetition code.
Last updated:  2025-04-14
Revisiting the attacker's knowledge in inference attacks against Searchable Symmetric Encryption
Marc Damie, Jean-Benoist Leger, Florian Hahn, and Andreas Peter
Encrypted search schemes have been proposed to address growing privacy concerns. However, several leakage-abuse attacks have highlighted some security vulnerabilities. Recent attacks assumed an attacker's knowledge containing data "similar" to the indexed data. However, this vague assumption is barely discussed in the literature: how likely is it for an attacker to obtain "similar enough" data? Our paper provides novel statistical tools, usable on any attack in this setting, to analyze its sensitivity to data similarity. First, we introduce a mathematical model based on statistical estimators to analytically understand the attacker's knowledge and the notion of similarity. Second, we conceive statistical tools to model the influence of similarity on the attack accuracy. We apply our tools to three existing attacks to answer questions such as: is similarity the only factor influencing the accuracy of a given attack? Third, we show that the enforcement of a maximum index size can make the ``similar-data'' assumption harder to satisfy. In particular, we propose a statistical method to estimate an appropriate maximum size for a given attack and dataset. For the best known attack on the Enron dataset, a maximum index size of 200 guarantees (with high probability) that the attack accuracy stays below 5%.
Last updated:  2025-04-14
SoK: FHE-Friendly Symmetric Ciphers and Transciphering
Chao Niu, Benqiang Wei, Zhicong Huang, Zhaomin Yang, Cheng Hong, Meiqin Wang, and Tao Wei
Fully Homomorphic Encryption (FHE) enables computation on encrypted data without decryption, demonstrating significant potential for privacy-preserving applications. However, FHE faces several challenges, one of which is the significant plaintext-to-ciphertext expansion ratio, resulting in high communication overhead between client and server. The transciphering technique can effectively address this problem by first encrypting data with a space-efficient symmetric cipher, then converting symmetric ciphertext to FHE ciphertext without decryption. Numerous FHE-friendly symmetric ciphers and transciphering methods have been developed by researchers, each with unique advantages and limitations. These often require extensive knowledge of both symmetric cryptography and FHE to fully grasp, making comparison and selection among these schemes challenging. To address this, we conduct a comprehensive survey of over 20 FHE-friendly symmetric ciphers and transciphering methods, evaluating them based on criteria such as security level, efficiency, and compatibility. We have designed and executed experiments to benchmark the performance of the feasible combinations of symmetric ciphers and transciphering methods across various application scenarios. Our findings offer insights into achieving efficient transciphering tailored to different task contexts. Additionally, we make our example code available open-source, leveraging state-of-the-art FHE implementations.
Last updated:  2025-04-14
K-Linkable Ring Signatures and Applications in Generalized Voting
Wonseok Choi, Xiangyu Liu, Lirong Xia, and Vassilis Zikas
Linkable ring signatures (LRS) allow a user to sign anonymously on behalf of a ring, while maintaining linkability: two signatures from the same signer are publicly identified, i.e., linked. This linkability makes LRS suitable to prevent double-voting in classical plurality voting protocols, where each voter casts one vote and the candidate with the most votes wins the election. Several voting scenarios rely on generalized rules rather than plurality. For example, in ranked voting, voters submit a ranking of the candidates, and the outcome is a function of these rankings. Such generalized voting rules are common in social choice theory, and have recently found their way into blockchain governance, e.g., for prioritizing (voting on) proposed (candidate) projects. However, unlike plurality voting, using LRS for voters to sign their votes (rankings) does not guarantee vote privacy, as one can observe the rankings of each individual voter, which, depending on the scoring rule, is more information than what the outcome of the election offers. We introduce $k$-linkable ring signatures ($k$-LRS) as a primitive for simultaneously achieving anonymity and privacy in generalized voting. A $k$-LRS scheme has the following properties. ($k$-anonymity): a user can sign anonymously (on behalf of the ring) up to $k$ times, so that even an unbounded adversary cannot link his signatures. ($k$-linkability): if any signer signs more than $k$ times, all his signatures are publicly linked; and any set of signers cannot generate more unlinked signatures than $k$ times the size of the set. We provide two constructions of $k$-LRS: one from DDH, and the other from SIS (hence post-quantum). Finally, we show how $k$-LRS can be applied to a broad range of generalized voting rules. Our protocols are non-interactive voting: each voter just posts a message on a bulletin board, which highlights the potential of $k$-LRS in blockchain-governance scenarios.
Last updated:  2025-04-13
A Unified Treatment of Anamorphic Encryption
Wonseok Choi, Daniel Collins, Xiangyu Liu, and Vassilis Zikas
Receiver anamorphic encryption (hereafter anamorphic encryption), introduced by Persiano et al. at Eurocrypt 2022, allows for a double message to be symmetrically hidden in a public-key encryption ciphertext via a pre-shared double key. In anamorphic encryption, confidentiality must be preserved even if the adversary (or the dictator) has access to all regular keys. It has been the subject of several works since its introduction that explore tweaks and extensions to the core primitive. However, this study has not been systematic, and so disparate security notions have been proposed, whose relationships are not clear. Moreover, there are clear gaps in the literature, including in the treatment of chosen-ciphertext attacks. In this work, we conduct a systematic study of receiver anamorphic encryption. We unify existing security notions and propose several new ones, and prove implications and separations between them. Our main findings are as follows. First, we identify gaps in previous security notions against an anamorphic sender, namely an adversary who is given the double key, and propose three new security notions to bridge these gaps. We also identify several gaps in the treatment of chosen-ciphertext attacks, a setting only very recently considered in anamorphic cryptography (Jaeger and Stracovsky, Asiacrypt 2024). Moreover, observing that no previous construction achieves all desirable security properties in this setting, we propose a suitable construction that does. Finally, we propose several security notions for asymmetric anamorphic encryption, and explore the case where the dictator and the anamorphic sender collude.
Last updated:  2025-04-13
Simple and General Counterexamples for Private-Coin Evasive LWE
Nico Döttling, Abhishek Jain, Giulio Malavolta, Surya Mathialagan, and Vinod Vaikuntanathan
We present a simple counterexample to all known variants of the private-coin evasive learning with errors (LWE) assumption. Unlike prior works, our counterexample is direct, it does not use heavy cryptographic machinery (such as obfuscation or witness encryption), and it applies to all variants of the assumption. Our counterexample can be seen as a "zeroizing" attack against evasive LWE, calling into question the soundness of the underlying design philosophy.
Last updated:  2025-04-13
Powerformer: Efficient and High-Accuracy Privacy-Preserving Language Model with Homomorphic Encryption
Dongjin Park, Eunsang Lee, and Joon-Woo Lee
We propose Powerformer, an efficient homomorphic encryption (HE)-based privacy-preserving language model (PPLM) designed to reduce computation overhead while maintaining model performance. Powerformer incorporates three key techniques to optimize encrypted computations: 1) a novel distillation technique that replaces softmax and layer normalization (LN) with computationally efficient power and linear functions, ensuring no performance degradation while enabling seamless encrypted computation; 2) a pseudo-sign composite approximation method that accurately approximates GELU and tanh functions with minimal computational overhead; 3) a homomorphic matrix multiplication algorithm specifically optimized for Transformer models, enhancing efficiency in encrypted environments. By integrating these techniques, Powerformer based on the BERT-base model achieves a 45% reduction in computation time compared to the state-of-the-art HE-based PPLM without any loss in accuracy.
Last updated:  2025-04-13
(Interleaved) Extended Gabidulin Codes and Their Applications to RQC
Yongcheng Song, Rongmao Chen, Fangguo Zhang, Xinyi Huang, Jian Weng, and Huaxiong Wang
In this paper, we investigate the Extended Gabidulin (EG) codes and the Interleaved EG (IEG) codes, and enhance the Rank Quasi-Cyclic (RQC) encryption scheme. Our primary contribution is the development of a general decoding algorithm for (I)EG codes, for which we precisely provide the DFR, bound the decoding capacity, and estimate the decoding complexity. As the core tool, we demonstrate that the Linear Reconstruction (LR) problem derived from the problem of decoding (I)EG codes can be probabilistically solved, enabling (I)EG codes to achieve arbitrarily small DFRs, decode up to the rank Gilbert-Varshamov bound (even close to the minimal distance), and decode via a Welch-Berlekamp-like algorithm. An interesting and important byproduct is that we demonstrate that decoding interleaved Gabidulin codes can be achieved deterministically by solving the LR problem. We finally apply the EG codes to improve RQC (NIST PQC & Asiacrypt 2023). For 128-bit security, our optimized RQC reduces bandwidth by 69% and 34% compared to the original versions, respectively. The scheme also achieves at least a 50% improvement in efficiency and mitigates MM algebraic attacks (as discussed in Eurocrypt 2020, Asiacrypt 2020 & 2023), as EG codes facilitate schemes operating over smaller finite fields. Overall, our scheme outperforms code-based schemes of the NIST PQC Round 4 submissions, such as HQC, BIKE, and Classic McEliece, in terms of bandwidth. A conservative parameter set still achieves competitive bandwidth.
Last updated:  2025-04-13
Eccfrog512ck2: An Enhanced 512-bit Weierstrass Elliptic Curve
Víctor Duarte Melo and William J Buchanan
Whilst many key exchange and digital signature methods use the NIST P-256 (secp256r1) and secp256k1 curves, there is often a demand for increased security. With these curves, we have 128-bit security. This can be increased to 256-bit security with NIST P-521, Curve448, and Brainpool-P512. This paper outlines a new curve, Eccfrog512ck2, which provides 256-bit security and enhanced performance over NIST P-521. Along with this, it has side-channel resistance and is designed to avoid weaknesses such as those related to the MOV attack. We show that Eccfrog512ck2 achieves a 61.5% speed-up in scalar multiplication and a 33.3% speed-up in point generation over the NIST P-521 curve.
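The MOV-resistance criterion mentioned above is simple to check: the embedding degree, the smallest $k$ with $p^k \equiv 1 \pmod{n}$ for field prime $p$ and prime group order $n$, must be far too large for a pairing to transfer the discrete log into a finite field. A sketch of the standard test (Curve25519's constants are used below purely as stand-ins, since Eccfrog512ck2's parameters are not reproduced in this listing):

```python
def mov_resistant(p, n, bound=10_000):
    """Return (True, None) if the embedding degree of a curve over GF(p)
    with a prime order-n subgroup exceeds `bound`, else (False, k)."""
    acc = 1
    for k in range(1, bound + 1):
        acc = acc * p % n
        if acc == 1:
            return False, k
    return True, None

p_demo = 2**255 - 19                                       # stand-in field prime
n_demo = 2**252 + 27742317777372353535851937790883648493   # stand-in group order
print(mov_resistant(p_demo, n_demo))
```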
Last updated:  2025-04-13
Fast, private and regulated payments in asynchronous networks
Maxence Brugeres, Victor Languille, Petr Kuznetsov, and Hamza Zarfaoui
We propose a decentralized asset-transfer system that enjoys full privacy: no party can learn the details of a transaction, except for its issuer and its recipient. Furthermore, the recipient is only aware of the amount of the transaction. Our system does not rely on consensus or synchrony assumptions, and therefore, it is responsive, since it runs at the actual network speed. Under the hood, every transaction creates a consumable coin equipped with a non-interactive zero-knowledge proof (NIZK) that confirms that the issuer has sufficient funds without revealing any information about her identity, the recipient's identity, or the payment amount. Moreover, we equip our system with a regulatory enforcement mechanism that can be used to regulate transfer limits or restrict specific addresses from sending or receiving funds, while preserving the system's privacy guarantees. Finally, we report on Paxpay, our implementation of Fully Private Asset Transfer (FPAT) that uses the Gnark library for the NIZKs. In our benchmark, Paxpay exhibits better performance than earlier proposals that either ensure only partial privacy, require some kind of network synchrony or do not implement regulation features. Our system thus reconciles privacy, responsiveness, regulation enforcement and performance.
Last updated:  2025-04-13
Reusable Dynamic Multi-Party Homomorphic Encryption
Jung Hee Cheon, Hyeongmin Choe, Seunghong Kim, and Yongdong Yeo
Homomorphic Encryption (HE) is a promising primitive for evaluating arbitrary circuits while keeping the user's privacy. We investigate how to use HE in the multi-party setting where data is encrypted with several distinct keys. One may use Multi-Key Homomorphic Encryption (MKHE) in this setting, but it has space/computation overhead of $O(n)$ for the number of users $n$, which makes it impractical when $n$ grows large. On the contrary, Multi-Party Homomorphic Encryption (MPHE) is the other HE primitive in the multi-party setting, where the space/computation overhead is $O(1)$; however, it is limited in terms of ciphertext reusability and dynamicity: ciphertexts are encrypted just for a group of parties and cannot be reused for other purposes, and additional parties cannot join the computation dynamically. Contrary to MKHE, where the secret key owners engage only in the decryption phase, we consider a more relaxed situation where the secret key owners can communicate before the computation. In that case, we can reduce the size of a ciphertext and the evaluation complexity from $O(n)$ to $O(1)$, as in a single-key HE setting. We call this primitive Reusable Dynamic Multi-Party Homomorphic Encryption, which is more suitable for real-world scenarios. We show that 1) the procedures before the computation can be done in a very few rounds of communication, 2) the evaluation/space complexities are independent of the number of users, and 3) the functionalities are as efficient as MKHE, with asymptotic analysis and with implementation.
Last updated:  2025-04-13
Aurora: Leaderless State-Machine Replication with High Throughput
Hao Lu, Jian Liu, and Kui Ren
State-machine replication (SMR) allows a deterministic state machine to be replicated across a set of replicas and handle clients' requests as a single machine. Most existing SMR protocols are leader-based, requiring a leader to order requests and coordinate the protocol. This design places a disproportionately high load on the leader, inevitably impairing scalability. If the leader fails, a complex and bug-prone fail-over protocol is needed to switch to a new leader. An adversary can also exploit the fail-over protocol to slow down the protocol. In this paper, we propose a crash-fault tolerant SMR named Aurora, with the following properties: • Leaderless: it does not require a leader and hence completely gets rid of the fail-over protocol. • Scalable: it can scale up to 11 replicas. • Robust: it behaves well even under a poor network connection. We provide a full-fledged implementation of Aurora and systematically evaluate its performance. Our benchmark results show that Aurora achieves a throughput of around two million Transactions Per Second (TPS), up to 8.7X higher than the state-of-the-art leaderless SMR.
Last updated:  2025-04-13
Vector Commitment Design, Analysis, and Applications: A Survey
Vir Pathak, Sushmita Ruj, and Ron van der Meyden
Due to their widespread applications in decentralized and privacy preserving technologies, commitment schemes have become increasingly important cryptographic primitives. With a wide variety of applications, many new constructions have been proposed, each enjoying different features and security guarantees. In this paper, we systematize the designs, features, properties, and applications of vector commitments (VCs). We define vector, polynomial, and functional commitments and we discuss the relationships shared between these types of commitment schemes. We first provide an overview of the definitions of the commitment schemes we will consider, as well as their security notions and various properties they can have. We proceed to compare popular constructions, taking into account the properties each one enjoys, their proof/update information sizes, and their proof/commitment complexities. We also consider their effectiveness in various decentralized and privacy preserving applications. Finally, we conclude by discussing some potential directions for future work.
Last updated:  2025-04-12
How to Recover the Full Plaintext of XCB
Peng Wang, Shuping Mao, Ruozhou Xu, Jiwu Jing, and Yuewu Wang
XCB, a tweakable enciphering mode, is part of IEEE Std. 1619.2 for shared storage media. We show that no version of XCB is secure, via three plaintext recovery attacks. A key observation is that XCB behaves like an LRW1-type tweakable block cipher for single-block messages, which lacks CCA security. The first attack targets one-block XCB, using three queries to recover the plaintext. The second one requires four queries to recover all but one block of the plaintext. The last one requires seven queries to recover the full plaintext. The first attack applies to any scheme that follows the XCB structure, whereas the latter two attacks work on all versions of XCB, exploiting the separable property of the underlying universal hash function. To address these flaws, we propose the XCB* structure, an improved version of XCB that adds only two XOR operations. We prove that XCB* is STPRP-secure when using AXU hash functions, SPRPs, and a secure random-IV-based stream cipher.
Last updated:  2025-04-12
PAC-Private Algorithms
Mayuri Sridhar, Hanshen Xiao, and Srinivas Devadas
Provable privacy typically requires involved analysis and is often associated with unacceptable accuracy loss. While many empirical verification or approximation methods, such as Membership Inference Attacks (MIA) and Differential Privacy Auditing (DPA), have been proposed, these do not offer rigorous privacy guarantees. In this paper, we apply recently-proposed Probably Approximately Correct (PAC) Privacy to give formal, mechanized, simulation-based proofs for a range of practical, black-box algorithms: K-Means, Support Vector Machines (SVM), Principal Component Analysis (PCA) and Random Forests. To provide these proofs, we present a new simulation algorithm that efficiently determines anisotropic noise perturbation required for any given level of privacy. We provide a proof of correctness for this algorithm and demonstrate that anisotropic noise has substantive benefits over isotropic noise. Stable algorithms are easier to privatize, and we demonstrate privacy amplification resulting from introducing regularization in these algorithms; meaningful privacy guarantees are obtained with small losses in accuracy. We propose new techniques in order to reduce instability in algorithmic output and convert intractable geometric stability verification into efficient deterministic stability verification. Thorough experiments are included, and we validate our provable adversarial inference hardness against state-of-the-art empirical attacks.
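A heavily simplified schematic of the simulation idea (the real PAC Privacy analysis calibrates noise from a mutual-information bound rather than the raw per-coordinate deviation used below, and all names here are hypothetical):

```python
import numpy as np

def anisotropic_privatize(mechanism, subsampled_datasets, scale, rng):
    """Run the black-box mechanism on many subsampled datasets, estimate its
    per-coordinate output instability, and calibrate per-coordinate
    (anisotropic) Gaussian noise to it."""
    outs = np.stack([mechanism(d) for d in subsampled_datasets])
    sigma = scale * outs.std(axis=0)   # anisotropic: one scale per coordinate
    return lambda data: mechanism(data) + rng.standard_normal(sigma.shape) * sigma

# Toy use: privatizing a mean query; stabler coordinates receive less noise.
rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 5))
subsamples = [data[rng.choice(1000, 500, replace=False)] for _ in range(50)]
private_mean = anisotropic_privatize(lambda d: d.mean(axis=0), subsamples, 3.0, rng)
print(private_mean(data))
```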
Last updated:  2025-04-12
Adaptive Robustness of Hypergrid Johnson-Lindenstrauss
Andrej Bogdanov, Alon Rosen, Neekon Vafa, and Vinod Vaikuntanathan
Johnson and Lindenstrauss (Contemporary Mathematics, 1984) showed that a scaled random projection $A$ from $\mathbb{R}^n$ to $\mathbb{R}^m$ is an approximate isometry on any set of size at most exponential in $m$. If the set is larger, however, its points can contract arbitrarily under $A$. In particular, the hypergrid is expected to contain a point that is contracted by a large factor. We give evidence that finding such a point exhibits a statistical-computational gap. On the algorithmic side, we design an online algorithm inspired by a discrepancy minimization algorithm of Bansal and Spencer (Random Structures & Algorithms, 2020). On the hardness side, we show evidence via a multiple overlap gap property (mOGP), which in particular captures online algorithms; and a reduction-based lower bound, which shows hardness under standard worst-case lattice assumptions. As a cryptographic application, we show that the rounded Johnson-Lindenstrauss embedding is a robust property-preserving hash function (Boyle, Lavigne and Vaikuntanathan, TCC 2019) on the hypergrid for the Euclidean metric in the computationally hard regime. Such hash functions compress data while preserving distances between inputs up to some distortion factor, with the guarantee that even knowing the hash function, no computationally bounded adversary can find any pair of points that violates the distortion bound.
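The baseline phenomenon is easy to reproduce numerically (a sketch; the dimensions are arbitrary): for point sets far smaller than exponential in the target dimension, a scaled random projection barely distorts norms, and the adaptive question is whether one can search the exponential-size hypergrid for the rare strongly-contracted points.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 2048, 256, 1000                     # ambient dim, target dim, #points
A = rng.standard_normal((m, n)) / np.sqrt(m)  # scaled random projection
X = rng.standard_normal((N, n))               # N << exp(m) random points
ratios = np.linalg.norm(X @ A.T, axis=1) / np.linalg.norm(X, axis=1)
print(ratios.min(), ratios.max())             # concentrated near 1
```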
Last updated:  2025-04-12
MProve-Nova: A Privacy-Preserving Proof of Reserves Protocol for Monero
Varun Thakore and Saravanan Vijayakumaran
A proof of reserves (PoR) protocol enables a cryptocurrency exchange to prove to its users that it owns a certain amount of coins, as a first step towards proving that it is solvent. We present the design, implementation, and security analysis of MProve-Nova, a PoR protocol for Monero that leverages the Nova recursive SNARK to achieve two firsts (without requiring any trusted setup). It is the first Monero PoR protocol that reveals only the number of outputs owned by an exchange; no other information about the outputs or their key images is revealed. It is also the first Monero PoR protocol where the proof size and proof verification time are constant, i.e. they are independent of the number of outputs on the Monero blockchain and the number of outputs owned by the exchange. To achieve constant verification times, MProve-Nova requires a pre-processing step which creates two Merkle trees from all the outputs and key images on the Monero blockchain. MProve-Nova consists of two Nova-based subprotocols, a reserves commitment generator (RCG) protocol used to compute a commitment to the total reserves owned by an exchange and a non-collusion (NC) protocol used to prove non-collusion between two exchanges. For the RCG protocol, we observed proof sizes of about 28 KB and verification times of 4.3 seconds. For the NC protocol, we observed proof sizes of about 24 KB and verification times of 0.2 seconds. Proving times for both protocols increase linearly with the number of outputs owned by the exchange but remain independent of the number of outputs on the Monero blockchain. On average, the RCG protocol required about 42 minutes per 1000 outputs and the NC protocol required about 5 minutes per 1000 outputs.
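For reference, the pre-processing step's basic data structure in a few lines (illustrative only: MProve-Nova's trees are built over Monero outputs and key images with SNARK-friendly hashes inside the Nova circuits, not SHA-256):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree; the last node is duplicated on odd levels."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0]

print(merkle_root([b"output-1", b"output-2", b"output-3"]).hex())
```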
Last updated:  2025-04-12
Spilling-Cascade: an Optimal PKE Combiner for KEM Hybridization
Céline Chevalier, Guirec Lebrun, and Ange Martinelli
Hybrid Post-Quantum cryptography is a cautious approach that aims to guard against the threat posed by the quantum computer, through the simultaneous use of Post-Quantum (PQ) and classical (i.e. pre-quantum) cryptosystems, so that security is preserved should the post-quantum schemes used prove insecure. Regarding the hybridization of Key Encapsulation Mechanisms (KEMs), most recent studies focus on safely combining the symmetric keys output by a parallel execution of classical and Post-Quantum KEMs. While this architecture is straightforward, it appears to lack bandwidth optimization. Hence, we propose a novel method for hybridizing several KEMs more effectively, by combining the underlying Public-Key Encryption schemes (PKEs) in an innovative variant of the cascade composition that we call “spilling-cascade”, before turning the hybrid PKE into a KEM with a FO transformation. We prove that this architecture constitutes a robust combiner for encryption schemes up to IND-CPA security, which permits to eventually generate an IND-CCA-secure KEM. In terms of performance, our spilling-cascade scheme has a better communication cost than the commonly used parallel combination, with a bandwidth gain for its ciphertext that ranges from 2.8% to 13% compared to the latter, depending on the number and the characteristics of the PKEs that are combined. Moreover, we prove that for given PKEs to hybridize, the ciphertext communication cost of the spilling-cascade is optimal.
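For contrast, the commonly used parallel combination that the spilling-cascade outperforms in bandwidth is essentially a one-liner: run both KEMs independently and hash both session keys and both ciphertexts into the final key (a sketch of the folklore combiner, not of the paper's scheme):

```python
import hashlib

def parallel_combine(k_classical, k_pq, ct_classical, ct_pq):
    """Folklore parallel KEM combiner: the session key survives as long as
    either ingredient KEM remains secure."""
    return hashlib.sha3_256(k_classical + k_pq + ct_classical + ct_pq).digest()
```

Its ciphertext is the full concatenation of both ingredient ciphertexts, which is exactly the bandwidth that the spilling-cascade trims by 2.8% to 13%.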
Last updated:  2025-04-12
HyperPianist: Pianist with Linear-Time Prover and Logarithmic Communication Cost
Chongrong Li, Pengfei Zhu, Yun Li, Cheng Hong, Wenjie Qu, and Jiaheng Zhang
Recent years have seen great improvements in zero-knowledge proofs (ZKPs). Among them, zero-knowledge SNARKs are notable for their compact and efficiently-verifiable proofs, but suffer from high prover costs. Wu et al. (Usenix Security 2018) proposed to distribute the proving task across multiple machines, and achieved significant improvements in proving time. However, existing distributed ZKP systems still have quasi-linear prover cost, and may incur a communication cost that is linear in circuit size. In this paper, we introduce HyperPianist. Inspired by the state-of-the-art distributed ZKP system Pianist (Liu et al., S&P 2024) and the multivariate proof system HyperPlonk (Chen et al., EUROCRYPT 2023), we design a distributed multivariate polynomial interactive oracle proof (PIOP) system with a linear-time prover cost and logarithmic communication cost. Unlike Pianist, HyperPianist incurs no extra overhead in prover time or communication when applied to general (non-data-parallel) circuits. To instantiate the PIOP system, we adapt two additively-homomorphic multivariate polynomial commitment schemes, multivariate KZG (Papamanthou et al., TCC 2013) and Dory (Lee et al., TCC 2021), into the distributed setting. Both resulting systems have linear prover complexity and logarithmic communication cost; furthermore, the Dory-based instantiation requires no trusted setup. We also propose a variant incorporating an optimized lookup argument based on Lasso (Setty et al., EUROCRYPT 2024) with lower prover cost. Experiments demonstrate substantial speedups over HyperPlonk with 32 distributed machines, and further speedups over Pianist on both vanilla gates and custom gates, with the largest gains on layered circuits with custom gates.
Last updated:  2025-04-12
Malleable SNARKs and Their Applications
Suvradip Chakraborty, Dennis Hofheinz, Roman Langrehr, Jesper Buus Nielsen, Christoph Striecks, and Daniele Venturi
Succinct non-interactive arguments of knowledge (SNARKs) are variants of non-interactive zero-knowledge proofs (NIZKs) in which complex statements can be proven in a compact way. SNARKs have had tremendous impact in several areas of cryptography, including verifiable computing, blockchains, and anonymous communication. A recurring concept in many applications is the concept of recursive SNARKs, in which a proof references a previous proof to show an evolved statement. In this work, we investigate malleable SNARKs, a generalization of this concept of recursion. An adaptation of the existing concept of malleable NIZKs, malleable SNARKs allow one to modify SNARK proofs to show related statements, but such that the mauled proofs are indistinguishable from “properly generated” fresh proofs of the related statement. We show how to instantiate malleable SNARKs for universal languages and relations, and give a number of applications: the first post-quantum RCCA-secure rerandomizable and updatable encryption schemes, a generic construction of reverse firewalls, and an unlinkable (i.e., computation-hiding) targeted malleable homomorphic encryption scheme. Technically, our malleable SNARK construction relies on recursive proofs, but with a twist: in order to support the strong indistinguishability properties of mauled and fresh SNARK proofs, we need to allow an unbounded recursion depth. To still allow for a reasonable notion of extractability in this setting (and in particular to guarantee that extraction eventually finishes with a “proper” witness that does not refer to a previous SNARK proof), we rely on a new and generic computational primitive called adversarial one-way function (AOWF) that may be of independent interest. We give an AOWF candidate and prove it secure in the random oracle model.
Last updated:  2025-04-12
Revisiting the Robustness of (R/M)LWR under Polynomial Moduli and its Applications
Haoxiang Jin, Feng-Hao Liu, Zhedong Wang, Yang Yu, and Dawu Gu
This work conducts a comprehensive investigation of the entropic hardness of (R/M)LWR under polynomial modulus. Particularly, we establish the hardness of (M)LWR for general entropic secret distributions from (Module) LWE assumptions based on a new conceptually simple framework called rounding lossiness. By combining this hardness result and a trapdoor inversion algorithm with asymptotically the most compact parameters, we obtain a compact lossy trapdoor function (LTF) with improved efficiency. Extending our LTF with other techniques, we can derive a compact all-but-many LTF and PKE scheme against selective opening and chosen ciphertext attacks, solely based on (Module) LWE assumptions within a polynomial modulus. Additionally, we show a search-to-decision reduction for RLWR with Gaussian secrets from a new Rényi-divergence-based analysis.
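For reference, the rounding that replaces LWE's explicit noise in LWR (the standard definition): for secret $\mathbf{s} \in \mathbb{Z}_q^n$ and moduli $p < q$, an LWR sample is

```latex
\left(\mathbf{a},\; b = \Bigl\lfloor \tfrac{p}{q}\,\langle \mathbf{a}, \mathbf{s}\rangle \Bigr\rceil \bmod p \right),
\qquad \mathbf{a} \xleftarrow{\$} \mathbb{Z}_q^n,
```

and the entropic question studied here is how much security survives when $\mathbf{s}$ is drawn from a general (non-uniform) distribution.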
Last updated:  2025-04-12
Impossible Boomerang Distinguishers Revisited
Xichao Hu, Lin Jiao, Dengguo Feng, Yonglin Hao, Xinxin Gong, Yongqiang Li, and Siwei Sun
The Impossible Boomerang Attack (IBA) has shown significant power in evaluating the security of block ciphers, such as AES. However, current studies still lack a foundational theory, user guidance, and a universal method for constructing IBDs. This paper addresses these gaps through comprehensive research. Theoretically, we establish a new framework for constructing a series of IBDs by differential propagation, state propagation, and generalized boomerang tables. We rigorously prove their inclusion relations, resulting in a complete theory and a hierarchical application strategy for both single-key and related-key settings. We further analyze IBD constructions in two types of related-key settings, two related keys with arbitrary schedules and four related keys with linear schedules, structurally in a unified way. Technically, we develop a scheduling algorithm and a general SAT-based method to search for IBDs across various block cipher designs, including SPN, Feistel, and ARX. Additionally, we propose several strategies to enhance the search process. As applications, we derive (RK-)IBDs for 10 block ciphers, almost all for the first time. Compared to impossible differentials, our IBDs are at least as effective, for example on DES and PRESENT. Notably, we achieve 1 more round on PRINTcipher48 in the single-key setting; 2 more rounds on AES-128, and 1 or 2 more rounds on SPECK variants, in the two-related-keys setting; and 1, 4, and 2 more rounds on GIFT-64, CHAM-64/128 and CHAM-128/256, respectively, in the four-related-keys setting. We also obtain full-round RK-IBDs on GOST. Compared to current IBDs, we achieve 1 more round on each of SKINNY-64/192 and SKINNYee. Furthermore, as an applied case of the derived IBDs, we present a 31-round IBA on SKINNYee, which is the first 31-round attack on SKINNYee and the best result to date.
Last updated:  2025-04-11
Intermundium-DL: Assessing the Resilience of Current Schemes to Discrete-Log-Computation Attacks on Public Parameters
Mihir Bellare, Doreen Riepel, and Laura Shea
We consider adversaries able to perform a nonzero but small number of discrete logarithm computations, as would be expected with near-term quantum computers. Schemes with public parameters consisting of a few group elements are now at risk; could an adversary knowing the discrete logarithms of these elements go on to easily compromise the security of many users? We study this question for known schemes and find, across them, a perhaps surprising variance in the answers. In a first class are schemes, including Okamoto and Katz-Wang signatures, that we show fully retain security even when the discrete logs of the group elements in their parameters are known to the adversary. In a second class are schemes like Cramer-Shoup encryption and the SPAKE2 password-authenticated key exchange protocol that we show retain some partial but still meaningful and valuable security. In the last class are schemes we show by attack to totally break. The distinctions uncovered by these results shed light on the security of classical schemes in a setting of immediate importance, and help make choices moving forward.
Last updated:  2025-04-11
Hard Quantum Extrapolations in Quantum Cryptography
Luowen Qian, Justin Raizes, and Mark Zhandry
Although one-way functions are well-established as the minimal primitive for classical cryptography, a minimal primitive for quantum cryptography is still unclear. Universal extrapolation, first considered by Impagliazzo and Levin (1990), is hard if and only if one-way functions exist. Towards better understanding minimal assumptions for quantum cryptography, we study the quantum analogues of the universal extrapolation task. Specifically, we put forth the classical→quantum extrapolation task, where we ask to extrapolate the rest of a bipartite pure state given the first register measured in the computational basis. We then use it as a key component to establish new connections in quantum cryptography: (a) quantum commitments exist if classical→quantum extrapolation is hard; and (b) classical→quantum extrapolation is hard if any of the following cryptographic primitives exists: quantum public-key cryptography (such as quantum money and signatures) with a classical public key, or 2-message quantum key distribution protocols. For future work, we further generalize the extrapolation task and propose a fully quantum analogue. We show that it is hard if quantum commitments exist, and it is easy for quantum polynomial space.
Last updated:  2025-04-11
Attribute-Based Publicly Verifiable Secret Sharing
Liang Zhang, Xingyu Wu, Qiuling Yue, Haibin Kan, and Jiheng Zhang
Can a dealer share a secret without knowing the shareholders? We provide a positive answer to this question by introducing the concept of an attribute-based secret sharing (AB-SS) scheme. With AB-SS, a dealer can distribute a secret based on attributes rather than specific individuals or shareholders. Only authorized users whose attributes satisfy a given access structure can recover the secret. Furthermore, we introduce the concept of attribute-based publicly verifiable secret sharing (AB-PVSS). An AB-PVSS scheme allows external users to verify the correctness of all broadcast messages from the dealer and shareholders, similar to a traditional PVSS scheme. Additionally, AB-SS (or AB-PVSS) distinguishes itself from traditional SS (or PVSS) by enabling a dealer to generate shares according to an arbitrary monotone access structure. To construct an AB-PVSS scheme, we first implement a decentralized ciphertext-policy attribute-based encryption (CP-ABE) scheme. The proposed CP-ABE scheme offers a smaller ciphertext size and requires fewer computational operations, although, as a trade-off, it is not fully fledged. We then incorporate non-interactive zero-knowledge (NIZK) proofs to enable public verification of the CP-ABE ciphertext. Based on the CP-ABE scheme and NIZK proofs, we construct an AB-PVSS primitive. Furthermore, we present an intuitive implementation of optimistic fair exchange based on the AB-PVSS scheme. Finally, we conduct a security analysis and comprehensive experiments on the proposed CP-ABE and AB-PVSS schemes. The results demonstrate that both schemes achieve practical performance comparable to related works.
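For readers unfamiliar with the baseline being generalized, here is a minimal Python sketch of traditional (t, n) Shamir secret sharing (our background illustration, not the paper's AB-SS construction): AB-SS replaces the fixed set of n shareholders with whichever users hold attributes satisfying the access structure.

    import random

    P = 2**61 - 1  # prime field for the toy example

    def share(secret, t, n):
        # random degree-(t-1) polynomial with constant term = secret
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
                for i in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at 0
        s = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            s = (s + yi * num * pow(den, -1, P)) % P
        return s

    sh = share(123456789, t=3, n=5)
    assert reconstruct(sh[:3]) == 123456789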
Last updated:  2025-04-11
SoK: Self-Generated Nudes over Private Chats: How Can Technology Contribute to a Safer Sexting?
Joel Samper and Bernardo Ferreira
More and more people take advantage of mobile apps to strike up relationships and casual contacts. This sometimes results in the sharing of self-generated nudes. While this opens a way for sexual exploration, it also raises concerns. In this paper, we review existing technology-assisted permissive proposals/features that provide security, privacy or accountability benefits when sharing nudes online. To do so, we performed a systematic literature review combing through 10,026 search results and cross-references, and we identified real-world solutions by surveying OS features and 52 dating, messaging and social network apps. We systematized knowledge by defining a sexting threat model, deriving a taxonomy of the proposals/features, discussing some of their shortcomings, organizing privacy-related concepts, and providing take-aways with some directions for future research and development. Our study found a very diverse ecosystem of academic proposals and app features, showing that safer sexting goes far beyond nude detection. None of the techniques represents an ultimate solution for all threats, but each contributes to safer sexting in a different way.
Last updated:  2025-04-11
The Art of Bonsai: How Well-Shaped Trees Improve the Communication Cost of MLS
Céline Chevalier, Guirec Lebrun, Ange Martinelli, and Jérôme Plût
Messaging Layer Security (MLS) is a Secure Group Messaging protocol that uses a binary tree – called a Ratchet Tree – for its handshake, in order to achieve a communication cost logarithmic in the number of group members. This Ratchet Tree represents users as its leaves; therefore any change in the group membership results in adding or removing a leaf of the tree. MLS consequently implements what we call a tree evolution mechanism, consisting of a user add algorithm – determining where to insert a new leaf – and a tree expansion process – stating how to increase the size of the tree when no space is available for a new user. The tree evolution mechanism currently used by MLS is designed so that it naturally left-balances the Ratchet Tree. However, such a tree structure is often quite inefficient in terms of communication cost. Furthermore, one may wonder whether the binary Ratchet Tree has a degree optimized for the features of MLS. Therefore, we study in this paper how to improve the communication cost of the handshake in MLS – realized through an operation called a commit – by considering both the tree evolution mechanism and the degree used for the Ratchet Tree. To do so, we determine the tree structure that optimizes the communication cost, and we propose algorithms for both the user add and the tree expansion processes that keep the tree close to that optimal structure, and thus keep the communication cost as close as possible to the optimum. We also determine the Ratchet Tree degree best suited to a given set of parameters induced by the encryption scheme used by MLS. This study shows that with classical (i.e., pre-quantum) ciphersuites, a binary tree is indeed the most appropriate Ratchet Tree; with post-quantum algorithms, however, it generally becomes preferable to use a ternary tree instead. Our improvements do not change the TreeKEM protocol and are easy to implement. With parameter sets corresponding to practical ciphersuites, they reduce TreeKEM's communication cost by 5 to 10%. In particular, the 10% gain appears in the post-quantum setting – where both an optimized tree evolution mechanism and a ternary tree are necessary – which is precisely the context where any reduction of the protocol's communication cost is welcome, given the large bandwidth overhead of post-quantum encrypted communication.
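As a back-of-the-envelope illustration of the degree trade-off (our deliberately simplified cost model, not the paper's accounting), suppose a commit in an m-ary tree with n leaves refreshes about log_m(n) levels, sending one fresh public key and (m - 1) ciphertexts per level:

    import math

    # Toy model (ours, not the paper's): fractional depth smooths rounding;
    # real TreeKEM costs also depend on tree shape and blank nodes.
    def commit_cost(n, m, pk_bytes, ct_bytes):
        return math.log(n, m) * (pk_bytes + (m - 1) * ct_bytes)

    # Hypothetical sizes: a classical DH ciphersuite (32 B keys, ~80 B HPKE
    # ciphertexts) versus ML-KEM-512 (800 B public key, 768 B ciphertext).
    for name, pk, ct in [("classical", 32, 80), ("post-quantum", 800, 768)]:
        best = min((2, 3, 4), key=lambda m: commit_cost(1024, m, pk, ct))
        print(name, "-> best degree", best)

Even this crude model shifts the optimum from binary to ternary once ciphertexts grow to post-quantum sizes, consistent with the paper's conclusion.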
Last updated:  2025-04-11
An LLM Framework For Cryptography Over Chat Channels
Danilo Gligoroski, Mayank Raikwar, and Sonu Kumar Jha
Recent advancements in Large Language Models (LLMs) have transformed communication, yet their role in secure messaging remains underexplored, especially in surveillance-heavy environments. At the same time, many governments all over the world are proposing legislation to detect, backdoor, or even ban encrypted communication, which emphasizes the need for alternative ways to communicate securely and covertly over open channels. We propose a novel cryptographic embedding framework that enables covert public-key or symmetric-key encrypted communication over public chat channels using human-like generated text. Some unique properties of our framework are: 1. It is LLM agnostic, i.e., it allows participants to independently use different local LLM models; 2. It is pre- or post-quantum agnostic; 3. It ensures indistinguishability from ordinary human chat text. Thus, it offers a viable alternative where traditional encryption is detectable and restricted.
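The abstract does not spell out the embedding itself, so as a generic point of reference, here is a classical rank-based steganographic encoding in Python (our sketch, not the paper's construction; note that this naive variant requires both parties to run the same deterministic model, whereas the paper's framework is explicitly LLM agnostic, and its low-entropy token choices would not achieve the paper's indistinguishability goal):

    import hashlib

    # `ranked_next` is a hypothetical stand-in for an LLM's ranked
    # next-token candidates; a hash makes it deterministic here.
    def ranked_next(prefix):
        vocab = ["the", "a", "cat", "dog", "sat", "ran", "here", "there"]
        key = lambda t: hashlib.sha256((" ".join(prefix) + "|" + t).encode()).digest()
        return sorted(vocab, key=key)

    def embed(bits):                      # sender: bit b picks the rank-b token
        text = []
        for b in bits:
            text.append(ranked_next(text)[b])
        return text

    def extract(text):                    # receiver: recover ranks as bits
        bits, prefix = [], []
        for tok in text:
            bits.append(ranked_next(prefix).index(tok))
            prefix.append(tok)
        return bits

    assert extract(embed([1, 0, 1, 1, 0])) == [1, 0, 1, 1, 0]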
Last updated:  2025-04-11
Optimized One-Dimensional SQIsign Verification on Intel and Cortex-M4
Marius A. Aardal, Gora Adj, Arwa Alblooshi, Diego F. Aranha, Isaac A. Canales-Martínez, Jorge Chavez-Saab, Décio Luiz Gazzoni Filho, Krijn Reijnders, and Francisco Rodríguez-Henríquez
SQIsign is a well-known post-quantum signature scheme due to its small combined signature and public-key size. However, SQIsign suffers from notably long signing times, and verification times are not short either. To improve this, recent research has explored both one-dimensional and two-dimensional variants of SQIsign, each with distinct characteristics. In particular, SQIsign2D's efficient signing and verification times have made it a focal point of recent research. However, the absence of an optimized one-dimensional verification implementation hampers a thorough comparison between these different variants. This work bridges this gap in the literature: we provide a state-of-the-art implementation of one-dimensional SQIsign verification, including novel optimizations. We report a record-breaking one-dimensional SQIsign verification time of 8.55 Mcycles on a Raptor Lake Intel processor, closely matching SQIsign2D on the same processor. For uncompressed signatures, the signature size doubles and we verify in only 5.6 Mcycles. Taking advantage of the inherent parallelism available in isogeny computations, we present 5-core variants that can go as low as 1.3 Mcycles. Furthermore, we present the first implementation that supports both 32-bit and 64-bit processors. It includes optimized assembly code for the Cortex-M4 and has been integrated with the pqm4 project. Our results motivate further research into one-dimensional SQIsign, as it boasts unique features among isogeny-based schemes.
Last updated:  2025-04-11
Do Not Disturb a Sleeping Falcon: Floating-Point Error Sensitivity of the Falcon Sampler and Its Consequences
Xiuhan Lin, Mehdi Tibouchi, Yang Yu, and Shiduo Zhang
Falcon is one of the three post-quantum signature schemes already selected by NIST for standardization. It is the most compact among them, and offers excellent efficiency and security. However, it is based on a complex algorithm for lattice discrete Gaussian sampling which presents a number of implementation challenges. In particular, it relies on (possibly emulated) floating-point arithmetic, which is often regarded as a cause for concern, and has been leveraged in, e.g., side-channel analysis. The extent to which Falcon's use of floating-point arithmetic can cause security issues has yet to be thoroughly explored in the literature. In this paper, we contribute to filling this gap by identifying a way in which Falcon's lattice discrete Gaussian sampler, due to specific design choices, is singularly sensitive to floating-point errors. In the presence of small floating-point discrepancies (which can occur in various ways, including the use of the two almost but not quite equivalent signing procedures "dynamic" and "tree" exposed by the Falcon API), we find that, when called twice on the same input, the Falcon sampler has a small but significant chance (on the order of once in a few thousand calls) of outputting two different lattice points with a very structured difference that immediately reveals the secret key. This is in contrast to other lattice Gaussian sampling algorithms, such as Peikert's sampler and Prest's hybrid sampler, which are stable with respect to small floating-point errors. Correctly generated Falcon signatures include a salt that should in principle prevent the sampler from ever being called on the same input twice. In that sense, our observation has little impact on the security of Falcon signatures per se (beyond echoing warnings about the dangers of repeated randomness). On the other hand, it is critical for derandomized variants of Falcon, which have been proposed for use in numerous settings. One can mention in particular identity-based encryption, SNARK-friendly signatures, and sublinear signature aggregation. For all these settings, small floating-point discrepancies have a chance of resulting in full private key exposure, even when using the slower, integer-based emulated floating-point arithmetic of Falcon's reference implementation.
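To see the failure mode in miniature (our cartoon, unrelated to Falcon's actual sampler), consider two algebraically identical evaluation orders for a sampler's center: floating-point non-associativity can push them to opposite sides of a rounding boundary, so the same coins yield different discrete outputs.

    # Two evaluation orders of the same exact-arithmetic quantity (loosely
    # mimicking "dynamic" vs "tree") land on different floats, and a
    # subsequent discretization step disagrees.
    c_dynamic = (0.1 + 0.2) + 0.2            # 0.5000000000000001
    c_tree    = 0.1 + (0.2 + 0.2)            # 0.5

    print(c_dynamic == c_tree)               # False: addition is not associative
    print(round(c_dynamic), round(c_tree))   # 1 vs 0: different discrete outputs

In Falcon such disagreements are rare (the abstract estimates once in a few thousand calls), but each one yields two lattice points whose structured difference exposes the key.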