Papers updated in last 31 days (Page 2 of 392 results)

Last updated:  2025-05-17
A Fast, Efficient, Platform-Adaptive, and AIS-20/31 Compliant PLL-Based True Random Number Generator on an SoC FPGA
Oğuz Yayla and Yunus Emre Yılmaz
Phase-locked loops (PLLs) embedded within field-programmable gate arrays (FPGAs) or system-on-chip FPGAs (SoCs) present a promising methodology for the generation of random numbers. Their widespread integration across these platforms, combined with their isolated operational characteristics and robust entropy generation, as substantiated by prior research, positions PLL-based true random number generators (PLL-TRNGs) as highly effective solutions for this purpose. The present study focuses on the implementation of PLL-TRNGs utilizing the ZC702 Rev1.1 Evaluation Board, which incorporates the Zynq-7020 SoC from Xilinx. For experimental validation, a configuration involving three such boards is employed. The parameters governing the PLL-TRNG are optimized through a backtracking algorithm. Additionally, a novel, platform-adaptive technique is proposed to enhance the rate of random data bit generation while preserving entropy characteristics. The system's performance is rigorously evaluated against the criteria established by the German Federal Office for Information Security (BSI) AIS-20/31 Tests, with a detailed account of the implementation process provided. Furthermore, the study demonstrates the minimal resource utilization of the PLL-TRNG design within an SoC, thereby underscoring its suitability for Internet-of-Things (IoT) applications, where logic resources are often highly constrained.
Last updated:  2025-05-17
Design and Implementation of a Fast, Platform-Adaptive, AIS-20/31 Compliant PLL-Based True Random Number Generator on a Zynq 7020 SoC FPGA
Oğuz Yayla and Yunus Emre Yılmaz
Phase-locked loops (PLLs) integrated within field-programmable gate arrays (FPGAs) or System-on-Chip FPGAs (SoCs) represent a promising approach for generating random numbers. Their widespread deployment, isolated functionality within these devices, and robust entropy, as demonstrated in prior studies, position PLL-based true random number generators (PLL-TRNGs) as highly viable solutions for this purpose. This study explicitly examines PLL-TRNG implementations using the ZC702 Rev1.1 evaluation board featuring the Zynq 7020 SoC from Xilinx, utilizing a configuration involving three such boards for experimental validation. Parameters governing the PLL-TRNG are optimized using a backtracking algorithm. Additionally, a novel methodology is proposed to enhance the rate of random data bit generation while preserving entropy characteristics. Performance metrics are rigorously evaluated against the criteria set by the German Federal Office for Information Security (BSI) AIS-20/31 Tests, accompanied by detailed descriptions of the implementation process.
Last updated:  2025-05-17
Leveled Homomorphic Encryption over Composite Groups
Mahdi Mahdavi, Ehsan Meamari, Emad Heydari Beni, and Maryam Sheikhi
Homomorphic encryption is a powerful tool that enables computation on encrypted data without requiring decryption. While many Fully Homomorphic Encryption schemes, supporting arbitrary computations on encrypted data, have been developed using lattice-based and AGCD-based approaches, progress in composite groups has been limited to Partial Homomorphic Encryption schemes, which only support either addition or multiplication. This paper introduces the first leveled homomorphic encryption schemes over composite groups, based on the factoring problem, that achieve both multiplicative and additive homomorphic properties. Additionally, a modified version of Rothblum’s transformation technique is developed to provide public-key variants of the symmetric schemes. Our novel design eliminates the need for the relinearization operation, which is common in LWE-based HE schemes, and removes the requirement for the circular security assumption. For applications where the encrypted traffic must be indistinguishable, a scrambled scheme is designed using a labeling technique. While the initial schemes offer an expanded message space, the introduction of the double-sized message technique enables the encryption of messages up to twice the size without increasing the ciphertext size. Implementation results show that our schemes significantly outperform existing solutions, particularly in multiplication operations, achieving speeds up to 1000 times faster than well-known schemes such as BFV, BGV, CKKS, and TFHE.
Last updated:  2025-05-17
One-Way Homomorphic Encryption: A Composite Group Approach
Mahdi Mahdavi and Helena Rifà-Pous
Homomorphic Encryption (HE) is a fundamental Privacy-Enhancing Technology (PET) that enables computations on encrypted data without decryption. Despite its utility, designing an efficient and secure HE scheme is highly complex, requiring sophisticated cryptographic techniques. This paper introduces a novel approach to achieving homomorphic properties—supporting either one addition or one multiplication—within composite groups. While the proposed technique exhibits one-wayness, it has good potential to serve as a foundational building block for constructing indistinguishable cryptosystems. This work contributes to the advancement of homomorphic encryption by providing a new perspective on secure computation within structured algebraic settings.
Last updated:  2025-05-17
Post-Quantum Privacy for Traceable Receipt-Free Encryption
Paola de Perthuis and Thomas Peters
Traceable Receipt-free Encryption (TREnc) has recently been introduced (Asiacrypt’22) as a verifiable public-key encryption primitive that allows randomizing ciphertexts in transit in order to remove any subliminal information, up to a public trace which prevents the malleability of the underlying plaintexts. This unique feature generically enables the construction of voting systems by allowing voters to encrypt their votes and trace whether a published ballot takes their choices into account, while preventing them from proving how they voted. While being a very promising primitive, the few existing TREnc mechanisms solely rely on discrete-logarithm-related assumptions, making them vulnerable to the well-known record-now/decrypt-later attack in anticipation of quantum computers. In this article, we address this limitation by building the first TREnc which can be safely used today until the advent of quantum adversaries. More precisely, based on the observation that security must hold at the time the primitive is used while only privacy should withstand into the post-quantum era, our solution relies on a mix of pre-quantum and post-quantum cryptography. As a first contribution, we generalize the original TREnc primitive, which is too restrictive to be easily compatible with built-in lattice-based semantically-secure encryption. Our more flexible model keeps all the ingredients generically implying receipt-free voting. Next, we design our construction with the following essential properties for trustworthy elections: (i) it is provably secure in the standard model; (ii) it relies on standard assumptions, namely Ring Learning With Errors (RLWE) coupled with pairing-based statistical zero-knowledge simulation-sound SXDH-based proofs; and (iii) it further enjoys a public-coin common reference string, removing the need for a trusted setup.
Last updated:  2025-05-17
A New Generalized Attack on RSA-like Cryptosystems
Michel Seck, Oumar Niang, Djiby Sow, Abderrahmane Nitaj, Mengce Zheng, and Maher Boudabra
Rivest, Shamir, and Adleman published the RSA cryptosystem in 1978, which has been widely used over the last four decades. The security of RSA is based on the difficulty of factoring large integers N = pq, where p and q are prime numbers. The public exponent e and the private exponent d are related by the equation ed ≡ 1 (mod (p-1)(q-1)). Recently, Cotan and Teseleanu (NordSec 2023) introduced a variant of RSA in which the public exponent e and the private exponent d satisfy a generalized key equation parameterized by a positive integer. In this paper, we study a more general form of this equation. We show that, given the public parameters, one can recover the private exponent and factor the modulus in polynomial time by combining continued fractions with Coppersmith's algorithm, which relies on lattice reduction techniques, under specific conditions on the parameters. Furthermore, we show that if the private exponent in such an RSA-like cryptosystem is either too small or too large, then the modulus can be factored in polynomial time. This attack applies to the standard RSA cryptosystem.
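To make the classical key equation concrete, here is a minimal Python sketch of the continued-fraction (Wiener-style) recovery step that attacks of this kind build on. The toy primes and the deliberately tiny private exponent are illustrative assumptions, not parameters from the paper, and the paper's generalized equation and Coppersmith step are not reproduced here.

```python
# Toy illustration of the continued-fraction step underlying such attacks:
# classical Wiener-style recovery of a small private exponent d from (N, e),
# using the standard RSA relation e*d = 1 + k*phi(N).
from math import isqrt

def convergents(num, den):
    """Yield the convergents h/k of the continued fraction of num/den."""
    cf = []
    while den:
        a = num // den
        cf.append(a)
        num, den = den, num - a * den
    h0, h1 = 1, cf[0]
    k0, k1 = 0, 1
    yield h1, k1
    for a in cf[1:]:
        h0, h1 = h1, a * h1 + h0
        k0, k1 = k1, a * k1 + k0
        yield h1, k1

def wiener_attack(e, N):
    """Try each convergent k/d of e/N; return d once phi(N) is consistent."""
    for k, d in convergents(e, N):
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k
        # p and q would be the roots of x^2 - (N - phi + 1)x + N = 0
        s = N - phi + 1
        disc = s * s - 4 * N
        if disc >= 0 and isqrt(disc) ** 2 == disc:
            return d
    return None

p, q = 1009, 2003                 # toy primes (illustrative only)
N, phi = p * q, (p - 1) * (q - 1)
d = 5                             # deliberately small private exponent
e = pow(d, -1, phi)               # modular inverse (Python 3.8+)
assert wiener_attack(e, N) == d
```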
Last updated:  2025-05-17
Packed Sumcheck over Fields of Small Characteristic with Application to Verifiable FHE
Yuanju Wei, Kaixuan Wang, Binwu Xiang, Xinxuan Zhang, Hailong Wang, Yi Deng, Xudong Zhu, and Li Lin
Verifiable computation over encrypted data is gaining increasing attention, and using SNARKs to provide proofs for FHE operations has emerged as a promising approach. However, the mismatch between FHE's typically small prime fields and SNARKs' larger prime fields creates challenges for verifiable FHE. In this work, we construct a packed sumcheck protocol specifically designed for small fields. This approach leverages folding and repetition techniques to maintain security without field expansion, with all operations performed on the base field. The packed sumcheck statement consists of multiplied multilinear polynomial statements, and the prover can complete the proof using only modular multiplications and modular additions over the base field. We further propose a proof system for bit-wise bootstrapping by integrating the packed sumcheck protocol with the Brakedown (CRYPTO 2023) and Binius (EUROCRYPT 2025) commitment schemes. Our construction exploits the highly repetitive structure of bit-wise bootstrapping by decomposing the procedure into a sequence of vector operations, enabling the design of an efficient proof protocol through the verification of these vector relations. The resulting system has linear prover time while performing all computations over the base field.
Last updated:  2025-05-17
Context-Dependent Threshold Decryption and its Applications
Dan Boneh, Benedikt Bünz, Kartik Nayak, Lior Rotem, and Victor Shoup
We initiate the study of high-threshold public-key decryption, along with an enhanced security feature called context-dependent decryption. Our study includes definitions, constructions, security proofs, and applications. The notion of high-threshold decryption has received almost no attention in the literature. The enhanced security feature of context-dependent decryption is entirely new, and plays an important role in many natural applications of threshold decryption.
Last updated:  2025-05-17
Optimistic Asynchronous Dynamic-committee Proactive Secret Sharing
Bin Hu, Jianwei Liu, Zhenliang Lu, Qiang Tang, Zhuolun Xiang, and Zongyang Zhang
Dynamic-committee Proactive Secret Sharing (DPSS) has gained increased attention for its ability to dynamically update shareholder committees and refresh secret shares, even under adversaries that gradually corrupt all nodes. However, existing state-of-the-art asynchronous DPSS protocols incur high message and communication complexity in the committee size and the security parameter. In this paper, under the trusted setup assumption, we propose the first asynchronous DPSS protocol with improved message complexity in all scenarios. Additionally, our protocol reduces communication complexity both in the optimistic case, where all nodes are honest and the network is synchronous, and in the worst case. Without a trusted setup, our protocol still improves the message and communication complexity in the optimistic case, and in the worst case it preserves the state-of-the-art message and communication complexities. Besides, our protocol supports batch amortization and accommodates high thresholds. For committee sizes of 4 to 400, the estimated concrete communication cost of our DPSS is 19--100x (resp., 8--14x) smaller in the optimistic case (resp., worst case) compared to the state-of-the-art. Experiments in AWS show that our DPSS achieves a latency of 1.9--8 seconds for committee sizes from 4 to 64. Single-machine benchmarks reveal a (computational) runtime reduction of up to 44%.
Last updated:  2025-05-17
Papercraft: Lattice-based Verifiable Delay Function Implemented
Michał Osadnik, Darya Kaviani, Valerio Cini, Russell W. F. Lai, and Giulio Malavolta
A verifiable delay function (VDF) requires a specified number of sequential steps to compute, yet the validity of its output can be verified efficiently, much faster than recomputing the function from scratch. VDFs are a versatile cryptographic tool, with many industrial applications, such as blockchain consensus protocols, lotteries and verifiable randomness. Unfortunately, without exception, all known practical VDF constructions are broken by quantum algorithms. In this work, we investigate the practicality of VDFs with plausible post-quantum security. We propose Papercraft, a working implementation of a VDF based entirely on lattice techniques and thus plausibly post-quantum secure. Our VDF is based on new observations on lattice-based succinct argument systems with many low-level optimisations, yielding the first lattice-based VDF that is implementable on today's hardware. As an example, our Papercraft implementation can verify a computation of almost 6 minutes in just 7 seconds. Overall, our work demonstrates that lattice-based VDFs are not just a theoretical construct, paving the way for their practical deployment.
Last updated:  2025-05-17
Blockcipher-Based Key Derivation without PRP/PRF Switching
Fabrice Benhamouda, Shai Halevi, Panos Kampanakis, and Hugo Krawczyk
We examine the use of blockcipher-based key derivation beyond the birthday bound, arguing that the analysis step of PRP/PRF switching can be eliminated in many cases. To support this, we consider a modified "ideal model" for keying cryptographic applications in the multi-instance setting, where keys are chosen to be random but distinct, rather than completely independent. Our analysis shows that typical cryptographic applications remain secure in this model. One consequence is that it is typically safe to derive close to the maximal number of keys from an n-bit blockcipher in counter mode. In particular, considering the practice of nonce-derived keys for authenticated encryption, our results imply that modes such as XAES-256-GCM that use CMAC-based key derivation are safe to use with many more distinct nonces than the birthday bound would otherwise allow.
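As an illustration of the counter-mode derivation pattern discussed above, here is a minimal Python sketch that derives per-index subkeys by encrypting counter blocks with AES-256. It assumes the pyca/cryptography package and is only the generic pattern, not the CMAC-based derivation used by XAES-256-GCM.

```python
# Minimal sketch of blockcipher-based key derivation in counter mode:
# the i-th derived key is the concatenation of AES-256 encryptions of
# counter blocks encoding (derivation index, block counter).
# Assumes the pyca/cryptography package (pip install cryptography).
import os
import struct
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def derive_key(master_key: bytes, index: int, out_len: int = 32) -> bytes:
    """Derive the index-th subkey from master_key via AES in counter mode."""
    enc = Cipher(algorithms.AES(master_key), modes.ECB()).encryptor()
    out, block = b"", 0
    while len(out) < out_len:
        counter_block = struct.pack(">QQ", index, block)  # 16-byte input
        out += enc.update(counter_block)
        block += 1
    return out[:out_len]

master = os.urandom(32)
k0 = derive_key(master, 0)
k1 = derive_key(master, 1)
assert k0 != k1 and len(k0) == 32
```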
Last updated:  2025-05-17
Towards Improving Throughput and Scalability of DAG-based BFT SMR
Nibesh Shrestha and Aniket Kate
Directed Acyclic Graph (DAG)-based BFT consensus protocols often suffer from limited throughput and scalability due to bandwidth-intensive data replication to all participants. However, it is sufficient to replicate data to a smaller sub-committee of parties that holds an honest majority with high probability. In this work, we introduce tribe-assisted reliable broadcast, a novel primitive that ensures reliable broadcast (RBC) properties within a smaller honest-majority sub-committee—referred to as a clan—drawn from the entire network, called the tribe. Leveraging this primitive, we develop two efficient DAG-based BFT consensus protocols. First, we present a single-clan protocol, in which a single clan is elected from the tribe, and data is disseminated exclusively to this designated clan using tribe-assisted RBC. We then extend this design to a multi-clan setting, where multiple clans are elected and data is distributed within each respective clan via the same mechanism. Experimental results demonstrate that both protocols offer substantial improvements in throughput and latency over existing DAG-based BFT protocols, even at moderately large scales.
Last updated:  2025-05-16
Lower Bounds for Garbled Circuits from Shannon-Type Information Inequalities
Jake Januzelli, Mike Rosulek, and Lawrence Roy
We establish new lower bounds on the size of practical garbled circuits, which hold against any scheme satisfying the following simple properties: (1) Its security is based on symmetric-key cryptography only. More formally, security holds in Minicrypt, a model in which a random oracle is the only available source of cryptography. (2) The evaluation algorithm makes non-adaptive queries to the random oracle. (3) The evaluation algorithm "knows" which of its oracle queries are made by which other input combinations. These restrictions are reasonable for garbling single gates. In particular, unlike prior attempts at lower bounds, we make no assumptions about the internal behavior of the garbling algorithm --- i.e., how it uses random oracle outputs and wire labels to compute the garbled gate, etc. We prove separate lower bounds depending on whether the scheme uses the free-XOR technique (Kolesnikov & Schneider, ICALP 2008). In the free-XOR case, we prove that a garbled AND-gate requires 1.5λ bits (for security parameter λ); thus, the garbling scheme of Rosulek & Roy (Crypto 2022) is optimal. In the non-free-XOR case, we prove that a garbled AND-gate requires 2λ bits and a garbled XOR-gate requires λ bits; thus, the garbling scheme of Gueron, Lindell, Nof, and Pinkas (CCS 2015) is optimal. We prove our lower bounds using tools from information theory. A garbling scheme can be characterized as a joint distribution over various quantities: wire labels, garbled gate information, random oracle responses. We show that different properties of a garbling scheme imply certain Shannon-type information inequalities about this distribution. We then use an automated theorem prover for Shannon-type inequalities to prove that our inequalities imply lower bounds on the entropy---hence, size---of the garbled gate information.
Last updated:  2025-05-16
Somewhat Homomorphic Encryption from Linear Homomorphism and Sparse LPN
Henry Corrigan-Gibbs, Alexandra Henzinger, Yael Tauman Kalai, and Vinod Vaikuntanathan
We construct somewhat homomorphic encryption from the sparse learning-parities-with-noise problem, along with an assumption that implies linearly homomorphic encryption (e.g., the decisional Diffie-Hellman or decisional composite residuosity assumptions). Our resulting schemes support an a-priori bounded number of homomorphic operations: a bounded number of multiplications followed by poly(λ) additions, where λ is a security parameter. These schemes have compact ciphertexts: before and after homomorphic evaluation, the bit length of each ciphertext is a fixed polynomial in the security parameter λ, independent of the number of homomorphic operations that the scheme supports. This gives the first constructions of somewhat homomorphic encryption that can evaluate the class of bounded-degree polynomials without relying on lattice assumptions or bilinear maps. Our new encryption schemes are conceptually simple: much as in Gentry, Sahai, and Waters’ fully homomorphic encryption scheme, ciphertexts in our scheme are matrices, homomorphic addition is matrix addition, and homomorphic multiplication is matrix multiplication. Moreover, when encrypting many messages at once and performing many homomorphic evaluations at once, the bit length of the ciphertexts in (some of) our schemes can be made arbitrarily close to the bit length of the plaintexts. The main limitation of our schemes is that they require a large evaluation key, whose size scales with the complexity of the homomorphic computation performed, though this key can be re-used across any polynomial number of encryptions and evaluations. Our construction builds on recent work of Dao, Ishai, Jain, and Lin, who construct a homomorphic secret-sharing scheme from the sparse-LPN assumption.
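The matrix view mentioned above can be illustrated with a small toy in the approximate-eigenvector spirit of GSW. The sketch below performs no actual encryption (and has no security); it only shows why matrix addition and multiplication track plaintext addition and multiplication with bounded noise growth.

```python
# Toy illustration (no hiding, no security!) of the matrix arithmetic the
# abstract describes: a "ciphertext" for message m is simply C = m*I + E with
# small noise E, so matrix addition adds messages and matrix multiplication
# multiplies them, with noise growth.  The actual scheme hides m using
# sparse-LPN ciphertexts; none of that is modeled here.
import random

DIM, NOISE = 4, 3                       # illustrative dimension and noise bound

def noisy_identity(m):
    """Return m*I + E for a random matrix E with entries in [-NOISE, NOISE]."""
    return [[m * (i == j) + random.randint(-NOISE, NOISE)
             for j in range(DIM)] for i in range(DIM)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(DIM)) for j in range(DIM)]
            for i in range(DIM)]

def decode(C):
    """Read the (noisy) message back from the average of the diagonal."""
    return round(sum(C[i][i] for i in range(DIM)) / DIM)

m1, m2 = 1000, 2000
C1, C2 = noisy_identity(m1), noisy_identity(m2)
assert abs(decode(mat_add(C1, C2)) - (m1 + m2)) <= 2 * NOISE
assert abs(decode(mat_mul(C1, C2)) - m1 * m2) <= NOISE * (m1 + m2) + DIM * NOISE**2 + 1
```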
Last updated:  2025-05-16
Improved Cryptanalysis of an RSA Variant Based on Cubic Pell Curve
Mohammed Rahmani and Abderrahmane Nitaj
In 2024, Nitaj and Seck proposed a variant of the RSA cryptosystem based on the cubic Pell curve, in which the modulus has a special form and the public and private keys satisfy a modified key equation. They showed that the modulus can be factored when the private key is below a certain bound in one parameter regime, but their attack does not extend to the complementary regime. In this paper, we propose a cryptanalysis of this scheme in that remaining regime, and give an explicit bound on the private key that makes the scheme insecure. The method is based on Coppersmith's method and lattice reduction.
Last updated:  2025-05-16
Decentralized Multi-Authority Attribute-Based Inner-Product Functional Encryption: Noisy and Evasive Constructions from Lattices
Jiaqi Liu, Yan Wang, and Fang-Wei Fu
We initiate the study of multi-authority attribute-based functional encryption for noisy inner-product functionality, and propose two new primitives: (1) multi-authority attribute-based (noisy) inner-product functional encryption (MA-AB(N)IPFE), and (2) multi-authority attribute-based evasive inner-product functional encryption (MA-ABevIPFE). The MA-AB(N)IPFE primitive generalizes the existing multi-authority attribute-based inner-product functional encryption schemes by Agrawal et al. [AGT21], by enabling approximate inner-product computation under decentralized attribute-based control. This newly proposed notion combines the approximate function evaluation of noisy inner-product functional encryption (IPFE) with the decentralized key-distribution structure of multi-authority attribute-based encryption. To better capture noisy functionalities within a flexible security framework, we formulate the MA-ABevIPFE primitive under a generic-model view, inspired by the evasive IPFE framework by Hsieh et al. [HLL24]. It shifts the focus from pairwise ciphertext indistinguishability to a more relaxed pseudorandomness-based game. To support the above notions, we introduce two variants of lattice-based computational assumptions: - The evasive IPFE assumption (evIPFE): it generalizes the assumption introduced in [HLL24] to the multi-authority setting and admits a reduction from the evasive LWE assumption proposed by Waters et al. [WWW22]; - The indistinguishability-based evasive IPFE assumption (IND-evIPFE): it is an indistinguishability-based variant of the evasive IPFE assumption designed to capture the stronger security guarantees required by our MA-AB(N)IPFE scheme. We present concrete lattice-based constructions for both primitives supporting subset policies, building upon the framework of [WWW22]. Our schemes are proven to be statically secure in the random oracle model under the standard LWE assumption and the newly introduced assumptions. Additionally, we demonstrate that our MA-AB(N)IPFE scheme can be transformed, via standard modulus switching, into a noiseless MA-ABIPFE scheme that supports exact inner-product functionality consistent with the MA-IPFE syntax in [AGT21,DP23]. This yields the first lattice-based construction of such a primitive. All our schemes support arbitrary polynomial-size attribute policies and are secure in the random oracle model under lattice assumptions with a sub-exponential modulus-to-noise ratio, making them practical candidates for noise-tolerant, fine-grained access control in multi-authority settings.
Last updated:  2025-05-16
KZH-Fold: Accountable Voting from Sublinear Accumulation
George Kadianakis, Arantxa Zapico, Hossein Hafezi, and Benedikt Bünz
Accumulation schemes are powerful primitives that enable distributed and incremental verifiable computation with less overhead than recursive SNARKs. However, most existing schemes with constant-size accumulation verifiers suffer from linear-sized accumulators and deciders, leading to unsuitable linear-sized proofs in distributed settings such as accountable voting protocols. Our contributions are as follows: 1) We introduce KZH, a novel multilinear polynomial commitment scheme (PCS) with sublinear opening, and KZH-fold, a polynomial accumulation (PA) scheme in which the verifier only performs group scalar multiplications and both the accumulator size and decider time are sublinear. Our scheme generalizes to trade off the accumulator and decider complexity against the verifier complexity. 2) As an orthogonal contribution to KZH-fold, we build an IVC (PCD) scheme for R1CS via Spartan+PA, which, when instantiated with KZH-fold (i.e., Spartan+KZH-fold), results in a sublinear proof and decider. With the recipe of Spartan+PA, we build non-uniform IVC and non-uniform PCD. Our non-uniform PCD is the first approach in which the prover's computation and communication at each step grow sublinearly with the combined circuit size of all instructions. This approach can be instantiated with any PA and does not depend on KZH-fold. 3) We demonstrate the power of Spartan+KZH-fold by implementing an accountable voting scheme using a novel signature aggregation protocol supporting millions of participants, significantly reducing communication overhead and verifier time compared to BLS-based aggregation. We implemented and benchmarked our protocols; Spartan+KZH-fold achieves a 2000x reduction in communication and a 50x improvement in decider time over Nova when proving 2000 Poseidon hashes, at the cost of 3x the prover time.
Last updated:  2025-05-16
Shaking up authenticated encryption
Joan Daemen, Seth Hoffert, Silvia Mella, Gilles Van Assche, and Ronny Van Keer
Authenticated encryption (AE) is a cryptographic mechanism that allows communicating parties to protect the confidentiality and integrity of messages exchanged over a public channel, provided they share a secret key. In this work, we present new AE schemes leveraging the SHA-3 standard functions SHAKE128 and SHAKE256, offering 128 and 256 bits of security strength, respectively, and their “Turbo” counterparts. They support session-based communication, where a ciphertext authenticates the sequence of messages since the start of the session. The chaining in the session allows decryption in segments, avoiding the need to buffer the entire deciphered cryptogram between decryption and validation. And, thanks to the collision resistance of (Turbo)SHAKE, they provide so-called CMT-4 committing security, meaning that they provide strong guarantees that a ciphertext uniquely binds to the key, plaintext and associated data. The AE schemes we propose have a unique combination of advantages. The most important are that 1) their security is based on the security claim of SHAKE, that has received a large amount of public scrutiny, that 2) they make use of the standard Keccak-p permutation that not only receives more and more dedicated hardware support, but also allows competitive software-only implementations thanks to the TurboSHAKE instances, and that 3) they do not suffer from a 64-bit birthday bound like most AES-based schemes. Of independent interest, we introduce the deck cipher as the stateful counterpart of the deck function and the duplex cipher generalizing keyed duplex and harmonize their security notions. Finally, we provide an elegant solution for multi-layer domain separation.
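To illustrate the general idea of building authenticated encryption on an extendable-output function, here is a toy Python sketch that uses hashlib's SHAKE128 to derive a keystream and a tag bound to the session history. This is an illustrative pattern only, not the duplex/deck-based schemes proposed in the paper, and it carries none of their security claims.

```python
# Toy sketch (not the paper's schemes) of SHAKE-based session AE: SHAKE128
# derives a keystream and a tag bound to the key, nonce, session history,
# and ciphertext.
import hashlib
import os

def _shake(*parts, n):
    h = hashlib.shake_128()
    for p in parts:
        h.update(len(p).to_bytes(8, "big") + p)   # length-prefixed encoding
    return h.digest(n)

def seal(key, nonce, history, plaintext, tag_len=16):
    ks = _shake(key, nonce, history, b"stream", n=len(plaintext))
    ct = bytes(a ^ b for a, b in zip(plaintext, ks))
    tag = _shake(key, nonce, history, b"tag", ct, n=tag_len)
    return ct, tag

def open_(key, nonce, history, ct, tag, tag_len=16):
    if _shake(key, nonce, history, b"tag", ct, n=tag_len) != tag:
        raise ValueError("authentication failed")
    ks = _shake(key, nonce, history, b"stream", n=len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

key, nonce = os.urandom(16), os.urandom(16)
history = b""                       # grows with each message in a session
ct, tag = seal(key, nonce, history, b"first segment")
assert open_(key, nonce, history, ct, tag) == b"first segment"
history += ct                       # chain: later tags authenticate the prefix
```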
Last updated:  2025-05-16
Asymptotically Optimal Early Termination for Dishonest Majority Broadcast
Giovanni Deligios, Ivana Klasovita, and Chen-Da Liu-Zhang
Deterministic broadcast protocols among n parties tolerating t corruptions require a number of rounds that grows with f, the actual number of corruptions in an execution of the protocol. We provide the first protocol that is optimally resilient, adaptively secure, and asymptotically matches this lower bound. By contrast, the best known algorithm in this regime, by Loss and Nielsen (EUROCRYPT'24), requires asymptotically more rounds. Our main technical tool is a generalization of the notion of polarizer introduced by Loss and Nielsen, which allows parties to obtain transferable cryptographic evidence of missing messages with fewer rounds of interaction.
Last updated:  2025-05-16
Adversary Resilient Learned Bloom Filters
Ghada Almashaqbeh, Allison Bishop, and Hayder Tirmazi
A learned Bloom filter (LBF) combines a classical Bloom filter (CBF) with a learning model to reduce the amount of memory needed to represent a given set while achieving a target false positive rate (FPR). Provable security against adaptive adversaries that deliberately attempt to increase the FPR has been studied for CBFs. However, achieving adaptive security for LBFs is an open problem. In this paper, we close this gap and show how to achieve adaptive security for LBFs. In particular, we define several adaptive security notions capturing varying degrees of adversarial control, including full and partial adaptivity, in addition to LBF extensions of existing adversarial models for CBFs, including the Always-Bet and Bet-or-Pass notions. We propose two secure LBF constructions and formally prove their security under these models, assuming the existence of one-way functions. Based on our analysis and use case evaluations, our constructions achieve strong security guarantees while maintaining competitive FPR and memory overhead.
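For readers unfamiliar with the base construction, here is a minimal (non-adversarially-secure) learned Bloom filter sketch in Python: a score model answers membership above a threshold, and a small classical Bloom filter backs up the model's false negatives. The keyed, adaptively secure mechanisms of the paper's constructions are not modeled.

```python
# Minimal learned Bloom filter sketch: a classical Bloom filter backs up the
# model's false negatives, so the combined structure has no false negatives.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

class LearnedBloomFilter:
    def __init__(self, items, score, threshold):
        self.score, self.threshold = score, threshold
        self.backup = BloomFilter()
        for x in items:                    # back up the model's false negatives
            if score(x) < threshold:
                self.backup.add(x)

    def __contains__(self, x):
        return self.score(x) >= self.threshold or x in self.backup

# Toy "model": pretend even-length strings are likely members.
score = lambda x: 1.0 if len(x) % 2 == 0 else 0.0
S = [b"aa", b"abc", b"dddd"]
lbf = LearnedBloomFilter(S, score, threshold=0.5)
assert all(x in lbf for x in S)            # no false negatives by construction
```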
Last updated:  2025-05-16
Improvement of Side-Channel Attacks on Mitaka
Vladimir Sarde and Nicolas Debande
Mitaka is a variant of Falcon, which is one of the three post-quantum signature schemes selected by NIST for standardization. Introduced in 2021, Mitaka offers a simpler, parallelizable, and maskable version of Falcon. Our article focuses on its physical security, and our results are threefold. Firstly, we enhance a known profiled side-channel attack on an unprotected Mitaka implementation by a factor of 512. Secondly, we analyze the masked implementation of Mitaka described in the reference article, which incorporates a different sampler and a protective gadget. We expose the first side-channel flaw in this sampler. This flaw enables breaking the masked implementation with a first-order side-channel attack. In this scenario, the key can be recovered using only three times more traces compared to the attack on the unprotected implementation. Finally, we discuss and recommend new countermeasures to mitigate these attacks.
Last updated:  2025-05-16
Related-Key Cryptanalysis of FUTURE
Amit Jana, Smita Das, Ayantika Chatterjee, Debdeep Mukhopadhyay, and Yu Sasaki
At Africacrypt 2022, Gupta et al. introduced FUTURE, a 64-bit lightweight block cipher based on an MDS matrix and designed in an SPN structure, with a focus on achieving single-cycle encryption and low implementation cost, especially in unrolled architectures. While the designers evaluated its security under various attack models, they did not consider related-key cryptanalysis. In this work, we address this gap by analyzing the security of FUTURE in the related-key setting using techniques based on Mixed Integer Linear Programming (MILP). We first propose a simplified and generalizable approach for applying MILP to model any MDS or near-MDS-based cipher that follows the substitution-permutation paradigm. Using our MILP framework, we construct an 8-round related-key distinguisher on FUTURE that requires only negligible memory. We further identify a full-round (i.e., 10-round) related-key boomerang distinguisher, enabling a distinguishing attack on the full cipher. In addition, we develop a full-round key-recovery attack on FUTURE in the related-key setting. Although all known single-key attacks remain impractical, our results demonstrate a full-round cryptanalysis of FUTURE in the related-key setting, thereby challenging its claimed security guarantees.
Last updated:  2025-05-16
Finally! A Compact Lattice-Based Threshold Signature
Rafael del Pino and Guilhem Niot
Threshold signatures improve upon digital signatures by splitting the trust and robustness among multiple parties. In a (T, N) threshold signature, any set of T parties can produce a signature, but no set of fewer than T users can do so. Many such constructions are now available in the pre-quantum setting, but post-quantum threshold schemes are still running heavy, with the state-of-the-art boasting signature sizes that are still an order of magnitude larger than post-quantum digital signatures. We propose a novel, very efficient threshold signature scheme, with a signature size close to that of a single Dilithium signature for any threshold T of at most 8 users. Our construction reduces to well-studied problems (MLWE and SelfTargetMSIS) and does not need any heavy machinery, essentially consisting of just T parallel executions of the Dilithium signature scheme. Though the resulting scheme is remarkably simple, many technical difficulties, such as sharing a secret in small shares, or simulating rejecting transcripts, have kept such an efficient threshold signature out of reach until now.
Last updated:  2025-05-16
Simple and Efficient Lattice Threshold Signatures with Identifiable Aborts
Rafael del Pino, Thomas Espitau, Guilhem Niot, and Thomas Prest
We introduce simple yet efficient lattice-based threshold signatures with identifiable aborts, secure under the MLWE assumption. Central to our construction are novel Distributed Key Generation with Short Shares (sDKG) protocols over lattices, ensuring short shares, small reconstruction coefficients, and linear evaluation of honest shares. This uniquely realizes the "threshold designer's dream": signature shares double as valid signatures under the corresponding secret key shares. With two concrete instantiations (ramp and replicated secret sharings), our schemes match Threshold Raccoon (del Pino et al. EUROCRYPT 2024)’s compact ~10kB size. Further, we unveil 'Death Star Detection', a new algorithm that enhances identifiable aborts by efficiently spotting short vector adversarial correlations, of interest beyond threshold signatures.
Last updated:  2025-05-16
From List-Decodability to Proximity Gaps
Yiwen Gao, Dongliang Cai, Yang Xu, and Haibin Kan
Proximity testing for linear codes is a fundamental problem in coding theory with critical applications in cryptographic protocols, blockchain, and distributed storage systems. This work addresses proximity gaps for linear codes, a crucial aspect for efficiently verifying whether a batch of codewords is close to a given code. We present a general framework for deriving proximity gaps from the list-decodability properties of the underlying linear code. Our main result shows that list-decodability implies a proximity gap: if a batch of words contains one that is far from the code, then, except with bounded probability, a random combination of the batch remains far from the code. This result also establishes a form of (mutual) correlated agreement for linear codes, which can be used to strengthen soundness analyses in protocols that rely on proximity testing, thereby reducing query complexity and enabling practical instantiations over smaller finite fields. In particular, we apply our main result to randomly punctured Reed–Solomon codes and folded Reed–Solomon codes—both of which are known to achieve list-decodability up to capacity—and derive linear proximity gaps for these families under the Johnson bound.
Last updated:  2025-05-16
Efficient Permutation Correlations and Batched Random Access for Two-Party Computation
Stanislav Peceny, Srinivasan Raghuraman, Peter Rindal, and Harshal Shah
In this work we formalize the notion of a two-party permutation correlation: for a random permutation π of n elements, the parties hold random vectors that are correlated through π. This correlation can be viewed as an abstraction and generalization of the Chase et al. (Asiacrypt 2020) share translation protocol. We give a systematization of knowledge for how such a permutation correlation can be derandomized to allow the parties to perform a wide range of oblivious permutations of secret-shared data. This systematization immediately enables the translation of various popular honest-majority protocols to be efficiently instantiated in the two-party setting, e.g. collaborative filtering, sorting, database joins, graph algorithms, and many more. We give two novel protocols for efficiently generating a random permutation correlation. The first uses MPC-friendly PRFs to generate a correlation of n elements, each of ℓ bits, in only 3 rounds including setup, with low OT, time, and communication cost. Similar asymptotics previously required relatively expensive public-key cryptography, e.g. Paillier or LWE. Our protocol implementation requires just 7 seconds, a roughly 40x improvement over the LWE solution of Juvekar et al. (CCS 2018). The second protocol is based on pseudo-random correlation generators and achieves communication and OT overhead that is sublinear in the string length ℓ. The overhead of the latter protocol has larger hidden constants, and it is therefore more efficient only when long strings are permuted, e.g. in graph algorithms. Finally, we present a suite of highly efficient protocols based on permutations for performing various batched random access operations. These include the ability to extract a hidden subset of a secret-shared list. More generally, we give ORAM-like protocols for obliviously reading and writing from a list in a batched manner. We argue that this suite of batched random access protocols should be a first class primitive in the MPC practitioner's toolbox.
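A small Python sketch of how a random permutation correlation can be derandomized to permute additively secret-shared data, in the spirit of the share-translation idea the paper generalizes. The exact correlation format and message flow below are illustrative assumptions.

```python
# Illustrative sketch of derandomizing a permutation correlation to permute
# additively secret-shared data.  All arithmetic is mod a public modulus.
import random

Q, N = 2**61 - 1, 8                      # modulus and vector length (illustrative)
rand_vec = lambda: [random.randrange(Q) for _ in range(N)]
apply_perm = lambda pi, v: [v[pi[i]] for i in range(N)]

# --- correlation setup (e.g. produced by a dealer or an OT-based protocol) ---
pi = list(range(N)); random.shuffle(pi)  # P0's random permutation
a, b = rand_vec(), rand_vec()            # P1's random vectors
delta = [(pa - bb) % Q for pa, bb in zip(apply_perm(pi, a), b)]  # P0's vector

# --- online phase: permute shares of x, where x = x0 + x1 (mod Q) ---
x = rand_vec()
x0 = rand_vec()
x1 = [(xi - x0i) % Q for xi, x0i in zip(x, x0)]

z = [(x1i - ai) % Q for x1i, ai in zip(x1, a)]           # P1 -> P0 (one message)
y0 = [(p0 + pz + d) % Q for p0, pz, d in
      zip(apply_perm(pi, x0), apply_perm(pi, z), delta)]  # P0's new share
y1 = b                                                    # P1's new share

assert [(u + v) % Q for u, v in zip(y0, y1)] == apply_perm(pi, x)
```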
Last updated:  2025-05-16
One for All, All for One: Universal semi-agnostic quantum circuit for solving (Standard) Abelian Hidden Subgroup Problems
Michał Wroński, Łukasz Dzierzkowski, Mateusz Leśniak, and Ewa Syta
We introduce a unified approach to quantum cryptanalysis that reduces all standard abelian hidden subgroup problems (SAHSPs), including integer factorization, discrete logarithm in finite fields (DLP), and discrete logarithm on elliptic curves, to a single core problem: the elliptic curve discrete logarithm problem (ECDLP). This reduction forms the foundation of a framework for quantum cryptanalysis based on ECDLP. At the core of this framework is a semi-agnostic quantum algorithm for solving ECDLP, which requires only partial knowledge of the problem's group structure. By operating on singular curves, we can translate problems such as integer factorization and finite field DLP into the ECDLP. Leveraging the semi-agnostic algorithm for ECDLP, we propose a universal, reconfigurable quantum circuit design that can solve any SAHSP instance by executing the ECDLP solver with appropriate curve and field parameters, without requiring problem-specific customization. We prove that solving ECDLP in this model is sufficient to solve all instances of SAHSPs, establishing ECDLP as a complete problem for this class. These results unify the structure of Shor's algorithm across all SAHSPs, eliminating the need for custom quantum circuits for each problem. This demonstrates the practical feasibility of universal quantum cryptanalysis and underscores the urgency of transitioning to cryptographic systems that remain secure against quantum-capable adversaries.
Last updated:  2025-05-16
Signatures with Tight Adaptive Corruptions from Search Assumptions
Keitaro Hashimoto, Wakaha Ogata, and Yusuke Sakai
We construct the first tightly secure signature schemes in the multi-user setting with adaptive corruptions from static search assumptions, such as the classical discrete logarithm, RSA, factoring, or post-quantum group action discrete logarithm assumptions. In contrast to our schemes, the previous tightly secure schemes are based on decisional assumptions (e.g., (group action) DDH) or interactive search assumptions (e.g., one-more CDH). The security of our schemes is independent of the number of users, signing queries, and random oracle queries, and forging our signatures is as hard as solving the underlying static search problems. Our signature schemes are based on an identification scheme with multiple secret keys per public key and "second-key recovery resistance," the difficulty of finding another secret key for a given public and secret key pair (e.g., Okamoto identification (CRYPTO'92) and Parallel-OR identification (CRYPTO'94)). These properties allow a reduction to solve a search problem while answering signing and corruption queries for all users in the signature security game. To convert such an identification scheme into a signature scheme tightly, we employ the randomized Fischlin transformation introduced by Kondi and shelat (Asiacrypt 2022), which provides straight-line extraction. Klooss et al. (Asiacrypt 2024) showed that the randomized Fischlin transformation satisfies the zero-knowledge property in the programmable ROM if an exponential-size challenge space is used. This fact intuitively implies that the transformation with a large challenge space guarantees the tight security of our signature scheme in the programmable random oracle model, but we successfully prove its tight security in the non-programmable random oracle model without enlarging the challenge size. Also, as a side contribution, we reconsider the zero-knowledge property of the randomized Fischlin transformation, and show that the transformation with a polynomial-size challenge space has zero-knowledge if the underlying Sigma protocol satisfies certain properties.
Last updated:  2025-05-16
Delegated PSI from Homomorphic Encryptions
Sicheng Wei and Jingwei Hu
This paper presents an efficient protocol for private set intersection in a setting with multiple set owners and a semi-honest cloud server. The core idea is to reduce the intersection computation to secure operations over Bloom filters, enabling both scalability and efficiency. By leveraging this transformation, our protocols achieve strong privacy guarantees while minimizing computation and communication overhead.
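A plaintext Python sketch of the Bloom-filter reduction underlying such protocols: owners encode their sets as Bloom filters, the server combines them, and intersection candidates are recovered by membership tests. In the actual protocol these filter operations run under homomorphic encryption at the cloud, which (together with the resulting privacy guarantees) is omitted here.

```python
# Plaintext sketch of reducing set intersection to Bloom-filter operations.
# Combining the owners' filters bitwise and testing membership recovers the
# intersection up to Bloom-filter false positives; the homomorphic layer of
# the actual protocol is not modeled.
import hashlib

M, K = 4096, 5                                   # filter size and hash count

def positions(item: bytes):
    return [int.from_bytes(hashlib.sha256(bytes([i]) + item).digest()[:8],
                           "big") % M for i in range(K)]

def bloom(items):
    bits = [0] * M
    for x in items:
        for p in positions(x):
            bits[p] = 1
    return bits

owner1 = {b"alice", b"bob", b"carol"}
owner2 = {b"bob", b"carol", b"dave"}

combined = [u & v for u, v in zip(bloom(owner1), bloom(owner2))]  # server side
candidates = {x for x in owner1 if all(combined[p] for p in positions(x))}
assert candidates == owner1 & owner2   # up to Bloom-filter false positives
```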
Last updated:  2025-05-16
Side Channel Analysis in Homomorphic Encryption
Baraq Ghaleb and William J Buchanan
Homomorphic encryption provides many opportunities for privacy-aware processing, including methods related to machine learning. Many of our existing cryptographic methods have been shown in the past to be susceptible to side channel attacks. With these, the implementation of the cryptographic methods can reveal information about the private keys used, the result, or even the original plaintext. An example of this is the processing of the RSA exponent using the Montgomery method, where 0 and 1 bits differ in their processing time during modular exponentiation. With FHE, we typically use lattice methods, which can have particular problems in their implementation in relation to side channel leakage. This paper aims to outline a range of weaknesses within FHE implementations as related to side channel analysis. It outlines a categorization for side-channel analysis, some case studies, and mitigation strategies.
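The RSA timing example mentioned above can be made concrete with a few lines of Python: naive square-and-multiply performs an extra multiplication exactly when an exponent bit is 1, so an operation-count trace (a stand-in for execution time) reveals the exponent. Constant-time ladders avoid this branch; the exponent below is a made-up toy value.

```python
# Minimal illustration of exponent-dependent control flow in naive
# square-and-multiply: the per-bit operation count traces out the exponent.
def modexp_with_trace(base, exponent, modulus):
    result, trace = 1, []
    for bit in bin(exponent)[2:]:                  # most-significant bit first
        result = (result * result) % modulus       # square (always)
        ops = 1
        if bit == "1":
            result = (result * base) % modulus     # multiply (only if bit = 1)
            ops = 2
        trace.append(ops)
    return result, trace

d = 0b101101                                # toy "private exponent"
res, trace = modexp_with_trace(7, d, 1000003)
assert res == pow(7, d, 1000003)
recovered = int("".join("1" if t == 2 else "0" for t in trace), 2)
assert recovered == d                       # the trace reveals every bit of d
```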
Last updated:  2025-05-16
FLUENT: A Tool for Efficient Mixed-Protocol Semi-Private Function Evaluation
Daniel Günther, Joachim Schmidt, Thomas Schneider, and Hossein Yalame
In modern business-to-customer interactions, handling private or confidential data is essential. Private Function Evaluation (PFE) protocols ensure the privacy of both the customers' input data and the business's function evaluated on it, which is often sensitive intellectual property (IP). However, fully hiding the function in PFE results in high performance overhead. Semi-Private Function Evaluation (SPFE) is a generalization of PFE that only partially hides the function, while specific non-critical components remain public. Our paper introduces a novel framework designed to make SPFE accessible to non-experts and practical for real-world deployments. To achieve this, we improve on previous SPFE solutions in two aspects. First, we enhance the developer experience by leveraging High-Level Synthesis (HLS), making our tool more user-friendly than previous SPFE frameworks. Second, we achieve a speedup compared to the previous state-of-the-art through more efficient underlying constructions and the usage of Lookup Tables (LUTs). We evaluate the performance of our framework in terms of communication and runtime efficiency. Our final implementation is available as an open-source project, aiming to bridge the gap between advanced cryptographic protocols and their practical application in industry scenarios.
Last updated:  2025-05-16
Improved Subfield Curve Search For Specific Field Characteristics
Jesús-Javier Chi-Domínguez
Isogeny-based cryptography bases its security on the hardness of the supersingular isogeny problem: finding an isogeny between two supersingular curves defined over a quadratic field. The Delfs-Galbraith algorithm is the most efficient procedure for solving the supersingular isogeny problem. The bottleneck of the Delfs-Galbraith algorithm is the so-called subfield curve search (i.e., finding an isogenous supersingular elliptic curve defined over the prime field), which determines the time complexity of the aforementioned algorithm. For efficiency, most recent isogeny-based constructions propose using finite fields whose characteristic has a special smooth form. This work focuses on primes of that particular form, and presents two new algorithms for finding subfield curves with improved time complexity and only polynomial memory. These algorithms exploit the existence of large smooth-order torsion points and extend the subfield root detection algorithm of Corte-Real Santos, Costello, and Shi (Crypto 2022) to a projective scenario where one can work over the curve coefficients instead of the explicit j-invariants of the curves. The algorithms easily extend to other primes of a similar special form. This study also examines the usage of radical isogenies with the proposed extended subfield root detection algorithm. In this context, the results indicate that the radical isogeny approach is competitive compared to the state-of-the-art algorithms.
Last updated:  2025-05-16
Public-key Cryptography Attacks Using Adiabatic Quantum Computer
Weishen Zou, Bruno Martin, and Thomas Prévost
We explore the application of the QUBO and Ising models to the integer factorization problem with implications for the security of public-key algorithms such as RSA. A key contribution is a program that applies existing algorithms to parameterize and simulate integer factorization through an Ising model in order to replicate previous works. Due to limited access to quantum hardware, we use classical heuristic methods to approximate solutions.
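A tiny classical sketch of the optimization view of factoring that QUBO/Ising formulations encode: the cost (N - p*q)^2 is minimized over binary encodings of the candidate factors, with brute-force search over the binary variables standing in for the annealer. The encoding below is a simplified illustration, not the parameterization used in the paper.

```python
# Tiny classical sketch: factoring as minimization of (N - p*q)^2 over binary
# encodings of p and q.  An annealer searches this landscape; brute force
# over the binary variables stands in for it here.
from itertools import product

def factor_by_minimization(N, bits=4):
    best = None
    # p and q are encoded as odd numbers: 1 + sum_i x_i * 2^(i+1)
    for xs in product((0, 1), repeat=2 * bits):
        p = 1 + sum(b << (i + 1) for i, b in enumerate(xs[:bits]))
        q = 1 + sum(b << (i + 1) for i, b in enumerate(xs[bits:]))
        cost = (N - p * q) ** 2            # the QUBO/Ising-style objective
        if best is None or cost < best[0]:
            best = (cost, p, q)
    return best

cost, p, q = factor_by_minimization(35)    # 35 = 5 * 7
assert cost == 0 and p * q == 35
```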
Last updated:  2025-05-16
How Fast Does the Inverse Walk Approximate a Random Permutation?
Tianren Liu, Angelos Pelecanos, Stefano Tessaro, and Vinod Vaikuntanathan
For a finite field F of size q, the (patched) inverse permutation computes the inverse of its input x over F when x ≠ 0 and outputs 0 when x = 0, and the (AddRoundKey) permutation adds a fixed constant to its input. We study the process of alternately applying the patched inverse permutation followed by a random linear permutation, which is a random walk over the alternating (or symmetric) group that we call the inverse walk. We show both lower and upper bounds on the number of rounds it takes for this process to approximate a random permutation over F. We show that sufficiently many rounds of the inverse walk over the field of size q generate a permutation that is close (in variation distance) to a uniformly random even permutation (i.e. a permutation from the alternating group). Our result is provided with explicit constants, and is tight, up to logarithmic factors. Our result answers an open question from the work of Liu, Pelecanos, Tessaro, and Vaikuntanathan (CRYPTO 2023) by proving the t-wise independence of (a variant of) AES for t up to the square root of the field size, compared to the original result that only held for much smaller t. It also constitutes a significant improvement on a result of Carlitz (Proc. American Mathematical Society, 1953), who showed a reachability result: namely, that every even permutation can be generated eventually by composing the patched inverse and AddRoundKey permutations. We show a tight convergence result, namely a tight quantitative bound on the number of rounds needed to reach a random (even) permutation. Our work brings to the forefront the view of block ciphers as random walks and uses an entirely new set of (functional analytic) tools to study their pseudorandomness, both of which we hope will prove useful in the study of block ciphers.
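A small Python sketch of the round structure described above: apply the patched inverse (0 maps to 0) and then a random invertible affine map. A small prime field stands in for the binary field used by AES, and none of the paper's quantitative bounds are reproduced.

```python
# Small sketch of one "inverse walk" round: patched inversion followed by a
# random invertible affine map over a field.  GF(257) stands in for AES's
# binary field GF(2^8).
import random

P = 257                                    # small prime field GF(257)

def patched_inverse(x):
    return pow(x, P - 2, P) if x else 0    # x^(P-2) = x^(-1) for x != 0

def random_affine():
    a = random.randrange(1, P)             # nonzero slope => invertible map
    b = random.randrange(P)
    return lambda x: (a * x + b) % P

def inverse_walk(rounds):
    layers = [random_affine() for _ in range(rounds)]
    def perm(x):
        for affine in layers:
            x = affine(patched_inverse(x))
        return x
    return perm

pi = inverse_walk(rounds=10)
images = [pi(x) for x in range(P)]
assert sorted(images) == list(range(P))    # each round preserves bijectivity
```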
Last updated:  2025-05-16
Quantum Periodic Distinguisher Construction: Symbolization Method and Automated Tool
Qun Liu, Haoyang Wang, Jinliang Wang, Boyun Li, and Meiqin Wang
As one of the famous quantum algorithms, Simon's algorithm enables the efficient derivation of the period of periodic functions in polynomial time. However, the complexity of constructing periodic functions has hindered the widespread application of Simon's algorithm in symmetric-key cryptanalysis. Currently, aside from the exhaustive search-based testing method introduced by Canale et al. at CRYPTO 2022, there is no unified model for effectively searching for periodic distinguishers. Although Xiang et al. established a link between periodic functions and truncated differential theory at ToSC 2024, their approach lacks the ability to construct periods using unknown differentials and does not provide automated tools. This limitation underscores the inadequacy of existing methods in identifying periodic distinguishers for complex structures. In this paper, we address the challenge of advancing periodic distinguishers for symmetric-key ciphers. First, we propose a more generalized method for constructing periodic distinguishers, addressing the limitations of Xiang et al.'s theory in handling unknown differences. We further extend our method to probabilistic periodic distinguishers, thereby extending the separability property proposed by Hodžić et al. in 2020. As a result, our method can cover a wider range of periodic distinguishers. Second, we introduce a novel symbolic representation to simplify the search of periodic distinguishers. Based on this representation, we propose the first fully automated SMT-based search model, which efficiently addresses the challenges of manual searching in complex structures. Finally, we extend the model to SPN structures based on our new method. Our model has broad applicability through significant advancements in analyzing generalized Feistel structures (GFSs) and SPN-based ciphers. We have achieved new quantum distinguishers with the following round configurations: 10 rounds for GFS-4F, 10 rounds for LBlock, 10 rounds for TWINE, and 16 rounds for Skipjack-B, improving the previous best results by 1, 2, 2, and 3 rounds, respectively. In the domain of SPN-based ciphers, our model has enabled the identification of novel periodic distinguishers, including the first 9-round distinguisher for SKINNY and the first 12-round distinguisher for CRAFT. These achievements lay the foundation for quantum cryptanalysis of SPN-based ciphers using Simon’s algorithm.
Last updated:  2025-05-16
Data Availability for Thousands of Nodes
Yanpei Guo, Alex Luoyuan Xiong, Wenjie Qu, and Jiaheng Zhang
Scalable data availability (DA) is essential for high-throughput, decentralized blockchains, enabling lightweight nodes to verify block availability without incurring the prohibitive costs of full data replication. Reed-Solomon (RS) code commitment schemes underpin modern DA protocols by ensuring that dispersed data fragments can be verified as part of a valid codeword, even in the presence of malicious block producers. However, state-of-the-art schemes such as FRIDA (Crypto'24), while computationally efficient, incur substantial per-node communication overhead at the scale of thousands of network nodes, often 5.7x the size of the actual data fragment. In this work, we introduce CONDA, a new interleaved RS code commitment scheme that significantly reduces the communication overhead while retaining FRIDA's prover efficiency. At its core is a novel evaluation consolidation technique for polynomial commitment schemes (PCS) that reduces the problem of proving evaluations at fixed points (one per verifier) to a single evaluation at a random point, using logarithmic communication. This technique is lightweight, hash-based, and compatible with any multilinear PCS. To further optimize for DA applications, we introduce LightLigero, a new multilinear PCS that improves upon DeepFold (Sec'25) with a reduction in proof size and only a modest slowdown in prover time. Combining CONDA and LightLigero yields an efficient DA scheme for thousands of nodes. Our implementation demonstrates a 4x reduction in communication cost compared to FRIDA, while incurring only a 25% increase in prover time. It also achieves near-best prover time and near-best communication cost simultaneously among all code commitment schemes. CONDA also offers smaller proofs and faster provers than state-of-the-art verifiable secret sharing constructions such as ZXH+22 (Sec'22) and PolyFRIM (Sec'24).
Last updated:  2025-05-16
DeepFold: Efficient Multilinear Polynomial Commitment from Reed-Solomon Code and Its Application to Zero-knowledge Proofs
Yanpei Guo, Xuanming Liu, Kexi Huang, Wenjie Qu, Tianyang Tao, and Jiaheng Zhang
This work presents DeepFold, a novel multilinear polynomial commitment scheme (PCS) based on Reed-Solomon code that offers optimal prover time and a more concise proof size. For the first time, DeepFold adapts the FRI-based multilinear PCS to the list decoding radius setting, requiring significantly fewer query repetitions and thereby achieving a 3x reduction in proof size compared to Basefold (Crypto'24), while preserving its advantages in prover time. Compared with PolyFRIM (USENIX Security'24), DeepFold achieves a 2x improvement in prover time, verifier time, and proof size. Another contribution of this work is a batch evaluation scheme, which enables the FRI-based multilinear PCS to handle polynomials whose size is not a power of two more efficiently. Our scheme has broad applications in zk-SNARKs, since PCS is a key component in modern zk-SNARK constructions. For example, when replacing the PCS component of Virgo (S&P'20) with DeepFold, our scheme achieves a 2.5x faster prover time when proving the knowledge of a Merkle tree with 256 leaves, while maintaining a similar proof size. When replacing the PCS component of HyperPlonk (Eurocrypt'23) with DeepFold, our scheme has about 3.6x faster prover time. Additionally, when applying our arbitrary-length input commitment to verifiable matrix multiplications for matrices of size 1200x768 and 768x2304, which are actual use cases in the GPT-2 model, the performance showcases a 2.4x reduction in prover time compared to previous approaches.
Last updated:  2025-05-16
Fheanor: a new, modular FHE library for designing and optimising schemes
Hiroki Okada, Rachel Player, and Simon Pohmann
Implementations of modern FHE schemes are available in various highly-optimized libraries. Many of these libraries are designed to allow developers who may not have deep expertise in FHE to build fast and secure privacy-preserving applications. To support such users, the API of these libraries often hides the internals of the schemes in question from the user. However, this design choice makes it hard for users of these libraries to modify existing schemes, or implement new ones; work that is often valuable when conducting research on FHE schemes. We present our new Rust library Fheanor, which aims to facilitate such research on FHE schemes. The core target user is an FHE researcher, rather than an application developer. Most importantly, the design of Fheanor is very modular, and mirrors the mathematical structure of the available FHE schemes. By exposing the mathematical structure, but still hiding implementation details, it is easy to modify or extend the functionality of FHE schemes implemented in the library and still preserve high performance. Indeed, Fheanor demonstrates performance that is close to that of HElib or SEAL, with the potential for optimizations in the future. Fheanor implements several features that have not, or have only rarely, been implemented in previous libraries. These include non-power-of-two cyclotomic rings, single-RNS based ring arithmetic, the CLPX/GBFV scheme, and bootstrapping for BFV and BGV. In addition, this paper presents new theoretical contributions that are also implemented in Fheanor. The first is an extension of optimal digit extraction circuits, used in BFV/BGV bootstrapping, to the case 2^23. The second is a more efficient algorithm for computing the trace in the non-power-of-two cyclotomic setting.
Last updated:  2025-05-15
Fly Away: Lifting Fault Security through Canaries and the Uniform Random Fault Model
Gaëtan Cassiers, Siemen Dhooghe, Thorben Moos, Sayandeep Saha, and François-Xavier Standaert
Cryptographic implementations are vulnerable to active physical attacks where adversaries inject faults to extract sensitive information. Existing fault models, such as the threshold and random fault models, assume limitations on the amount or probability of injecting faults. Such models, however, insufficiently address the case of practical fault injection methods capable of faulting a large proportion of the wires in a circuit with high probability. Prior works have shown that this insufficiency can lead to concrete key recovery attacks against implementations proven secure in these models. We address this blind spot by introducing the uniform random fault model, which relaxes assumptions on the amount/probability of faults and instead assumes a uniform probabilistic faulting of all wires in a circuit or region. We then show that security in this new model can be reduced to security in the random fault model by inserting canaries in the circuit to ensure secret-independent fault detection. We prove that combining canaries with a more classical fault countermeasure such as redundancy can lead to exponential fault security in the uniform random fault model at a polynomial cost in circuit size in the security parameter. Finally, we discuss the interactions between our work and the practical engineering challenges of fault security, shedding light on how the combination of state-of-the-art countermeasures may protect against injections of many high probability faults, while opening a path to methodologies that formally analyze the guarantees provided by such countermeasures.
Last updated:  2025-05-15
Critical Rounds in Multi-Round Proofs: Proof of Partial Knowledge, Trapdoor Commitments, and Advanced Signatures
Masayuki Abe, David Balbás, Dung Bui, Miyako Ohkubo, Zehua Shang, Akira Takahashi, and Mehdi Tibouchi
Zero-knowledge simulators and witness extractors, initially developed for proving the security of proof systems, have turned out to be useful also in constructing advanced protocols from simple three-move interactive proofs. However, in the context of multi-round public-coin protocols, the interfaces of these auxiliary algorithms become more complex, introducing a range of technical challenges that hinder the generalization of these constructions. We introduce a framework to enhance the usability of zero-knowledge simulators and witness extractors in multi-round argument systems for protocol design. Critical-round zero-knowledge relies on the ability to perform complete zero-knowledge simulations by knowing the challenge of just one specific round in advance. Critical-round special soundness aims to address a stringent condition for witness extraction by formalizing it to operate with a smaller tree of transcripts than the one needed for extended extraction, which either outputs the intended witness or solves the underlying hard problem in an argument system. We show that these notions are satisfied by diverse protocols based on MPC-in-the-Head, interactive oracle proofs, and split-and-fold arguments. We demonstrate the usefulness of the critical-round framework by constructing proofs of partial knowledge (Cramer, Damgård, and Schoenmakers, CRYPTO'94) and trapdoor commitments (Damgård, CRYPTO'89) from critical-round multi-round proofs. Furthermore, our results imply advancements in post-quantum secure adaptor signatures and threshold ring signatures based on MPC-in-the-Head, eliminating the need for (costly) generic NP reductions.
Last updated:  2025-05-15
EndGame: Field-Agnostic Succinct Blockchain with Arc
Simon Judd
We present EndGame, a novel blockchain architecture that achieves succinctness through Reed-Solomon accumulation schemes. Our construction enables constant-time verification of blockchain state while maintaining strong security properties. We demonstrate how to efficiently encode blockchain state transitions using Reed-Solomon codes and accumulate proofs of state validity using the ARC framework. Our protocol achieves optimal light client verification costs and supports efficient state management without trusted setup.
Last updated:  2025-05-15
MOCHA: Mixnet Optimization Considering Honest Client Anonymity
Mahdi Rahimi
Mix networks (mixnets) safeguard client anonymity by forwarding traffic through multiple intermediary nodes (mixnodes), which reorder and delay messages to obscure communication patterns against a global passive adversary capable of monitoring all network transmissions. The anonymity provided by mixnets is usually assessed with a discrete-event simulator, gauging a target message's indistinguishability among output messages. While useful for comparative analysis, this approach only approximates the mixnet's anonymity potential. Hence, this paper sheds light on the necessity of considering the client (originator of messages) itself to gauge anonymity accurately. We further provide an algorithm (simulator) to simulate client anonymity for Loopix mixnets. We conduct experiments to optimize general Loopix mixnet parameters, considering both message and client anonymity. Our findings indicate that message anonymity often provides an upper bound and can yield misleading results for mixnet optimization, underscoring the importance of client anonymity. Additionally, we explore scenarios where client anonymity is significantly compromised due to an insufficient number of clients. To address these cases, we propose a multimixing strategy that enhances client anonymity by effectively merging varied traffic types with different mixing characteristics.
Last updated:  2025-05-15
Secure Rate-Distortion-Perception Trade-off Over Channels: A Randomized Distributed Function Computation (RDFC) Application
Gustaf Ahlgren and Onur Gunlu
Secure rate-distortion-perception (RDP) trade-offs arise in critical applications, such as semantic compression and privacy-preserving generative coding, where preserving perceptual quality while minimizing distortion is vital. This paper studies a framework for secure RDP over noiseless and noisy broadcast channels under strong secrecy constraints. We first characterize the exact secure RDP region for noiseless transmission channels. We then develop an inner bound on the secure RDP region for a memoryless broadcast channel with correlated noise components at the receivers' observations and prove its tightness under a more capable broadcast channel assumption. Our results demonstrate how optimized binning schemes simultaneously achieve high perceptual quality, low distortion, and strong secrecy, illuminating fundamental information-theoretic limits for next-generation trustworthy computation systems.
Last updated:  2025-05-15
Zeroed Out: Cryptanalysis of Weak PRFs in Alternating Moduli
Irati Manterola Ayala and Håvard Raddum
The growing adoption of secure multi-party computation (MPC) has driven the development of efficient symmetric key primitives tailored for MPC. Recent advances, such as the alternating moduli paradigm, have shown promise but leave room for cryptographic and practical improvements. In this paper, we analyze a family of weak pseudorandom functions (wPRF) proposed at Crypto 2024, focusing on their One-to-One parameter sets. We demonstrate that these configurations fail to achieve their intended one-to-one mappings and exploit this observation to develop an efficient key recovery attack. Our analysis reveals critical vulnerabilities, reducing the complexity of key recovery for both the Standard One-to-One wPRF and the Reversed Moduli variant to well below their claimed bit-security levels. We validate our findings through experimental evaluation, confirming alignment between predicted and observed attack complexities.
Last updated:  2025-05-15
sPAR: (Somewhat) Practical Anonymous Router
Debajyoti Das and Jeongeun Park
Anonymous communication is one of the fundamental tools to achieve privacy for communication over the internet. Almost all existing design strategies (e.g., onion routing/Tor, mixnets) for anonymous communication rely on the existence of some honest server/router in the network infrastructure to provide anonymity. A recent seminal work by Shi and Wu (Eurocrypt 2021) proposes the first cryptographic design for a non-interactive anonymous router (NIAR) that can use a single untrusted server or router to permute a set of messages without revealing the permutation to the untrusted router. This work is an important step towards showing the possibility of designing such protocols from standard cryptographic assumptions. However, their construction is only of theoretical nature and still leaves many open questions towards realizing such systems in practice: (1) the cryptographic building blocks (multi-client functional encryption, correlated pseudorandom function) used in their design are difficult to implement in practice. (2) Their setup phase takes the permutation as an input to generate the encryption/decryption keys, which means that the messages from the same sender in different rounds will be at the same position in the output vector, unless the setup phase is run before every round with a new permutation. (3) It is not known how to realize such a setup procedure, which initializes a random permutation obliviously, without any trusted entities in the system. In this paper, we propose the first (somewhat) practical design, which we call sPAR, that solves the above problems using homomorphic encryption techniques. Our design also relies on a one-time setup phase; however, the setup phase does not take any specific permutation as input. Instead, our design generates a fresh permutation for every round based on the random values locally generated by the clients. Already existing practical instantiations of fully homomorphic encryption (FHE) schemes make our design implementable and deployable in practice. Our design presents a new direction for designing anonymous communication systems. Unlike some existing systems like Tor, sPAR does not scale to millions of users; however, we demonstrate with a proof-of-concept implementation that sPAR can easily support around a hundred users with a few seconds of latency for each message.
Last updated:  2025-05-15
On the Provable Dual Attack for LWE by Modulus Switching
Hongyuan Qu and Guangwu Xu
As a theoretical cornerstone of post-quantum cryptography, the Learning With Errors (LWE) problem serves as the security foundation for standardized algorithms such as Kyber and Dilithium. Recently, a framework for provable dual attacks on LWE has been proposed by Pouly et al. in Eurocrypt 2024, addressing the limitations in effectiveness caused by existing methods' reliance on heuristic assumptions in LWE dual attacks. Their paper also poses an open problem on how to formally integrate modulus switching into this framework to reduce attack costs. The main purpose of this paper is to give a solution to this open problem by presenting an improved provable dual attack method that incorporates modulus switching and Chinese Remainder Theorem (CRT) techniques. First, we design a modulus switching mechanism that eliminates practical errors via the Poisson summation formula. By embedding the inherent noise from modulus switching into a rational lattice framework, our approach effectively prevents the risk of attack failure caused by the merging of such errors with LWE noise. Theoretical guarantees (Theorems 4 and 5) rigorously quantify the parameter ranges for successful attacks. Second, we introduce a CRT-based secret recovery method that aggregates partial secrets from independent sub-attacks. By leveraging the Chinese Remainder Theorem to reconstruct full secrets from congruence relations, our method adapts to arbitrary secret distributions. Furthermore, by using a tighter variant of Banaszczyk's measure inequality, we obtain a precise parameter range for the dual attack's efficacy through rigorous mathematical proof, and achieve the same complementary gap with the contradictory regime (proposed by Ducas et al.) as in Pouly et al.'s work. Experiments show an improvement of 15 to 29 bits in attack cost estimation compared to the original framework.
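The CRT aggregation step lends itself to a compact illustration. The following Python sketch is a generic textbook reconstruction, not the paper's implementation; the moduli and residues are made-up toy values standing in for partial secrets recovered by independent sub-attacks.

    # Recombine partial secrets known modulo pairwise-coprime moduli via the
    # Chinese Remainder Theorem (toy values, generic illustration only).
    from math import prod

    def crt(residues, moduli):
        """Recover x mod prod(moduli) from the residues x mod m_i."""
        M = prod(moduli)
        x = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m): inverse of Mi modulo m
        return x % M

    # Example: a secret coefficient 23 recovered as 23 mod 5 = 3 and 23 mod 7 = 2.
    assert crt([3, 2], [5, 7]) == 23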
Last updated:  2025-05-15
A Framework for Group Action-Based Multi-Signatures and Applications to LESS, MEDS, and ALTEQ
Giuseppe D'Alconzo, Andrea Flamini, Alessio Meneghetti, and Edoardo Signorini
A multi-signature scheme allows a list of signers to sign a common message. They are widely used in scenarios where the same message must be signed and transmitted by users, and, instead of concatenating individual signatures, employing a multi-signature can reduce the data to be sent. In recent years there have been numerous practical proposals in the discrete logarithm setting, such as MuSig2 (CRYPTO'21) for the Schnorr signature. Recently, these attempts have been extended to post-quantum assumptions, with lattice-based proposals such as MuSig-L (CRYPTO'22). Given the growth of group action-based signatures, a natural question is whether a multi-signature can be built on the same models. In this work, we present the first construction of such a primitive relying on group action assumptions. We obtain a 3-round scheme achieving concurrent security in the ROM. Moreover, we instantiate it using the three candidates of NIST's additional post-quantum signature call, namely LESS, MEDS and ALTEQ, obtaining a good compression rate for different parameter sets.
Last updated:  2025-05-15
Encrypted Matrix-Vector Products from Secret Dual Codes
Fabrice Benhamouda, Caicai Chen, Shai Halevi, Yuval Ishai, Hugo Krawczyk, Tamer Mour, Tal Rabin, and Alon Rosen
Motivated by applications to efficient secure computation, we consider the following problem of encrypted matrix–vector product (EMVP). Let F be a finite field. In an offline phase, a client uploads an encryption of a matrix A over F to a server, keeping only a short secret key. The server stores the encrypted matrix. In the online phase, the client may repeatedly send encryptions of query vectors x, which enables the client and the server to locally compute compact shares of the matrix–vector product Ax. The server learns nothing about A or x. The shared output can either be revealed to the client or processed by another protocol. We present efficient EMVP protocols based on variants of the learning parity with noise (LPN) assumption and the related learning subspace with noise (LSN) assumption. Our EMVP protocols are field-agnostic in the sense that the parties only perform arithmetic operations over F, and are close to optimal with respect to both communication and computation. In fact, for sufficiently large parameters (typically a few hundred), the online computation and communication costs of our LSN-based EMVP can be less than twice the costs of computing Ax in the clear. Combined with suitable secure post-processing protocols on the secret-shared output, our EMVP protocols are useful for a variety of secure computation tasks, including encrypted fuzzy search and secure ML. Our technical approach builds on recent techniques for private information retrieval in the secret-key setting. The core idea is to encode the matrix and the queries using a pair of secret dual linear codes, while defeating algebraic attacks by adding noise.
Last updated:  2025-05-15
Classify Directly: A Dynamic Time SPA Classification Method Based on DTW
Yaoling Ding, Haotong Xu, Annyu Liu, An Wang, Jingqi Zhang, Jing Yu, and Liehuang Zhu
Side-channel analysis remains a critical threat to public-key cryptographic implementations. Simple Power Analysis (SPA) techniques can extract secret keys from a single power trace, often using clustering-based classification methods. However, traces captured in real-world environments often suffer from misalignment and variable trace lengths due to unstable clocks and random delays. As a result, clustering methods are required to use alignment methods that may alter the original information of the traces. To address this problem, this work proposes Dynamic Time Classification (DTC) as an alternative approach to classifying cryptographic operations in SPA, based on Dynamic Time Warping (DTW). Unlike clustering methods, DTC inherently compares power traces without requiring fixed-length segments, which greatly improves the adaptability to traces of unequal length and thus allows different operations to be classified relatively stably. Experimental results on public-key cryptographic algorithms and post-quantum algorithm implementations show that DTC is no less accurate than clustering methods and is more robust to timing variations. This work also systematically categorizes the features of different operations and explores the effects of different SPA methods on different types of features. In addition, it conducts experiments with and without random delays for different categories, compares the experimental accuracy of different alignment methods, and discusses the feasibility of DTW as a preprocessing method.
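As a rough illustration of the idea (generic textbook code, not the authors' tooling; the traces and labels below are hypothetical), DTW compares traces of unequal length directly, and classification can simply pick the label of the DTW-nearest reference segment.

    # Classify a power-trace segment by its DTW distance to labeled references.
    def dtw_distance(a, b):
        """Classic O(len(a)*len(b)) DTW with absolute-difference local cost."""
        n, m = len(a), len(b)
        INF = float("inf")
        D = [[INF] * (m + 1) for _ in range(n + 1)]
        D[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
        return D[n][m]

    def classify(segment, references):
        """Return the label of the DTW-nearest labeled reference segment."""
        return min(references, key=lambda ref: dtw_distance(segment, ref[1]))[0]

    refs = [("double", [1.0, 2.0, 2.0, 1.0]), ("add", [1.0, 3.0, 3.0, 3.0, 1.0])]
    print(classify([1.0, 2.1, 1.9, 1.0, 1.0], refs))  # -> "double"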
Last updated:  2025-05-15
Testing the Tests - Opportunities for Corrections and Improvements in NIST SP 800-22r1a and its Reference Code
Elias Riesinger and Jürgen Fuß
During an analysis of the NIST SP 800-22r1a document, which provides a test suite to validate random number generators, and of its reference implementation, various issues were identified, including imprecise probability constants, erroneous example calculations, and discrepancies within test descriptions. Here, we provide a technical analysis of the Statistical Test Suite, documenting inconsistencies and deficiencies in both the theoretical specification and the official C reference implementation. The analysis also reveals concrete implementation bugs and structural limitations in the reference codebase. Rather than revising any of the statistical tests, this work documents these flaws to support the ongoing revision process of the standard and to encourage more reliable and maintainable implementations.
Last updated:  2025-05-15
Scrutinizing the Security of AES-based Hashing and One-way Functions
Shiyao Chen, Jian Guo, Eik List, Danping Shi, and Tianyu Zhang
AES has cemented its position as the primary symmetric-key primitive for a wide range of cryptographic applications, which motivates the analysis of the concrete security of AES in practical instantiations, for instance, the collision resistance of AES-based hashing, the key commitment security of AES-based authenticated encryption schemes, and the one-wayness of AES-based one-way functions in ZK and MPC protocols. In this work, we introduce single-color initial structures (SCIS) into meet-in-the-middle (MITM) attacks to address neutral word generation, a critical bottleneck in MITM collision attacks. The SCIS technique leverages new structural insights to enable efficient neutral word generation and leads to multiple improved results on AES compared to the state-of-the-art. In particular, we present the first classical one-block collision attack on 7-round AES-MMO/MP, marking the first advancement in the number of attacked rounds in over a decade and matching the best-known results in the quantum setting, as well as the first one-block collision attack on 4-round AES-128-DM, bridging the gap highlighted by Taiyama et al. at Asiacrypt 2024 from a non-differential-based approach. Additionally, we provide a comprehensive list of new results on the security margins of AES-192, AES-256, Rijndael-192, and Rijndael-256 in multiple attack settings.
Last updated:  2025-05-15
Posterior Security: Anonymity and Message Hiding of Standard Signatures
Tsz Hon Yuen, Ying-Teng Chen, Shimin Pan, Jiangshan Yu, and Joseph K. Liu
We introduce posterior security of digital signatures, the additional security features after the original signature is generated. It is motivated by the scenario that some people store their secret keys in secure hardware and can only obtain a standard signature through a standardized interface. In this paper, we consider two different posterior security features: anonymity and message hiding. We first introduce incognito signature, a new mechanism to anonymize a standard signature. Different from other ring or group signatures, the signer generates a standard (non-anonymous) signature first. The signature is then anonymized by a converter before being sent to the verifier, by hiding the signer public key with a set of decoy public keys. We then introduce concealed signature, which hides the message in a commitment. The standard signature is converted such that it can be verified with the commitment. The models of posterior anonymity and posterior message hiding capture the separation of the signer and the converter. Anonymity or message hiding is provided by the converter after the creation of a standard signature by the signer. We give generic constructions of incognito signature and concealed signature. They can be applied to standard signatures like Schnorr. This gives the first practical anonymized ECDSA signature, and the signature size is logarithmic in the number of decoy public keys. The existing ring signature scheme with ECDSA keys is at least 152 times longer than our scheme. The incognito signature and concealed signature can be composed to provide posterior anonymity and message hiding. This is useful in applications like two-tier central bank digital currency, where users want to hide their addresses (public keys) and transaction amounts (messages) when the payment is settled in the interbank layer.
Last updated:  2025-05-14
Towards ML-KEM & ML-DSA on OpenTitan
Amin Abdulrahman, Felix Oberhansl, Hoang Nguyen Hien Pham, Jade Philipoom, Peter Schwabe, Tobias Stelzer, and Andreas Zankl
This paper presents extensions to the OpenTitan hardware root of trust that aim at enabling high-performance lattice-based cryptography. We start by carefully optimizing ML-KEM and ML-DSA --- the two algorithms primarily recommended and standardized by NIST --- in software targeting the OTBN accelerator. Based on profiling results of these implementations, we propose tightly integrated extensions to OTBN, specifically an interface from OTBN to OpenTitan's Keccak accelerator (KMAC core) and extensions to the OTBN ISA to support operations on 256-bit vectors. We implement these extensions in hardware and show that we achieve a speedup by a factor between 6 and 9 for different operations and parameter sets of ML-KEM and ML-DSA compared to our baseline implementation on unmodified OTBN. This speedup is achieved with an increase in cell count of less than 17% in OTBN, which corresponds to an increase of less than 3% for the full Earl Grey OpenTitan core.
Last updated:  2025-05-14
ProbeNav - Fast, precise and repeatable positioning of electromagnetic probes for local Side-Channel Attacks
Matthias Probst, Alexander Wiesent, Michael Gruber, and Georg Sigl
Localized side-channel analysis makes it possible to evaluate only the relevant chip area by measuring near-field electromagnetic (EM) emanations. Compared to global power measurements, this can lead to more powerful attacks as the signal-to-noise ratio is higher and irrelevant circuit components are not included in the recorded measurements. Especially for profiled attacks and their reproduction, the probe position in a localized scenario is of utmost importance. Ideally, a probe should be placed identically during the profiling and attack phases, as small variations can have a large impact on the success of the attack. In this work we present our methodology – ProbeNav – to accurately reposition an EM probe which is optimized for localized measurements, i.e., near-field measurements. We evaluate cross-correlation, Oriented FAST and Rotated BRIEF (ORB), and particle filters to re-calibrate the coordinate system of our setup. As a result, our methodologies show that precise positioning on an STM32F303 microcontroller is possible for a profiled attack scenario with different EM probes. Furthermore, by requiring only a single trace per position, profiling is 3 times faster and repositioning 28 times faster, in terms of the number of collected traces, compared to the state of the art.
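For intuition about one of the techniques named above, the sketch below estimates a one-dimensional positional offset by cross-correlation on synthetic signals; it is a generic illustration with made-up data, not the ProbeNav tooling or its two-dimensional scans.

    # Estimate the shift that best aligns a new scan to a reference scan.
    import numpy as np

    def estimate_offset(reference, scan):
        """Return the lag (in samples) maximizing the cross-correlation."""
        ref = reference - reference.mean()
        s = scan - scan.mean()
        corr = np.correlate(s, ref, mode="full")
        return int(np.argmax(corr)) - (len(ref) - 1)

    rng = np.random.default_rng(0)
    reference = rng.normal(size=200)
    scan = np.roll(reference, 7) + 0.05 * rng.normal(size=200)  # shifted, noisy copy
    print(estimate_offset(reference, scan))  # approximately 7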
Last updated:  2025-05-14
Practical Deniable Post-Quantum X3DH: A Lightweight Split-KEM for K-Waay
Guilhem Niot
The Signal Protocol, underpinning secure messaging for billions, faces the challenge of migrating to a post-quantum world while preserving critical properties like asynchrony and deniability. While Signal’s PQXDH key exchange protocol addresses post-quantum confidentiality, full migration of the X3DH protocol remains elusive. Relying on a split KEM (K-Waay, USENIX ’24) offers a promising migration path, but it has so far suffered from size limitations compared to concurrent works leveraging ring signatures. This work introduces Sparrow-KEM and Sym-Sparrow-KEM, novel asymmetric and symmetric split KEMs respectively, i.e., split KEMs for which keys can be used only in one direction, or interchangeably for sending and receiving. They are designed to optimize the K-Waay protocol for size efficiency. Leveraging the MLWE assumption, these constructions reduce the communication of prior post-quantum X3DH based on split KEMs by a factor of 5.1×, and provide a 40× speedup. Additionally, Sym-Sparrow-KEM is the first symmetric split-KEM to offer deniability, IND-1KCA, and IND-1BatchCCA security, capturing implicit authentication properties. We provide formal security proofs for both schemes, including deniability. Our results demonstrate the feasibility of a compact and deniable post-quantum X3DH protocol based on split KEMs.
Last updated:  2025-05-14
An Optimized Instantiation of Post-Quantum MQTT protocol on 8-bit AVR Sensor Nodes
YoungBeom Kim and Seog Chung Seo
Since the selection of the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography (PQC) standardization algorithms, research on integrating PQC into security protocols such as TLS/SSL, IPSec, and DNSSEC has been actively pursued. However, PQC migration for Internet of Things (IoT) communication protocols remains largely unexplored. Embedded devices in IoT environments have limited computational power and memory, making it crucial to optimize PQC algorithms for efficient computation and minimal memory usage when deploying them on low-spec IoT devices. In this paper, we introduce KEM-MQTT, a lightweight and efficient Key Encapsulation Mechanism (KEM) for the Message Queuing Telemetry Transport (MQTT) protocol, widely used in IoT environments. Our approach applies the NIST KEM algorithm Crystals-Kyber (Kyber) while leveraging MQTT’s characteristics and sensor node constraints. To enhance efficiency, we address certificate verification issues and adopt KEMTLS to eliminate the need for Post-Quantum Digital Signature Algorithms (PQC-DSA) in mutual authentication. As a result, KEM-MQTT retains its lightweight properties while maintaining the security guarantees of TLS 1.3. We identify inefficiencies in existing Kyber implementations on 8-bit AVR microcontrollers (MCUs), which are highly resource-constrained. To address this, we propose novel implementation techniques that optimize Kyber for AVR, focusing on high-speed execution, reduced memory consumption, and secure implementation, including Signed LookUp-Table (LUT) Reduction. Our optimized Kyber achieves performance gains of 81%, 75%, and 85% in the KeyGen, Encaps, and Decaps processes, respectively, compared to the reference implementation. With approximately 3 KB of stack usage, our Kyber implementation surpasses all state-of-the-art Elliptic Curve Diffie-Hellman (ECDH) implementations. Finally, in KEM-MQTT using Kyber-512, an 8-bit AVR device completes the handshake preparation process in 4.32 seconds, excluding the physical transmission and reception times.
Last updated:  2025-05-14
Neural-Inspired Advances in Integral Cryptanalysis
Liu Zhang, Yiran Yao, Danping Shi, Dongchen Chai, Jian Guo, and Zilong Wang
The study by Gohr et al. at CRYPTO 2019 and subsequent related works have shown that neural networks can uncover previously unused features, offering novel insights into cryptanalysis. Motivated by these findings, we employ neural networks to learn features specifically related to integral properties and integrate the corresponding insights into optimized search frameworks. These findings validate the framework of using neural networks for feature exploration, providing researchers with novel insights that advance established cryptanalysis methods. Neural networks have inspired the development of more precise integral search models. By comparing the integral distinguishers obtained via neural networks with those identified by classical methods, we observe that existing automated search models often fail to find optimal distinguishers. To address this issue, we develop a meet-in-the-middle search framework that balances model accuracy and computational efficiency. As a result, we reduce the number of active plaintext bits required for an 11-round integral distinguisher on SKINNY-64-64, and further identify a 12-round key-dependent integral distinguisher—achieving one additional round over the previous best-known result. The integral distinguishers discovered by neural networks enable key-recovery attacks on more rounds. We identify a 7-round key-independent integral distinguisher from neural networks with only one active plaintext cell, which is based on linear combinations of bits. This distinguisher enables a 15-round key-recovery attack on SKINNY-n-n through a strategy with 3 rounds of forward decryption and 5 rounds of backward encryption, improving upon the previous record by one round. The same distinguisher also enhances attacks on SKINNY-n-2n and SKINNY-n-3n. Additionally, we discover an 8-round key-dependent integral distinguisher using neural networks that further reduces the time complexity of key-recovery attacks against SKINNY.
Last updated:  2025-05-14
A Round-Optimal Near-Linear Third-Party Private Set Intersection Protocol
Foo Yee Yeo and Jason H. M. Ying
Third-party private set intersection (PSI) enables two parties, each holding a private set, to compute their intersection and reveal the result only to an inputless third party. In this paper, we present an efficient round-optimal third-party PSI protocol. Our work is motivated by real-world applications such as contact tracing, where speed is essential while concurrently preserving privacy. Our construction requires only a minimal number of communication rounds and attains a near-linear computational complexity in the dataset size for large datasets. Our improvements stem from algorithmic changes and the incorporation of new techniques along with precise parameter selections to achieve a tight asymptotic bound. Furthermore, we also present a third-party PSI cardinality protocol, which has not been explored in prior third-party PSI work. In a third-party PSI cardinality setting, only the third party obtains the size of the intersection and nothing else. Our construction achieving the cardinality functionality attains a quasilinear computational complexity for the third party.
Last updated:  2025-05-14
RAMenPaSTA: Parallelizable Scalable Transparent Arguments of Knowledge for RAM Programs
Khai Hanh Tang, Nhat Minh Pham, and Chan Nam Ngo
Recent advances in Zero-Knowledge Proof and Argument (ZKP/ZKA) Systems allow efficiently verifiable computation in the circuit-based model (where programs can be represented as Boolean or arithmetic circuits). However, the circuit-based model is not widely popular due to its unsuitability for big programs (as the circuit size is linear in the program's size). Hence, the research community began looking into the Random-Access Machine model of programs, namely RAM programs, which is better suited for general-purpose programming. Consequently, a flurry of works began researching how to construct ZKP protocols for the correct execution of RAM programs. In this paper, we also develop ZKP/ZKA for RAM programs, with a particular focus on two properties: (i) parallelizability, which significantly reduces prover (and verifier) computation, and (ii) genericity, which makes the construction require minimal assumptions and allows it to be instantiated with any existing ZKP protocols. To our knowledge, previous ZKP constructions for RAM programs either (i) are not known to support proving or verifying in parallel, (ii) require making non-black-box use of specific SNARK constructions, or (iii) even require proving the computation of a random oracle. To this end, we propose the so-called Conditional Folding Scheme (CFS) as a building block to construct ZKP/ZKA for RAM programs. We provide a non-trivial practical construction for our CFS that is natively parallelizable and significantly reduces the runtime of proof generation compared to previous works, with the cost governed by the witness size of the heaviest operation and the number of execution steps. We rigorously prove the security of our CFS (also for the case of folding in parallel). Our scheme can be made non-interactive using the Fiat-Shamir transform. Our CFS is generic and does not require a trusted setup (yielding a "transparent" argument). It can be adapted to construct Parallelizable Scalable Transparent Arguments of Knowledge for RAM Programs that we dub RAMenPaSTA.
Last updated:  2025-05-14
USpt: Updatable Signature with Public Tokens
Haotian Yin, Jie Zhang, Wanxin Li, Yuji Dong, Eng Gee Lim, and Dominik Wojtczak
The Updatable Signature (US) allows valid signatures to be updated by an update token without accessing the newly generated signing key. Cini et al. (PKC’21) formally defined this signature and gave several constructions. However, their security model requires the secrecy of the update token, which is not applicable in many common application scenarios where existing signatures have been distributed to many parties. In addition, one can use the same token to update both the signing key and signatures, and all signatures can be updated by a single token, so even adversarially generated signatures might be updated. This work explores the (im)possibility of constructing an Updatable Signature with public tokens (USpt). Specifically, we first define the updatable signature with public tokens and present its security model. Then, by examining existing US schemes, we find that a secure USpt must properly handle a transform function from the signature-update token to the key-update token. We further formally prove the impossibility of constructing a secure USpt if (1) there is no transform function between key pairs and signatures, or (2) the signature-update token can be derived from the public keys of adjacent epochs. Finally, we present a concrete USpt scheme based on the BLS signature.
Last updated:  2025-05-14
Vrity: Verifiable Local Differential Privacy
James Bell-Clark, Adrià Gascón, Baiyu Li, Mariana Raykova, and Amrita Roy Chowdhury
Local differential privacy (LDP) enables individuals to report sensitive data while preserving privacy. Unfortunately, LDP mechanisms are vulnerable to poisoning attacks, where adversaries controlling a fraction of the reporting users can significantly distort the aggregate output--much more so than in a non-private solution where the inputs are reported directly. In this paper, we present two novel solutions that prevent poisoning attacks under LDP while preserving its privacy guarantees. Our first solution addresses scenarios where the users report inputs for which a ground truth is available to a third party. The second solution tackles the more challenging case in which the users locally generate their input and there is no ground truth that can be used to bootstrap verifiable randomness generation.
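For background on the attack surface (this is the standard binary randomized-response mechanism, not the paper's verifiable constructions, and all numbers are illustrative), the sketch below shows why a poisoning client who skips the local randomization shifts the debiased aggregate by more than an honest report could.

    # Binary randomized response and the amplification effect of poisoned reports.
    import math, random

    def randomized_response(bit, epsilon):
        """Report the true bit with probability e^eps/(e^eps + 1), else flip it."""
        p = math.exp(epsilon) / (math.exp(epsilon) + 1)
        return bit if random.random() < p else 1 - bit

    def debias(reports, epsilon):
        """Unbiased estimate of the fraction of ones from noisy reports."""
        p = math.exp(epsilon) / (math.exp(epsilon) + 1)
        observed = sum(reports) / len(reports)
        return (observed - (1 - p)) / (2 * p - 1)

    true_bits = [1 if random.random() < 0.3 else 0 for _ in range(900)]
    honest = [randomized_response(b, 1.0) for b in true_bits]
    poisoned = [1] * 100  # attackers bypass the mechanism and always report 1
    # Each deterministic 1 contributes about p/(2p-1) > 1 to the estimate, so the
    # result overshoots what 100 honest users reporting 1 could have caused.
    print(debias(honest + poisoned, 1.0))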
Last updated:  2025-05-14
Succinct Computational Secret Sharing for Monotone Circuits
George Lu, Shafik Nassar, and Brent Waters
Secret sharing is a cornerstone of modern cryptography, underpinning the secure distribution and reconstruction of secrets among authorized participants. In these schemes, succinctness—measured by the size of the distributed shares—has long been an area of both great theoretical and practical interest, with large gaps between constructions and best known lower bounds. In this paper, we present a novel computational secret sharing scheme for monotone Boolean circuits that achieves substantially shorter share sizes than previously known constructions in the standard model. Our scheme attains succinct public and user share sizes, expressed in terms of the number of users n, the monotone circuit, and the security parameter, effectively eliminating the dependence on the circuit size. This marks a significant improvement over earlier approaches, which exhibited share sizes that grew with the number of gates in the circuit. Our construction makes use of indistinguishability obfuscation and injective one-way functions.
Last updated:  2025-05-13
Unmasking TRaccoon: A Lattice-Based Threshold Signature with An Efficient Identifiable Abort Protocol
Rafael del Pino, Shuichi Katsumata, Guilhem Niot, Michael Reichle, and Kaoru Takemure
TRaccoon is an efficient 3-round lattice-based T-out-of-N threshold signature, recently introduced by del Pino et al. (Eurocrypt 2024). While the design resembles the classical threshold Schnorr signature, Sparkle (Crites et al., Crypto 2023), one shortfall is that it has no means to identify malicious behavior, a property highly desired in practice. This is because, to resist lattice-specific attacks, TRaccoon relies on a technique called masking, informally blinding each partial signature with a one-time additive mask. del Pino et al. left it as an open problem to add a mechanism for identifying malicious behavior to TRaccoon. In this work, we propose TRaccoon-IA, a variant of TRaccoon with an efficient identifiable abort protocol, allowing the identification of malicious signers when the signing protocol fails. The identifiable abort protocol is a simple add-on to TRaccoon, keeping the original design intact, and comes with an added communication cost of 60 + 6.4 |T| KB only when signing fails. Along the way, we provide the first formal security analysis of a variant of LaBRADOR (Beullens et al., Crypto 2023) with zero-knowledge, encountering several hurdles when formalizing it in detail. Moreover, we give a new game-based definition for interactive identifiable abort protocols, extending the popular game-based definition used to prove the unforgeability of recent threshold signatures.
Last updated:  2025-05-13
Evasive LWE Assumptions: Definitions, Classes, and Counterexamples
Chris Brzuska, Akin Ünal, and Ivy K. Y. Woo
The evasive LWE assumption, proposed by Wee [Eurocrypt'22 Wee] for constructing a lattice-based optimal broadcast encryption, has been shown to be a powerful assumption, adopted by subsequent works to construct advanced primitives ranging from ABE variants to obfuscation for null circuits. However, a closer look reveals significant differences among the precise assumption statements involved in different works, leading to the fundamental question of how these assumptions compare to each other. In this work, we initiate a more systematic study of evasive LWE assumptions: (i) Based on the standard LWE assumption, we construct simple counterexamples against three private-coin evasive LWE variants, used in [Crypto'22 Tsabary, Asiacrypt'22 VWW, Crypto'23 ARYY] respectively, showing that these assumptions are unlikely to hold. (ii) Based on existing evasive LWE variants and our counterexamples, we propose and define three classes of plausible evasive LWE assumptions, suitably capturing all existing variants for which we are not aware of non-obfuscation-based counterexamples. (iii) We show that under our assumption formulations, the security proofs of [Asiacrypt'22 VWW] and [Crypto'23 ARYY] can be recovered, and we reason why the security proof of [Crypto'22 Tsabary] is also plausibly repairable using an appropriate evasive LWE assumption.
Last updated:  2025-05-13
Cauchyproofs: Batch-Updatable Vector Commitment with Easy Aggregation and Application to Stateless Blockchains
Zhongtang Luo, Yanxue Jia, Alejandra Victoria Ospina Gracia, and Aniket Kate
Stateless blockchain designs have emerged to address the challenge of growing blockchain size using succinct global states. Previous works have developed vector commitments that support proof updates and aggregation to be used as such states. However, maintaining proofs for multiple users still demands significant computational resources, particularly to update proofs with every transaction. This paper introduces Cauchyproofs, a batch-updatable vector commitment that enables proof-serving nodes to efficiently update proofs in quasi-linear time relative to the number of users and transactions, utilizing an optimized KZG scheme, in contrast to the higher update complexity of previous approaches. This advancement reduces the computational burden on proof-serving nodes, allowing efficient proof maintenance across large user groups. We demonstrate that our approach is approximately eight times faster than the naive approach at Ethereum-level transaction throughput if we perform a batch update every hour. Additionally, we present a novel matrix representation for KZG proofs utilizing Cauchy matrices, enabling faster all-proof computations with reduced elliptic curve operations. Finally, we propose an algorithm for history proof queries, supporting retrospective proof generation with high efficiency. Our contributions substantially enhance the scalability and practicality of proof-serving nodes in stateless blockchain frameworks.
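As a small illustration of the algebraic object involved (a generic construction over a toy prime field, not the paper's KZG machinery), a Cauchy matrix has entries (x_i - y_j)^(-1) for two disjoint sets of evaluation points.

    # Build a Cauchy matrix over a toy prime field.
    p = 97  # toy modulus; real instantiations work over a pairing-friendly field

    def cauchy_matrix(xs, ys, modulus):
        """Entries are modular inverses of (x_i - y_j); requires x_i != y_j."""
        return [[pow((x - y) % modulus, -1, modulus) for y in ys] for x in xs]

    for row in cauchy_matrix([1, 2, 3], [10, 20, 30], p):
        print(row)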
Last updated:  2025-05-13
The Sponge is Quantum Indifferentiable
Gorjan Alagic, Joseph Carolan, Christian Majenz, and Saliha Tokat
The sponge is a cryptographic construction that turns a public permutation into a hash function. When instantiated with the Keccak permutation, the sponge forms the NIST SHA-3 standard. SHA-3 is a core component of most post-quantum public-key cryptography schemes slated for worldwide adoption. While one can consider many security properties for the sponge, the ultimate one is indifferentiability from a random oracle, or simply indifferentiability. The sponge was proved indifferentiable against classical adversaries by Bertoni et al. in 2008. Despite significant efforts in the years since, little is known about sponge security against quantum adversaries, even for simple properties like preimage or collision resistance beyond a single round. This is primarily due to the lack of a satisfactory quantum analog of the lazy sampling technique for permutations. In this work, we develop a specialized technique that overcomes this barrier in the case of the sponge. We prove that the sponge is in fact indifferentiable from a random oracle against quantum adversaries. Our result establishes that the domain extension technique behind SHA-3 is secure in the post-quantum setting. Our indifferentiability bound for the sponge is loose, but we also give bounds on preimage and collision resistance that are tighter.
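For readers less familiar with the construction under analysis, the toy Python sketch below shows the sponge's absorb/squeeze structure over a stand-in permutation; the permutation, rate, capacity, and padding are simplified placeholders and are not Keccak or SHA-3 parameters.

    # A toy sponge: absorb message blocks into the outer state, then squeeze output.
    RATE, CAPACITY = 4, 4  # bytes; the state is RATE + CAPACITY bytes wide

    def toy_permutation(state):
        """Placeholder public permutation on an 8-byte state (not Keccak-f)."""
        x = int.from_bytes(state, "big")
        for _ in range(10):
            x = (x * 6364136223846793005 + 1442695040888963407) % (1 << 64)
            x ^= x >> 17
        return x.to_bytes(8, "big")

    def sponge_hash(message, out_len=8):
        state = bytes(RATE + CAPACITY)
        msg = message + b"\x01" + bytes((-len(message) - 1) % RATE)  # simplified padding
        for i in range(0, len(msg), RATE):                           # absorb phase
            block = msg[i:i + RATE] + bytes(CAPACITY)
            state = toy_permutation(bytes(a ^ b for a, b in zip(state, block)))
        out = b""
        while len(out) < out_len:                                    # squeeze phase
            out += state[:RATE]
            state = toy_permutation(state)
        return out[:out_len]

    print(sponge_hash(b"hello").hex())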
Last updated:  2025-05-13
Secret and Shared Keys Recovery on Hamming Quasi-Cyclic with SASCA
Chloé Baïsse, Antoine Moran, Guillaume Goy, Julien Maillard, Nicolas Aragon, Philippe Gaborit, Maxime Lecomte, and Antoine Loiseau
Soft Analytical Side Channel Attacks (SASCA) are a powerful family of Side Channel Attacks (SCA) that allows the recovery of secret values with only a small number of traces. Their effectiveness lies in the Belief Propagation (BP) algorithm, which enables efficient computation of the marginal distributions of intermediate values. Post-quantum schemes such as Kyber, and more recently, Hamming Quasi-Cyclic (HQC), have been targets of SASCA. Previous SASCA on HQC focused on Reed-Solomon (RS) codes and successfully retrieved the shared key with a high success rate for high noise levels using a single trace. In this work, we present new SASCA on HQC, where both the shared key and the secret key are targeted. Our attacks are carried out in simulation. Unlike the previous SASCA, we take a closer look at the Reed-Muller (RM) code. The advantage of this choice is that the RM decoder is applied before the RS decoder, enabling attacks targeting both the secret key and the shared key. We build a factor graph of the Fast Hadamard Transform (FHT) function from the HQC reference implementation of April 2023. The information recovered from BP allows us to retrieve the shared key with a single trace. In addition to previous SASCA targeting HQC, we also manage to recover the secret key with two different chosen ciphertext attacks. One of them requires a single trace and is successful up to high noise levels.
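The Fast Hadamard Transform whose factor graph is targeted has a simple butterfly structure; the sketch below is a generic textbook implementation in Python (not the HQC reference code), with an illustrative input.

    # In-place Walsh-Hadamard (Fast Hadamard) transform; length must be a power of two.
    def fht(v):
        v = list(v)
        h = 1
        while h < len(v):
            for i in range(0, len(v), 2 * h):
                for j in range(i, i + h):
                    a, b = v[j], v[j + h]
                    v[j], v[j + h] = a + b, a - b
            h *= 2
        return v

    print(fht([1, 0, 1, 0, 0, 1, 1, 0]))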
Last updated:  2025-05-13
On Graphs of Incremental Proofs of Sequential Work
Hamza Abusalah
In this work, we characterize graphs of (graph-labeling) incremental proofs of sequential work (iPoSW). First, we define incremental graphs and prove they are necessary for iPoSWs. Relying on the space pebbling complexity of incremental graphs, we show that the depth-robust graphs underlying the PoSW of Mahmoody et al. are not incremental, and hence, their PoSW cannot be transformed into an iPoSW. Second, and toward a generic iPoSW construction, we define graphs whose structure is compatible with the incremental sampling technique (Döttling et al.). These are dynamic graphs. We observe that the graphs underlying all PoSWs, standalone or incremental, are dynamic. We then generalize current iPoSW schemes by giving a generic construction that transforms any PoSW whose underlying graph is incremental and dynamic into an iPoSW. As a corollary, we get a new iPoSW based on the modified Cohen-Pietrzak graph (Abusalah et al.). When used in constructing blockchain light-client bootstrapping protocols (Abusalah et al.), such an iPoSW results in the most efficient bootstrappers/provers, in terms of both proof size and space complexity. Along the way, we show that previous iPoSW definitions allow for trivial solutions. To overcome this, we provide a refined definition that captures the essence of iPoSWs and is satisfied by all known iPoSW constructions.
Last updated:  2025-05-13
Deterministic algorithms for class group actions
Marc Houben
We present an algorithm for the CSIDH protocol that is fully deterministic and strictly constant time. It does not require dummy operations and can be implemented without conditional branches. Our proof-of-concept C implementation shows that a key exchange can be performed in a constant (i.e. fixed) number of finite field operations, independent of the secret keys. The algorithm relies on a technique reminiscent of the standard Montgomery ladder, and applies to the computation of isogenies that divide an endomorphism of smooth degree represented by its kernel. We describe our method in the general context of class group actions on oriented elliptic curves, giving rise to a large family of non-interactive key exchanges different from CSIDH.
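For intuition about the ladder pattern the algorithm is reminiscent of, the sketch below shows a standard Montgomery ladder for modular exponentiation: every secret bit triggers the same fixed sequence of operations. This is a generic analogy with toy numbers, not the paper's isogeny or class-group-action computation.

    # Montgomery ladder for modular exponentiation with a fixed operation pattern.
    def montgomery_ladder_pow(base, exponent, modulus, bits=16):
        r0, r1 = 1, base % modulus
        for i in reversed(range(bits)):        # fixed number of iterations
            bit = (exponent >> i) & 1
            if bit:                            # a hardened implementation would use a
                r0, r1 = r1, r0                # branch-free conditional swap here
            r1 = (r0 * r1) % modulus           # the same multiply/square pair runs
            r0 = (r0 * r0) % modulus           # in every iteration, bit-independently
            if bit:
                r0, r1 = r1, r0
        return r0

    assert montgomery_ladder_pow(5, 117, 1009, bits=16) == pow(5, 117, 1009)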
Last updated:  2025-05-13
Faster Hash-based Multi-valued Validated Asynchronous Byzantine Agreement
Hanwen Feng, Zhenliang Lu, Tiancheng Mai, and Qiang Tang
Multi-valued Validated Byzantine Agreement (MVBA) is vital for asynchronous distributed protocols like asynchronous BFT consensus and distributed key generation, making performance improvements a long-standing goal. Existing communication-optimal MVBA protocols rely on computationally intensive public-key cryptographic tools, such as non-interactive threshold signatures, which are also vulnerable to quantum attacks. While hash-based MVBA protocols have been proposed to address these challenges, their higher communication overhead has raised concerns about practical performance. We present a novel MVBA protocol with adaptive security, relying exclusively on hash functions to achieve post-quantum security. Our protocol delivers near-optimal communication, constant round complexity, and significantly reduced latency compared to existing schemes, though it has sub-optimal resilience, tolerating up to 20% Byzantine corruptions instead of the typical 33%. For example, with an input size of 1.75 MB, it reduces latency by 81% over previous hash-based approaches.
Last updated:  2025-05-13
Full-Authority Data Sharing Systems: Ciphertext-Dependent Proxy Re-Encryption with Dynamic Key Generation
Haotian Yin, Jie Zhang, Wanxin Li, Yuji Dong, Eng Gee Lim, and Dominik Wojtczak
Proxy re-encryption (PRE) is a powerful primitive for secure cloud storage sharing. Suppose Alice stores encrypted datasets (ciphertexts) in a cloud server (proxy). If Bob requests data sharing, Alice shares the ciphertexts by computing and sending a re-encryption key to the proxy, which will perform the re-encryption operation that generates ciphertexts decryptable by Bob. Still, the proxy cannot access the plaintexts/datasets. Traditionally, the re-encryption key can convert all of Alice's ciphertexts, and the proxy should operate the re-encryption on the ciphertexts selected by the users (Alice/Bob). There is a trust issue: Alice must grant full decryption rights (losing control) to rely on proxy-enforced access policies (vulnerable to collusion). Existing PRE schemes fail to reconcile fine-grained control with collusion resistance. If Alice uses different keys to encrypt each dataset, the re-encryption complexity is linear in the number of requested datasets. We propose full-authority data sharing, a novel paradigm combining ciphertext-dependent PRE (cdPRE) and dynamic key generation (dKG). Unlike traditional PRE, cdPRE binds re-encryption keys to specific ciphertexts, ensuring collusion resistance (i.e., proxy + Bob cannot access unauthorised data). dKG dynamically connects keys via key derivation functions; for example, the chain system reduces the per-dataset delegation cost for sequential release in publication/subscription systems, in contrast to trivial solutions whose delegation cost grows with the number of datasets. We instantiate this paradigm with Kyber (NIST-PQC standardised) and AES, prove its security, and experimentally verify the high efficiency of the scheme.
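One simple way to picture the chain flavor of dynamic key generation (a generic hash-chain sketch with hypothetical names, not the paper's exact dKG or its Kyber/AES instantiation): each dataset key is derived from its predecessor with a KDF, so delegating a single key unlocks every subsequent dataset in a sequential-release setting.

    # Derive a chain of dataset keys; handing out the key of epoch i lets the
    # recipient derive all later keys without further per-dataset delegation.
    import hashlib

    def next_key(key: bytes, info: bytes = b"dataset-chain") -> bytes:
        return hashlib.sha256(key + info).digest()

    def keys_from(start_key: bytes, count: int):
        keys, k = [], start_key
        for _ in range(count):
            keys.append(k)
            k = next_key(k)
        return keys

    root = hashlib.sha256(b"owner master secret").digest()  # hypothetical root key
    delegated = keys_from(next_key(root), 5)                # one delegation, five datasets
    print([k.hex()[:16] for k in delegated])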
Last updated:  2025-05-13
Walnut: A Generic Framework with Enhanced Scalability for BFT Protocols
Lei Tian, Chenke Wang, Yu Long, Xian Xu, Mingchao Wan, Chunmiao Li, Shi-Feng Sun, and Dawu Gu
The performance of traditional BFT protocols significantly decreases as the number of replicas grows, and thus they support only up to a few hundred replicas. Such scalability issues severely limit the application scenarios of BFT. Meanwhile, the committee sampling technique has the potential to scale the replica size significantly by selecting a small portion of replicas as the committee and then conveying the consensus results to the rest. However, this technique is rarely used in BFT, and there is still a lack of methods to scale an already-deployed traditional BFT protocol to support more replicas without the costly re-deployment of a new protocol. This paper introduces Walnut, a secure and generic committee-sampling-based modular consensus. Specifically, we use a verifiable random function for committee election and integrate committee rotation with the consensus. The resulting construction ensures that each selected committee is of a fixed size and acknowledged by all replicas, even in a partially synchronous network. For Walnut, we provide a rigorous definition and outline the necessary properties of each module to achieve safety and liveness. To clarify Walnut's effectiveness, we apply this framework to HotStuff to obtain the Walnut-HS protocol, together with a proof of fit-in. We also implement Walnut-HS and compare its performance with HotStuff, using up to 100 Amazon EC2 instances in WAN. The experiments show that Walnut-HS can easily scale to 1,000 replicas with only a slight performance degradation, while HotStuff performs poorly or even breaks at larger scales. Besides, Walnut-HS performs well in comparison with HotStuff in small-scale experiments; for example, the peak throughput of Walnut-HS is up to 38.6% higher than that of HotStuff.
Last updated:  2025-05-13
DUPLEX: Scalable Zero-Knowledge Lookup Arguments over RSA Group
Semin Han, Geonho Yoon, Hyunok Oh, and Jihye Kim
Lookup arguments enable a prover to convince a verifier that a committed vector of lookup elements is contained within a predefined table. These arguments are particularly beneficial for enhancing the performance of SNARKs in handling non-arithmetic operations, such as batched range checks or bitwise operations. While existing works have achieved efficient and succinct lookup arguments, challenges remain, particularly when dealing with large vectors of lookup elements in privacy-sensitive applications. In this paper, we introduce DUPLEX, a scalable zero-knowledge lookup argument scheme that offers significant improvements over previous approaches. Notably, we present the first lookup argument designed to operate over the RSA group. Our core technique allows for the transformation of lookup elements into prime numbers to ensure compatibility with the RSA group, all without imposing substantial computational costs on the prover. DUPLEX achieves an asymptotic proving time that scales with the number of lookup elements, with constant-sized proofs, constant-time verification, and a public parameter size independent of the table size. Additionally, DUPLEX ensures the privacy of lookup elements and is robust against dynamic table updates, making it highly suitable for scalable verifiable computation in real-world applications. We implemented and empirically evaluated DUPLEX, comparing it with the state-of-the-art zero-knowledge lookup argument Caulk [CCS'22]. Our experimental results demonstrate that DUPLEX significantly outperforms Caulk in proving time for both single and batched lookup arguments, while maintaining practical proof size and verification time.
Last updated:  2025-05-13
BPDTE: Batch Private Decision Tree Evaluation via Amortized Efficient Private Comparison
Huiqiang Liang, Haining Lu, Yifeng Guo, Geng Wang, Haining Yu, Hongli Zhang, Baoyu An, Jinyu Li, and Li Su
Machine learning as a service requires clients to entrust their information to the server, raising privacy concerns. Private Decision Tree Evaluation (PDTE) has been proposed to address these concerns for decision trees, which are fundamental models in machine learning. However, existing solutions perform poorly on massive datasets in real-world applications; therefore, we focus on the batching variant, called BPDTE, which supports a single evaluation on multiple inputs and significantly improves performance. Firstly, we propose three private comparison (PrivCMP) algorithms that privately compare two numbers to determine which one is larger, by utilizing thermometer encoding and a novel folklore-inspired dichotomy method. These algorithms are non-interactive, batchable, and high-precision, achieving an amortized cost of less than 1 ms at 32-bit precision. Secondly, we propose six BPDTE schemes based on our PrivCMP and the Clear Rows Relation (CRR) algorithm, introduced to ensure batching security. Experimental results show that our schemes improve amortized efficiency, offer more flexible batch sizes, and achieve higher precision. Finally, we provide a formal security analysis of these schemes.
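As a plaintext illustration of the encoding behind the comparison step (shown in the clear with toy precision; the actual PrivCMP algorithms evaluate this homomorphically and in batches), thermometer encoding turns a comparison into reading a single slot.

    # Thermometer (unary) encoding: slot i of enc(x) is 1 exactly when i < x,
    # so "x < y" reduces to reading slot x of the encoding of y.
    BITS = 8  # toy precision; the paper reports 32-bit precision

    def thermometer(x, n=1 << BITS):
        return [1 if i < x else 0 for i in range(n)]

    def less_than(x, y):
        return thermometer(y)[x] == 1

    assert less_than(13, 42) and not less_than(42, 13) and not less_than(7, 7)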
Last updated:  2025-05-12
Post-Quantum PKE from Unstructured Noisy Linear Algebraic Assumptions: Beyond LWE and Alekhnovich's LPN
Riddhi Ghosal, Aayush Jain, Paul Lou, Amit Sahai, and Neekon Vafa
Noisy linear algebraic assumptions with respect to random matrices, in particular Learning with Errors (LWE) and Alekhnovich Learning Parity with Noise (Alekhnovich LPN), are among the most investigated assumptions that imply post-quantum public-key encryption (PKE). They enjoy elegant mathematical structure. Indeed, efforts to build post-quantum PKE and advanced primitives such as homomorphic encryption and indistinguishability obfuscation have increasingly focused their attention on these two assumptions and their variants. Unfortunately, this increasing reliance on these two assumptions for building post-quantum cryptography leaves us vulnerable to potential quantum (and classical) attacks on Alekhnovich LPN and LWE. Quantum algorithms are a rapidly advancing area, and we must stay prepared for unexpected cryptanalytic breakthroughs. Just three decades ago, a short time frame in the development of our field, Shor's algorithm rendered most then-popular number theoretic and algebraic assumptions quantumly broken. Furthermore, within the last several years, we have witnessed major classical and quantum breaks on several assumptions previously introduced for post-quantum cryptography. Therefore, we ask the following question: In a world where both LWE and Alekhnovich LPN are broken, can there still exist noisy linear assumptions that remain plausibly quantum hard and imply PKE? To answer this question positively, we introduce two natural noisy-linear algebraic assumptions that are both with respect to random matrices, exactly like LWE and Alekhnovich LPN, but with different error distributions. Our error distribution combines aspects of both small-norm and sparse error distributions. We design a PKE from these assumptions and give evidence that these assumptions are likely to still be secure even in a world where both the LWE and Alekhnovich LPN assumptions are simultaneously broken. We also study basic properties of these assumptions, and show that in the parameter settings we employ to build PKE, neither of them is a "lattice" assumption in the sense that we do not see a way to attack them using a lattice closest vector problem solver, except via NP-completeness reductions.
Last updated:  2025-05-12
Rerandomizable Garbling, Revisited
Raphael Heitjohann, Jonas von der Heyden, and Tibor Jager
In key-and-message homomorphic encryption (KMHE), the key space is a subset of the message space, allowing encryption of secret keys such that the same homomorphism can be applied to both the key and the message of a given ciphertext. KMHE with suitable security properties is the main building block for constructing rerandomizable garbling schemes (RGS, Gentry et al., CRYPTO 2010), which enable advanced cryptographic applications like multi-hop homomorphic encryption, the YOSO-like MPC protocol SCALES (Acharya et al., TCC 2022 and CRYPTO 2024), and more. The BHHO scheme (Boneh et al., CRYPTO 2008) is currently the only known KMHE scheme suitable for constructing RGS. An encryption of a secret key consists of $\mathcal{O}(\lambda^2)$ group elements, where $\lambda$ is the security parameter, which incurs a significant bandwidth and computational overhead that makes the scheme itself, and the RGS protocols building upon it, impractical. We present a new, more efficient KMHE scheme with linear-size ciphertexts. Despite using heavier cryptographic tools (pairings instead of plain DDH-hard groups), the concrete ciphertext size and computational costs are reduced very significantly. We shrink the ciphertext by 97.83 % (from 16.68 MB to 360 kB) and reduce the estimated computation for encryption by 99.996 % (from 4 minutes to 0.01 seconds) in comparison to BHHO. Additionally, we introduce gate KMHE as a new tool to build more efficient RGS. Our RGS construction shrinks the size of a garbled gate by 98.99 % (from 133.43 MB to 1.35 MB) and decreases the estimated cost of garbling by 99.998 % (from 17 minutes to 0.02 seconds per gate) in comparison to Acharya et al. In summary, our work shows for the first time that RGS and the SCALES protocol (and hence YOSO-like MPC) are practically feasible for simple circuits.
Last updated:  2025-05-12
Fully Asymmetric Anamorphic Homomorphic Encryption from LWE
Amit Deo and Benoît Libert
As introduced by Persiano {\it et al.} (Eurocrypt'22), anamorphic encryption (AE) is a primitive enabling private communications against a dictator that forces users to surrender their decryption keys. In its fully asymmetric flavor (defined by Catalano {\it et al.}, Eurocrypt'24), anamorphic channels can work as hidden public-key mechanisms in the sense that anamorphic encryptors are not necessarily able to decrypt anamorphic ciphertexts. Unfortunately, fully asymmetric AE is hard to come by and even impossible to obtain from ordinary public-key encryption via black-box constructions. So far, only three schemes are known to rely on well-established assumptions. In this paper, we exhibit constructions from the standard LWE assumption based on Regev's cryptosystem and its dual version. In both cases, we retain the additive homomorphism of the schemes. We additionally show that dual Regev is public-key anamorphic in the sense of Persiano {\it et al.} (Crypto'24). In the FHE setting, we show that the dual GSW system provides fully asymmetric AE (while preserving its leveled homomorphism) when instantiated with binary/ternary secret keys. Along the way, we discuss the extent to which our schemes satisfy a generalization of Banfi {\it et al.}'s notion of robustness (Eurocrypt'24) to the case of homomorphically evaluated ciphertexts.
Last updated:  2025-05-12
Improvements on the schemes VOX and QR UOV: When minus is a plus
Pierre Varjabedian
In this article, we present an improvement to QR UOV and a repair of VOX. Thanks to the minus perturbation, we are able to fully exploit the parameters in order to reduce the public key; it also helps us repair the VOX scheme. With QR UOV- we are able to significantly reduce the size of the public key at the cost of a small increase in signature size, while with VOX- we can obtain a public key as short as . VOX- also maintains a very short signature size. We also show that the minus perturbation is very useful with schemes that use the QR (quotient ring) perturbation.
Last updated:  2025-05-12
MPC with Publicly Identifiable Abort from Pseudorandomness and Homomorphic Encryption
Marc Rivinius
Publicly identifiable abort is a critical feature for ensuring accountability in outsourced computations using secure multiparty computation (MPC). Despite its importance, no prior work has specifically addressed identifiable abort in the context of outsourced computations. In this paper, we present the first MPC protocol that supports publicly identifiable abort with minimal overhead for external clients. Our approach minimizes client-side computation by requiring only a few pseudorandom function evaluations per input. On the server side, the verification process involves lightweight linear function evaluations using homomorphic encryption. This results in verification times of a few nanoseconds per operation for servers, with client overhead being approximately two orders of magnitude lower. Additionally, the public verifiability of our protocol reduces client input/output costs compared to SPDZ-based protocols, on which we base our protocol. For example, in secure aggregation use cases, our protocol achieves over twice the efficiency during the offline phase and up to an 18 % speedup in the online phase, significantly outperforming SPDZ.
Last updated:  2025-05-12
Verifiable E-Voting with a Trustless Bulletin Board
Daniel Rausch, Nicolas Huber, and Ralf Kuesters
Voter privacy and end-to-end (E2E) verifiability are critical features of electronic voting (e-voting) systems to safeguard elections. These properties are commonly achieved by assuming a perfect bulletin board (BB) that provides consistent, reliable, and tamper-proof storage and transmission of voting data. However, in practice, BBs operate in asynchronous and unreliable networks, and hence, are susceptible to vulnerabilities such as equivocation attacks and dropped votes, which can compromise both verifiability and privacy. Although prior research has weakened the perfect BB assumption, it still depends on trusting certain BB components. In this work, we present and initiate a formal exploration of designing e-voting systems based on fully untrusted BBs. For this purpose, we leverage the notion of accountability and in particular use accountable BBs. Accountability ensures that if a security breach occurs, then cryptographic evidence can identify malicious parties. Fully untrusted BBs running in asynchronous networks bring new challenges. Among others, we identify several types of attacks that a malicious but accountable BB might be able to perform and propose a new E2E verifiability notion for this setting. Based on this notion and as a proof of concept, we construct the first e-voting system that is provably E2E verifiable and provides vote privacy even when the underlying BB is fully malicious. This establishes an alternative to traditional e-voting architectures that rely on (threshold) trusted BB servers.
Last updated:  2025-05-12
Convolution-Friendly Image Compression with FHE
Axel Mertens, Georgio Nicolas, and Sergi Rovira
During the past few decades, the field of image processing has grown to cradle hundreds of applications, many of which are outsourced to be computed on trusted remote servers. More recently, Fully Homomorphic Encryption (FHE) has grown in parallel as a powerful tool enabling computation on encrypted data, and transitively on untrusted servers. As a result, new FHE-supported applications have emerged, but not all have reached practicality due to hardware, bandwidth or mathematical constraints inherent to FHE. One example is processing encrypted images, where practicality is closely related to bandwidth availability. In this paper, we propose and implement a novel technique for FHE-based image compression and decompression. Our technique is a stepping stone towards practicality of encrypted image-processing and applications such as private inference, object recognition, satellite-image searching or video editing. Inspired by the JPEG standard, and with new FHE-friendly compression/decompression algorithms, our technique allows a client to compress and encrypt images before sending them to a server, greatly reducing the required bandwidth. The server homomorphically decompresses a ciphertext to obtain an encrypted image to which generic pixel-wise processing or convolutional filters can be applied. To reduce the round-trip bandwidth requirement, we also propose a method for server-side post-processing compression. Using our pipeline, we demonstrate that a high-definition grayscale image () can be homomorphically decompressed, processed and re-compressed in s with a compression ratio of 100/34.4 on a standard personal computer without compromising on fidelity.
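For readers unfamiliar with the JPEG-style pipeline the paper takes inspiration from, the sketch below performs blockwise DCT, coefficient truncation, and uniform quantization on unencrypted data. The function names and the keep/step parameters are illustrative choices of ours; the paper's contribution is an FHE-friendly variant of this kind of pipeline, which this plaintext sketch does not capture.

    # Plaintext sketch of JPEG-style blockwise compression: 8x8 DCT, dropping
    # high frequencies, and uniform quantization. Classical idea only; the
    # paper adapts this pipeline so it can be evaluated under FHE.
    import numpy as np

    def dct_matrix(n=8):
        k = np.arange(n)
        C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        C[0, :] /= np.sqrt(2)
        return C * np.sqrt(2 / n)          # orthonormal DCT-II matrix

    def compress_block(block, keep=4, step=16):
        C = dct_matrix()
        coeffs = C @ block @ C.T           # 2-D DCT
        coeffs[keep:, :] = 0               # drop high frequencies
        coeffs[:, keep:] = 0
        return np.round(coeffs / step)     # uniform quantization

    def decompress_block(qcoeffs, step=16):
        C = dct_matrix()
        return C.T @ (qcoeffs * step) @ C  # inverse 2-D DCT

    img = np.add.outer(np.arange(8.0), np.arange(8.0)) * 15   # smooth test block
    rec = decompress_block(compress_block(img))
    print("mean absolute error:", np.abs(img - rec).mean())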
Last updated:  2025-05-12
T-Spoon: Tightly Secure Two-Round Multi-Signatures with Key Aggregation
Renas Bacho and Benedikt Wagner
Multi-signatures over pairing-free cyclic groups have seen significant advancements in recent years, including achieving two-round protocols and supporting key aggregation. Key aggregation enables the combination of multiple public keys into a single succinct aggregate key for verification and has essentially evolved from an optional feature to a requirement. To enhance the concrete security of two-round schemes, Pan and Wagner (Eurocrypt 2023, 2024) introduced the first tightly secure constructions in this setting. However, their schemes do not support key aggregation, and their approach inherently precludes a single short aggregate public key. This leaves the open problem of achieving tight security and key aggregation simultaneously. In this work, we solve this open problem by presenting the first tightly secure two-round multi-signature scheme in pairing-free groups supporting key aggregation. As for Pan and Wagner's schemes, our construction is based on the DDH assumption. In contrast to theirs, it also has truly compact signatures, with signature size asymptotically independent of the number of signers.
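To illustrate what key aggregation buys, the toy below combines three public keys into one short aggregate key using hash-derived coefficients, in the generic MuSig style the abstract alludes to. This is not the tightly secure scheme constructed in the paper; the group (a Mersenne-prime field) and all function names are ours and far from deployment-grade choices.

    # Toy key aggregation for multi-signatures: each public key enters the
    # aggregate with a hash-derived coefficient, yielding one short key.
    import hashlib

    p = 2**127 - 1     # a Mersenne prime: a toy group, not a real crypto group
    g = 3

    def H(*parts):
        h = hashlib.sha256()
        for part in parts:
            h.update(str(part).encode())
        return int.from_bytes(h.digest(), "big")

    def keygen(seed):
        sk = H("sk", seed) % (p - 1)
        return sk, pow(g, sk, p)

    def aggregate(pks):
        L = sorted(pks)                    # the multiset of public keys
        agg = 1
        for pk in pks:
            agg = agg * pow(pk, H(L, pk), p) % p
        return agg

    sks, pks = zip(*(keygen(i) for i in range(3)))
    agg_pk = aggregate(list(pks))
    # sanity check: the aggregate key is g raised to the aggregated exponent
    agg_sk = sum(H(sorted(pks), pk) * sk for sk, pk in zip(sks, pks)) % (p - 1)
    assert pow(g, agg_sk, p) == agg_pk
    print("one short aggregate key stands in for all three signers")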
Last updated:  2025-05-12
Correlation power analysis of LESS and CROSS
Maciej Czuprynko, Anisha Mukherjee, and Sujoy Sinha Roy
This paper presents a side-channel attack targeting the LESS and CROSS post-quantum digital signature schemes, resulting in full key recovery for both. These schemes have advanced to the second round of NIST’s call for additional signatures. By leveraging correlation power analysis and horizontal attacks, we are able to recover the secret key by observing the power consumption during the multiplication of an ephemeral secret vector with a public matrix. The attack path is enabled by the presence of a direct link between the secret key elements and the ephemeral secret, given correct responses. This attack targets version 1.2 of both schemes. In both settings we can recover the secret key in a single trace for NIST’s security level I parameter set. Additionally, we propose improvements to the existing horizontal attack on CROSS, reducing the number of rounds that need to be observed by an order of magnitude for the same parameter sets.
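As a minimal reminder of how correlation power analysis works, the simulation below recovers a single secret byte by correlating hypothetical Hamming-weight leakage with noisy synthetic traces. It is a generic, self-contained toy under a Hamming-weight leakage model, not the vector-matrix multiplication setting of LESS and CROSS.

    # Minimal CPA simulation: guess a secret byte by correlating hypothetical
    # Hamming-weight leakage with noisy synthetic "power traces".
    import numpy as np

    rng = np.random.default_rng(7)
    secret = 0x3C
    inputs = rng.integers(0, 256, 2000)
    hw = np.array([bin(v).count("1") for v in range(256)])

    # simulated leakage: Hamming weight of (input XOR secret) plus Gaussian noise
    traces = hw[inputs ^ secret] + rng.normal(0, 1.0, inputs.size)

    best_guess = max(range(256),
                     key=lambda g: abs(np.corrcoef(hw[inputs ^ g], traces)[0, 1]))
    assert best_guess == secret
    print(f"recovered key byte 0x{best_guess:02X}")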
Last updated:  2025-05-12
Post-Quantum Cryptography in eMRTDs: Evaluating PAKE and PKI for Travel Documents
Nouri Alnahawi, Melissa Azouaoui, Joppe W. Bos, Gareth T. Davies, SeoJeong Moon, Christine van Vredendaal, and Alexander Wiesmaier
Passports, identity cards and travel visas are examples of machine readable travel documents (MRTDs) or eMRTDs for their electronic variants. The security of the data exchanged between these documents and a reader is secured with a standardized password authenticated key exchange (PAKE) protocol known as PACE. A new world-wide protocol migration is expected with the arrival of post-quantum cryptography (PQC) standards. In this paper, we focus on the impact of this migration on constrained embedded devices as used in eMRTDs. We present a feasibility study of a candidate post-quantum secure PAKE scheme as the replacement for PACE on existing widely deployed resource-constrained chips. In a wider context, we study the size, performance and security impact of adding post-quantum cryptography with a focus on chip storage and certificate chains for existing eMRTDs. We show that if the required post-quantum certificates for the eMRTD fit in memory, the migration of existing eMRTD protocols to their post-quantum secure equivalent is already feasible, although a performance penalty has to be paid. When using a resource-constrained SmartMX3 P71D600 smart card, designed with classical cryptography in mind, a post-quantum secure PAKE algorithm using the recommended post-quantum parameters of the new PQC standard ML-KEM can be executed in under a second. This migration will be aided by future inclusion of dedicated hardware accelerators and increased memory to allow storage of larger keys and improve performance.
Last updated:  2025-05-12
Chosen Ciphertext Security via BARGs
Takahiro Matsuda
In this paper, we show a new set of cryptographic primitives that generically leads to chosen ciphertext secure (CCA secure) public-key encryption (PKE). Specifically, we show how a (non-interactive, publicly verifiable) batch argument (BARG) for NP can be combined with a chosen plaintext secure (CPA secure) PKE scheme to achieve a CCA secure one. The requirement on the succinctness of the proof size of the BARG used as a building block is arguably very mild: we only require it to be bounded by an expression involving a non-negative constant and polynomials in the security parameter $\lambda$, the statement size $n$, and the batch size $k$ (i.e. the number of statements whose correctness is simultaneously proved), and thus the proof size can even be (slightly) linear in the batch size $k$. A BARG with such succinctness is so weak that it cannot be used in the recent constructions of a non-interactive zero-knowledge proof system for NP based on a BARG (and a one-way function) by Bitansky et al. (STOC 2024) and Bradley, Waters, and Wu (TCC 2024). Therefore, our result gives a new building block that can upgrade CPA security into CCA security.
Last updated:  2025-05-11
KeyJoin: Privacy-Focused CoinJoin Protocol for Bitcoin
Dmitry Astakhin
Bitcoin is based on the Blockchain, an open ledger containing information about each transaction in the Bitcoin network. The blockchain serves many purposes, but it also allows anyone to track all transactions and activities of each Bitcoin address. The privacy of the network is threatened by organizations that track transactions. Tracking and subsequent filtering of coins lead to the loss of the exchangeability of Bitcoin. Despite Bitcoin’s transparency, it is possible to increase user privacy using a variety of existing methods. One of these methods, called CoinJoin, was proposed by Bitcoin developer Greg Maxwell in 2013. This technique involves combining several users’ transactions to create a single transaction with multiple inputs and outputs, which makes transaction analysis more complicated. This work describes KeyJoin, a privacy-focused CoinJoin protocol based on keyed-verification anonymous credentials (KVAC).
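The sketch below shows only the basic CoinJoin transaction-shaping step in data-structure form: several users' inputs and outputs are merged into one transaction and the equal-valued outputs are shuffled. The field names and values are made up for illustration, and nothing of KeyJoin's KVAC-based coordination is modeled here.

    # Toy sketch of the CoinJoin idea: merge several users' (inputs -> outputs)
    # into one transaction and shuffle the outputs, so an observer cannot pair
    # inputs with outputs by position. Transaction shaping only; no KVACs.
    import random

    user_txs = [
        {"inputs": [("alice_utxo_1", 1.0)], "outputs": [("addr_a", 1.0)]},
        {"inputs": [("bob_utxo_7", 1.0)],   "outputs": [("addr_b", 1.0)]},
        {"inputs": [("carol_utxo_3", 1.0)], "outputs": [("addr_c", 1.0)]},
    ]

    def coinjoin(txs):
        rng = random.Random(42)
        joined = {
            "inputs": [i for tx in txs for i in tx["inputs"]],
            "outputs": [o for tx in txs for o in tx["outputs"]],
        }
        rng.shuffle(joined["outputs"])   # equal-valued outputs become unlinkable
        return joined

    tx = coinjoin(user_txs)
    assert sum(v for _, v in tx["inputs"]) == sum(v for _, v in tx["outputs"])
    print(len(tx["inputs"]), "inputs and", len(tx["outputs"]), "outputs in one transaction")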
Last updated:  2025-05-11
Towards Optimal Differential Attacks on FLY and PIPO
Insung Kim, Seonggyeom Kim, Sunyeop Kim, Donggeun Kwon, Hanbeom Shin, Dongjae Lee, Deukjo Hong, Jaechul Sung, and Seokhie Hong
Lightweight block ciphers such as PIPO and FLY are designed to operate efficiently and securely in constrained environments. While the differential attack on PIPO-64-128 has already been studied by the designers, no concrete differential attack had been conducted for PIPO-64-256 and FLY. Motivated by this gap, we revisit the security of PIPO against differential attacks and generalize the analysis framework to make it applicable to structurally related ciphers. Based on this generalized framework, we search for key-recovery-attack-friendly distinguishers and apply clustering techniques to enhance their effectiveness in key-recovery attacks. As a result, we improve the previously proposed differential attack on PIPO-64-128, reducing the time complexity by a factor of . Furthermore, we propose a 13-round differential attack on PIPO-64-256, which covers two more rounds than the previous result. We also apply the same methodology to FLY and present the first differential attack on 12-round FLY, reaching one round beyond the best-known distinguisher. We believe this work improves the understanding of the structures of FLY and PIPO, and provides a basis for future research on advanced key-recovery attacks for related cipher designs.
Last updated:  2025-05-11
An Attack on TON’s ADNL Secure Channel Protocol
Aviv Frenkel and Dmitry Kogan
We present an attack on the Abstract Datagram Network Layer (ADNL) protocol used in The Open Network (TON), currently the tenth largest blockchain by market capitalization. In its TCP variant, ADNL secures communication between clients and specialized nodes called liteservers, which provide access to blockchain data. We identify two cryptographic design flaws in this protocol: a handshake that permits session-key replay and a non-standard integrity mechanism whose security critically depends on message confidentiality. We transform these vulnerabilities into an efficient plaintext-recovery attack by exploiting two ADNL communication patterns, allowing message reordering across replayed sessions. We then develop a plaintext model for this scenario and construct an efficient algorithm that recovers the keystream using a fraction of known plaintexts and a handful of replays. We implement our attack and show that an attacker intercepting the communication between a TON liteserver and a widely deployed ADNL client can recover the keystream used to encrypt server responses by performing eight connection replays to the server. This allows the decryption of sensitive data, such as account balances and user activity patterns. Additionally, the attacker can modify server responses to manipulate blockchain information displayed to the client, including account balances and asset prices.
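To see why replaying a session key is so damaging for a stream-cipher-like channel, the demo below encrypts two messages under the same keystream: XORing the ciphertexts cancels the keystream, and one predictable plaintext reveals the other. This is a generic illustration with made-up messages and a toy SHA-256 counter keystream, not a reimplementation of the ADNL attack or its message-reordering machinery.

    # Why keystream reuse across replayed sessions is fatal:
    # (p1 ^ ks) ^ (p2 ^ ks) ^ p1 = p2.
    import hashlib

    def keystream(session_key, length):
        out, counter = b"", 0
        while len(out) < length:
            out += hashlib.sha256(session_key + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:length]

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    session_key = b"replayed-session-key"
    known  = b"balance: 100 TON"          # response the attacker can predict
    secret = b"balance: 974 TON"          # response the attacker wants

    c1 = xor(known,  keystream(session_key, len(known)))
    c2 = xor(secret, keystream(session_key, len(secret)))   # same keystream reused

    recovered = xor(xor(c1, c2), known)
    assert recovered == secret
    print("recovered:", recovered.decode())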
Last updated:  2025-05-11
Efficient CPA Attack on Hardware Implementation of ML-DSA in Post-Quantum Root of Trust
Merve Karabulut and Reza Azarderakhsh
Side-channel attacks (SCA) pose a significant threat to cryptographic implementations, including those designed to withstand the computational power of quantum computers. This paper introduces the first side-channel attack on an industry-grade post-quantum cryptography implementation. Specifically, we present a Correlation Power Analysis (CPA) attack targeting the open-source hardware implementation of ML-DSA within a Silicon Root of Trust framework developed through a multi-party collaboration involving leading technology companies. Our attack focuses on the modular reduction process that follows the Number Theoretic Transform-based polynomial pointwise multiplication. By exploiting side-channel leakage from a distinctive reduction algorithm and leveraging the zeroization mechanism used to securely erase sensitive information by clearing internal registers, we significantly enhance the attack's efficacy. Our findings reveal that an adversary can extract the secret keys using only 10,000 power traces. With access to these keys, an attacker could forge signatures for certificate generation, thereby compromising the integrity of the root of trust. This work highlights the vulnerabilities of industry-standard root-of-trust systems to side-channel attacks. It underscores the urgent need for robust countermeasures to secure commercially deployed systems against such threats.
Last updated:  2025-05-10
Registered Functional Encryption for Attribute-Weighted Sums with Access Control
Tapas Pal and Robert Schädlich
In this work, we present Functional Encryption (FE) schemes for Attribute-Weighted Sums (AWS), introduced by Abdalla, Gong and Wee (Crypto 2020), in the registration-based setting (RFE). In such a setting, users sample their own public/private key pairs; a key curator registers user public keys along with their functions; encryption takes as input attribute-value pairs $(x, z)$ where $x$ is public and $z$ is private; and decryption recovers the weighted sum while leaking no additional information about $z$. Recently, Agrawal, Tomida and Yadav (Crypto 2023) studied the attribute-based case of AWS (AB-AWS) providing fine-grained access control, where the function is extended with an access policy, the input is extended with a corresponding attribute, and decryption recovers the weighted sum only if the attribute satisfies the policy. Our main results are the following: - We build the first RFE for the (AB-)1AWS functionality (a single attribute-value slot) that achieves adaptive indistinguishability-based security under the (bilateral) $k$-Lin assumption in prime-order pairing groups. Prior works achieve RFE for linear and quadratic functions without access control in the standard model, or for attribute-based linear functions in the generic group model. - We develop the first RFE for the AB-AWS functionality with an unbounded number of slots that achieves very selective simulation-based security under the bilateral $k$-Lin assumption. Here, “very selective” means that the adversary declares the challenge attribute values, all registered functions and corrupted users upfront. Previously, SIM-secure RFEs were only constructed for linear and quadratic functions without access control in the same security model. We devise a novel nested encoding mechanism that facilitates achieving attribute-based access control and unbounded inputs in the registration-based setting for AWS functionalities, proven secure in the standard model. In terms of efficiency, our constructions feature short public parameters, secret keys independent of the input length, and compact ciphertexts unaffected by the length of public inputs. Moreover, as required by RFE properties, all object sizes and algorithm costs scale poly-logarithmically with the total number of registered users in the system.
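For intuition about the underlying functionality, the snippet below evaluates an attribute-weighted sum in the clear, following the standard AWS formulation of a sum of inner products between function values on public attributes and private vectors. The concrete weight function and numbers are made up; no encryption or registration is modeled.

    # The attribute-weighted-sum functionality in the clear: for public
    # attributes x_i, private values z_i, and a function f, the decryptor
    # should learn only sum_i <f(x_i), z_i>. Toy numbers, no encryption.
    import numpy as np

    def f(x):                       # example weight function (a fixed linear map)
        return np.array([x[0] + x[1], 2 * x[0], x[1]])

    xs = [np.array([1, 2]), np.array([0, 3])]        # public attributes
    zs = [np.array([5, 1, 0]), np.array([2, 2, 2])]  # private values

    aws = sum(int(f(x) @ z) for x, z in zip(xs, zs))
    print("attribute-weighted sum:", aws)   # the only value the decryptor learns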
Last updated:  2025-05-10
The Meta-Complexity of Secret Sharing
Benny Applebaum and Oded Nir
A secret-sharing scheme allows the distribution of a secret $s$ among $n$ parties, such that only certain predefined “authorized” sets of parties can reconstruct the secret, while all other “unauthorized” sets learn nothing about $s$. The collection of authorized/unauthorized sets is defined by a monotone function $f$. It is known that any monotone function can be realized by a secret-sharing scheme; thus, the smallest achievable \emph{total share size} serves as a natural complexity measure. In this paper, we initiate the study of the following meta-complexity question: Given a monotone function $f$, is it possible to efficiently distinguish between cases where the secret-sharing complexity of $f$ is small versus large? We examine this question across several computational models, yielding the following main results. (Hardness for formulas and circuits): Given a monotone formula, it is coNP-hard to distinguish between ``cheap'' functions, where the maximum share size is 1 bit and the total share size is small, and ``expensive'' functions, where the maximum and total share sizes are large; the latter bound nearly matches the total share size of known secret-sharing constructions. For monotone circuits, we strengthen the bounds for the expensive case. These results rule out the existence of instance-optimal compilers that map a formula to a secret-sharing scheme with polynomially related complexity. (Hardness for truth tables): Under cryptographic assumptions, either (1) every $n$-bit slice function can be realized by a small secret-sharing scheme, or (2) given a truth-table representation of $f$, it is computationally infeasible to distinguish between functions of low and high secret-sharing complexity. Option (1) would be considered a breakthrough result, as the best-known construction for slices has sub-exponential complexity (Liu, Vaikuntanathan, and Wee; Eurocrypt 2018). Our proof introduces a new worst-case-to-average-case reduction for slices, which may be of independent interest. (Hardness for graphs): We examine the simple case where $f$ is given as a 2-DNF, represented by a graph whose edges correspond to 2-terms, and ask whether it is possible to distinguish between cases where the share size is constant and those where the share size is large. We establish several connections between this question and questions in communication complexity. For instance, we show that graphs admitting constant-cost secret sharing form a subclass of graphs with constant randomized communication complexity and constant-size adjacency sketches (Harms, Wild, and Zamaraev; STOC 2022). We leverage these connections to establish new lower bounds for specific graph families, derive a combinatorial characterization of graphs with constant-size linear secret-sharing schemes, and show that a natural class of myopic algorithms fails to distinguish cheap graphs from expensive ones.
Last updated:  2025-05-10
Engorgio: An Arbitrary-Precision Unbounded-Size Hybrid Encrypted Database via Quantized Fully Homomorphic Encryption
Song Bian, Haowen Pan, Jiaqi Hu, Zhou Zhang, Yunhao Fu, Jiafeng Hua, Yi Chen, Bo Zhang, Yier Jin, Jin Dong, and Zhenyu Guan
This work proposes an encrypted hybrid database framework that combines vectorized data search and relational data query over quantized fully homomorphic encryption (FHE). We observe that, due to the lack of efficient encrypted data ordering capabilities, most existing encrypted database (EDB) frameworks do not support hybrid queries involving both vectorized and relational data. To further enrich query expressiveness while retaining evaluation efficiency, we propose Engorgio, a hybrid EDB framework based on quantized data ordering techniques over FHE. Specifically, we design a new quantized data encoding scheme along with a set of novel comparison and permutation algorithms to accurately generate and apply orders between large-precision data items. Furthermore, we optimize specific query types, including full table scan, batched query, and Top-k query to enhance the practical performance of the proposed framework. In the experiment, we show that, compared to the state-of-the-art EDB frameworks, Engorgio is up to 28x--854x faster in homomorphic comparison, 65x--687x faster in homomorphic sorting and 15x--1,640x faster over a variety of end-to-end relational, vectorized, and hybrid SQL benchmarks. Using Engorgio, the amortized runtime for executing a relational and hybrid query on a 48-core processor is under 3 and 75 seconds, respectively, over a 10K-row hybrid database.
Last updated:  2025-05-10
Generic Construction of Secure Sketches from Groups
Axel Durbet, Koray Karabina, and Kevin Thiry-Atighehchi
Secure sketches are designed to facilitate the recovery of originally enrolled data from inputs that may vary slightly over time. This capability is important in applications where data consistency cannot be guaranteed due to natural variations, such as in biometric systems and hardware security. Traditionally, secure sketches are constructed using error-correcting codes to handle these variations effectively. Additionally, principles of information theory ensure the security of these sketches by managing the trade-off between data recoverability and confidentiality. In this paper, we show how to construct a new family of secure sketches generically from groups. The notion of groups with the unique factorization property is first introduced, which is of independent interest and serves as a building block for our secure sketch construction. Next, an in-depth study of the underlying mathematical structures is provided, and some computational and decisional hardness assumptions are defined. As a result, it is argued that our secure sketches are efficient; can handle a linear fraction of errors with respect to the $\ell_1$ distance; and that they are reusable and irreversible. To our knowledge, such a generic group-based secure sketch construction is the first of its kind, and it offers a viable alternative to the currently known secure sketches.
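For context, the snippet below implements the classical code-offset secure sketch that the abstract contrasts with, using a 3x repetition code: the sketch is the enrolled reading XORed with a random codeword, and recovery decodes the noisy reading back to that codeword. This illustrates the traditional error-correcting-code construction only, not the paper's new group-based family; all names and the toy 12-bit reading are ours.

    # Minimal code-offset secure sketch with a 3x repetition code:
    # SS(w) = w XOR c for a random codeword c; Rec(w', SS(w)) decodes
    # w' XOR SS(w) back to c and re-derives w.
    import secrets

    def encode(bits):               # repetition code: repeat each bit 3 times
        return [b for b in bits for _ in range(3)]

    def decode(bits):               # majority vote per group of 3
        return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

    def sketch(w):
        c = encode([secrets.randbelow(2) for _ in range(len(w) // 3)])
        return [wi ^ ci for wi, ci in zip(w, c)]

    def recover(w_noisy, ss):
        c = encode(decode([wi ^ si for wi, si in zip(w_noisy, ss)]))
        return [ci ^ si for ci, si in zip(c, ss)]

    w = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]     # enrolled reading (12 bits)
    ss = sketch(w)
    w_noisy = w.copy(); w_noisy[5] ^= 1          # one bit flips at reproduction
    assert recover(w_noisy, ss) == w
    print("enrolled reading recovered from the noisy one")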
Last updated:  2025-05-10
Universally Composable Interactive and Ordered Multi-Signatures
Carsten Baum, Bernardo David, Elena Pagnin, and Akira Takahashi
Multi-signatures allow a given set of parties to cooperate in order to create a digital signature whose size is independent of the number of signers. At the same time, no other set of parties can create such a signature. While non-interactive multi-signatures are known (e.g. BLS from pairings), many popular multi-signature schemes such as MuSig2 (which are constructed from pairing-free discrete logarithm-style assumptions) require interaction. Such interactive multi-signatures have recently found practical applications e.g. in the cryptocurrency space. Motivated by classical and emerging use cases of such interactive multi-signatures, we introduce the first systematic treatment of interactive multi-signatures in the universal composability (UC) framework. Along the way, we revisit existing game-based security notions and prove that constructions secure in the game-based setting can easily be made UC secure and vice versa. In addition, we consider interactive multi-signatures where the signers must interact in a fixed pattern (so-called ordered multi-signatures). Here, we provide the first construction of ordered multi-signatures based on the one-more discrete logarithm assumption, whereas the only other previously known construction required pairings. Our scheme achieves a stronger notion of unforgeability, guaranteeing that the adversary cannot obtain a signature altering the relative order of honest signers. We also present the first formalization of ordered multi-signatures in the UC framework and again show that our stronger game-based definitions are equivalent to UC security.
Last updated:  2025-05-10
A Note on ``CABC: A Cross-Domain Authentication Method Combining Blockchain with Certificateless Signature for IIoT''
Zhengjun Cao and Lihua Liu
We show that the authentication method [Future Gener. Comput. Syst. 158: 516-529 (2024)] cannot be practically implemented, because its signature scheme is insecure against a certificateless public-key replacement forgery attack. The explicit dependency between the certificateless public key and secret key is not properly used to construct intractable problems, such as the Elliptic Curve Discrete Logarithm (ECDL) problem. An adversary can find an efficient signing algorithm functionally equivalent to the valid signing algorithm. We also correct some typos in the original presentation.
Last updated:  2025-05-09
Conditional disclosure of secrets with quantum resources
Vahid R. Asadi, Kohdai Kuroiwa, Debbie Leung, Alex May, Sabrina Pasterski, and Chris Waddell
The conditional disclosure of secrets (CDS) primitive is among the simplest cryptographic settings in which to study the relationship between communication, randomness, and security. CDS involves two parties, Alice and Bob, who do not communicate but who wish to reveal a secret to a referee if and only if a Boolean function $f$ satisfies $f(x,y)=1$. Alice knows $x$, Bob knows $y$, and the referee knows both $x$ and $y$. Recently, a quantum analogue of this primitive called CDQS was defined and related to $f$-routing, a task studied in the context of quantum position-verification. CDQS has the same inputs, outputs, and communication pattern as CDS but allows the use of shared entanglement and quantum messages. We initiate the systematic study of CDQS, with the aim of better understanding the relationship between privacy and quantum resources in the information-theoretic setting. We begin by looking for quantum analogues of results already established in the classical CDS literature. Doing so, we establish a number of basic properties of CDQS, including lower bounds on entanglement and communication stated in terms of measures of communication complexity. Because of the close relationship to the $f$-routing position-verification scheme, our results have relevance to the security of these schemes.
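As a concrete (classical) instance of the primitive, the snippet below implements a textbook one-element CDS for the equality predicate over a prime field: with shared randomness (a, b), Alice sends a*x + b + s and Bob sends a*y + b, so the referee recovers s exactly when x equals y and otherwise sees a uniformly masked value. This is a standard example to fix ideas, not a protocol from the paper, which studies the quantum analogue CDQS.

    # Textbook CDS for the equality predicate f(x, y) = [x == y] over a prime
    # field: the difference of the two messages is s when x == y and is
    # uniformly masked by a*(x - y) otherwise.
    import secrets

    p = 2**61 - 1                     # a prime field for the demo

    def cds_equality(x, y, s):
        a = secrets.randbelow(p)      # shared randomness between Alice and Bob
        b = secrets.randbelow(p)
        msg_alice = (a * x + b + s) % p
        msg_bob   = (a * y + b) % p
        return msg_alice, msg_bob

    def referee(x, y, msg_alice, msg_bob):
        if x == y:                    # the referee knows both x and y
            return (msg_alice - msg_bob) % p
        return None                   # otherwise the difference is uniform noise

    s = 123456789
    ma, mb = cds_equality(42, 42, s)
    assert referee(42, 42, ma, mb) == s
    print("secret revealed exactly when x == y")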
Last updated:  2025-05-09
A note on closed addition chains and complete numbers
Theophilus Agama
We introduce a new class of addition chains and show that the numbers for which these chains are optimal satisfy the Scholz conjecture, precisely the inequality $\ell(2^n-1) \leq n - 1 + \ell(n)$, where $\ell(m)$ denotes the length of a shortest addition chain for $m$.
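The short brute-force check below computes shortest addition-chain lengths for small targets by iterative deepening (restricted to star chains, which is exact for targets this small) and verifies the Scholz inequality for n up to 6. It only sanity-checks the classical statement on tiny inputs and does not implement the new class of chains introduced in the note.

    # Brute-force shortest addition chains (iterative deepening over star
    # chains, exact for such small targets) and a check of the Scholz
    # inequality l(2^n - 1) <= n - 1 + l(n) for small n.
    from functools import lru_cache

    def extend(chain, limit, target):
        largest = chain[-1]
        if largest == target:
            return True
        if len(chain) > limit or largest << (limit - len(chain) + 1) < target:
            return False                    # cannot reach target even by doubling
        sums = sorted({a + chain[-1] for a in chain if a + chain[-1] <= target},
                      reverse=True)
        return any(extend(chain + [s], limit, target) for s in sums)

    @lru_cache(maxsize=None)
    def chain_length(n):                    # l(n): fewest additions to reach n from 1
        limit = 0
        while not extend([1], limit, n):
            limit += 1
        return limit

    for n in range(2, 7):
        lhs = chain_length(2**n - 1)
        rhs = n - 1 + chain_length(n)
        print(f"n={n}: l(2^n-1)={lhs} <= n-1+l(n)={rhs}")
        assert lhs <= rhs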
Last updated:  2025-05-09
Constant-time Integer Arithmetic for SQIsign
Fatna Kouider, Anisha Mukherjee, David Jacquemin, and Péter Kutas
SQIsign, the only isogeny-based signature scheme submitted to NIST’s additional signature standardization call, achieves the smallest public key and signature sizes among all post-quantum signature schemes. However, its existing implementation, particularly in its quaternion arithmetic operations, relies on GMP’s big integer functions, which, while efficient, are often not designed for constant-time execution. In this work, we take a step toward side-channel-protected SQIsign by implementing constant-time techniques for SQIsign’s big integer arithmetic, which forms the computational backbone of its quaternion module. For low-level fundamental functions, including Euclidean division, exponentiation, and integer square root, we either extend or tailor existing solutions to SQIsign's requirements, such as handling signed integers or scaling them for integers up to 12,000 bits. Further, we propose a novel constant-time modular reduction technique designed to handle dynamically changing moduli. Our implementation is written in C without reliance on high-level libraries such as GMP, and we evaluate its constant-time properties using Timecop with Valgrind, which confirms the absence of timing-dependent execution paths. We provide experimental benchmarks across various SQIsign parameter sizes to demonstrate the performance of our constant-time implementation.
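To illustrate the kind of branchless, mask-based selection that constant-time big-integer routines are built from, the sketch below performs a conditional subtraction (one step of a reduction) without a data-dependent branch. Python is used only to show the word-level logic; genuine constant-time behavior requires a language like C, as in the paper's implementation, and none of these helper names come from the paper.

    # Mask-based, branchless selection and conditional subtraction for 64-bit
    # words. The borrow bit of x - m decides which result is kept, with no
    # secret-dependent branch (real constant-time code would be written in C).
    MASK64 = (1 << 64) - 1

    def ct_mask(bit):
        """All-ones mask if bit == 1, all-zeros if bit == 0, without branching."""
        return (-bit) & MASK64

    def ct_select(cond_bit, a, b):
        """Return a if cond_bit == 1 else b, via masks rather than an if."""
        m = ct_mask(cond_bit)
        return (a & m) | (b & ~m & MASK64)

    def ct_cond_subtract(x, m):
        """Subtract m from x only when x >= m, keeping the result branchless."""
        diff = (x - m) & MASK64
        borrow = ((x - m) >> 64) & 1        # 1 iff x < m (for 64-bit inputs)
        return ct_select(1 - borrow, diff, x)

    assert ct_cond_subtract(10, 7) == 3
    assert ct_cond_subtract(5, 7) == 5
    print("branchless conditional subtraction behaves as expected")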
Last updated:  2025-05-09
Worst-Case Time Analysis of Key Agreement Protocols in 10BASE-T1S Automotive Networks
Teodora Ljubevska, Alexander Zeh, Donjete Elshani Rama, and Ken Tindell
With the rise of in-vehicle and car-to-x communication systems, ensuring robust security in automotive networks is becoming increasingly vital. As the industry shifts toward Ethernet-based architectures, the IEEE 802.1AE MACsec standard is gaining prominence as a critical security solution for future in-vehicle networks (IVNs). MACsec utilizes the MACsec Key Agreement Protocol (MKA), defined in the IEEE 802.1X standard, to establish secure encryption keys for data transmission. However, when applied to 10BASE-T1S Ethernet networks with multidrop topologies, MKA encounters a significant challenge known as the real-time paradox. This paradox arises from the competing demands of prioritizing key agreement messages and real-time control data, which conflict with each other. Infineon addresses this challenge with its innovative In-Line Key Agreement (IKA) protocol. By embedding key agreement information directly within a standard data frame, IKA effectively resolves the real-time paradox and enhances network performance. This paper establishes a theoretical worst-case delay bound for key agreement in multidrop 10BASE-T1S IVNs with more than two nodes, using Network Calculus techniques. The analysis compares the MKA and IKA protocols in terms of performance. For a startup scenario involving a 16-node network with a 50-byte MPDU size, the MKA protocol exhibits a worst-case delay that is 1080% higher than that of IKA. As the MPDU size increases to 1486 bytes, this performance gap narrows significantly, reducing the delay difference to just 6.6%.
Last updated:  2025-05-09
Simple Power Analysis Attack on SQIsign
Anisha Mukherjee, Maciej Czuprynko, David Jacquemin, Péter Kutas, and Sujoy Sinha Roy
The isogeny-based post-quantum digital signature algorithm SQIsign offers the most compact key and signature sizes among all candidates in the ongoing NIST call for additional post-quantum signature algorithms. To the best of our knowledge, we present the first Simple Power Analysis (SPA) side-channel attack on SQIsign, demonstrating its feasibility for key recovery. Our attack specifically targets secret-dependent computations within Cornacchia's algorithm, a fundamental component of SQIsign's quaternion module. At the core of this algorithm, a secret-derived yet ephemeral exponent is used in a modular exponentiation subroutine. By performing SPA on the modular exponentiation, we successfully recover this ephemeral exponent. We then develop a method to show how this leaked exponent can be exploited to ultimately reconstruct the secret signing key of SQIsign. Our findings emphasize the critical need for side-channel-resistant implementations of SQIsign, highlighting previously unexplored vulnerabilities in its design.
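The toy simulation below shows the generic principle behind such an SPA: on an unprotected left-to-right square-and-multiply, the sequence of squarings and multiplications visible in a power trace spells out the exponent bits. It is a generic illustration of exponent leakage, not a model of Cornacchia's algorithm or of the actual SQIsign implementation targeted in the paper.

    # Toy SPA on unprotected left-to-right square-and-multiply: if squarings
    # ('S') and multiplications ('M') are distinguishable in the trace, the
    # operation sequence directly reveals the exponent bits.

    def modexp_with_trace(base, exp, mod):
        ops, acc = [], 1
        for bit in bin(exp)[2:]:
            acc = acc * acc % mod
            ops.append("S")
            if bit == "1":
                acc = acc * base % mod
                ops.append("M")
        return acc, "".join(ops)

    def recover_exponent(trace):
        bits, i = "", 0
        while i < len(trace):                  # each 'S' is one bit; a following
            if trace[i + 1:i + 2] == "M":      # 'M' means that bit was 1
                bits += "1"; i += 2
            else:
                bits += "0"; i += 1
        return int(bits, 2)

    secret_exp = 0b101101110
    result, trace = modexp_with_trace(7, secret_exp, 1009)
    assert result == pow(7, secret_exp, 1009)
    assert recover_exponent(trace) == secret_exp
    print("exponent recovered from the S/M operation sequence:", bin(secret_exp))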