Papers updated in last 183 days (1689 results)

Last updated:  2025-04-14
Switching the Top Slice of the Sandwich with Extra Filling Yields a Stronger Boomerang for NLFSR-based Block Ciphers
Amit Jana, Mostafizar Rahman, Prathamesh Ram, Dhiman Saha, and Goutam Paul
The Boomerang attack was one of the first attempts to visualize a cipher $E$ as a composition of two sub-ciphers ($E = E_1 \circ E_0$) to devise and exploit two high-probability (say $p$ and $q$) shorter trails instead of relying on a single low-probability (say $pq$) longer trail for differential cryptanalysis. The attack generally works whenever $p^2q^2 > 2^{-n}$ for an $n$-bit cipher. However, it was later succeeded by the so-called ``sandwich attack'', which essentially splits the cipher into three parts ($E = E_1 \circ E_m \circ E_0$), adding an additional middle layer $E_m$ with distinguishing probability $r$, for an overall probability of $p^2q^2r$. It is primarily the generalization of a body of research in this direction that investigates what is referred to as the switching activity and captures the dependencies and potential incompatibilities of the layers that the middle layer separates. This work revisits the philosophy of the sandwich attack over multiple rounds for NLFSR-based block ciphers and introduces a new method to find high-probability boomerang distinguishers. The approach formalizes boomerang attacks using only ladder and AND switches. The cipher is treated as a specialized form of a sandwich, which we call the ``open-sandwich attack'', and we derive the distinguishing probability for this attack configuration. Using this approach, the study successfully identifies a deterministic boomerang distinguisher for the keyed permutation of the TinyJambu cipher over 320 rounds. Additionally, a 640-round boomerang is presented with a 95% success rate. In the related-key setting, we unveil full-round boomerangs for all three TinyJambu variants, demonstrating a 99% success rate. Similarly, for Katan-32, a more effective related-key boomerang spanning 140 rounds is uncovered with a 70% success rate. Further, in the single-key setting, an 84-round boomerang is found with a success rate of 60%. This research deepens the understanding of boomerang attacks, enhancing the toolkit cryptanalysts have for developing efficient and impactful attacks on NLFSR-based block ciphers.
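The quartet test underlying all of these distinguishers is simple to state in code. The sketch below is a generic estimation harness (toy 16-bit parameters; alpha and delta are hypothetical differences, not the paper's): it measures how often decrypting $C_1 \oplus \delta$ and $C_2 \oplus \delta$ returns plaintexts that again differ by $\alpha$. For an ideal cipher this happens with probability about $2^{-n}$, so any construction where it happens often is distinguished from random.

```python
import os, random

def boomerang_probability(enc, dec, alpha, delta, n_bits, trials=10_000):
    """Estimate Pr[ dec(enc(p1) ^ delta) ^ dec(enc(p1 ^ alpha) ^ delta) == alpha ]."""
    mask = (1 << n_bits) - 1
    hits = 0
    for _ in range(trials):
        p1 = int.from_bytes(os.urandom((n_bits + 7) // 8), "big") & mask
        c1, c2 = enc(p1), enc(p1 ^ alpha)
        p3, p4 = dec(c1 ^ delta), dec(c2 ^ delta)
        hits += (p3 ^ p4 == alpha)
    return hits / trials

# Control experiment with a random 16-bit permutation: the boomerang should
# come back with probability only ~2^-16.
random.seed(1)
perm = list(range(1 << 16)); random.shuffle(perm)
inv = [0] * (1 << 16)
for x, y in enumerate(perm):
    inv[y] = x
print(boomerang_probability(lambda x: perm[x], lambda y: inv[y],
                            alpha=0x0001, delta=0x8000, n_bits=16))
```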
Last updated:  2025-04-14
Reusable Fuzzy Extractors for Low-Entropy Distributions
Ran Canetti, Benjamin Fuller, Omer Paneth, Leonid Reyzin, and Adam Smith
Fuzzy extractors (Dodis et al., Eurocrypt 2004) convert repeated noisy readings of a secret into the same uniformly distributed key. To eliminate noise, they require an initial enrollment phase that takes the first noisy reading of the secret and produces a nonsecret helper string to be used in subsequent readings. Reusable fuzzy extractors (Boyen, CCS 2004) remain secure even when this initial enrollment phase is repeated multiple times with noisy versions of the same secret, producing multiple helper strings (for example, when a single person's biometric is enrolled with multiple unrelated organizations). We construct the first reusable fuzzy extractor that makes no assumptions about how multiple readings of the source are correlated (the only prior construction assumed a very specific, unrealistic class of correlations). The extractor works for binary strings with Hamming noise; it achieves computational security under assumptions on the security of hash functions or in the random oracle model. It is simple and efficient and tolerates near-linear error rates. Our reusable extractor is secure for source distributions of linear min-entropy rate. The construction is also secure for sources with much lower entropy rates--lower than those supported by prior (nonreusable) constructions--assuming that the distribution has some additional structure, namely, that random subsequences of the source have sufficient min-entropy. We show that such structural assumptions are necessary to support low entropy rates. We then explore further how different structural properties of a noisy source can be used to construct fuzzy extractors when the error rates are high, providing a computationally secure and an information-theoretically secure construction for large-alphabet sources.
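As a concrete reference point for the syntax, here is a minimal sketch of the classic code-offset construction for Hamming noise (Dodis et al.), using a toy repetition code and hypothetical parameters; it illustrates the Gen/Rep interface and the public helper string, not the reusable low-entropy construction of this paper.

```python
import os, hashlib

REP, K_BITS = 5, 16            # toy repetition code and key length

def bits(bs, n):               # helper: first n bits of a byte string
    return [(bs[i // 8] >> (i % 8)) & 1 for i in range(n)]

def gen(w):                    # enrollment: noisy reading w (REP*K_BITS bits)
    k = bits(os.urandom((K_BITS + 7) // 8), K_BITS)
    codeword = [b for b in k for _ in range(REP)]         # repetition encode
    helper = [wi ^ ci for wi, ci in zip(w, codeword)]     # public helper string
    return hashlib.sha256(bytes(k)).digest(), helper

def rep(w2, helper):           # reproduction from a fresh noisy reading w2
    noisy = [wi ^ hi for wi, hi in zip(w2, helper)]
    k = [int(sum(noisy[i * REP:(i + 1) * REP]) > REP // 2)  # majority decode
         for i in range(K_BITS)]
    return hashlib.sha256(bytes(k)).digest()

w = bits(os.urandom((REP * K_BITS + 7) // 8), REP * K_BITS)
key, helper = gen(w)
w2 = list(w); w2[3] ^= 1; w2[40] ^= 1    # a reading with two bit errors
assert rep(w2, helper) == key
```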
Last updated:  2025-04-14
SQIAsignHD: SQIsignHD Adaptor Signature
Farzin Renan and Péter Kutas
Adaptor signatures can be viewed as a generalized form of standard digital signature schemes by linking message authentication to the disclosure of a secret value. As a recent cryptographic primitive, they have become essential for blockchain applications, including cryptocurrencies, by reducing on-chain costs, improving fungibility, and enabling off-chain payments in payment-channel networks, payment-channel hubs, and atomic swaps. However, existing adaptor signature constructions are vulnerable to quantum attacks due to Shor's algorithm. In this work, we introduce SQIAsignHD, a new quantum-resistant adaptor signature scheme based on isogenies of supersingular elliptic curves, using SQIsignHD as the underlying signature scheme and exploiting the idea of the artificial orientation on the supersingular isogeny Diffie-Hellman key exchange protocol, SIDH, to define the underlying hard relation. We furthermore provide a formal security proof for our proposed scheme.
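The adaptor-signature interface itself (pre-sign, adapt, verify, extract) is independent of the underlying hard relation. Below is a minimal sketch instantiated with Schnorr over a deliberately tiny, insecure toy group; the paper's contribution is precisely to replace this discrete-log relation with an isogeny-based one.

```python
import hashlib, secrets

# Toy Schnorr group (hypothetical, insecure parameters, illustration only):
p, q, g = 1019, 509, 4                      # g generates the order-q subgroup

def H(*vals):
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q)
    return x, pow(g, x, p)

def pre_sign(x, m, Y):                      # Y = g^y is the adaptor statement
    r = secrets.randbelow(q)
    R = pow(g, r, p)
    c = H((R * Y) % p, m)
    return R, (r + c * x) % q               # pre-signature (R, s_hat)

def adapt(pre, y):                          # complete with the witness y
    R, s_hat = pre
    return R, (s_hat + y) % q

def verify(X, m, sig, Y):
    R, s = sig
    c = H((R * Y) % p, m)
    return pow(g, s, p) == (R * Y * pow(X, c, p)) % p

def extract(pre, sig):                      # recover the witness from both
    return (sig[1] - pre[1]) % q

x, X = keygen()
y = secrets.randbelow(q); Y = pow(g, y, p)
pre = pre_sign(x, "pay 1 coin", Y)
sig = adapt(pre, y)
assert verify(X, "pay 1 coin", sig, Y) and extract(pre, sig) == y
```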
Last updated:  2025-04-14
Aether: Approaching the Holy Grail in Asynchronous BFT
Xiaohai Dai, Chaozheng Ding, Julian Loss, and Ling Ren
State-of-the-art asynchronous Byzantine Fault Tolerance (BFT) protocols integrate a partially-synchronous optimistic path. The holy grail in this paradigm is to match the performance of a partially-synchronous protocol in favorable situations and match the performance of a purely asynchronous protocol in unfavorable situations. Several prior works have made progress toward this goal by matching the efficiency of a partially-synchronous protocol in favorable conditions. However, their performance compared to purely asynchronous protocols is reduced when network conditions are unfavorable. To address these shortcomings, a recent work, Abraxas (CCS'23), presents the first optimistic asynchronous BFT protocol that retains stable throughput in all situations. However, Abraxas still incurs very high worst-case latency in unfavorable situations because it is slow at detecting the failure of its optimistic path. Another recent work, ParBFT (CCS'23), guarantees good latency in all situations but suffers from reduced throughput in unfavorable situations due to its use of extra Asynchronous Binary Agreement (ABA) instances. To approach our holy grail, we propose Aether, which delivers performance comparable to partially-synchronous protocols in favorable situations, and attains performance on par with purely asynchronous protocols in unfavorable situations, in both throughput and latency. Like these prior works, Aether runs the two paths simultaneously. It adopts two-chain HotStuff as the optimistic path, thus achieving high performance in favorable situations. As for the pessimistic path, we introduce a new primitive Dual-functional Byzantine Agreement (DBA), which packs the functionalities of biased ABA and Validated Asynchronous Byzantine Agreement (VABA). Aether runs DBA instances continuously as the pessimistic path. DBA’s ABA functionality quickly detects the optimistic path’s failure, ensuring Aether’s low latency in unfavorable situations. Meanwhile, the VABA functionality continuously produces blocks, maintaining Aether’s high throughput. Additionally, the biased property ensures that blocks committed via the optimistic path are respected by DBA instances, guaranteeing consistency across the two paths. We conduct extensive experiments to demonstrate that Aether achieves high throughput and low latency in all situations.
Last updated:  2025-04-14
On the Average Random Probing Model
Julien Béguinot and Loïc Masure
Masking is one of the main countermeasures against side-channel analysis since it relies on provable security. In this context, “provable” means that a security bound can be exhibited for the masked implementation through a theoretical analysis in a given threat model. The main goal in this line of research is therefore to provide the tightest security bound, in the most realistic model, in the most generic way. Yet, all of these objectives cannot be reached together. That is why the masking literature has introduced a large spectrum of threat models and reductions between them, depending on the desired trade-off with respect to these three goals. In this paper, we focus on three threat models, namely the noisy-leakage model (realistic yet hard to work with), the random probing model (unrealistic yet easy to work with), and more particularly a third, intermediate model called average random probing. Average random probing was introduced by Dziembowski et al. at Eurocrypt 2015 in order to exhibit a tight reduction between the noisy-leakage and random probing models, recently proven by Brian et al. at Eurocrypt 2024. This milestone has strong practical consequences, since otherwise the reduction from the noisy-leakage model to the random probing model introduces a prohibitively high constant factor in the security bound, preventing security evaluators from using it in practice. However, we exhibit a gap between the average random probing definitions of Dziembowski et al. (denoted hereafter by DFS-ARP) and Brian et al. (simply denoted by ARP). Whereas any noisy leakage can be tightly reduced to DFS-ARP, we show in this paper that it cannot be tightly reduced to ARP without extra assumptions, e.g., that the noisy leakage is deterministic. Our proof techniques do not involve more tools than the ones used so far in such reductions, namely basic probability facts and known properties of the total variation distance. As a consequence, the reduction from the noisy-leakage model to the random probing model without a high constant factor remains unproven. This stresses the need to clarify the practical relevance of analyzing the security of masking in the random probing model, since most of the current efforts towards improving the constructions and their security proofs in the random probing model might be hindered by a potentially unavoidable loss in the reduction from more realistic but currently less investigated leakage models.
Last updated:  2025-04-14
: Towards More Efficient and BBB-secure AE From a Single Public Permutation
Arghya Bhattacharjee, Ritam Bhaumik, Avijit Dutta, and Eik List
Four recent trends have emerged in the evolution of authenticated encryption schemes: (1) regarding simplicity, the adoption of public permutations as primitives allows for sparing a key schedule and the need for storing round keys; (2) using the sums of permutation inputs or outputs has been a well-studied means to achieve higher security beyond the birthday bound; (3) concerning robustness, schemes should provide graceful security degradation if a limited number of nonces repeat during the lifetime of a key; and (4) Andreeva et al.'s ForkCipher approach can increase the efficiency of a scheme since it can use fewer rounds per output branch compared to full-round primitives. In this work, we improve on the state of the art by combining those aspects for efficient authenticated encryption. We propose an efficient nonce-based AE scheme that employs a public permutation and one call to an XOR-universal hash function. It provides beyond-birthday-bound security and high throughput by combining forked public-permutation-based components with an Encrypted Davies-Meyer. Thus, it can use a single, in part round-reduced, public permutation for most operations, spare a key schedule, and guarantee security beyond the birthday bound even under limited nonce reuse.
Last updated:  2025-04-14
HQC Beyond the BSC: Towards Error Structure-Aware Decoding
Marco Baldi, Sebastian Bitzer, Nicholas Lilla, and Paolo Santini
In Hamming Quasi-Cyclic (HQC), one of the finalists in the NIST competition for the standardization of post-quantum cryptography, decryption relies on decoding a noisy codeword through a public error-correcting code. The noise vector has a special form that depends on the secret key (a pair of sparse polynomials). However, the decoder currently employed in HQC is agnostic to the secret key, operating under the assumption that the error arises from a Binary Symmetric Channel (BSC). In this paper, we demonstrate that this special noise structure can instead be leveraged to develop more powerful decoding strategies. We first study the problem from a coding-theoretic perspective. The current code design, which admits a non-zero decryption failure rate, is close to optimal in the setting of a decoder that is agnostic to the error structure. We show that there are code-decoder pairs with a considerably shorter code length that can guarantee unique decoding by taking the error structure into account. This result is non-constructive, i.e., we do not provide an explicit code construction and it remains open whether efficient decoding is possible. Nevertheless, it highlights the potential that can be tapped by taking the error structure into account. We then argue that, in practice, the matter of decoding in HQC can be related to solving an instance of the noisy syndrome decoding problem, in which the parity-check matrix is constituted by the polynomials in the secret key. We show that, using decoders for Low-Density Parity-Check (LDPC) and Moderate-Density Parity-Check (MDPC) codes, one can significantly reduce the magnitude of the noise and, de facto, also the Decoding Failure Rate (DFR) of the HQC decoder. This preliminary study leaves some open questions and problems. While it shows that decoding in HQC can be improved, the modeling of the DFR gets more complicated: even for the basic decoder we propose in this paper, we have not been able to devise a reliable DFR model. This is likely due to the fact that the decoder structure resembles the iterative nature of LDPC/MDPC decoders, for which devising a reliable DFR estimation is a well-known difficult problem.
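To make the LDPC/MDPC-style decoding concrete, here is a minimal bit-flipping decoder sketch over a sparse parity-check matrix (toy parameters and a hypothetical random code, not the HQC code): it repeatedly flips the bits involved in the most unsatisfied checks. As the abstract notes, such iterative decoders can fail, which is exactly what makes their DFR hard to model.

```python
import numpy as np

def bit_flip_decode(H, s, max_iter=100):
    """Find a sparse e with H @ e == s (mod 2) by flipping, in each round,
    the bits that currently sit in the most unsatisfied parity checks."""
    e = np.zeros(H.shape[1], dtype=np.uint8)
    for _ in range(max_iter):
        syndrome = (s + H @ e) % 2                 # unsatisfied checks
        if not syndrome.any():
            return e
        counters = H.T @ syndrome                  # violations touching each bit
        e[counters == counters.max()] ^= 1
    return None                                    # decoding failure (the DFR)

rng = np.random.default_rng(0)
n, k, row_w = 200, 100, 10
H = np.zeros((k, n), dtype=np.uint8)
for row in H:                                      # sparse parity checks
    row[rng.choice(n, size=row_w, replace=False)] = 1
e_true = np.zeros(n, dtype=np.uint8)
e_true[rng.choice(n, size=4, replace=False)] = 1   # sparse error vector
s = (H @ e_true) % 2
e_hat = bit_flip_decode(H, s)
print("recovered:", e_hat is not None and np.array_equal(e_hat, e_true))
```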
Last updated:  2025-04-14
NTWE: A Natural Combination of NTRU and LWE
Joel Gärtner
Lattice-based cryptosystems are some of the primary post-quantum secure alternatives to the asymmetric cryptography that is used today. These lattice-based cryptosystems typically rely on the hardness of some version of either the NTRU or the LWE problem. In this paper, we present the NTWE problem, a natural combination of the NTRU and LWE problems, and construct a new lattice-based cryptosystem based on the hardness of the NTWE problem. As with the NTRU and LWE problems, the NTWE problem naturally corresponds to a problem in a $q$-ary lattice. This allows the hardness of the NTWE problem to be estimated in the same way as it is estimated for the LWE and NTRU problems. We parametrize our cryptosystem based on such a hardness estimate, and the resulting scheme has performance that is competitive with that of typical lattice-based schemes. In some sense, our NTWE-based cryptosystem can be seen as a less structured and more compact version of a cryptosystem based on the module-NTRU problem. Thus, parameters for our cryptosystem can be selected with the flexibility of a module-LWE-based scheme, while other properties of our system are more similar to those in an NTRU-based system.
Last updated:  2025-04-14
PipeSwap: Forcing the Timely Release of a Secret for Atomic Cross-Chain Swaps
Peifang Ni, Anqi Tian, and Jing Xu
Atomic cross-chain swaps mitigate the interoperability challenges faced by current cryptocurrencies, thereby facilitating inter-currency exchange and trading between distrusting users. Although numerous atomic swap protocols utilizing Hash Timelock Contracts have been deployed and put into practice, they are substantially far from universality due to their inherent dependence on the rich scripting languages supported by the underlying blockchains. The recently proposed Universal Atomic Swaps protocol [IEEE S&P'22] represents a significant advancement in the field of scriptless cross-chain swaps by ingeniously delegating scripting functionalities to cryptographic locking mechanisms, particularly adaptor signatures and timed commitment schemes. However, we identify a new form of attack, termed the double-claiming attack, that leverages these scriptless functionalities to undermine atomicity with high probability. This attack is inherent to the designs adopted by the existing scriptless cross-chain swap protocols as well as payment channel networks. We further quantify the severity of this attack based on real-world swap transactions processed by the most widely deployed decentralized exchange platforms, highlighting the critical challenges in designing universal atomic swaps. To address the double-claiming attack while ensuring both security and practical universality, we also present a cross-chain swap protocol called PipeSwap. Specifically, PipeSwap protects frozen coins from being double-claimed by a novel paradigm of pipelined coin flow that utilizes the techniques of two-hop swap and two-hop refund. In addition to a comprehensive security analysis in the Universal Composability framework, we develop a proof-of-concept implementation of PipeSwap with Schnorr/ECDSA signatures, and conduct extensive experiments to evaluate the overhead. The experimental results show that PipeSwap can be performed in less than 1.7 seconds while maintaining less than 7 kb of communication overhead on commodity machines.
Last updated:  2025-04-14
Self-Orthogonal Minimal Codes From (Vectorial) p-ary Plateaued Functions
René Rodríguez Aldama, Enes Pasalic, Fengrong Zhang, and Yongzhuang Wei
In this article, we derive the weight distribution of linear codes stemming from a subclass of (vectorial) $p$-ary plateaued functions (for a prime $p$), which includes all the explicitly known examples of weakly and non-weakly regular plateaued functions. This construction of linear codes is referred to in the literature as the first generic construction. First, we partition the class of $p$-ary plateaued functions into three classes according to the behavior of their dual function. Using these classes, we refine the results presented in a series of articles \cite{Mesnager2017, MesOzSi, Pelen2020, RodPasZhaWei, WeiWangFu}. Namely, we derive the full weight distributions of codes stemming from all plateaued functions in the odd case (parametrized by the weight of the dual), whereas in the even case the weight distributions are derived for one of the classes, parametrized using two related parameters. Additionally, we provide more results on the different weight distributions of codes stemming from functions in subclasses of the three different classes. The exact derivation of such distributions is achieved by using some well-known equations over finite fields to count certain dual preimages. In order to improve the dimension of these codes, we then study the vectorial case, thus providing the weight distributions of a few codes associated to known vectorial plateaued functions and obtaining codes with new parameters. For the first time, we provide the full weight distributions of codes from (a subclass of) vectorial $p$-ary plateaued functions. This class includes all known explicit examples in the literature. The obtained codes are minimal and self-orthogonal in virtually all cases. Notably, we show that this is the best one can achieve: there are no $q$-ary self-dual minimal codes for any prime power $q$, except for the ternary tetracode and the binary repetition code.
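For readers less familiar with plateaued functions: $f\colon \mathbb{F}_p^n \to \mathbb{F}_p$ is $s$-plateaued when its Walsh spectrum takes only the absolute values $0$ and $p^{(n+s)/2}$. The toy sketch below computes the spectrum directly from this definition and checks a small bent (0-plateaued) example; parameters are illustrative only.

```python
import cmath, itertools

p, n = 3, 2
zeta = cmath.exp(2j * cmath.pi / p)        # primitive p-th root of unity

def walsh_spectrum(f):
    dom = list(itertools.product(range(p), repeat=n))
    spec = []
    for u in dom:
        acc = 0
        for x in dom:
            dot = sum(ui * xi for ui, xi in zip(u, x)) % p
            acc += zeta ** ((f(x) - dot) % p)
        spec.append(acc)
    return spec

f = lambda x: (x[0] * x[1]) % p            # a bent (0-plateaued) example
amplitudes = {round(abs(v), 6) for v in walsh_spectrum(f)}
print(amplitudes)                          # {3.0} == {p**(n/2)}: plateaued
```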
Last updated:  2025-04-14
Revisiting the attacker's knowledge in inference attacks against Searchable Symmetric Encryption
Marc Damie, Jean-Benoist Leger, Florian Hahn, and Andreas Peter
Encrypted search schemes have been proposed to address growing privacy concerns. However, several leakage-abuse attacks have highlighted some security vulnerabilities. Recent attacks assume an attacker's knowledge containing data "similar" to the indexed data. However, this vague assumption is barely discussed in the literature: how likely is it for an attacker to obtain "similar enough" data? Our paper provides novel statistical tools, usable on any attack in this setting, to analyze its sensitivity to data similarity. First, we introduce a mathematical model based on statistical estimators to analytically understand the attacker's knowledge and the notion of similarity. Second, we conceive statistical tools to model the influence of the similarity on the attack accuracy. We apply our tools to three existing attacks to answer questions such as: is similarity the only factor influencing the accuracy of a given attack? Third, we show that the enforcement of a maximum index size can make the ``similar-data'' assumption harder to satisfy. In particular, we propose a statistical method to estimate an appropriate maximum size for a given attack and dataset. For the best known attack on the Enron dataset, a maximum index size of 200 guarantees (with high probability) that the attack accuracy stays below 5%.
Last updated:  2025-04-14
K-Linkable Ring Signatures and Applications in Generalized Voting
Wonseok Choi, Xiangyu Liu, Lirong Xia, and Vassilis Zikas
Linkable ring signatures (LRS) allow a user to sign anonymously on behalf of a ring, while maintaining linkability: two signatures from the same signer are publicly identified, i.e., linked. This linkability makes LRS suitable to prevent double-voting in classical plurality voting protocols, where each voter casts one vote and the candidate with the most votes wins the election. Several voting scenarios rely on generalized rules rather than plurality. For example, in ranked voting, voters submit a ranking of the candidates, and the outcome is a function of these rankings. Such generalized voting rules are common in social choice theory, and have recently found their way into blockchain governance, e.g., for prioritizing (voting on) proposed (candidate) projects. However, unlike plurality voting, using LRS for voters to sign their votes (rankings) does not guarantee vote privacy, as one can observe the rankings of each individual voter, which, depending on the scoring rule, is more information than what the outcome of the election offers. We introduce $k$-linkable ring signatures ($k$-LRS) as a primitive for simultaneously achieving anonymity and privacy in generalized voting. A $k$-LRS scheme has the following properties: ($k$-anonymity) a user can sign anonymously (on behalf of the ring) up to $k$ times, so that even an unbounded adversary cannot link his signatures; ($k$-linkability) if any signer signs more than $k$ times, all his signatures are publicly linked, and any coalition of signers cannot generate more unlinked signatures than $k$ times its size. We provide two constructions of $k$-LRS: one from DDH, and the other from SIS (hence post-quantum). Finally, we show how $k$-LRS can be applied to a broad range of generalized voting rules. Our protocols realize non-interactive voting: each voter just posts a message on a bulletin board, which highlights the potential of $k$-LRS in blockchain-governance scenarios.
Last updated:  2025-04-13
A Unified Treatment of Anamorphic Encryption
Wonseok Choi, Daniel Collins, Xiangyu Liu, and Vassilis Zikas
Receiver anamorphic encryption (hereafter anamorphic encryption), introduced by Persiano et al. at Eurocrypt 2022, allows for a double message to be symmetrically hidden in a public-key encryption ciphertext via a pre-shared double key. In anamorphic encryption, confidentiality must be preserved even if the adversary (or the dictator) has access to all regular keys. It has been the subject of several works since its introduction that explore tweaks and extensions to the core primitive. However, this study has not been systematic, and so disparate security notions have been proposed, for which their relationships are not clear. Moreover, there are clear gaps in the literature, including in the treatment of chosen-ciphertext attacks. In this work, we conduct a systematic study of receiver anamorphic encryption. We unify existing security notions and propose several new ones, and prove implications and separations between them. Our main findings are as follows. First, we identify gaps in previous security notions against an anamorphic sender, namely an adversary who is given the double key, and propose three new security notions to bridge these gaps. We also identify several gaps in the treatment of chosen-ciphertext attacks, a setting only very recently considered in anamorphic cryptography (Jaeger and Stracovsky, Asiacrypt 2024). Moreover, observing that no previous construction achieves all desirable security properties in this setting, we propose a suitable construction that does. Finally, we propose several security notions for asymmetric anamorphic encryption, and explore the case here where the dictator and the anamorphic sender collude.
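One classic way to realize such a covert channel, in the spirit of the original rejection-sampling constructions (and not necessarily the schemes of this paper), is to re-encrypt until a PRF keyed by the double key maps the ciphertext to the covert bits. A minimal sketch, with a stand-in for the randomized PKE:

```python
import hmac, hashlib, os

def prf_bits(dk, ct, nbits=2):
    mac = hmac.new(dk, ct, hashlib.sha256).digest()
    return int.from_bytes(mac, "big") >> (256 - nbits)

def anamorphic_encrypt(enc, pk, msg, dk, covert, nbits=2):
    while True:                         # expected 2^nbits re-encryptions
        ct = enc(pk, msg)               # fresh randomness on every call
        if prf_bits(dk, ct, nbits) == covert:
            return ct                   # looks like a normal ciphertext

def anamorphic_decrypt(dk, ct, nbits=2):
    return prf_bits(dk, ct, nbits)      # double-key holder reads the bits

toy_enc = lambda pk, m: os.urandom(32)  # stand-in for a randomized PKE
dk = os.urandom(16)                     # the pre-shared double key
ct = anamorphic_encrypt(toy_enc, None, b"cover message", dk, covert=0b10)
assert anamorphic_decrypt(dk, ct) == 0b10
```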
Last updated:  2025-04-13
Simple and General Counterexamples for Private-Coin Evasive LWE
Nico Döttling, Abhishek Jain, Giulio Malavolta, Surya Mathialagan, and Vinod Vaikuntanathan
We present a simple counterexample to all known variants of the private-coin evasive learning with errors (LWE) assumption. Unlike prior works, our counterexample is direct, it does not use heavy cryptographic machinery (such as obfuscation or witness encryption), and it applies to all variants of the assumption. Our counterexample can be seen as a "zeroizing" attack against evasive LWE, calling into question the soundness of the underlying design philosophy.
Last updated:  2025-04-13
Powerformer: Efficient and High-Accuracy Privacy-Preserving Language Model with Homomorphic Encryption
Dongjin Park, Eunsang Lee, and Joon-Woo Lee
We propose Powerformer, an efficient homomorphic encryption (HE)-based privacy-preserving language model (PPLM) designed to reduce computation overhead while maintaining model performance. Powerformer incorporates three key techniques to optimize encrypted computations: 1) A novel distillation technique that replaces softmax and layer normalization (LN) with computationally efficient power and linear functions, ensuring no performance degradation while enabling seamless encrypted computation. 2) A pseudo-sign composite approximation method that accurately approximates GELU and tanh functions with minimal computational overhead. 3) A homomorphic matrix multiplication algorithm specifically optimized for Transformer models, enhancing efficiency in encrypted environments. By integrating these techniques, Powerformer based on the BERT-base model achieves a 45% reduction in computation time compared to the state-of-the-art HE-based PPLM without any loss in accuracy.
Last updated:  2025-04-13
Eccfrog512ck2: An Enhanced 512-bit Weierstrass Elliptic Curve
Víctor Duarte Melo and William J Buchanan
Whilst many key exchange and digital signature methods use the NIST P-256 (secp256r1) and secp256k1 curves, there is often a demand for increased security. These curves provide 128-bit security, which can be increased to 256-bit security with NIST P-521, Curve448 and Brainpool-P512. This paper outlines a new curve, Eccfrog512ck2, which provides 256-bit security and enhanced performance over NIST P-521. Along with this, it has side-channel resistance and is designed to avoid weaknesses such as those related to the MOV attack. We show that Eccfrog512ck2 achieves a 61.5% speed-up in scalar multiplication and a 33.3% speed-up in point generation over the NIST P-521 curve.
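As background, MOV resistance is checked via the curve's embedding degree: the smallest $k$ with $p^k \equiv 1 \pmod{n}$ must be large, or a pairing maps the curve's DLP into the small field $\mathbb{F}_{p^k}$. A sketch of that check, illustrated with secp256k1's public parameters rather than the Eccfrog512ck2 ones:

```python
def embedding_degree(p, n, k_max=10_000):
    """Smallest k with p**k == 1 (mod n), if any k <= k_max exists."""
    acc = 1
    for k in range(1, k_max + 1):
        acc = (acc * p) % n
        if acc == 1:
            return k        # a small k would enable a MOV-style transfer
    return None             # no small embedding degree found: MOV-resistant

# Familiar example values (secp256k1's field prime and group order):
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
print(embedding_degree(p, n))   # None: embedding degree exceeds 10_000
```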
Last updated:  2025-04-13
Fast, private and regulated payments in asynchronous networks
Maxence Brugeres, Victor Languille, Petr Kuznetsov, and Hamza Zarfaoui
We propose a decentralized asset-transfer system that enjoys full privacy: no party can learn the details of a transaction, except for its issuer and its recipient. Furthermore, the recipient is only aware of the amount of the transaction. Our system does not rely on consensus or synchrony assumptions, and therefore, it is responsive, since it runs at the actual network speed. Under the hood, every transaction creates a consumable coin equipped with a non-interactive zero-knowledge proof (NIZK) that confirms that the issuer has sufficient funds without revealing any information about her identity, the recipient's identity, or the payment amount. Moreover, we equip our system with a regulatory enforcement mechanism that can be used to regulate transfer limits or restrict specific addresses from sending or receiving funds, while preserving the system's privacy guarantees. Finally, we report on Paxpay, our implementation of Fully Private Asset Transfer (FPAT) that uses the Gnark library for the NIZKs. In our benchmark, Paxpay exhibits better performance than earlier proposals that either ensure only partial privacy, require some kind of network synchrony or do not implement regulation features. Our system thus reconciles privacy, responsiveness, regulation enforcement and performance.
Last updated:  2025-04-13
Reusable Dynamic Multi-Party Homomorphic Encryption
Jung Hee Cheon, Hyeongmin Choe, Seunghong Kim, and Yongdong Yeo
Homomorphic Encryption (HE) is a promising primitive for evaluating arbitrary circuits while keeping the user's privacy. We investigate how to use HE in the multi-party setting where data is encrypted with several distinct keys. One may use Multi-Key Homomorphic Encryption (MKHE) in this setting, but it has space/computation overhead of $O(n)$ for the number of users $n$, which makes it impractical when $n$ grows large. On the contrary, Multi-Party Homomorphic Encryption (MPHE) is the other HE primitive in the multi-party setting, whose space/computation overhead is $O(1)$; however, it is limited in terms of ciphertext reusability and dynamicity: ciphertexts are encrypted just for a group of parties and cannot be reused for other purposes, and additional parties cannot join the computation dynamically. Contrary to MKHE, where the secret key owners engage only in the decryption phase, we consider a more relaxed situation where the secret key owners can communicate before the computation. In that case, we can reduce the size of a ciphertext and the evaluation complexity from $O(n)$ to $O(1)$, as in a single-key HE setting. We call this primitive Reusable Dynamic Multi-Party Homomorphic Encryption, which is more suitable for real-world scenarios. We show that 1) the procedures before the computation can be done in very few rounds of communication, 2) the evaluation/space complexities are independent of the number of users, and 3) the functionalities are as efficient as MKHE, with asymptotic analysis and implementation.
Last updated:  2025-04-13
Aurora: Leaderless State-Machine Replication with High Throughput
Hao Lu, Jian Liu, and Kui Ren
State-machine replication (SMR) allows a deterministic state machine to be replicated across a set of replicas and handle clients' requests as a single machine. Most existing SMR protocols are leader-based, requiring a leader to order requests and coordinate the protocol. This design places a disproportionately high load on the leader, inevitably impairing scalability. If the leader fails, a complex and bug-prone fail-over protocol is needed to switch to a new leader. An adversary can also exploit the fail-over protocol to slow down the protocol. In this paper, we propose a crash-fault tolerant SMR protocol named Aurora, with the following properties: • Leaderless: it does not require a leader, hence completely getting rid of the fail-over protocol. • Scalable: it can scale up to 11 replicas. • Robust: it behaves well even under a poor network connection. We provide a full-fledged implementation of Aurora and systematically evaluate its performance. Our benchmark results show that Aurora achieves a throughput of around two million Transactions Per Second (TPS), up to 8.7X higher than the state-of-the-art leaderless SMR.
Last updated:  2025-04-13
Vector Commitment Design, Analysis, and Applications: A Survey
Vir Pathak, Sushmita Ruj, and Ron van der Meyden
Due to their widespread applications in decentralized and privacy-preserving technologies, commitment schemes have become increasingly important cryptographic primitives. With a wide variety of applications, many new constructions have been proposed, each enjoying different features and security guarantees. In this paper, we systematize the designs, features, properties, and applications of vector commitments (VCs). We define vector, polynomial, and functional commitments and we discuss the relationships shared between these types of commitment schemes. We first provide an overview of the definitions of the commitment schemes we will consider, as well as their security notions and various properties they can have. We proceed to compare popular constructions, taking into account the properties each one enjoys, their proof/update information sizes, and their proof/commitment complexities. We also consider their effectiveness in various decentralized and privacy-preserving applications. Finally, we conclude by discussing some potential directions for future work.
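As one concrete design point among those the survey compares, a Merkle tree already gives a position-binding vector commitment with logarithmic openings. A self-contained toy sketch (vector length a power of two):

```python
import hashlib

H = lambda b: hashlib.sha256(b).digest()

def commit(vec):
    layer = [H(v) for v in vec]
    layers = [layer]
    while len(layer) > 1:
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        layers.append(layer)
    return layers[-1][0], layers          # (commitment, opening state)

def open_at(layers, i):
    proof = []
    for layer in layers[:-1]:
        proof.append(layer[i ^ 1])        # sibling at each level
        i //= 2
    return proof

def verify(com, i, value, proof):
    node = H(value)
    for sib in proof:
        node = H(node + sib) if i % 2 == 0 else H(sib + node)
        i //= 2
    return node == com

vec = [bytes([b]) * 4 for b in range(8)]  # a committed vector of length 8
com, layers = commit(vec)
assert verify(com, 5, vec[5], open_at(layers, 5))
```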
Last updated:  2025-04-12
How to Recover the Full Plaintext of XCB
Peng Wang, Shuping Mao, Ruozhou Xu, Jiwu Jing, and Yuewu Wang
XCB, a tweakable enciphering mode, is part of IEEE Std. 1619.2 for shared storage media. We show that all versions of XCB are not secure through three plaintext recovery attacks. A key observation is that XCB behaves like an LRW1-type tweakable block cipher for single-block messages, which lacks CCA security. The first attack targets one-block XCB, using three queries to recover the plaintext. The second one requires four queries to recover the plaintext that excludes one block. The last one requires seven queries to recover the \emph{full} plaintext. The first attack applies to any scheme that follows the XCB structure, whereas the latter two attacks work on all versions of XCB, exploiting the separable property of the underlying universal hash function. To address these flaws, we propose the XCB* structure, an improved version of XCB that adds only two XOR operations. We prove that XCB* is STPRP-secure when using AXU hash functions, SPRPs, and a secure random-IV-based stream cipher.
Last updated:  2025-04-12
PAC-Private Algorithms
Mayuri Sridhar, Hanshen Xiao, and Srinivas Devadas
Provable privacy typically requires involved analysis and is often associated with unacceptable accuracy loss. While many empirical verification or approximation methods, such as Membership Inference Attacks (MIA) and Differential Privacy Auditing (DPA), have been proposed, these do not offer rigorous privacy guarantees. In this paper, we apply recently-proposed Probably Approximately Correct (PAC) Privacy to give formal, mechanized, simulation-based proofs for a range of practical, black-box algorithms: K-Means, Support Vector Machines (SVM), Principal Component Analysis (PCA) and Random Forests. To provide these proofs, we present a new simulation algorithm that efficiently determines anisotropic noise perturbation required for any given level of privacy. We provide a proof of correctness for this algorithm and demonstrate that anisotropic noise has substantive benefits over isotropic noise. Stable algorithms are easier to privatize, and we demonstrate privacy amplification resulting from introducing regularization in these algorithms; meaningful privacy guarantees are obtained with small losses in accuracy. We propose new techniques in order to reduce instability in algorithmic output and convert intractable geometric stability verification into efficient deterministic stability verification. Thorough experiments are included, and we validate our provable adversarial inference hardness against state-of-the-art empirical attacks.
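The simulation-based flavor of this approach can be sketched as follows (a simplified toy mechanism with a hypothetical noise-scaling rule; the paper's algorithm calibrates noise with formal PAC Privacy guarantees): estimate the anisotropy of the black-box algorithm's output over re-sampled inputs, then add noise shaped per principal direction rather than isotropically.

```python
import numpy as np

def calibrate_anisotropic_noise(alg, data, n_sims=200, scale=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    # Re-run the black-box algorithm on random half-subsamples of the data.
    outs = np.array([alg(data[rng.choice(len(data), len(data) // 2,
                                         replace=False)])
                     for _ in range(n_sims)])
    cov = np.cov(outs - outs.mean(axis=0), rowvar=False)
    U, svals, _ = np.linalg.svd(cov)          # principal output directions
    sigma = scale * np.sqrt(svals)            # per-direction noise levels
    def privatize(output):
        noise = U @ (sigma * rng.standard_normal(len(sigma)))
        return output + noise
    return privatize

data = np.random.default_rng(1).normal(size=(1000, 2)) * [10.0, 0.1]
mean_alg = lambda d: d.mean(axis=0)           # a stable black-box algorithm
privatize = calibrate_anisotropic_noise(mean_alg, data)
print(privatize(mean_alg(data)))              # noisy released output
```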
Last updated:  2025-04-12
Adaptive Robustness of Hypergrid Johnson-Lindenstrauss
Andrej Bogdanov, Alon Rosen, Neekon Vafa, and Vinod Vaikuntanathan
Johnson and Lindenstrauss (Contemporary Mathematics, 1984) showed that for $n < d$, a scaled random projection $A$ from $\mathbb{R}^d$ to $\mathbb{R}^n$ is an approximate isometry on any set $S$ of size at most exponential in $n$. If $S$ is larger, however, its points can contract arbitrarily under $A$. In particular, the hypergrid is expected to contain a point that is contracted by a large factor. We give evidence that finding such a point exhibits a statistical-computational gap. On the algorithmic side, we design an online algorithm inspired by a discrepancy minimization algorithm of Bansal and Spencer (Random Structures & Algorithms, 2020). On the hardness side, we show evidence via a multiple overlap gap property (mOGP), which in particular captures online algorithms; and a reduction-based lower bound, which shows hardness under standard worst-case lattice assumptions. As a cryptographic application, we show that the rounded Johnson-Lindenstrauss embedding is a robust property-preserving hash function (Boyle, Lavigne and Vaikuntanathan, TCC 2019) on the hypergrid for the Euclidean metric in the computationally hard regime. Such hash functions compress data while preserving distances between inputs up to some distortion factor, with the guarantee that even knowing the hash function, no computationally bounded adversary can find any pair of points that violates the distortion bound.
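The following toy numpy experiment sets up the object of study (dimensions are hypothetical): random hypergrid points essentially never contract much under a scaled Gaussian projection, which is why adaptively finding a strongly contracted point is the computational problem at issue.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 400, 40
A = rng.standard_normal((n, d)) / np.sqrt(n)    # scaled JL projection

worst = 1.0
for _ in range(10_000):
    x = rng.integers(-1, 2, size=d).astype(float)   # random {-1,0,1}^d point
    if not x.any():
        continue
    ratio = np.linalg.norm(A @ x) / np.linalg.norm(x)
    worst = min(worst, ratio)
print(f"most contracted ratio over random points: {worst:.3f}")
```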
Last updated:  2025-04-12
MProve-Nova: A Privacy-Preserving Proof of Reserves Protocol for Monero
Varun Thakore and Saravanan Vijayakumaran
A proof of reserves (PoR) protocol enables a cryptocurrency exchange to prove to its users that it owns a certain amount of coins, as a first step towards proving that it is solvent. We present the design, implementation, and security analysis of MProve-Nova, a PoR protocol for Monero that leverages the Nova recursive SNARK to achieve two firsts (without requiring any trusted setup). It is the first Monero PoR protocol that reveals only the number of outputs owned by an exchange; no other information about the outputs or their key images is revealed. It is also the first Monero PoR protocol where the proof size and proof verification time are constant, i.e. they are independent of the number of outputs on the Monero blockchain and the number of outputs owned by the exchange. To achieve constant verification times, MProve-Nova requires a pre-processing step which creates two Merkle trees from all the outputs and key images on the Monero blockchain. MProve-Nova consists of two Nova-based subprotocols, a reserves commitment generator (RCG) protocol used to compute a commitment to the total reserves owned by an exchange and a non-collusion (NC) protocol used to prove non-collusion between two exchanges. For the RCG protocol, we observed proof sizes of about 28 KB and verification times of 4.3 seconds. For the NC protocol, we observed proof sizes of about 24 KB and verification times of 0.2 seconds. Proving times for both protocols increase linearly with the number of outputs owned by the exchange but remain independent of the number of outputs on the Monero blockchain. On average, the RCG protocol required about 42 minutes per 1000 outputs and the NC protocol required about 5 minutes per 1000 outputs.
Last updated:  2025-04-12
Spilling-Cascade: an Optimal PKE Combiner for KEM Hybridization
Céline Chevalier, Guirec Lebrun, and Ange Martinelli
Hybrid Post-Quantum cryptography is a cautious approach that aims to guard against the threat posed by the quantum computer through the simultaneous use of Post-Quantum (PQ) and classical (i.e. pre-quantum) cryptosystems, in case the post-quantum schemes used should prove insecure. Regarding the hybridization of Key Encapsulation Mechanisms (KEMs), most recent studies focus on safely combining the symmetric keys output by a parallel execution of classical and Post-Quantum KEMs. While this architecture is straightforward, it appears to lack bandwidth optimization. Hence, we propose a novel method for hybridizing several KEMs more effectively, by combining the underlying Public-Key Encryption schemes (PKEs) in an innovative variant of the cascade composition that we call “spilling-cascade”, before turning the hybrid PKE into a KEM with a FO transformation. We prove that this architecture constitutes a robust combiner for encryption schemes up to IND-CPA security, which eventually permits generating an IND-CCA-secure KEM. In terms of performance, our spilling-cascade scheme has a better communication cost than the commonly used parallel combination, with a bandwidth gain for its ciphertext that ranges from 2.8% to 13% compared to the latter, depending on the number and the characteristics of the PKEs that are combined. Moreover, we prove that for given PKEs to hybridize, the ciphertext communication cost of the spilling-cascade is optimal.
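For contrast, the commonly used parallel combination that the spilling-cascade is measured against looks as follows (generic sketch; ToyKEM is a stand-in, and real deployments would combine, e.g., an ECDH-based KEM with ML-KEM): both ciphertexts are sent, and both shared secrets and ciphertexts are hashed into the final key.

```python
import hashlib, os

class ToyKEM:                              # stand-in with the usual interface
    def keygen(self):
        sk = os.urandom(16)
        return sk, hashlib.sha256(sk).digest()          # (sk, pk = H(sk))
    def encaps(self, pk):
        r = os.urandom(16)
        return r, hashlib.sha256(pk + r).digest()       # (ct, shared key)
    def decaps(self, sk, ct):
        return hashlib.sha256(hashlib.sha256(sk).digest() + ct).digest()

def hybrid_encaps(kem1, kem2, pk1, pk2):
    c1, k1 = kem1.encaps(pk1)              # e.g. classical KEM
    c2, k2 = kem2.encaps(pk2)              # e.g. post-quantum KEM
    key = hashlib.sha256(k1 + k2 + c1 + c2).digest()
    return (c1, c2), key                   # bandwidth = |c1| + |c2|

def hybrid_decaps(kem1, kem2, sk1, sk2, ct):
    c1, c2 = ct
    return hashlib.sha256(kem1.decaps(sk1, c1) + kem2.decaps(sk2, c2)
                          + c1 + c2).digest()

kem1, kem2 = ToyKEM(), ToyKEM()
(sk1, pk1), (sk2, pk2) = kem1.keygen(), kem2.keygen()
ct, key = hybrid_encaps(kem1, kem2, pk1, pk2)
assert hybrid_decaps(kem1, kem2, sk1, sk2, ct) == key
```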
Last updated:  2025-04-12
HyperPianist: Pianist with Linear-Time Prover and Logarithmic Communication Cost
Chongrong Li, Pengfei Zhu, Yun Li, Cheng Hong, Wenjie Qu, and Jiaheng Zhang
Recent years have seen great improvements in zero-knowledge proofs (ZKPs). Among them, zero-knowledge SNARKs are notable for their compact and efficiently-verifiable proofs, but suffer from high prover costs. Wu et al. (Usenix Security 2018) proposed to distribute the proving task across multiple machines, and achieved significant improvements in proving time. However, existing distributed ZKP systems still have quasi-linear prover cost, and may incur a communication cost that is linear in circuit size. In this paper, we introduce HyperPianist. Inspired by the state-of-the-art distributed ZKP system Pianist (Liu et al., S&P 2024) and the multivariate proof system HyperPlonk (Chen et al., EUROCRYPT 2023), we design a distributed multivariate polynomial interactive oracle proof (PIOP) system with a linear-time prover cost and logarithmic communication cost. Unlike Pianist, HyperPianist incurs no extra overhead in prover time or communication when applied to general (non-data-parallel) circuits. To instantiate the PIOP system, we adapt two additively-homomorphic multivariate polynomial commitment schemes, multivariate KZG (Papamanthou et al., TCC 2013) and Dory (Lee et al., TCC 2021), to the distributed setting. Both resulting systems have linear prover complexity and logarithmic communication cost; furthermore, the Dory-based instantiation requires no trusted setup. We also propose a variant incorporating an optimized lookup argument based on Lasso (Setty et al., EUROCRYPT 2024) with lower prover cost. Experiments demonstrate substantial speedups over HyperPlonk with 32 distributed machines, and significant speedups over Pianist on both vanilla gates and custom gates, especially with layered circuits.
Last updated:  2025-04-12
Malleable SNARKs and Their Applications
Suvradip Chakraborty, Dennis Hofheinz, Roman Langrehr, Jesper Buus Nielsen, Christoph Striecks, and Daniele Venturi
Succinct non-interactive arguments of knowledge (SNARKs) are variants of non-interactive zero-knowledge proofs (NIZKs) in which complex statements can be proven in a compact way. SNARKs have had tremendous impact in several areas of cryptography, including verifiable computing, blockchains, and anonymous communication. A recurring concept in many applications is the concept of recursive SNARKs, in which a proof references a previous proof to show an evolved statement. In this work, we investigate malleable SNARKs, a generalization of this concept of recursion. An adaptation of the existing concept of malleable NIZKs, malleable SNARKs allow one to modify SNARK proofs to show related statements, such that mauled proofs are indistinguishable from “properly generated” fresh proofs of the related statement. We show how to instantiate malleable SNARKs for universal languages and relations, and give a number of applications: the first post-quantum RCCA-secure rerandomizable and updatable encryption schemes, a generic construction of reverse firewalls, and an unlinkable (i.e., computation-hiding) targeted malleable homomorphic encryption scheme. Technically, our malleable SNARK construction relies on recursive proofs, but with a twist: in order to support the strong indistinguishability properties of mauled and fresh SNARK proofs, we need to allow an unbounded recursion depth. To still allow for a reasonable notion of extractability in this setting (and in particular to guarantee that extraction eventually finishes with a “proper” witness that does not refer to a previous SNARK proof), we rely on a new and generic computational primitive called adversarial one-way function (AOWF) that may be of independent interest. We give an AOWF candidate and prove it secure in the random oracle model.
Last updated:  2025-04-12
Multi-Party Private Set Operations from Predicative Zero-Sharing
Minglang Dong, Yu Chen, Cong Zhang, Yujie Bai, and Yang Cao
Typical protocols in the multi-party private set operations (MPSO) setting enable parties to perform certain secure computation on the intersection or union of their private sets, realizing a very limited range of MPSO functionalities. Most works in this field focus on just one or two specific functionalities, resulting in a large variety of isolated schemes and a lack of a unified framework in MPSO research. In this work, we present an MPSO framework, which allows parties, each holding a set, to securely compute any set formula (an arbitrary composition of a finite number of binary set operations, including intersection, union and difference) on their private sets. Our framework is highly versatile and can be instantiated to accommodate a broad spectrum of MPSO functionalities. To the best of our knowledge, this is the first framework to achieve such a level of flexibility and generality in MPSO, without relying on generic secure multi-party computation (MPC) techniques. Our framework exhibits favorable theoretical and practical performance. The computation and communication complexity scale linearly with the set size, and it achieves optimal complexity that is on par with the naive solution for widely used functionalities, such as multi-party private set intersection (MPSI), MPSI with cardinality output (MPSI-card), and MPSI with cardinality and sum (MPSI-card-sum), in the standard semi-honest model. Furthermore, the instantiations of our framework, mainly from symmetric-key techniques, yield efficient protocols for MPSI, MPSI-card, MPSI-card-sum, and multi-party private set union (MPSU), with online performance surpassing or matching the state of the art. At the technical core of our framework is a newly introduced primitive called predicative zero-sharing. This primitive captures the universality of a number of MPC protocols and is composable. We believe it may be of independent interest.
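As background for the core primitive's name: in plain zero-sharing, pairwise PRF keys let parties non-interactively derive random shares that sum to zero (the paper's predicative variant conditions this on predicates; the sketch below shows only the plain version, for illustration).

```python
import hmac, hashlib, itertools, os

MOD = 2**64

def prf(key, ctr):
    return int.from_bytes(hmac.new(key, str(ctr).encode(),
                                   hashlib.sha256).digest(), "big") % MOD

def zero_shares(pair_keys, n_parties, ctr):
    """pair_keys[(i, j)] for i < j; returns one share per party, summing to 0."""
    shares = []
    for i in range(n_parties):
        s = 0
        for j in range(n_parties):
            if i == j:
                continue
            k = pair_keys[(min(i, j), max(i, j))]
            s += prf(k, ctr) if i < j else -prf(k, ctr)   # signs cancel pairwise
        shares.append(s % MOD)
    return shares

keys = {pair: os.urandom(16) for pair in itertools.combinations(range(5), 2)}
assert sum(zero_shares(keys, 5, ctr=0)) % MOD == 0
```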
Last updated:  2025-04-12
Revisiting the Robustness of (R/M)LWR under Polynomial Moduli and its Applications
Haoxiang Jin, Feng-Hao Liu, Zhedong Wang, Yang Yu, and Dawu Gu
This work conducts a comprehensive investigation on determining the entropic hardness of (R/M)LWR under polynomial modulus. Particularly, we establish the hardness of (M)LWR for general entropic secret distributions from (Module) LWE assumptions based on a new conceptually simple framework called rounding lossiness. By combining this hardness result and a trapdoor inversion algorithm with asymptotically the most compact parameters, we obtain a compact lossy trapdoor function (LTF) with improved efficiency. Extending our LTF with other techniques, we can derive a compact all-but-many LTF and PKE scheme against selective opening and chosen ciphertext attacks, solely based on (Module) LWE assumptions within a polynomial modulus. Additionally, we show a search-to-decision reduction for RLWR with Gaussian secrets from a new Rényi Divergence-based analysis.
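To fix notation, an LWR sample deterministically rounds the inner product from modulus $q$ down to a smaller modulus $p$, in contrast to LWE's explicit noise. A toy sketch (illustrative parameters only):

```python
import numpy as np

q, p, n = 3329, 256, 16                  # toy parameters
rng = np.random.default_rng(0)
s = rng.integers(0, q, size=n)           # secret vector

def lwr_sample():
    a = rng.integers(0, q, size=n)
    inner = int(a @ s) % q
    b = ((p * inner + q // 2) // q) % p  # round_p: nearest multiple of q/p
    return a, b

a, b = lwr_sample()
print(b)                                 # the "rounded" LWE-style sample (a, b)
```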
Last updated:  2025-04-12
Publicly Verifiable Generalized Secret Sharing Schemes and Their Applications
Liang Zhang, Dongliang Cai, Tao Liu, Haibin Kan, and Jiheng Zhang
Generalized secret sharing (GSS), which accommodates monotone access structures, has been under-explored in distributed computing over the past decades. In this paper, we propose the publicly verifiable generalized secret sharing (PVGSS) scheme, enhancing the applicability of GSS in transparent systems. PVGSS not only enables a dealer to share a secret with fine-grained access structures, but also allows anyone to verify whether the dealer and shareholders are acting honestly or not. We begin by introducing two approaches to implement GSS schemes: one based on recursive Shamir secret sharing and another utilizing a linear secret sharing scheme (LSSS). Then, we present PVGSS constructions by integrating non-interactive zero-knowledge proofs into the GSS schemes. Further, we prove that the proposed PVGSS schemes achieve IND1-secrecy under the DDH assumption. To showcase the practical applicability of PVGSS schemes, we implement a decentralized exchange (DEX) protocol that enables fair atomic swaps of ERC-20 tokens. A sophisticated access structure is devised to: (1) enable fair atomic swaps during normal protocol execution, and (2) incorporate fault-tolerant passive watchers to provide accountable arbitration when disputes occur. Our benchmarks on the BN128 curve demonstrate the computational efficiency of PVGSS schemes, while Ethereum gas cost analysis confirms the viability of the DEX implementation.
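The first of the two approaches, GSS by recursing a sharing scheme over a monotone formula, can be sketched in a few lines (a toy version: additive sharing for AND gates and replication for OR gates, with no verifiability; the paper additionally uses Shamir thresholds and NIZK proofs).

```python
import secrets

P = 2**61 - 1                                    # toy prime field

def share(secret, tree, out=None):
    """tree: ("leaf", party) | ("or", t1, ...) | ("and", t1, ...).
    Returns a list of (party, value) pairs in leaf order."""
    out = [] if out is None else out
    op, rest = tree[0], tree[1:]
    if op == "leaf":
        out.append((rest[0], secret))
    elif op == "or":                             # replicate to every child
        for child in rest:
            share(secret, child, out)
    else:                                        # "and": additive split
        parts = [secrets.randbelow(P) for _ in rest[:-1]]
        parts.append((secret - sum(parts)) % P)
        for part, child in zip(parts, rest):
            share(part, child, out)
    return out

def reconstruct(tree, values, present, pos=0):
    """values: leaf-ordered output of share(); present: set of parties.
    Returns (secret_or_None, next_leaf_position)."""
    op, rest = tree[0], tree[1:]
    if op == "leaf":
        party, v = values[pos]
        return (v if party in present else None), pos + 1
    results = []
    for child in rest:
        r, pos = reconstruct(child, values, present, pos)
        results.append(r)
    if op == "or":
        ok = [r for r in results if r is not None]
        return (ok[0] if ok else None), pos
    if any(r is None for r in results):          # "and" needs every child
        return None, pos
    return sum(results) % P, pos

# Access structure: (A and B) or (A and C and D)
tree = ("or", ("and", ("leaf", "A"), ("leaf", "B")),
              ("and", ("leaf", "A"), ("leaf", "C"), ("leaf", "D")))
vals = share(12345, tree)
assert reconstruct(tree, vals, {"A", "C", "D"})[0] == 12345
assert reconstruct(tree, vals, {"B", "C"})[0] is None
```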
Last updated:  2025-04-12
Impossible Boomerang Distinguishers Revisited
Xichao Hu, Lin Jiao, Dengguo Feng, Yonglin Hao, Xinxin Gong, Yongqiang Li, and Siwei Sun
The Impossible Boomerang Attack (IBA) has shown significant power in evaluating the security of block ciphers, such as AES. However, current studies still lack a foundational theory, user guidance, and universal methods for constructing impossible boomerang distinguishers (IBDs). This paper addresses these gaps through comprehensive research. Theoretically, we establish a new framework for constructing a series of IBDs by differential propagation, state propagation, and generalized boomerang tables. We rigorously prove their inclusion relations, resulting in a complete theory and a hierarchical application strategy for both single-key and related-key settings. We further analyze IBD constructions in two types of related-key settings, two related keys with arbitrary key schedules and four related keys with linear key schedules, in a structurally unified way. Technically, we develop a scheduling algorithm and a general SAT-based method to search for IBDs across various block cipher designs, including SPN, Feistel, and ARX. Additionally, we propose several strategies to enhance the search process. As applications, we derive (RK-)IBDs for 10 block ciphers, almost all for the first time. Compared to impossible differentials, our IBDs are at least as effective, such as on DES and PRESENT. Notably, we achieve 1 more round on PRINTcipher48 in the single-key setting; 2 more rounds on AES-128, and 1 or 2 more rounds on SPECK variants, in the two-related-key setting; and 1, 4, and 2 more rounds on GIFT-64, CHAM-64/128 and CHAM-128/256, respectively, in the four-related-key setting. We also obtain full-round RK-IBDs on GOST. Compared to current IBDs, we achieve 1 and 1 more rounds on SKINNY-64/192 and SKINNYee, respectively. Furthermore, as an applied case of the derived IBDs, we present a 31-round IBA on SKINNYee, which is the first 31-round attack on SKINNYee and the best result to date.
Last updated:  2025-04-11
Intermundium-DL: Assessing the Resilience of Current Schemes to Discrete-Log-Computation Attacks on Public Parameters
Mihir Bellare, Doreen Riepel, and Laura Shea
We consider adversaries able to perform a nonzero but small number of discrete logarithm computations, as would be expected with near-term quantum computers. Schemes with public parameters consisting of a few group elements are now at risk; could an adversary knowing the discrete logarithms of these elements go on to easily compromise the security of many users? We study this question for known schemes and find, across them, a perhaps surprising variance in the answers. In a first class are schemes, including Okamoto and Katz-Wang signatures, that we show fully retain security even when the discrete logs of the group elements in their parameters are known to the adversary. In a second class are schemes like Cramer-Shoup encryption and the SPAKE2 password-authenticated key exchange protocol that we show retain some partial but still meaningful and valuable security. In the last class are schemes we show by attack to totally break. The distinctions uncovered by these results shed light on the security of classical schemes in a setting of immediate importance, and help make choices moving forward.
Last updated:  2025-04-11
Hard Quantum Extrapolations in Quantum Cryptography
Luowen Qian, Justin Raizes, and Mark Zhandry
Although one-way functions are well-established as the minimal primitive for classical cryptography, a minimal primitive for quantum cryptography is still unclear. Universal extrapolation, first considered by Impagliazzo and Levin (1990), is hard if and only if one-way functions exist. Towards better understanding minimal assumptions for quantum cryptography, we study the quantum analogues of the universal extrapolation task. Specifically, we put forth the classical→quantum extrapolation task, where we ask to extrapolate the rest of a bipartite pure state given the first register measured in the computational basis. We then use it as a key component to establish new connections in quantum cryptography: (a) quantum commitments exist if classical→quantum extrapolation is hard; and (b) classical→quantum extrapolation is hard if any of the following cryptographic primitives exists: quantum public-key cryptography (such as quantum money and signatures) with a classical public key or 2-message quantum key distribution protocols. For future work, we further generalize the extrapolation task and propose a fully quantum analogue. We show that it is hard if quantum commitments exist, and it is easy for quantum polynomial space.
Last updated:  2025-04-11
Efficient Permutation Correlations and Batched Random Access for Two-Party Computation
Stanislav Peceny, Srinivasan Raghuraman, Peter Rindal, and Harshal Shah
In this work we formalize the notion of a two-party permutation correlation, in which one party holds a random permutation of $n$ elements and the other holds correlated random vectors. This correlation can be viewed as an abstraction and generalization of the Chase et al. (Asiacrypt 2020) share translation protocol. We give a systematization of knowledge for how such a permutation correlation can be derandomized to allow the parties to perform a wide range of oblivious permutations of secret-shared data. This systematization immediately enables the translation of various popular honest-majority protocols to be efficiently instantiated in the two-party setting, e.g. collaborative filtering, sorting, database joins, graph algorithms, and many more. We give two novel protocols for efficiently generating a random permutation correlation. The first uses MPC-friendly PRFs to generate the correlation in only 3 rounds including setup; similar asymptotics previously required relatively expensive public-key cryptography, e.g. Paillier or LWE. Our protocol implementation runs in just 7 seconds, a roughly 40-fold improvement over the LWE solution of Juvekar et al. (CCS 2018). The second protocol is based on pseudo-random correlation generators and achieves an overhead that is sublinear in the string length, i.e. the communication and the number of OTs are sublinear in the length of the permuted strings. The overhead of the latter protocol has larger hidden constants, and it is therefore more efficient only when long strings are permuted, e.g. in graph algorithms. Finally, we present a suite of highly efficient protocols based on permutations for performing various batched random access operations. These include the ability to extract a hidden subset of a secret-shared list. More generally, we give ORAM-like protocols for obliviously reading and writing from a list in a batched manner. We argue that this suite of batched random access protocols should be a first class primitive in the MPC practitioner's toolbox.
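One natural formulation of the correlation and its derandomization (our notation for illustration, which may differ from the paper's): P0 holds a random $\pi$ and $z$, P1 holds random $a, b$ with $z = \pi(a) \oplus b$; a single message then converts a vector $x$ held by P1 into xor-shares of $\pi(x)$.

```python
import random, secrets

n = 8                                             # toy vector length

# --- offline: a random permutation correlation ----------------------------
pi = list(range(n)); random.shuffle(pi)           # known to P0
a = [secrets.randbits(32) for _ in range(n)]      # known to P1
b = [secrets.randbits(32) for _ in range(n)]      # known to P1
z = [a[pi[i]] ^ b[i] for i in range(n)]           # known to P0: z = pi(a) ^ b

# --- online: P1 holds x and wants xor-shares of pi(x) ----------------------
x = [secrets.randbits(32) for _ in range(n)]
delta = [x[i] ^ a[i] for i in range(n)]           # P1 -> P0 (one message)
share0 = [delta[pi[i]] ^ z[i] for i in range(n)]  # P0 computes pi(x) ^ b
share1 = b                                        # P1's share

assert [share0[i] ^ share1[i] for i in range(n)] == [x[pi[i]] for i in range(n)]
```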
Last updated:  2025-04-11
Attribute-Based Publicly Verifiable Secret Sharing
Liang Zhang, Xingyu Wu, Qiuling Yue, Haibin Kan, and Jiheng Zhang
Can a dealer share a secret without knowing the shareholders? We provide a positive answer to this question by introducing the concept of an attribute-based secret sharing (AB-SS) scheme. With AB-SS, a dealer can distribute a secret based on attributes rather than specific individuals or shareholders. Only authorized users whose attributes satisfy a given access structure can recover the secret. Furthermore, we introduce the concept of attribute-based publicly verifiable secret sharing (AB-PVSS). An AB-PVSS scheme allows external users to verify the correctness of all broadcast messages from the dealer and shareholders, similar to a traditional PVSS scheme. Additionally, AB-SS (or AB-PVSS) distinguishes itself from traditional SS (or PVSS) by enabling a dealer to generate shares according to an arbitrary monotone access structure. To construct an AB-PVSS scheme, we first implement a decentralized ciphertext-policy attribute-based encryption (CP-ABE) scheme. The proposed CP-ABE scheme offers a smaller ciphertext size and requires fewer computational operations, although, as a trade-off, it is not fully fledged. We then incorporate non-interactive zero-knowledge (NIZK) proofs to enable public verification of the CP-ABE ciphertext. Based on the CP-ABE scheme and NIZK proofs, we construct an AB-PVSS primitive. Furthermore, we present an intuitive implementation of optimistic fair exchange based on the AB-PVSS scheme. Finally, we conduct security analysis and comprehensive experiments on the proposed CP-ABE and AB-PVSS schemes. The results demonstrate that both schemes exhibit plausible performance compared to related works.
Last updated:  2025-04-11
SoK: Self-Generated Nudes over Private Chats: How Can Technology Contribute to a Safer Sexting?
Joel Samper and Bernardo Ferreira
More and more people take advantage of mobile apps to strike up relationships and casual contacts. This sometimes results in the sharing of self-generated nudes. While this opens a way for sexual exploration, it also raises concerns. In this paper, we review existing technology-assisted permissive proposals/features that provide security, privacy or accountability benefits when sharing nudes online. To do so, we performed a systematic literature review combing through 10,026 search results and cross-references, and we identified real-world solutions by surveying OS features and 52 dating, messaging and social network apps. We systematized knowledge by defining a sexting threat model, deriving a taxonomy of the proposals/features, discussing some of their shortcomings, organizing privacy-related concepts, and providing take-aways with some directions for future research and development. Our study found a very diverse ecosystem of academic proposals and app features, showing that safer sexting goes far beyond nude detection. None of the techniques represents the ultimate solution for all threats, but each contributes to a safer sexting in a different way.
Last updated:  2025-04-11
The Art of Bonsai: How Well-Shaped Trees Improve the Communication Cost of MLS
Céline Chevalier, Guirec Lebrun, Ange Martinelli, and Jérôme Plût
Messaging Layer Security (MLS) is a Secure Group Messaging protocol that uses for its handshake a binary tree – called a Ratchet Tree – in order to reach a logarithmic communication cost in the number of group members. This Ratchet Tree represents users as its leaves; therefore any change in the group membership results in adding or removing a leaf in the tree. MLS consequently implements what we call a tree evolution mechanism, consisting of a user add algorithm – determining where to insert a new leaf – and a tree expansion process – stating how to increase the size of the tree when no space is available for a new user. The tree evolution mechanism currently used by MLS is designed so that it naturally left-balances the Ratchet Tree. However, such a tree structure is often quite inefficient in terms of communication cost. Furthermore, one may wonder whether the binary Ratchet Tree has a degree optimized for the features of MLS. Therefore, we study in this paper how to improve the communication cost of the handshake in MLS – realized through an operation called a commit – by considering both the tree evolution mechanism and the tree degree used for the Ratchet Tree. To do so, we determine the tree structure that optimizes its communication cost, and we propose algorithms for both the user add and the tree expansion processes that allow the tree to remain close to that optimal structure and thus to have a communication cost as close as possible to the optimum. We also determine the Ratchet Tree degree that is best suited to a given set of parameters induced by the encryption scheme used by MLS. This study shows that when using classical (i.e. pre-quantum) ciphersuites, a binary tree is indeed the most appropriate Ratchet Tree; nevertheless, with post-quantum algorithms, it generally becomes more interesting to use a ternary tree instead. Our improvements do not change the TreeKEM protocol and are easy to implement. With parameter sets corresponding to practical ciphersuites, they reduce TreeKEM’s communication cost by 5 to 10%. In particular, the gain of 10% appears in the post-quantum setting – when both an optimized tree evolution mechanism and a ternary tree are necessary –, which is precisely the context where any optimization of the protocol’s communication cost is welcome, due to the large bandwidth overhead of post-quantum encrypted communication.
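As a rough illustration of the binary-versus-ternary tradeoff, the toy model below counts only the committer's direct path in a full d-ary tree (one fresh public key plus d-1 ciphertexts per node on the leaf-to-root path), with illustrative sizes for a classical (X25519, 32 B) and a post-quantum (Kyber768, pk 1184 B / ct 1088 B) ciphersuite. The paper's cost model is much finer (resolutions, blank nodes, actual message framing), so the crossover shown here is only indicative:

    import math

    def commit_bytes(n, d, pk, ct):
        # committer's direct path in a full d-ary tree with n leaves
        path_len = math.ceil(math.log(n, d))
        return path_len * (pk + (d - 1) * ct)

    for n in (729, 1024, 65536):
        classical = {d: commit_bytes(n, d, 32, 32) for d in (2, 3)}
        pq = {d: commit_bytes(n, d, 1184, 1088) for d in (2, 3)}
        print(n, "classical:", classical, "post-quantum:", pq)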
Last updated:  2025-04-11
An LLM Framework For Cryptography Over Chat Channels
Danilo Gligoroski, Mayank Raikwar, and Sonu Kumar Jha
Recent advancements in Large Language Models (LLMs) have transformed communication, yet their role in secure messaging remains underexplored, especially in surveillance-heavy environments. At the same time, many governments all over the world are proposing legislation to detect, backdoor, or even ban encrypted communication. That emphasizes the need for alternative ways to communicate securely and covertly over open channels. We propose a novel cryptographic embedding framework that enables covert Public Key or Symmetric Key encrypted communication over public chat channels with human-like produced texts. Some unique properties of our framework are: 1. It is LLM agnostic, i.e., it allows participants to use different local LLM models independently; 2. It is pre- or post-quantum agnostic; 3. It ensures indistinguishability from human-like chat-produced texts. Thus, it offers a viable alternative where traditional encryption is detectable and restricted.
Last updated:  2025-04-11
Optimized One-Dimensional SQIsign Verification on Intel and Cortex-M4
Marius A. Aardal, Gora Adj, Arwa Alblooshi, Diego F. Aranha, Isaac A. Canales-Martínez, Jorge Chavez-Saab, Décio Luiz Gazzoni Filho, Krijn Reijnders, and Francisco Rodríguez-Henríquez
SQIsign is a well-known post-quantum signature scheme due to its small combined signature and public-key size. However, SQIsign suffers from notably long signing times, and verification times are not short either. To improve this, recent research has explored both one-dimensional and two-dimensional variants of SQIsign, each with distinct characteristics. In particular, SQIsign2D's efficient signing and verification times have made it a focal point of recent research. However, the absence of an optimized one-dimensional verification implementation hampers a thorough comparison between these different variants. This work bridges this gap in the literature: we provide a state-of-the-art implementation of one-dimensional SQIsign verification, including novel optimizations. We report a record-breaking one-dimensional SQIsign verification time of 8.55 Mcycles on a Raptor Lake Intel processor, closely matching SQIsign2D on the same processor. For uncompressed signatures, the signature size doubles and we verify in only 5.6 Mcycles. Taking advantage of the inherent parallelism available in isogeny computations, we present 5-core variants that can go as low as 1.3 Mcycles. Furthermore, we present the first implementation that supports both 32-bit and 64-bit processors. It includes optimized assembly code for the Cortex-M4 and has been integrated with the pqm4 project. Our results motivate further research into one-dimensional SQIsign, as it boasts unique features among isogeny-based schemes.
Last updated:  2025-04-11
Do Not Disturb a Sleeping Falcon: Floating-Point Error Sensitivity of the Falcon Sampler and Its Consequences
Xiuhan Lin, Mehdi Tibouchi, Yang Yu, and Shiduo Zhang
Falcon is one of the three post-quantum signature schemes already selected by NIST for standardization. It is the most compact among them, and offers excellent efficiency and security. However, it is based on a complex algorithm for lattice discrete Gaussian sampling which presents a number of implementation challenges. In particular, it relies on (possibly emulated) floating-point arithmetic, which is often regarded as a cause for concern, and has been leveraged in, e.g., side-channel analysis. The extent to which Falcon's use of floating-point arithmetic can cause security issues has yet to be thoroughly explored in the literature. In this paper, we contribute to filling this gap by identifying a way in which Falcon's lattice discrete Gaussian sampler, due to specific design choices, is singularly sensitive to floating-point errors. In the presence of small floating-point discrepancies (which can occur in various ways, including the use of the two almost but not quite equivalent signing procedures ``dynamic'' and ``tree'' exposed by the Falcon API), we find that, when called twice on the same input, the Falcon sampler has a small but significant chance (on the order of once in a few thousand calls) of outputting two different lattice points with a very structured difference, that immediately reveals the secret key. This is in contrast to other lattice Gaussian sampling algorithms like Peikert's sampler and Prest's hybrid sampler, that are stable with respect to small floating-point errors. Correctly generated Falcon signatures include a salt that should in principle prevent the sampler from ever being called on the same input twice. In that sense, our observation has little impact on the security of Falcon signatures per se (beyond echoing warnings about the dangers of repeated randomness). On the other hand, it is critical for derandomized variants of Falcon, which have been proposed for use in numerous settings. One can mention in particular identity-based encryption, SNARK-friendly signatures, and sublinear signature aggregation. For all these settings, small floating-point discrepancies have a chance of resulting in full private key exposure, even when using the slower, integer-based emulated floating-point arithmetic of Falcon's reference implementation.
Last updated:  2025-04-10
Exponent-VRFs and Their Applications
Dan Boneh, Iftach Haitner, Yehuda Lindell, and Gil Segev
Verifiable random functions (VRFs) are pseudorandom functions where the function owner can prove that a generated output is correct relative to a committed key. In this paper we introduce the notion of an exponent-VRF (eVRF): a VRF that does not provide its output y explicitly, but instead provides Y = y·G, where G is a generator of some finite cyclic group (or g^y in multiplicative notation). We construct eVRFs from the Paillier encryption scheme and from DDH, both in the random-oracle model. We then show that an eVRF is a powerful tool that has many important applications in threshold cryptography. In particular, we construct (1) a one-round fully simulatable distributed key-generation protocol (after a single two-round initialization phase), (2) a two-round fully simulatable signing protocol for multiparty Schnorr with a deterministic variant, (3) a two-party ECDSA protocol that has a deterministic variant, (4) a threshold Schnorr signing protocol where the parties can later prove that they signed without being able to frame another group, and (5) an MPC-friendly and verifiable HD-derivation. All these applications are derived from this single new eVRF abstraction, and the resulting protocols are concretely efficient.
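A toy sketch of the interface (not the paper's Paillier- or DDH-based constructions): the output y stays hidden and only the group element g^y is released; correctness proofs are omitted and the group parameters are insecure placeholders:

    import hashlib, secrets

    p, q, g = 467, 233, 4          # toy group: q | p-1, g of order q (insecure)
    sk = secrets.randbelow(q)      # eVRF secret key

    def evrf_eval(sk, x: bytes):
        # Derive the hidden VRF output y from sk and x, release only g^y.
        # A real eVRF also outputs a zero-knowledge proof of correctness.
        y = int.from_bytes(hashlib.sha256(sk.to_bytes(4, 'big') + x).digest(),
                           'big') % q
        return pow(g, y, p)

    assert evrf_eval(sk, b"ctx") == evrf_eval(sk, b"ctx")   # deterministic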
Last updated:  2025-04-10
On the Soundness of Algebraic Attacks against Code-based Assumptions
Miguel Cueto Noval, Simon-Philipp Merz, Patrick Stählin, and Akin Ünal
We study recent algebraic attacks (Briaud-Øygarden EC'23) on the Regular Syndrome Decoding (RSD) problem and the assumptions underlying the correctness of their attacks' complexity estimates. By relating these assumptions to interesting algebraic-combinatorial problems, we prove that they do not hold in full generality. However, we show that they are (asymptotically) true for most parameter sets, supporting the soundness of algebraic attacks on RSD. Further, we prove—without any heuristics or assumptions—that RSD can be broken in polynomial time whenever the number of error blocks times the square of the size of error blocks is larger than 2 times the square of the dimension of the code. Additionally, we use our methodology to attack a variant of the Learning With Errors problem where each error term lies in a fixed set of constant size. We prove that this problem can be broken in polynomial time, given a sufficient number of samples. This result improves on the seminal work by Arora and Ge (ICALP'11), as the attack's time complexity is independent of the LWE modulus.
Last updated:  2025-04-10
Anamorphic-Resistant Encryption; Or Why the Encryption Debate is Still Alive
Yevgeniy Dodis and Eli Goldin
Ever since the introduction of encryption, society has been divided over whether the government (or law enforcement agencies) should have the capability to decrypt private messages (with or without a warrant) of its citizens. From a technical viewpoint, the folklore belief is that semantic security always enables some form of steganography. Thus, adding backdoors to semantically secure schemes is pointless: it only weakens the security of the “good guys”, while “bad guys” can easily circumvent censorship, even if forced to hand over their decryption keys. In this paper we put a dent in this folklore. We formalize three worlds: Dictatoria (“dictator wins”: no convenient steganography, no user cooperation needed), Warrantland (“checks-and-balances”: no convenient steganography, but need user’s cooperation) and Privatopia (“privacy wins”: built-in, high-rate steganography, even if giving away the decryption key). We give strong evidence that all these worlds are possible, thus reopening the encryption debate on a technical level. Our main novelty is the definition and design of special encryption schemes we call anamorphic-resistant (AR). In contrast to the so-called “anamorphic schemes” that were studied in the literature and form the basis of Privatopia, any attempt to steganographically communicate over an AR-encryption scheme will be either impossible or hugely slow (depending on the definitional details).
Last updated:  2025-04-10
ColliderVM: Stateful Computation on Bitcoin without Fraud Proofs
Victor I. Kolobov, Avihu M. Levy, and Moni Naor
Bitcoin script cannot easily access and store state information onchain without an upgrade such as BIP-347 (OP_CAT); this makes performing general (stateful) computation on Bitcoin impossible to do directly. Despite this limitation, several approaches have been proposed to bypass it, with BitVM being the closest to production. BitVM enables fraud-proof-based computation on Bitcoin, relying on a 1-out-of-n honesty assumption. This left open the question of whether it is possible to achieve computation under the same honesty assumption without requiring onlookers to ensure validity through fraud proofs. In this note, we answer this question affirmatively by introducing ColliderVM, a new approach for performing computation on Bitcoin today. Crucially, this approach eliminates some capital inefficiency concerns stemming from reliance on fraud proofs. For our construction, a key point is to replace the Lamport or Winternitz signature-based storage component in contemporary protocols with a hash collision-based commitment. Our techniques are inspired by ColliderScript, but are more efficient, reducing the number of hash evaluations required by at least . With it, we estimate that the Bitcoin script length for STARK proof verification becomes nearly practical, allowing it to be used alongside other, pairing-based proof systems common today in applications.
Last updated:  2025-04-10
Scalable and Fine-Tuned Privacy Pass from Group Verifiable Random Functions
Dennis Faut, Julia Hesse, Lisa Kohl, and Andy Rupp
Anonymous token schemes are cryptographic protocols for limiting the access to online resources to credible users. The resource provider issues a set of access tokens to the credible user that they can later redeem anonymously, i.e., without the provider being able to link their redemptions. When combined with credibility tests such as CAPTCHAs, anonymous token schemes can significantly increase user experience and provider security, without exposing user access patterns to providers. Current anonymous token schemes such as the Privacy Pass protocol by Davidson et al. rely on oblivious pseudorandom functions (OPRFs), which let server and user jointly compute random-looking access tokens. For those protocols, token issuing costs are linear in the number of requested tokens. In this work, we propose a new approach for building anonymous token schemes. Instead of relying on two-party computation to realize a privacy-preserving pseudorandom function evaluation, we propose to offload token generation to the user by using group verifiable random functions (GVRFs). GVRFs are a new cryptographic primitive that allow users to produce verifiable pseudorandomness. In contrast to standard VRFs, verification is anonymous within the group of credible users. We give a construction of group VRFs from the Dodis-Yampolskiy VRF and Equivalence-Class Signatures, based on pairings and a new Diffie-Hellman inversion assumption that we analyze in the Generic Group Model. Our construction enjoys compact public keys and proofs, while evaluation and verification costs are only slightly increased compared to the Dodis-Yampolskiy VRF. By deploying a group VRF instead of an OPRF, we obtain an anonymous token scheme where communication as well as server-side computation during the issuing phase is constant and independent of the number of tokens a user requests. Moreover, by means of our new concept of updatable token policies, the number of unspent tokens in circulation can retrospectively (i.e., even after the credibility check) be decreased or increased in order to react to the current or expected network situation. Our tokens are further countable and publicly verifiable. This comes at the cost of higher computational efforts for token redemption and verification as well as somewhat weaker unlinkability guarantees compared to Privacy Pass.
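The Dodis-Yampolskiy VRF at the core of the GVRF construction evaluates F_sk(x) = g^(1/(sk+x)) in a prime-order group; the sketch below shows just this map over a toy group, omitting the pairing-based verification and the group-anonymity layer entirely:

    p, q, g = 467, 233, 4      # toy group: q | p-1, g of order q (insecure)
    sk = 123

    def dy_eval(sk, x):
        # F_sk(x) = g^(1/(sk+x)); requires sk + x != 0 mod q
        return pow(g, pow(sk + x, -1, q), p)

    print(dy_eval(sk, 5))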
Last updated:  2025-04-10
Efficient Verifiable Mixnets from Lattices, Revisited
Jonathan Bootle, Vadim Lyubashevsky, and Antonio Merino-Gallardo
Mixnets are powerful building blocks for providing anonymity in applications like electronic voting and anonymous messaging. The encryption schemes upon which traditional mixnets are built, as well as the zero-knowledge proofs used to provide verifiability, will, however, soon become insecure once a cryptographically-relevant quantum computer is built. In this work, we construct the most compact verifiable mixnet that achieves privacy and verifiability through encryption and zero-knowledge proofs based on the hardness of lattice problems, which are believed to be quantum-safe. A core component of verifiable mixnets is a proof of shuffle. The starting point for our construction is the proof of shuffle of Aranha et al. (CT-RSA 2021). We first identify an issue with the soundness proof in that work, which is also present in the adaptation of this proof in the mixnets of Aranha et al. (ACM CCS 2023) and Hough et al. (IACR CiC 2025). The issue is that one cannot directly adapt classical proofs of shuffle to the lattice setting due to the splitting structure of the rings used in lattice-based cryptography. This is not just an artifact of the proof, but a problem that manifests itself in practice, and we successfully mount an attack against the implementation of the first of the mixnets. We fix the problem and introduce a general approach for proving shuffles in splitting rings that can be of independent interest. The efficiency improvement of our mixnet over prior work is achieved by switching from re-encryption mixnets (as in the works of Aranha et al. and Hough et al.) to decryption mixnets with very efficient layering based on the hardness of the LWE and LWR problems over polynomial rings. The ciphertexts in our scheme are smaller by approximately a factor of 10X and 2X over the aforementioned instantiations, while the linear-size zero-knowledge proofs are smaller by a factor of 4X and 2X.
Last updated:  2025-04-10
A Systematic Analysis of Pseudorandom Generation for Masked Cryptographic Implementation
Rei Ueno, Naofumi Homma, and Kazuhiko Minematsu
This study analyzes and investigates pseudorandom generation (PRG) in the context of masked cryptographic implementation. Although masking and PRGs have been studied separately for a long time, little prior work studies how the randomness in a masked implementation should be generated. The lack of analysis on mask-bit generators makes the practical security of masked cryptographic implementations unclear, and practitioners (e.g., designers, implementers, and evaluators) may be confused about how to realize them. This study provides a systematic analysis by developing three new adversarial models, which correspond to respective practical scenarios of leakage assessment, quantitative evaluation of side-channel security (e.g., success rate and key-lifetime), and practical deployment. We present the properties required for each scenario by a stage-by-stage analysis. In particular, we support a long-held belief/folklore with a proof: for the output of a PRG for masking, cryptographic security is sufficient but not necessary, while statistical uniformity (precisely, high-dimensional uniform equidistribution) is necessary. In addition, we thoroughly investigate the side-channel security of PRGs in the wild in the masking context. We conclude this study with some recommendations for practitioners and a proposal of a leakage-resilient method with comparable performance.
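For context, the gadget whose randomness is under discussion is standard first-order Boolean masking, which consumes one fresh mask per shared value; the sketch below draws those mask bytes from a hash-based generator, an illustrative choice only (the study's point is precisely which properties such a generator must have):

    import hashlib, secrets

    def prg_mask_byte(seed: bytes, counter: int) -> int:
        # illustrative mask-byte generator: PRG(seed, counter) -> one byte
        return hashlib.sha256(seed + counter.to_bytes(8, 'big')).digest()[0]

    seed = secrets.token_bytes(16)
    x = 0xAB                                   # sensitive value
    m = prg_mask_byte(seed, 0)                 # fresh mask
    shares = (x ^ m, m)                        # first-order Boolean sharing
    assert shares[0] ^ shares[1] == x          # recombination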
Last updated:  2025-04-10
Key Derivation Functions Without a Grain of Salt
Matilda Backendal, Sebastian Clermont, Marc Fischlin, and Felix Günther
Key derivation functions (KDFs) are integral to many cryptographic protocols. Their functionality is to turn raw key material, such as a Diffie-Hellman secret, into a strong cryptographic key that is indistinguishable from random. This guarantee was formalized by Krawczyk together with the seminal introduction of HKDF (CRYPTO 2010), in a model where the KDF only takes a single key material input. Modern protocol designs, however, regularly need to combine multiple secrets, possibly even from different sources, with the guarantee that the derived key is secure as long as at least one of the inputs is good. This is particularly relevant in settings like hybrid key exchange for quantum-safe migration. Krawczyk's KDF formalism does not capture this goal, and there has been surprisingly little work on the security considerations for KDFs since then. In this work, we thus revisit the syntax and security model for KDFs to treat multiple, possibly correlated inputs. Our syntax is assertive: We do away with salts, which are needed in theory to extract from arbitrary sources in the standard model, but in practice, they are almost never used (or even available) and sometimes even misused, as we argue. We use our new model to analyze real-world multi-input KDFs—in Signal's X3DH protocol, ETSI's TS 103 744 standard, and MLS' combiner for pre-shared keys—as well as new constructions we introduce for specialized settings—e.g., a purely blockcipher-based one. We further discuss the importance of collision resistance for KDFs and finally apply our multi-input KDF model to show how hybrid KEM key exchange can be analyzed from a KDF perspective.
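To make the multi-input setting concrete, here is a hedged sketch of a two-input combiner in the style discussed, e.g. for hybrid key exchange: an HKDF-style extract-then-expand over the concatenation of both secrets, with the salt fixed to zeros since, as the paper argues, salts are rarely available in practice. This illustrates the goal (the derived key should be good if either input is good); it is not one of the paper's analyzed constructions:

    import hmac, hashlib

    def hkdf(ikm: bytes, info: bytes, length: int = 32) -> bytes:
        # extract with an all-zero (absent) salt, as is common in practice
        prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
        okm, block, counter = b"", b"", 1
        while len(okm) < length:
            block = hmac.new(prk, block + info + bytes([counter]),
                             hashlib.sha256).digest()
            okm += block
            counter += 1
        return okm[:length]

    def combine(k_classical: bytes, k_pq: bytes) -> bytes:
        # fixed-length inputs avoid ambiguity in the concatenation
        assert len(k_classical) == len(k_pq) == 32
        return hkdf(k_classical + k_pq, b"hybrid-kex v1")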
Last updated:  2025-04-10
Unbounded Multi-Hop Proxy Re-Encryption with HRA Security: An LWE-Based Optimization
Xiaohan Wan, Yang Wang, Haiyang Xue, and Mingqiang Wang
Proxy re-encryption (PRE) schemes enable a semi-honest proxy to transform a ciphertext of one user to another user while preserving the privacy of the underlying message. Multi-hop PRE schemes allow a legal ciphertext to undergo multiple transformations, but for lattice-based multi-hop PREs, the number of transformations is typically bounded due to the growth of error terms. Recently, Zhao et al. (Esorics 2024) introduced a lattice-based unbounded multi-hop (homomorphic) PRE scheme that supports an unbounded number of hops. Nevertheless, their scheme only achieves selective CPA security. In contrast, Fuchsbauer et al. (PKC 2019) proposed a generic framework for constructing HRA-secure unbounded multi-hop PRE schemes from FHE. Despite this, when instantiated with state-of-the-art FHEW-like schemes, the overall key size and efficiency remain unsatisfactory. In this paper, we present a lattice-based unbounded multi-hop PRE scheme with the stronger adaptive HRA security (i.e., security against honest re-encryption attacks), which is more suitable for practical applications. Our scheme features an optimized re-encryption process based on the FHEW-like blind rotation, which resolves the incompatibility between the noise flooding technique and Fuchsbauer et al.'s framework when instantiated with FHEW-like schemes. This results in reduced storage requirements for public keys and offers higher efficiency. Moreover, our optimized unbounded multi-hop PRE scheme can be modified into an unbounded homomorphic PRE, a scheme allowing for arbitrary homomorphic computations over fresh, re-encrypted, and evaluated ciphertexts.
Last updated:  2025-04-10
Taking AI-Based Side-Channel Attacks to a New Dimension
Lucas David Meier, Felipe Valencia, Cristian-Alexandru Botocan, and Damian Vizár
This paper revisits the Hamming Weight (HW) labelling function for machine-learning-assisted side-channel attacks. Contrary to what has been suggested by previous works, our investigation shows that, when paired with modern deep learning architectures and appropriate pre-processing and normalization techniques, it can perform as well as the popular identity labelling function and sometimes even beat it. In fact, we hereby introduce a new machine learning method that helps solve the class imbalance problem associated with HW, while significantly improving the performance of unprofiled attacks. We additionally release our new, easy-to-use Python package that we used in our experiments, implementing a broad variety of machine-learning-driven side-channel attacks as open source, along with a new dataset AES_nRF, acquired on the nRF52840 SoC.
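For reference, the HW labelling function and the class imbalance it induces: a uniform byte's Hamming weight is binomially distributed, so the central class is 70 times more common than the extremes:

    from collections import Counter

    def hw(b: int) -> int:
        return bin(b).count("1")       # 9 classes (0..8) instead of 256

    counts = Counter(hw(b) for b in range(256))
    # counts[w] == C(8, w): 1, 8, 28, 56, 70, 56, 28, 8, 1 for w = 0..8
    print(sorted(counts.items()))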
Last updated:  2025-04-10
Hash-Based Multi-Signatures for Post-Quantum Ethereum
Justin Drake, Dmitry Khovratovich, Mikhail Kudinov, and Benedikt Wagner
With the threat posed by quantum computers on the horizon, systems like Ethereum must transition to cryptographic primitives resistant to quantum attacks. One of the most critical of these primitives is the non-interactive multi-signature scheme used in Ethereum's proof-of-stake consensus, currently implemented with BLS signatures. This primitive enables validators to independently sign blocks, with their signatures then publicly aggregated into a compact aggregate signature. In this work, we introduce a family of hash-based signature schemes as post-quantum alternatives to BLS. We consider the folklore method of aggregating signatures via (hash-based) succinct arguments, and our work is focused on instantiating the underlying signature scheme. The proposed schemes are variants of the XMSS signature scheme, analyzed within a novel and unified framework. While being generic, this framework is designed to minimize security loss, facilitating efficient parameter selection. A key feature of our work is the avoidance of random oracles in the security proof. Instead, we define explicit standard model requirements for the underlying hash functions. This eliminates the paradox of simultaneously treating hash functions as random oracles and as explicit circuits for aggregation. Furthermore, this provides cryptanalysts with clearly defined targets for evaluating the security of hash functions. Finally, we provide recommendations for practical instantiations of hash functions and concrete parameter settings, supported by known and novel heuristic bounds on the standard model properties.
Last updated:  2025-04-10
Perfect 2-Party Computation from Additive Somewhat Homomorphic Encryption
Jonathan Trostle
Two-party computation has been an active area of research since Yao's breakthrough results on garbled circuits. We present secret-key somewhat additively homomorphic schemes where the client has perfect privacy (the server can be computationally unbounded). Our basic scheme is somewhat additively homomorphic, and we extend it to support multiplication. The server handles circuit multiplication gates by returning the multiplicands to the client, which updates the decryption key so that the original ciphertext vector includes the encrypted multiplication gate outputs. We give a 2-party protocol that also incorporates server inputs, where the client has perfect privacy. Server privacy holds against a computationally bounded adversary, since it depends on the hardness of the subset sum problem. Correctness of the server in the malicious model can be verified by a third party, with both client and server privacy protected from the verifier.
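As a toy illustration of the interface (not the paper's subset-sum-based construction), here is a secret-key scheme that is additively homomorphic with perfect client privacy, since each ciphertext is a one-time additive pad:

    import secrets

    M = 2**64

    class Client:
        def __init__(self, n_cts):
            self.keys = [secrets.randbelow(M) for _ in range(n_cts)]
        def enc(self, i, m):
            return (m + self.keys[i]) % M          # one-time additive pad
        def dec_sum(self, ct_sum, indices):
            # the server summed ciphertexts; decrypt with the summed keys
            return (ct_sum - sum(self.keys[i] for i in indices)) % M

    c = Client(3)
    cts = [c.enc(i, m) for i, m in enumerate([10, 20, 12])]
    assert c.dec_sum(sum(cts) % M, [0, 1, 2]) == 42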
Last updated:  2025-04-09
ECDSA Cracking Methods
William J Buchanan, Jamie Gilchrist, and Keir Finlow-Bates
The ECDSA (Elliptic Curve Digital Signature Algorithm) is used in many blockchain networks for digital signatures. This includes the Bitcoin and the Ethereum blockchains. While it offers good performance and strong current security, it must be handled with care. This care typically relates to the usage of the nonce value used to create the signature. This paper outlines the methods that can be used to break ECDSA signatures, including revealed nonces, weak nonce choice, nonce reuse, two keys and shared nonces, and fault attacks.
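The nonce-reuse break, for instance, is pure modular algebra over the curve order; the sketch below uses the secp256k1 group order with toy values for the hashes and r, since only the algebra matters here:

    # Two signatures (r, s1), (r, s2) on hashes z1, z2 sharing a nonce k
    # leak the private key d.
    n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

    d, k = 0xC0FFEE, 0x1234567     # secret key and (reused) nonce
    z1, z2, r = 111, 222, 333      # message hashes and shared r (toy values)

    s1 = pow(k, -1, n) * (z1 + r * d) % n
    s2 = pow(k, -1, n) * (z2 + r * d) % n

    k_rec = (z1 - z2) * pow(s1 - s2, -1, n) % n      # recover the nonce
    d_rec = (s1 * k_rec - z1) * pow(r, -1, n) % n    # then the private key
    assert (k_rec, d_rec) == (k, d)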
Last updated:  2025-04-09
Fission: Distributed Privacy-Preserving Large Language Model Inference
Mehmet Ugurbil, Dimitris Mouris, Manuel B. Santos, José Cabrero-Holgueras, Miguel de Vega, and Shubho Sengupta
The increased popularity of large language models (LLMs) raises serious privacy concerns, where users' private queries are sent to untrusted servers. Many cryptographic techniques have been proposed to provide privacy, such as secure multiparty computation (MPC), which enables the evaluation of LLMs directly on private data. However, cryptographic techniques have been deemed impractical as they introduce large communication and computation overheads. On the other hand, many obfuscation techniques have been proposed, such as split inference, where part of the model is evaluated on edge devices to hide the input data from untrusted servers, but these methods provide limited privacy guarantees. We propose Fission, a privacy-preserving framework that improves latency while providing strong privacy guarantees. Fission utilizes an MPC network for linear computations, while nonlinearities are computed on a separate evaluator network that receives shuffled values in the clear and returns nonlinear functions evaluated at these values back to the MPC network. As a result, each evaluator only gets access to parts of the shuffled data, while the model weights remain private. We evaluate Fission on a wide set of LLMs and compare it against prior works. Fission results in up to eight times faster inference and eight times reduced bandwidth compared to prior works while retaining high accuracy. Finally, we construct an attack on the obfuscation techniques of related works that shows significant information leakage, and we demonstrate how Fission enhances privacy.
Last updated:  2025-04-09
MultiCent: Secure and Scalable Centrality Measures on Multilayer Graphs
Andreas Brüggemann, Nishat Koti, Varsha Bhat Kukkala, and Thomas Schneider
As real-world networks such as social networks and computer networks are often complex and distributed, modeling them as multilayer graphs is gaining popularity. For instance, when studying social interactions across platforms like LinkedIn, Facebook, TikTok, and Bluesky, users may be connected on several of these platforms. To identify important nodes/users, the platforms might wish to analyze user interactions using, e.g., centrality measures when accounting for connections across all platforms. That raises the challenge for platforms to perform such computation while simultaneously protecting their user data to shelter their own business as well as uphold data protection laws. This necessitates designing solutions that allow for performing secure computation on a multilayer graph which is distributed among mutually distrusting parties while keeping each party's data hidden. The work of Asharov et al. (WWW'17) addresses this problem by designing secure solutions for centrality measures that involve computing the truncated Katz score and reach score on multilayer graphs. However, we identify several limitations in that work which render the solution inefficient or even infeasible for realistic networks with significantly more than 10k nodes. We address these limitations by designing secure solutions that are significantly more efficient and scalable. In more detail, given that real-world graphs are known to be sparse, our solutions move away from an expensive matrix-based representation to a more efficient list-based representation. We design novel, secure, and efficient solutions for computing centrality measures and prove their correctness. Our solutions drastically reduce the asymptotic complexity from the prohibitive even for the fastest solution by Asharov et al. down to , for nodes. To design our solutions, we build upon the secure graph computation framework of Koti et al. (CCS'24), providing a novel framework with improved capabilities in multiple directions. Finally, we provide an end-to-end implementation of our secure graph analysis framework and establish concrete efficiency improvements over prior work, observing several orders of magnitude improvement.
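For reference, the truncated Katz score the protocols compute is, in the clear, an iterated sparse matrix-vector product over the (layer-aggregated) adjacency lists; the damping factor alpha and truncation depth t below are illustrative:

    def truncated_katz(adj, alpha=0.1, t=3):
        # adj[u] lists the neighbors of u (edges aggregated over all layers);
        # score = sum over i = 1..t of alpha^i * (A^i * 1)
        n = len(adj)
        score, vec, a = [0.0] * n, [1.0] * n, 1.0
        for _ in range(t):
            vec = [sum(vec[v] for v in adj[u]) for u in range(n)]  # vec <- A*vec
            a *= alpha
            score = [s + a * x for s, x in zip(score, vec)]
        return score

    print(truncated_katz([[1], [0, 2], [1, 3], [2]]))   # path graph on 4 nodes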
Last updated:  2025-04-09
Low-Latency Bootstrapping for CKKS using Roots of Unity
Jean-Sébastien Coron and Robin Köstler
We introduce a new bootstrapping equation for the CKKS homomorphic encryption scheme of approximate numbers. The original bootstrapping approach for CKKS consists in homomorphically evaluating a polynomial that approximates the modular reduction modulo q. In contrast, our new bootstrapping equation directly embeds the additive group modulo q into the complex roots of unity, which can be evaluated natively in the CKKS scheme. Due to its reduced multiplicative depth, our new bootstrapping equation achieves a 7x latency improvement for a single slot compared to the original CKKS bootstrapping, though it scales less efficiently when applied to a larger number of slots.
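The embedding at the heart of the new equation can be sanity-checked numerically: mapping t to exp(2*pi*i*t/q) collapses t and t mod q to the same complex root of unity, which is exactly the kind of function CKKS evaluates natively on its approximate complex slots:

    import cmath

    q = 17
    embed = lambda t: cmath.exp(2j * cmath.pi * t / q)

    t = 123456
    assert abs(embed(t) - embed(t % q)) < 1e-9        # embedding kills + q*Z
    angle = cmath.phase(embed(t)) % (2 * cmath.pi)    # recover t mod q
    assert round(angle * q / (2 * cmath.pi)) == t % q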
Last updated:  2025-04-09
ADC-BE: Optimizing Worst-Case Bandwidth in Broadcast Encryption with Boolean Functions
Yadi Zhong
Recently, Dupin and Abelard proposed a broadcast encryption scheme which outperforms the Complete Subtree-based and Subset Difference broadcast encryption in terms of encryption cost and bandwidth requirement. However, Dupin and Abelard acknowledge that the worst-case bound for the bandwidth requirement of the Complete Subtree approach can be reached in their scheme as well. In this paper, we answer the call to further reduce this bandwidth bottleneck. We first provide a concrete analysis to show how this worst-case upper bound is reached from concrete Boolean functions. Then we present two improved broadcast encryption schemes to significantly reduce this worst-case bandwidth consumption for further optimization of Dupin and Abelard’s technique. Our proposed approach ADC-BE, composed of two algorithms, AD-BE and AC-BE, can significantly optimize this worst-case complexity from n/2 down to 1 for a system of n users. This is especially efficient for a large number of users in the system. Our proposed schemes combine the algebraic normal form, disjunctive normal form, and conjunctive normal form to optimize a Boolean function to its minimized representation. In addition, our approaches can be made secure against quantum adversaries and are therefore post-quantum, with both algorithms AD-BE and AC-BE requiring minimal assumptions based on the existence of one-way functions.
Last updated:  2025-04-09
Legacy Encryption Downgrade Attacks against LibrePGP and CMS
Falko Strenzke and Johannes Roth
This work describes vulnerabilities in the specification of AEAD modes and Key Wrap in two cryptographic message formats. Firstly, this applies to AEAD packets as introduced in the novel LibrePGP specification that is implemented by the widely used GnuPG application. Secondly, we describe vulnerabilities in the AES-based AEAD schemes as well as the Key Wrap Algorithm specified in the Cryptographic Message Syntax (CMS). These new attacks exploit the possibility to downgrade AEAD or AES Key Wrap ciphertexts to valid legacy CFB- or CBC-encrypted related ciphertexts and require that the attacker learns the content of the legacy decryption result. This can happen in two principal ways: either due to the human recipient returning the decryption output to the attacker as a quote or due to a programmatic decryption oracle in the receiving system that reveals information about the plaintext. The attacks effect the decryption of low-entropy plaintext blocks in AEAD ciphertexts and, in the case of LibrePGP, also the manipulation of existing AEAD ciphertexts. For AES Key Wrap in CMS, full key decryption is possible. Some of the attacks require multiple successful oracle queries. The attacks thus demonstrate that CCA2 security is not achieved by the LibrePGP and CMS AEAD or Key Wrap encryption in the presence of a legacy cipher mode decryption oracle.
Last updated:  2025-04-09
Quarantined-TreeKEM: a Continuous Group Key Agreement for MLS, Secure in Presence of Inactive Users
Céline Chevalier, Guirec Lebrun, Ange Martinelli, and Abdul Rahman Taleb
The recently standardized secure group messaging protocol Messaging Layer Security (MLS) is designed to ensure asynchronous communications within large groups, with an almost-optimal communication cost and the same security level as point-to-point secure messaging protocols such as Signal. In particular, the core sub-protocol of MLS, a Continuous Group Key Agreement (CGKA) called TreeKEM, must generate a common group key that respects the fundamental security properties of post-compromise security and forward secrecy, which mitigate the effects of user corruption over time. Most research on CGKAs has focused on how to improve these two security properties. However, post-compromise security and forward secrecy require the active participation of, respectively, all compromised users and all users within the group. Inactive users – who remain offline for long periods – no longer update their encryption keys and therefore represent a vulnerability for the entire group. This issue has already been identified in the MLS standard, but no solution, other than expelling these inactive users after some disconnection time, has been found. We propose here a CGKA protocol based on TreeKEM and fully compatible with the MLS standard, that implements a quarantine mechanism for the inactive users in order to mitigate the risk induced by these users during their inactivity period and before they are removed from the group. That mechanism indeed updates the inactive users’ encryption keys on their behalf and secures these keys with a secret sharing scheme. If some of the inactive users eventually reconnect, their quarantine stops and they are able to recover all the messages that were exchanged during their offline period. Our Quarantined-TreeKEM protocol thus increases the security of original TreeKEM, with a very limited – and sometimes negative – communication overhead.
Last updated:  2025-04-09
Guaranteed Termination Asynchronous Complete Secret Sharing with Lower Communication and Optimal Resilience
Ying Cai, Chengyi Qin, and Mingqiang Wang
Asynchronous Complete Secret Sharing (ACSS) is a foundational module for asynchronous networks, playing a critical role in cryptography. It is essential for Asynchronous Secure Multi-Party Computation (AMPC) and, with termination, is widely applied in Validated Asynchronous Byzantine Agreement (VABA) and Asynchronous Distributed Key Generation (ADKG) to support secure distributed systems. Currently, there are relatively few statistically secure ACSS protocols that can guarantee termination, and their communication complexity is relatively high. To reduce communication complexity, we propose a new multi-receiver signature scheme, ARICP, which supports linear operations on signatures. Leveraging the ARICP scheme and the properties of symmetric polynomials, we propose an ACSS protocol that ensures termination and optimal resilience with bits per sharing. Compared with the best-known result of ACSS protocols that guarantee termination [CP23], the amortized communication complexity of our protocol is reduced by a factor of .
Last updated:  2025-04-09
Fully Homomorphic Encryption for Cyclotomic Prime Moduli
Robin Geelen and Frederik Vercauteren
This paper presents a Generalized BFV (GBFV) fully homomorphic encryption scheme that encrypts plaintext spaces of the form Z[x]/(Φ_m(x), t(x)), with Φ_m(x) the m-th cyclotomic polynomial and t(x) an arbitrary polynomial. GBFV encompasses both BFV, where t(x) = t is a constant, and the CLPX scheme (CT-RSA 2018), where m is a power of two and t(x) is a linear polynomial. The latter can encrypt a single huge integer, has much lower noise growth than BFV, but it is not known to be efficiently bootstrappable. We show that by a clever choice of m and a higher-degree polynomial t(x), our scheme combines the SIMD capabilities of BFV with the low noise growth of CLPX, whilst still being efficiently bootstrappable. Moreover, we present parameter families that natively accommodate packed plaintext spaces defined by a large cyclotomic prime, such as the Fermat prime 2^16 + 1 and the Goldilocks prime 2^64 - 2^32 + 1. These primes are often used in homomorphic encryption applications and zero-knowledge proof systems. Due to the lower noise growth, GBFV can evaluate much deeper circuits compared to native BFV in the same ring dimension. As a result, we can evaluate either larger circuits or work with smaller ring dimensions. In particular, we can natively bootstrap GBFV at 128-bit security already at a smaller ring dimension than was previously possible. We implemented the GBFV scheme on top of the SEAL library and achieve a latency of only 2 seconds to bootstrap a ciphertext encrypting up to 8192 elements modulo 65537.
Last updated:  2025-04-09
Revisiting Oblivious Top-k Selection with Applications to Secure k-NN Classification
Kelong Cong, Robin Geelen, Jiayi Kang, and Jeongeun Park
An oblivious Top-k algorithm selects the k smallest elements from n elements while ensuring the sequence of operations and memory accesses do not depend on the input. In 1969, Alekseev proposed an oblivious Top-k algorithm with complexity , which was later improved by Yao in 1980 for small k. In this paper, we revisit the literature on oblivious Top-k and propose another improvement of Alekseev's method that outperforms both for large k. Our construction is equivalent to applying a new truncation technique to Batcher's odd-even sorting algorithm. In addition, we propose a combined network to take advantage of both Yao's and our technique that achieves the best concrete performance, in terms of the number of comparators, for any k. To demonstrate the efficiency of our combined Top-k network, we implement a secure non-interactive k-nearest neighbors classifier using homomorphic encryption as an application. Compared with the work of Zuber and Sirdey (PoPETS 2021) where oblivious Top-k was realized with complexity , our experimental results show a speedup of up to 47 times (not accounting for difference in CPU).
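As a baseline for what is being truncated, here is an oblivious Top-k obtained by running Batcher's odd-even merge sorting network (a fixed, data-independent sequence of compare-exchanges) and keeping the first k wires; the paper's contribution is, roughly, removing the comparators that cannot influence those wires, which this sketch does not implement:

    def oddeven_merge(lo, n, r):
        step = r * 2
        if step < n:
            yield from oddeven_merge(lo, n, step)
            yield from oddeven_merge(lo + r, n, step)
            yield from ((i, i + r) for i in range(lo + r, lo + n - r, step))
        else:
            yield (lo, lo + r)

    def oddeven_merge_sort(lo, hi):        # sorts wires lo..hi, size a power of 2
        if hi > lo:
            mid = lo + (hi - lo) // 2
            yield from oddeven_merge_sort(lo, mid)
            yield from oddeven_merge_sort(mid + 1, hi)
            yield from oddeven_merge(lo, hi - lo + 1, 1)

    def oblivious_top_k(values, k):
        a = list(values)                   # len(a) must be a power of two
        for i, j in oddeven_merge_sort(0, len(a) - 1):
            if a[i] > a[j]:                # compare-exchange
                a[i], a[j] = a[j], a[i]
        return a[:k]

    assert oblivious_top_k([7, 3, 9, 1, 5, 8, 2, 6], 3) == [1, 2, 3]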
Last updated:  2025-04-09
SLAMP-FSS: Two-Party Multi-Point Function Secret Sharing from Simple Linear Algebra
Erki Külaots, Toomas Krips, Hendrik Eerikson, and Pille Pullonen-Raudvere
Multi-point function secret sharing (FSS) is a building block for pseudorandom correlation generators used in the novel silent correlation generation methods for various secure multiparty computation applications. However, the main construction used so far is the naive approach of combining several single-point functions. In this paper, we propose an efficient and natural generalization of the point function FSS scheme of Boyle et al. 2016 [BGI16], using a tree structure, a pseudorandom generator and systems of linear equations. Our schemes are more efficient in the evaluation phase than other previously proposed multi-point FSS schemes, while also being more flexible and similar in other efficiency parameters. Our setup phase is similar in cost to previous best versions, while the full evaluation, which is by far the costliest step, is more efficient.
Last updated:  2025-04-09
Anamorphic Voting: Ballot Freedom Against Dishonest Authorities
Rosario Giustolisi, Mohammadamin Rakeei, and Gabriele Lenzini
Electronic voting schemes typically ensure ballot privacy by assuming that the decryption key is distributed among tallying authorities, preventing any single authority from decrypting a voter’s ballot. However, this assumption may fail in a fully dishonest environment where all tallying authorities collude to break ballot privacy. In this work, we introduce the notion of anamorphic voting, which enables voters to convey their true voting intention to an auditor while casting an (apparently) regular ballot. We present new cryptographic techniques demonstrating that several existing voting schemes can support anamorphic voting.
Last updated:  2025-04-09
An Efficient SNARK for Field-Programmable and RAM Circuits
Jehyuk Jang and Jamie Judd
The advancement of succinct non-interactive arguments of knowledge (SNARKs) with constant proof size has significantly enhanced the efficiency and privacy of verifiable computation. Verifiable computation finds applications in distributed computing networks, particularly in scenarios where nodes cannot be generally trusted, such as blockchains. However, fully harnessing the efficiency of SNARKs becomes challenging when the computing targets in the network change frequently, as the SNARK verification can involve some untrusted preprocessing of the target, which is expected to be reproduced by other nodes. This problem can be addressed with two approaches: one relieves the reproduction overhead by reducing the dimensionality of the preprocessing data; the other utilizes verifiable machine computation, which eliminates the dependency on preprocessing at the cost of increased overhead in SNARK proving and verification. In this paper, we propose a new SNARK with constant proof size applicable to both approaches. The proposed SNARK combines the efficiency of the Groth16 protocol, albeit lacking universality for new problems, and the universality of the PlonK protocol, albeit with significantly larger preprocessing data dimensions. Consequently, we demonstrate that our proposed SNARK maintains the efficiency and the universality while significantly reducing the dimensionality of the preprocessing data. Furthermore, our SNARK can be seamlessly applied to verifiable machine computation, requiring a proof size about four to ten times smaller than other related works.
Last updated:  2025-04-09
Cauchyproofs: Batch-Updatable Vector Commitment with Easy Aggregation and Application to Stateless Blockchains
Zhongtang Luo, Yanxue Jia, Alejandra Victoria Ospina Gracia, and Aniket Kate
Stateless blockchain designs have emerged to address the challenge of growing blockchain size using succinct global states. Previous works have developed vector commitments that support proof updates and aggregation to be used as such states. However, maintaining proofs for multiple users still demands significant computational resources, particularly to update proofs with every transaction. This paper introduces Cauchyproofs, a batch-updatable vector commitment that enables proof-serving nodes to efficiently update proofs in quasi-linear time relative to the number of users and transactions, utilizing an optimized KZG scheme to achieve complexity for users and transactions, compared to the previous approaches. This advancement reduces the computational burden on proof-serving nodes, allowing efficient proof maintenance across large user groups. We demonstrate that our approach is approximately eight times faster than the naive approach at Ethereum-level transaction throughput if we perform a batch update every hour. Additionally, we present a novel matrix representation for KZG proofs utilizing Cauchy matrices, enabling faster all-proof computations with reduced elliptic curve operations. Finally, we propose an algorithm for history proof queries, supporting retrospective proof generation with high efficiency. Our contributions substantially enhance the scalability and practicality of proof-serving nodes in stateless blockchain frameworks.
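The structured matrices in question are Cauchy matrices, C[i][j] = 1/(x_i - y_j) over the field, for disjoint evaluation sets. A minimal construction over a toy prime field follows; the modulus here is illustrative, since a real KZG instantiation works in the scalar field of a pairing-friendly curve:

    p = 0xFFFFFFFF00000001          # Goldilocks prime, used here as a toy field

    def cauchy(xs, ys):
        assert not set(xs) & set(ys)          # keeps every x_i - y_j invertible
        return [[pow(x - y, -1, p) for y in ys] for x in xs]

    C = cauchy([1, 2, 3], [4, 5, 6])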
Last updated:  2025-04-09
Security in the Presence of Key Reuse: Context-Separable Interfaces and their Applications
Christopher Patton and Thomas Shrimpton
Key separation is often difficult to enforce in practice. While key reuse can be catastrophic for security, we know of a number of cryptographic schemes for which it is provably safe. But existing formal models, such as the notions of joint security (Haber-Pinkas, CCS ’01) and agility (Acar et al., EUROCRYPT ’10), do not address the full range of key-reuse attacks—in particular, those that break the abstraction of the scheme, or exploit protocol interactions at a higher level of abstraction. This work attends to these vectors by focusing on two key elements: the game that codifies the scheme under attack, as well as its intended adversarial model; and the underlying interface that exposes secret key operations for use by the game. Our main security experiment considers the implications of using an interface (in practice, the API of a software library or a hardware platform such as TPM) to realize the scheme specified by the game when the interface is shared with other unspecified, insecure, or even malicious applications. After building up a definitional framework, we apply it to the analysis of two real-world schemes: the EdDSA signature algorithm and the Noise protocol framework. Both provide some degree of context separability, a design pattern for interfaces and their applications that aids in the deployment of secure protocols.
Last updated:  2025-04-08
The many faces of Schnorr: a toolkit for the modular design of threshold Schnorr signatures
Victor Shoup
Recently, a number of highly optimized threshold signing protocols for Schnorr signatures have been proposed. While these proposals contain important new techniques, some of them present and analyze these techniques in very specific contexts, making it less than obvious how these techniques can be adapted to other contexts, or combined with one another. The main goal of this paper is to abstract out and extend in various ways some of these techniques, building a toolbox of techniques that can be easily combined in different ways and in different contexts. To this end, we present security results for various "enhanced" modes of attack on the Schnorr signature scheme in the non-distributed setting, and we demonstrate how to reduce the security in the distributed threshold setting to these enhanced modes of attack in the non-distributed setting. This results in a very modular approach to protocol design and analysis, which can be used to easily design new threshold Schnorr protocols that enjoy better security and/or performance properties than existing ones.
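For reference, the (single-party) Schnorr scheme that all of these threshold protocols distribute, in a minimal toy-group sketch (tiny, insecure parameters; the hash-to-scalar convention is ours): sign with s = k + c*x, verify g^s == R * X^c:

    import hashlib, secrets

    p, q, g = 467, 233, 4            # toy group: q | p-1, g of order q

    def H(R, X, m):
        data = f"{R}|{X}|{m}".encode()
        return int.from_bytes(hashlib.sha256(data).digest(), 'big') % q

    def keygen():
        x = secrets.randbelow(q)
        return x, pow(g, x, p)

    def sign(x, X, m):
        k = secrets.randbelow(q)     # per-signature nonce
        R = pow(g, k, p)
        c = H(R, X, m)
        return R, (k + c * x) % q

    def verify(X, m, sig):
        R, s = sig
        return pow(g, s, p) == (R * pow(X, H(R, X, m), p)) % p

    x, X = keygen()
    assert verify(X, "hello", sign(x, X, "hello"))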
Last updated:  2025-04-08
Secret-Key PIR from Random Linear Codes
Caicai Chen, Yuval Ishai, Tamer Mour, and Alon Rosen
Private information retrieval (PIR) allows a client to privately read a chosen bit from an -bit database with bits of communication. Lin, Mook, and Wichs (STOC 2023) showed that by preprocessing into an encoded database , it suffices to access only bits of per query. This requires , and a prohibitively large server circuit size. We consider an alternative preprocessing model (Boyle et al. and Canetti et al., TCC 2017), where the encoding depends on a client's short secret key. In this secret-key PIR (sk-PIR) model we construct a protocol with communication, for any constant , from the Learning Parity with Noise assumption in a parameter regime not known to imply public-key encryption. This is evidence against public-key encryption being necessary for sk-PIR. Under a new conjecture related to the hardness of learning a hidden linear subspace of with noise, we construct sk-PIR with similar communication and encoding size in which the server is implemented by a Boolean circuit of size . This is the first candidate PIR scheme with such a circuit complexity.
Last updated:  2025-04-08
GIGA Protocol: Unlocking Trustless Parallel Computation in Blockchains
Alberto Garoffolo, Dmytro Kaidalov, Roman Oliynykov, Daniele Di Tullio, and Mariia Rodinko
The scalability of modern decentralized blockchain systems is constrained by the requirement that the participating nodes execute the entire chain's transactions without the ability to delegate the verification workload across multiple actors trustlessly. This is further limited by the need for sequential transaction execution and repeated block validation, where each node must re-execute all transactions before accepting blocks, also leading to delayed broadcasting in many architectures. Consequently, throughput is limited by the capacity of individual nodes, significantly preventing scalability. In this paper, we introduce GIGA, a SNARK-based protocol that enables trustless parallel execution of transactions, processing non-conflicting operations concurrently, while preserving security guarantees and state consistency. The protocol organizes transactions into non-conflicting batches which are executed and proven in parallel, distributing execution across multiple decentralized entities. These batch proofs are recursively aggregated into a single succinct proof that validates the entire block. As a result, the protocol both distributes the execution workload and removes redundant re-execution from the network, significantly improving blockchain throughput while not affecting decentralization. Performance estimates demonstrate that, under the same system assumptions (e.g., consensus, networking, and virtual machine architecture) and under high degrees of transaction parallelism (i.e., when most transactions operate on disjoint parts of the state), our protocol may achieve over a 10000x throughput improvement compared to popular blockchain architectures that use sequential execution models, and over a 500x improvement compared to blockchain architectures employing intra-node parallelization schemes. Furthermore, our protocol enables a significant increase in transaction computational complexity, unlocking a wide range of use cases that were previously unfeasible on traditional blockchain architectures due to the limited on-chain computational capacity. Additionally, we propose a reward mechanism that ensures the economic sustainability of the proving network, dynamically adjusting to computational demand while fostering competition among provers based on cost-efficiency and reliability.
Last updated:  2025-04-08
Binding Security of Implicitly-Rejecting KEMs and Application to BIKE and HQC
Juliane Krämer, Patrick Struck, and Maximiliane Weishäupl
In this work, we continue the analysis of the binding properties of implicitly-rejecting key-encapsulation mechanisms (KEMs) obtained via the Fujisaki-Okamoto (FO) transform. These binding properties, in earlier literature known under the term robustness, thwart attacks that can arise when using KEMs in complex protocols. Recently, Cremers et al. (CCS'24) introduced a framework for binding notions, encompassing previously existing but also new ones. While implicitly-rejecting FO-KEMs have been analyzed with respect to multiple of these notions, there are still several gaps. We complete the picture by providing positive and negative results for the remaining notions. Further, we show how to apply our results to the code-based KEMs BIKE and HQC, which were round-4 candidates in NIST's PQC standardization process. Through this, we close a second gap as our results complete the analysis of the binding notions for the NIST round-4 KEMs. Finally, we give a modified version of the FO transform that achieves all binding notions.
Last updated:  2025-04-08
Attacking at non-harmonic frequencies in screaming-channel attacks
Jeremy Guillaume, Maxime Pelcat, Amor Nafkha, and Ruben Salvador
Screaming-channel attacks enable Electromagnetic (EM) Side-Channel Attacks (SCAs) at larger distances due to higher EM leakage energies than traditional SCAs, relaxing the requirement of close access to the victim. This attack can be mounted on devices integrating Radio Frequency (RF) modules on the same die as digital circuits, where the RF can unintentionally capture, modulate, amplify, and transmit the leakage along with legitimate signals. Leakage results from digital switching activity, so the hypothesis of previous works was that this leakage would appear at multiples of the digital clock frequency, i.e., harmonics. This work demonstrates that compromising signals appear not only at the harmonics and that leakage at non-harmonics can be exploited for successful attacks. Indeed, the transformations undergone by the leaked signal are complex due to propagation effects through the substrate and power and ground planes, so the leakage also appears at other frequencies. We first propose two methodologies to locate frequencies that contain leakage and demonstrate that it appears at non-harmonic frequencies. Then, our experimental results show that screaming-channel attacks at non-harmonic frequencies can be as successful as at harmonics when retrieving a 16-byte AES key. As the RF spectrum is polluted by interfering signals, we run experiments and show successful attacks in a more realistic, noisy environment where harmonic frequencies are contaminated by multi-path fading and interference. These attacks at non-harmonic frequencies increase the attack surface by providing attackers with an increased number of potential frequencies where attacks can succeed.
Last updated:  2025-04-08
Obfuscation for Deep Neural Networks against Model Extraction: Attack Taxonomy and Defense Optimization
Yulian Sun, Vedant Bonde, Li Duan, and Yong Li
Well-trained deep neural networks (DNN), including large language models (LLM), are valuable intellectual property assets. To defend against model extraction attacks, one of the major ideas proposed in a large body of previous research is obfuscation: splitting the original DNN and storing the components separately. However, systematically analyzing these methods' security against various attacks and optimizing the efficiency of the defenses remain challenging. In this paper, we propose a taxonomy of model-based extraction attacks, which enables us to identify vulnerabilities of several existing obfuscation methods. We also propose an extremely efficient model obfuscation method called O2Splitter using a trusted execution environment (TEE). The secrets we store in the TEE have O(1) size, i.e., independent of model size. Although O2Splitter relies on a pseudo-random function to provide a quantifiable guarantee for protection and noise compression, it does not need any complicated training or filtering of the weights. Our comprehensive experiments show that O2Splitter can mitigate norm-clipping and fine-tuning attacks. Even for small noise (ϵ = 50), the accuracy of the obfuscated model is close to a random guess, and the tested attacks cannot extract a model with comparable accuracy. In addition, the empirical results also shed light on the relation between DP parameters in obfuscation and the risks of concrete extraction attacks.
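A minimal sketch of the O(1)-secret idea, under our own simplified model (not the paper's code): the TEE keeps only a short PRF key, obfuscation adds PRF-derived noise to each weight, and the TEE can strip the same noise on demand, so no per-weight secret ever needs to be stored:

```python
import hashlib, hmac, struct

def prf_noise(key: bytes, idx: int, eps: float) -> float:
    """PRF-derived noise for weight `idx`, uniform in [-eps, eps)."""
    d = hmac.new(key, struct.pack(">Q", idx), hashlib.sha256).digest()
    u = struct.unpack(">Q", d[:8])[0] / 2**64
    return (2 * u - 1) * eps

def obfuscate(weights, key, eps=50.0):             # runs outside the TEE
    return [w + prf_noise(key, i, eps) for i, w in enumerate(weights)]

def deobfuscate(obf, key, eps=50.0):               # runs inside the TEE
    return [w - prf_noise(key, i, eps) for i, w in enumerate(obf)]

key = b"tee-resident-prf-key"          # the only stored secret: O(1) size
w = [0.12, -0.53, 0.78]
restored = deobfuscate(obfuscate(w, key), key)
assert all(abs(a - b) < 1e-9 for a, b in zip(w, restored))
```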
Last updated:  2025-04-08
A Meta-Complexity Characterization of Quantum Cryptography
Bruno P. Cavalar, Eli Goldin, Matthew Gray, and Peter Hall
We prove the first meta-complexity characterization of a quantum cryptographic primitive. We show that one-way puzzles exist if and only if there is some quantum samplable distribution of binary strings over which it is hard to approximate Kolmogorov complexity. Therefore, we characterize one-way puzzles by the average-case hardness of an uncomputable problem. This brings to the quantum setting a recent line of work, initiated by Liu and Pass, that characterizes classical cryptography with the average-case hardness of a meta-complexity problem. Moreover, since the average-case hardness of Kolmogorov complexity over classically polynomial-time samplable distributions characterizes one-way functions, this result positions one-way puzzles as a natural generalization of one-way functions to the quantum setting. Furthermore, our equivalence goes through probability estimation, giving us the additional equivalence that one-way puzzles exist if and only if there is a quantum samplable distribution over which probability estimation is hard. We also observe that the oracle worlds defined by Kretschmer et al. rule out any relativizing characterization of one-way puzzles by the hardness of a problem in or , which means that it may not be possible with current techniques to characterize one-way puzzles with another meta-complexity problem.
Last updated:  2025-04-08
Scalable Non-Fungible Tokens on Bitcoin
Jordi Herrera-Joancomartí, Cristina Pérez-Solà, and Toni Mateos
This paper presents a protocol for scaling the creation, management, and trading of non-fungible tokens (NFTs) on Bitcoin by extending bridgeless minting patterns previously used on other blockchains. The protocol leverages on-chain Bitcoin data to handle all aspects of token ownership, including trading, while integrating a secondary consensus system for minting and optionally modifying token metadata. To minimize its on-chain footprint, the protocol utilizes the OP_RETURN mechanism for ownership records, while complementary NFT-related actions are stored on the LAOS blockchain. All data remains permanently on-chain, with no reliance on bridges or third-party operators.
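To make the on-chain footprint concrete, here is an illustrative construction of an OP_RETURN output script (the payload encoding is our example, not the protocol's specified format); OP_RETURN makes the output provably unspendable, so the record burdens no UTXO set while remaining permanently on-chain:

```python
OP_RETURN = 0x6a

def op_return_script(payload: bytes) -> bytes:
    """scriptPubKey: OP_RETURN <payload>, single-byte direct push (<= 75 bytes)."""
    assert len(payload) <= 75, "use OP_PUSHDATA1 for longer payloads"
    return bytes([OP_RETURN, len(payload)]) + payload

# Hypothetical ownership record; the protocol's actual encoding differs.
script = op_return_script(b"nft:token42:transfer:bc1q...")
print(script.hex())
```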
Last updated:  2025-04-08
Side-Channel and Fault Injection Attacks on VOLEitH Signature Schemes: A Case Study of Masked FAEST
Sönke Jendral and Elena Dubrova
Ongoing efforts to transition to post-quantum public-key cryptosystems have created the need for algorithms with a variety of performance characteristics and security assumptions. Among the candidates in NIST's post-quantum standardisation process for additional digital signatures is FAEST, a Vector Oblivious Linear Evaluation in-the-Head (VOLEitH)-based scheme, whose security relies on the one-wayness of the Advanced Encryption Standard (AES). The VOLEitH paradigm enables competitive performance and signature sizes under conservative security assumptions. However, since it was introduced recently, in 2023, its resistance to physical attacks has not yet been analysed. In this paper, we present the first security analysis of VOLEitH-based signature schemes in the context of side-channel and fault injection attacks. We demonstrate four practical attacks on a masked implementation of FAEST on an ARM Cortex-M4 capable of recovering the full secret key with high probability (greater than 0.87) from a single signature. These attacks exploit vulnerabilities of components specific to VOLEitH schemes and FAEST, such as the parallel all-but-one vector commitments, the VOLE generation, and the AES proof generation. Finally, we propose countermeasures to mitigate these attacks and enhance the physical security of VOLEitH-based signature schemes.
Last updated:  2025-04-08
More NTRU+Sign Signatures from Cyclotomic Trinomials
Ga Hee Hong, Joo Woo, Jonghyun Kim, Minkyu Kim, Hochang Lee, and Jong Hwan Park
Recently, NTRU+Sign was proposed as a new compact signature scheme, following the `Fiat-Shamir with Aborts' (FSwA) framework. Its compactness is mainly based on its novel NTRU-based key structure, which fits well with bimodal distributions in the FSwA framework. However, despite its compactness, NTRU+Sign fails to provide a diverse set of parameters that can meet some desired security levels. This limitation stems from its reliance on a ring , where is restricted to powers of two, limiting the flexibility in selecting appropriate security levels. To overcome this limitation, we propose a revised version of NTRU+Sign by adopting a ring from cyclotomic trinomials, where for some positive integers and . Our parameterization offers three distinct security levels: approximately , , and bits, while preserving the compactness in . We implement these re-parameterized NTRU+Sign schemes, showing that the performance of NTRU+Sign from cyclotomic trinomials is still comparable to previous lattice-based signature schemes such as and .
Last updated:  2025-04-08
Lattice-based Cryptography: A survey on the security of the lattice-based NIST finalists
Koen de Boer and Wessel van Woerden
This survey, mostly written in the years 2022-2023, is meant to be as short a description as possible of the current state-of-the-art lattice attacks on lattice-based cryptosystems, without losing the essence of the matter. The main focus is the security of the NIST finalists and alternatives that are based on lattices, namely CRYSTALS-Kyber, CRYSTALS-Dilithium, and Falcon. Instead of going through these cryptosystems case by case, this survey considers attacks on the underlying hardness assumptions: in the case of the mentioned lattice-based schemes, these are (variants of) LWE (Learning With Errors) and NTRU.
Last updated:  2025-04-08
Cryptomania v.s. Minicrypt in a Quantum World
Longcheng Li, Qian Li, Xingjian Li, and Qipeng Liu
We prove that it is impossible to construct perfect-complete quantum public-key encryption (QPKE) with classical keys from quantumly secure one-way functions (OWFs) in a black-box manner, resolving a long-standing open question in quantum cryptography. Specifically, in the quantum random oracle model (QROM), no perfect-complete QPKE scheme with classical keys and classical/quantum ciphertexts can be secure. This improves on previous works, which required either unproven conjectures or restrictions imposed on the key generation algorithms. The impossibility even extends to QPKE with quantum public keys if the public key can be uniquely determined by the secret key, and is thus tight for all existing QPKE constructions.
Last updated:  2025-04-08
Inner Product Masked Integral Distinguishers and Integral Sets over Large Finite Fields (Full Version)
Weizhe Wang, Deng Tang, and Haoyang Wang
The security and performance of many advanced protocols heavily rely on the underlying symmetric-key primitives. These primitives, known as arithmetization-oriented (AO) ciphers, focus on arithmetic metrics over large finite fields. Most AO ciphers are vulnerable to algebraic attacks, especially integral attacks. In this work, we explore integral attacks over large finite fields. By combining integral distinguishers with inner product masks, we propose inner product masked (IPM) integral distinguishers and establish a system of equations concerning the inner product mask. Additionally, we provide theoretical lower bounds on the complexity of IPM integral distinguishers in certain cases. For practical applications, we introduce IPM integral sets, which effectively characterize the integral property of sets over the finite field . Besides the IPM sets based on additive subgroups and multiplicative subgroups, we present a method to obtain sets with improved integral properties by merging existing sets. Exploring different classes of IPM integral sets can help us find lower-complexity integral distinguishers. Combining this with monomial detection techniques, we propose a framework for searching for integral distinguishers based on the IPM integral set. Our framework is compatible with various monomial detection techniques, including general monomial prediction, proposed by Cui et al. at ASIACRYPT 2022, and coefficient grouping, invented by Liu et al. at EUROCRYPT 2023. Previous integral distinguishers over were predominantly based on additive subgroups. With IPM integral sets based on multiplicative subgroups and merged sets, we successfully obtain IPM integral distinguishers with lower complexity for MiMC, Ciminion, and Chaghri. We believe that our work will provide new insights into integral attacks over large finite fields.
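The structural fact behind integral sets based on multiplicative subgroups can be checked directly: summing x^e over a subgroup of order d of the multiplicative group of F_p vanishes unless d divides e. A toy verification (our example parameters, not from the paper):

```python
# Zero-sum (integral) property over a multiplicative subgroup of F_p.
p = 101                            # prime field; F_p^* has order 100
g = 2                              # a generator of F_101^* (order 100)
d = 25
h = pow(g, (p - 1) // d, p)        # h generates the subgroup of order d
subgroup = {pow(h, k, p) for k in range(d)}

for e in (3, 25, 26, 50):
    total = sum(pow(x, e, p) for x in subgroup) % p
    print(e, total)                # nonzero (= d mod p) only when 25 | e
```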
Last updated:  2025-04-08
Round-Efficient Adaptively Secure Threshold Signatures with Rewinding
Yanbo Chen
A threshold signature scheme allows distributing a signing key to n users, such that any t of them can jointly sign, but any t-1 cannot. It is desirable to prove adaptive security of threshold signature schemes, which considers adversaries that can adaptively corrupt honest users even after interacting with them. For a class of signatures that relies on security proofs with rewinding, such as Schnorr signatures, proving adaptive security entails significant challenges. This work proposes two threshold signature schemes that are provably adaptively secure with rewinding proofs. Our proofs are solely in the random oracle model (ROM), without relying on the algebraic group model (AGM). - We give a 3-round scheme based on the algebraic one-more discrete logarithm (AOMDL) assumption. The scheme outputs a standard Schnorr signature. - We give a 2-round scheme based on the DL assumption. Signatures output by the scheme contain one more scalar than a Schnorr signature. We follow the recent work by Katsumata, Reichle, and Takemure (Crypto 2024), which proposed the first threshold signature scheme with a rewinding proof of full adaptive security. Their scheme is a 5-round threshold Schnorr scheme based on the DL assumption. Our results significantly improve the round complexity. Katsumata et al.'s protocol can be viewed as applying a masking technique to Sparkle, a threshold Schnorr signature scheme by Crites, Komlo, and Maller (Crypto 2023). This work shows wider applications of the masking technique. Our first scheme is obtained by masking FROST, a threshold Schnorr protocol by Komlo and Goldberg (SAC 2020). The second scheme is obtained by masking a threshold version of HBMS, a multi-signature scheme by Bellare and Dai (Asiacrypt 2021). Katsumata et al. masked Sparkle at the cost of 2 additional rounds. Our main insight is that this cost varies across schemes, especially depending on how signing is simulated in the security proofs. The cost is 1 extra round for our first scheme, and 0 for our second scheme.
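To ground the claim that the first scheme "outputs a standard Schnorr signature", here is a toy single-signer Schnorr over a schoolbook group (tiny illustrative parameters of ours, not the threshold protocol itself): the protocols above produce exactly such a pair (R, z) satisfying g^z = R * X^c:

```python
import hashlib, secrets

# Toy Schnorr over the order-q subgroup of Z_p^*, with p = 2q + 1.
p, q, g = 2039, 1019, 4            # g = 2^2 generates the order-q subgroup

def H(*parts) -> int:
    h = hashlib.sha256(b"|".join(str(x).encode() for x in parts)).digest()
    return int.from_bytes(h, "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)                       # secret x, public X = g^x

def sign(x, msg):
    k = secrets.randbelow(q - 1) + 1             # per-signature nonce
    R = pow(g, k, p)
    c = H(R, pow(g, x, p), msg)                  # Fiat-Shamir challenge
    return R, (k + c * x) % q                    # signature (R, z)

def verify(X, msg, sig):
    R, z = sig
    c = H(R, X, msg)
    return pow(g, z, p) == (R * pow(X, c, p)) % p

x, X = keygen()
assert verify(X, "hello", sign(x, "hello"))
```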
Last updated:  2025-04-08
Malicious Security for SCALES: Outsourced Computation with Ephemeral Servers
Anasuya Acharya, Carmit Hazay, Vladimir Kolesnikov, and Manoj Prabhakaran
The SCALES (Small Clients And Larger Ephemeral Servers) model is a recently proposed model for MPC (Acharya et al., TCC 2022). While the SCALES model offers several attractive features for practical large-scale MPC, the result of Acharya et al. only offered semi-honest secure protocols in this model. We present a new efficient SCALES protocol secure against malicious adversaries, for general Boolean circuits. We start with the base construction of Acharya et al., and design and use a suite of carefully defined building blocks that may be of independent interest. The resulting protocol is UC-secure without an honest majority, with a CRS and bulletin board as setups, and allows publicly identifying deviations from correct execution.
Last updated:  2025-04-08
SCALES: MPC with Small Clients and Larger Ephemeral Servers
Anasuya Acharya, Carmit Hazay, Vladimir Kolesnikov, and Manoj Prabhakaran
The recently proposed YOSO model is a groundbreaking approach to MPC, executable on a public blockchain, circumventing adaptive player corruption by hiding the corruption targets until they are worthless. Players are selected unpredictably from a large pool to perform MPC subtasks, in which each selected player sends a single message (and reveals their identity). While YOSO MPC has attractive asymptotic complexity, unfortunately, it is concretely prohibitively expensive due to the cost of its building blocks. We propose a modification to the YOSO model that preserves resilience to adaptive server corruption, but allows for much more efficient protocols. In SCALES (Small Clients And Larger Ephemeral Servers) only the servers facilitating the MPC computation are ephemeral (unpredictably selected and ``speak once''). Input providers (clients) publish problem instances and collect the output, but do not otherwise participate in computation. SCALES offers attractive features, and improves over YOSO protocols in outsourcing MPC to a large pool of servers under adaptive corruption. We build SCALES from rerandomizable garbling schemes, which is a contribution of independent interest, with additional applications.
Last updated:  2025-04-08
A Study of Blockchain Consensus Protocols
Shymaa M. Arafat
When Nakamoto invented Bitcoin, the first generation of cryptocurrencies followed it in applying the POW (Proof of Work) consensus mechanism; due to its excessive energy consumption and heavy carbon footprint, new mechanisms evolved, such as Proof of Space and POS (Proof of Stake), each with many variants. Furthermore, the emergence of more blockchain applications and kinds beyond just cryptocurrencies called for consensus mechanisms optimized to fit the requirements of each application or blockchain kind; examples range from IoT (Internet of Things) blockchains for sustainability applications, which often use variants of the BFT (Byzantine Fault Tolerance) algorithm, to the consensus needed to relay transactions and/or assets between different blockchains in interoperability solutions. Previous studies concentrated on surveying and/or proposing different blockchain consensus rules, on a specific consensus issue like attacks or randomization, or on deriving theoretical results. Starting from a discussion of the most important theoretical results, this paper tries to gather and organize all significant existing material about consensus in the blockchain world, explaining design challenges, tradeoffs, and research areas. We realize that the topic could fill a complete textbook, so we summarize the basic concepts and provide supporting tables and appendices. We then highlight some case examples from interoperability solutions to show how flexible and wide the design space is, fitting both general and special-purpose systems. The aim is to provide researchers with a comprehensive overview of the topic, along with links to go deeper into every detail.
Last updated:  2025-04-08
Impossible Differential Attack on SAND-64
Nobuyuki Sugio
SAND is an AND-RX-based lightweight block cipher proposed by Chen et al. There are two variants of SAND, namely SAND-64 and SAND-128, due to structural differences. In this paper, we search for impossible differential distinguishers of SAND-64 using Constraint Programming (CP) and reveal 56 types of impossible differential distinguishers of up to 11 rounds. Furthermore, we demonstrate a key recovery attack on 17-round SAND-64. The attack requires data, encryptions, and bytes of memory, respectively. Although this result is currently the best attack on round-reduced SAND-64, the attack does not threaten the security of SAND-64 against impossible differential attacks.
Last updated:  2025-04-08
Towards Scalable YOSO MPC via Packed Secret-Sharing
Daniel Escudero, Elisaweta Masserova, and Antigoni Polychroniadou
The YOSO (You Only Speak Once) model, introduced by Gentry et al. (CRYPTO 2021), helps to achieve strong security guarantees in cryptographic protocols for distributed settings, like blockchains, with a large number of parties. YOSO protocols typically employ smaller anonymous committees to execute individual rounds of the protocol instead of having all parties execute the entire protocol. After completing their tasks, parties encrypt protocol messages for the next anonymous committee and erase their internal state before publishing ciphertexts, thereby enhancing security in dynamically changing environments. In this work, we consider the problem of secure multi-party computation (MPC), a fundamental problem in cryptography and distributed computing. We assume an honest majority among the committee members, and work in the online-offline, i.e., preprocessing, setting. In this context, we present the first YOSO MPC protocol where efficiency, measured as communication complexity, improves as the number of parties increases. Specifically, for and an adversary corrupting out of parties, our MPC protocol exhibits enhanced scalability as increases, where the online-phase communication becomes independent of . Prior YOSO MPC protocols considered as large as , but a significant hurdle persisted in obtaining YOSO MPC with communication that does not scale linearly with the number of committee members, a challenge that is exacerbated when the committee size is large, per YOSO's requirements. We show that, by considering a small ``gap'' of , the sizes of the committees are only marginally increased, while online communication is significantly reduced. Furthermore, we explicitly consider fail-stop adversaries, i.e., honest participants who may inadvertently fail due to reasons such as denial of service or software/hardware errors. In prior YOSO work, these adversaries were grouped with fully malicious parties. Adding explicit support for them allows us to achieve even better scalability.
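A sketch of the packing idea in isolation (our toy over a prime field, not the paper's protocol): k secrets are embedded as evaluations of a single polynomial at k fixed points, so each share simultaneously carries k secrets, which is the source of the communication savings when a small honest-majority gap is introduced:

```python
import secrets

P = 2**61 - 1                                  # a Mersenne prime field

def interp_eval(points, x):
    """Lagrange-evaluate the polynomial through `points` [(xi, yi)] at x mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def share(secrets_vec, n, t):
    """Pack k = len(secrets_vec) secrets into n shares, degree k + t - 1."""
    k = len(secrets_vec)
    base = [(-(i + 1) % P, s) for i, s in enumerate(secrets_vec)]  # x = -1..-k
    base += [(P - k - j - 1, secrets.randbelow(P)) for j in range(t)]  # blinding
    return [(i, interp_eval(base, i)) for i in range(1, n + 1)]

def reconstruct(shares, k):
    return [interp_eval(shares, -(i + 1) % P) for i in range(k)]

sh = share([7, 11, 13], n=8, t=3)              # degree 5 polynomial
print(reconstruct(sh[:6], 3))                  # any 6 shares suffice: [7, 11, 13]
```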
Last updated:  2025-04-08
Periodic Table of Cryptanalysis: Geometric Approach with Different Bases
Kai Hu, Chi Zhang, Chengcheng Chang, Jiashu Zhang, Meiqin Wang, and Thomas Peyrin
In the past three decades, we have witnessed the creation of various cryptanalytic attacks. However, relatively little research has been done on their potential underlying connections. The geometric approach, developed by Beyne in 2021, shows that a cipher can be viewed as a linear operation when we treat its input and output as points in an induced free vector space. By performing a change of basis for the input and output spaces, one can obtain various transition matrices. Linear, differential, and (ultrametric) integral attacks have been well reinterpreted by Beyne's theory in a unified way. Thus far, the geometric approach has always used the same basis for the input and output spaces. We observe here that this restriction is unnecessary, and that allowing different bases makes the geometric approach more flexible and able to interpret/predict more attack types. Given some set of bases for the input and output spaces, a family of basis-based attacks is defined by combining them, and all attacks in this family can be studied in a unified automatic search method. We revisit three kinds of bases from previous geometric approach papers and extend them to four extra ones by introducing new rules when generating new bases. With the final seven bases, we can obtain different basis-based attacks in -th order spaces, where the order is defined as the number of messages used in one sample during the attack. We then provide four examples of applications of this new framework. First, we show that by choosing a better pair of bases, Beyne and Verbauwhede's ultrametric integral cryptanalysis can be interpreted as a single element of a transition matrix rather than as a linear combination of elements. This unifies ultrametric integral cryptanalysis with the previous linear and quasi-differential attacks. Second, we revisit the multiple-of- property with our refined geometric approach and exhibit new multiple-of- distinguishers that can reach more rounds of the SKINNY-64 cipher than the state of the art. Third, we study the multiple-of- property for the first-order case, which is similar to the subspace trail, but it is the divisibility property that is considered. This leads to a new distinguisher for 11-round-reduced SKINNY-64. Finally, we give a closed formula for differential-linear approximations without any assumptions, even confirming that the two differential-linear approximations of SIMECK-32 and SIMECK-48 found by Hadipour et al. are deterministic independently of concrete key values. We emphasize that all these applications were not possible before.
Last updated:  2025-04-07
Cryptography based on 2D Ray Tracing
Sneha Mohanty and Christian Schindelhauer
We introduce a novel symmetric-key cryptographic scheme involving a light ray's interaction with a 2D Cartesian coordinate setup, several smaller boxes within this setup, each of either reflection or refraction type, and polynomial curves inside each of these smaller boxes. We also incorporate boolean logic gates of types XOR, NOT-Shift, and Permutation, which get applied to the light ray after each interaction with a reflecting or refracting polynomial curve. This alternating interaction between optical gates (polynomial curves) and non-optical gates creates a complex and secure cryptographic system. Furthermore, we design and launch customized attacks on our cryptographic system and discuss its robustness against them.
Last updated:  2025-04-07
Hybrid-query bounds with partial input control - framework and application to tight M-eTCR
Andreas Hülsing, Mikhail Kudinov, and Christian Majenz
In this paper, we present an improved framework for proving query bounds in the Quantum Random Oracle Model (QROM) for algorithms with both quantum and classical query interfaces, where the classical input is partially controlled by the adversary. By extending existing techniques, we develop a method to bound the progress an adversary can make with such partial-control classical queries. While this framework is applicable to different hash function properties, we demonstrate the impact of the new techniques by giving an analysis of the multi-target extended target collision resistance property (m-eTCR). This new approach allows us to achieve an improved bound that significantly reduces the required function key size. Our proof is tight in terms of query complexity and has significant implications for cryptographic applications, especially for signature schemes in the hash & sign paradigm, enabling more efficient instantiations with reduced salt sizes and smaller signature lengths. In an example application to the aggregation of multiple signatures, we achieve a signature size that is 30 kB smaller.
Last updated:  2025-04-07
On breaking McEliece keys using brute force
Lorenz Panny
In the McEliece public-key encryption scheme, a private key is almost always not determined uniquely by its associated public key. This paper gives a structural characterization of equivalent private keys, generalizing a result known for the more approachable special case . These equivalences reduce the cost estimate for a simple private-key search using the support-splitting algorithm (SSA) by a polynomial but practically very substantial factor. We provide an optimized software implementation of the SSA for this kind of key search and demonstrate its capabilities in practice by solving a key-recovery challenge with a naïve a-priori cost estimate of bit operations in just core days, testing private-key candidates per core per second in the process. We stress that the speedup from these equivalences is merely polynomial and does not indicate any weakness in realistic instantiations of the McEliece cryptosystem, whose parameter choices are primarily constrained by decoding attacks rather than ludicrously more expensive key-recovery attacks.
Last updated:  2025-04-07
Dyna-hinTS: Silent Threshold Signatures for Dynamic Committees
Aniket Kate, Pratyay Mukherjee, Samipa Samanta, and Pratik Sarkar
The works of Garg et al. [S&P'24] (aka hinTS) and Das et al. [CCS'23] introduced the notion of silent threshold signatures (STS), where a set of signers silently perform local computation to generate a public verification key. To sign a message, any set of signers sign the message non-interactively, and these signatures are aggregated into a constant-sized signature. This paradigm avoids performing an expensive Distributed Key Generation procedure for each set of signers while keeping the public verification key constant-sized. In this work, we propose the notion of a committee-based silent threshold signature (c-STS) scheme. In a c-STS scheme, a set of signers initially perform a one-time setup to generate the verification key, and then a subset of signers is randomly chosen for an epoch to perform the threshold signing, while the other signers are not authorized to sign during that epoch. This captures existing systems like Ethereum Altair and Dfinity, where only a specific committee is authorized to sign in a designated epoch. The existing STS schemes cannot be extended to the committee setting because the signature verification only attests to the number of signing parties, not which committee they belong to. So, we upgrade hinTS to the committee setting by proposing Dyna-hinTS. It is the first c-STS scheme, and it requires a one-time silent setup and generates a one-time public verification key that does not vary with the committee. Assuming a set of 1024 signers (with 682 corrupt signers), hinTS generates an aggregated signature in 1.7s, whereas Dyna-hinTS generates it in s within a committee of signers. This yields a improvement over hinTS in signature generation, at the cost of increasing signature verification time by over hinTS. Dyna-hinTS supports general access structures and weighted signatures, and improves upon existing multiverse threshold signatures.
Last updated:  2025-04-07
Charge Your Clients: Payable Secure Computation and Its Applications
Cong Zhang, Liqiang Peng, Weiran Liu, Shuaishuai Li, Meng Hao, Lei Zhang, and Dongdai Lin
The online realm has witnessed a surge in the buying and selling of data, prompting the emergence of dedicated data marketplaces. These platforms cater to servers (sellers), enabling them to set prices for access to their data, and clients (buyers), who can subsequently purchase these data, thereby streamlining and facilitating such transactions. However, the current data market is primarily confronted with the following issues. Firstly, such markets fail to protect client privacy, presupposing that clients submit their queries in plaintext. Secondly, these models are susceptible to malicious client behavior, which, for example, enables clients to potentially engage in arbitrage activities. To address the aforementioned issues, we propose payable secure computation, a novel secure computation paradigm specifically designed for data pricing scenarios. It grants the server the ability to securely procure essential pricing information while protecting the privacy of client queries. Additionally, it fortifies the server's privacy against potential malicious client activities. As specific applications, we have devised customized payable protocols for two distinct secure computation scenarios: Keyword Private Information Retrieval (KPIR) and Private Set Intersection (PSI). We implement our two payable protocols and compare them with state-of-the-art related protocols that do not support pricing as a baseline. Since our payable protocols are more powerful in the data pricing setting, the experimental results show that they do not introduce much overhead over the baseline protocols. Our payable KPIR achieves the same online cost as the baseline, while its setup is about slower. Our payable PSI needs about more communication cost than the baseline protocol, while its runtime is slower, depending on the network setting.
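As background for the PSI application, here is a classic DDH-style PSI flow in miniature (a baseline-style construction of ours, not the paper's payable PSI): each party raises hashed items to a private exponent, and once both exponents have been applied, equal items collide, so only the intersection is learned:

```python
import hashlib, secrets

p, q = 2039, 1019                  # toy safe-prime group, p = 2q + 1

def h2g(item: str) -> int:
    """Hash an item into the order-q subgroup of Z_p^* (toy hash-to-group)."""
    e = int.from_bytes(hashlib.sha256(item.encode()).digest(), "big")
    return pow(e % (p - 1) + 1, 2, p)

def blind(items, a):
    return {pow(h2g(x), a, p) for x in items}

a = secrets.randbelow(q - 1) + 1   # client exponent
b = secrets.randbelow(q - 1) + 1   # server exponent

client_set = ["alice", "bob", "carol"]
server_set = ["bob", "dave"]

# Client sends H(x)^a; server returns (H(x)^a)^b plus its own H(y)^b values;
# the client raises the latter to a, then intersects: H(x)^{ab} == H(y)^{ba}.
client_ab = {pow(v, b, p) for v in blind(client_set, a)}
server_ab = {pow(v, a, p) for v in blind(server_set, b)}
print(len(client_ab & server_ab), "common item(s)")   # 1 ("bob")
```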
Last updated:  2025-04-07
Audience Injection Attacks: A New Class of Attacks on Web-Based Authorization and Authentication Standards
Pedram Hosseyni, Ralf Kuesters, and Tim Würtele
We introduce audience injection attacks, a novel class of vulnerabilities that impact widely used Web-based authentication and authorization protocols, including OAuth 2.0, OpenID Connect, FAPI, CIBA, the Device Authorization Grant, and various well-established extensions, such as Pushed Authorization Requests, Token Revocation, Token Introspection, and their numerous combinations. These protocols underpin services for billions of users across diverse ecosystems worldwide, spanning low-risk applications like social logins to high-risk domains such as open banking, insurance, and healthcare. Audience injection attacks exploit a critical weakness in a core security mechanism of these protocols - the handling of so-called audiences in signature-based client authentication mechanisms. This vulnerability allows attackers to compromise fundamental security objectives whenever these mechanisms are utilized across two or more server endpoints. They enable the attacker to impersonate users and gain unauthorized access to their resources, even in high-security protocol families specifically designed for sensitive applications. We responsibly disclosed these vulnerabilities to the relevant standardization bodies, which recognized their severity. In collaboration with these organizations, we developed fixes and supported a coordinated response, leading to an ongoing effort to update a dozen standards, numerous major implementations, and far-reaching ecosystems.
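The attack class turns on endpoints accepting audience values minted for other endpoints. A minimal sketch of the strict check (the claim names and layout are our illustration, in the spirit of RFC 7523-style client assertions, not the authors' code):

```python
import time

def check_client_assertion(claims: dict, expected_audience: str) -> bool:
    """Accept a (already signature-verified) assertion only if its audience
    exactly names this endpoint; tokens minted for other endpoints are rejected."""
    aud = claims.get("aud")
    auds = aud if isinstance(aud, list) else [aud]
    if expected_audience not in auds:
        return False                      # the audience-injection fix
    if claims.get("exp", 0) <= time.time():
        return False                      # expired assertion
    return True

claims = {"iss": "client-1", "sub": "client-1",
          "aud": "https://as.example/token", "exp": time.time() + 60}
assert check_client_assertion(claims, "https://as.example/token")
assert not check_client_assertion(claims, "https://as.example/par")
```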
Last updated:  2025-04-07
Improving the Masked Division for the FALCON Signature
Pierre-Augustin Berthet, Justine Paillet, Cédric Tavernier, Lilian Bossuet, and Brice Colombier
FALCON is a post-quantum signature scheme selected by the National Institute of Standards and Technology (NIST). Although its side-channel resilience has been studied and a masking countermeasure proposed, the division is a major performance bottleneck. This work proposes a different approach to the masked FALCON division. We use the Newton method and a convergent sequence to approximate this operation. The performance of the masked division is improved by a factor of 6.7 for two shares and 6.98 for three shares. For the Gaussian sampler, the improvements are by a factor of 1.45 for two shares and 1.43 for three shares. Formal security proofs using the MIMO-SNI criteria are also provided.
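The replaced operation can be sketched in plain, unmasked form (our model of the idea, not the masked implementation): to divide by b, iterate x_{k+1} = x_k * (2 - b * x_k), which converges quadratically to 1/b using only additions and multiplications, operations that are far cheaper to mask than a division:

```python
import math

def newton_reciprocal(b: float, iters: int = 5) -> float:
    """Approximate 1/b via Newton iteration, using only + and *."""
    m, e = math.frexp(b)              # b = m * 2**e with m in [0.5, 1)
    x = 48 / 17 - (32 / 17) * m       # classic initial guess for 1/m
    for _ in range(iters):
        x = x * (2.0 - m * x)         # quadratic convergence to 1/m
    return math.ldexp(x, -e)          # rescale: 1/b = (1/m) * 2**-e

print(7.0 * newton_reciprocal(7.0))   # ~ 1.0
```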
Last updated:  2025-04-07
Everlasting Fully Dynamic Group Signatures
Yimeng He, San Ling, Khai Hanh Tang, and Huaxiong Wang
Group signatures allow a user to sign anonymously on behalf of a group of users while allowing a tracing authority to trace the signer's identity in case of misuse. In Chaum and van Heyst's original model (EUROCRYPT'91), the group needs to stay fixed. Throughout various attempts, including partially dynamic group signatures and revocations, Bootle et al. (ACNS'16, J. Cryptol.) formalized the notion of fully dynamic group signatures (FDGS), enabling both enrolling and revoking users of the group. However, in their scheme, the verification process needs to take into account the latest system information, and a previously generated signature will be invalidated as soon as, for example, there is a change in the group. We therefore raise a research question: Is it possible to construct an FDGS under which the validity of a signature can survive future changes in the system information? In this paper, we propose Everlasting Fully Dynamic Group Signatures (EFDGS) that allow signers to generate signatures that do not require verification with any specific epoch. Specifically, once the signatures are created, they are valid forever. It also guarantees that the signer can only output such a signature when she is a valid user of the system. We realize the above new model by constructing a plausibly post-quantum standard-lattice-based EFDGS.
Last updated:  2025-04-07
An Optimized Instantiation of Post-Quantum MQTT protocol on 8-bit AVR Sensor Nodes
YoungBeom Kim and Seog Chung Seo
Since the selection of the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography (PQC) standardization algorithms, research on integrating PQC into security protocols such as TLS/SSL, IPSec, and DNSSEC has been actively pursued. However, PQC migration for Internet of Things (IoT) communication protocols remains largely unexplored. Embedded devices in IoT environments have limited computational power and memory, making it crucial to optimize PQC algorithms for efficient computation and minimal memory usage when deploying them on low-spec IoT devices. In this paper, we introduce KEM-MQTT, a lightweight and efficient Key Encapsulation Mechanism (KEM) for the Message Queuing Telemetry Transport (MQTT) protocol, widely used in IoT environments. Our approach applies the NIST KEM algorithm Crystals-Kyber (Kyber) while leveraging MQTT’s characteristics and sensor node constraints. To enhance efficiency, we address certificate verification issues and adopt KEMTLS to eliminate the need for Post-Quantum Digital Signatures Algorithm (PQC-DSA) in mutual authentication. As a result, KEM-MQTT retains its lightweight properties while maintaining the security guarantees of TLS 1.3. We identify inefficiencies in existing Kyber implementations on 8-bit AVR microcontrollers (MCUs), which are highly resource-constrained. To address this, we propose novel implementation techniques that optimize Kyber for AVR, focusing on high-speed execution, reduced memory consumption, and secure implementation, including Signed LookUp-Table (LUT) Reduction. Our optimized Kyber achieves performance gains of 81%,75%, and 85% in the KeyGen, Encaps, and DeCaps processes, respectively, compared to the reference implementation. With approximately 3 KB of stack usage, our Kyber implementation surpasses all state-of-the-art Elliptic Curve Diffie-Hellman (ECDH) implementations. Finally, in KEM-MQTT using Kyber-512, an 8-bit AVR device completes the handshake preparation process in 4.32 seconds, excluding the physical transmission and reception times.
Last updated:  2025-04-07
Tree-based Quantum Carry-Save Adder
Hyunjun Kim, Sejin Lim, Kyungbae Jang, Siyi Wang, Anubhab Baksi, Anupam Chattopadhyay, and Hwajeong Seo
Quantum computing is regarded as one of the most significant upcoming advancements in computer science. Although fully operational quantum computers have yet to be realized, they are expected to solve specific problems that are difficult to solve using classical computers. Given the limitations of quantum computing resources, it is crucial to design compact quantum circuits for core operations, such as quantum arithmetic. In this paper, we focus on optimizing the circuit depth of quantum multi-operand addition, which is a fundamental component in quantum implementations (as an example, SHA-2). Building on the foundational quantum carry-save approach by Phil Gossett, we introduce a tree-based quantum carry-save adder. Our design integrates the Wallace and Dadda trees to optimize carry handling during multi-operand additions. To further reduce circuit depth, we utilize additional ancilla qubits for parallel operations and introduce an efficient technique for reusing these ancilla qubits. Our tree-based carry-save adder achieves the lowest circuit depth (-depth) and provides an improvement of over 82% (up to 99%) in the qubit count–circuit depth product for multi-operand addition. Furthermore, we apply our method to multiplication, achieving the lowest circuit depth and an improvement of up to 87% in the qubit count–circuit depth product.
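The classical operation underlying the design is the 3-to-2 carry-save compression, shown here in a plain classical sketch (ours): three operands are reduced to a sum word and a carry word with no carry propagation, so a Wallace/Dadda-style tree can compress many operands before a single final propagating addition:

```python
def carry_save(a: int, b: int, c: int):
    """Compress three operands into (sum, carry) without propagating carries."""
    s = a ^ b ^ c                                 # bitwise sum
    carry = ((a & b) | (a & c) | (b & c)) << 1    # majority bits, shifted
    return s, carry

def multi_add(operands):
    """Tree-style multi-operand addition: repeated 3 -> 2 compression."""
    while len(operands) > 2:
        a, b, c, *rest = operands
        operands = [*carry_save(a, b, c), *rest]
    return operands[0] + operands[1]              # one final propagating add

vals = [13, 7, 25, 6, 91]
assert multi_add(vals) == sum(vals)
```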
Last updated:  2025-04-07
FHECAP: An Encrypted Control System with Piecewise Continuous Actuation
Song Bian, Yunhao Fu, Dong Zhao, Haowen Pan, Yuexiang Jin, Jiayue Sun, Hui Qiao, and Zhenyu Guan
We propose an encrypted controller framework for linear time-invariant systems with actuator non-linearity, based on fully homomorphic encryption (FHE). While some existing works explore the use of partially homomorphic encryption (PHE) in implementing linear control systems, the impacts of the non-linear behaviors of the actuators on the systems are often left unaddressed. In particular, when the inputs to the controller become too small or too large, actuators may burn out due to unstable system state oscillations. To solve this dilemma, we design and implement FHECAP, an FHE-based controller framework that can homomorphically apply non-linear functions to the actuators to rectify the system inputs. In FHECAP, we first design a novel data encoding scheme tailored for efficient gain matrix evaluation. Then, we propose a high-precision homomorphic algorithm to apply a non-arithmetic piecewise function to realize the actuator normalization. In the experiments, compared with the existing state-of-the-art encrypted controllers, FHECAP achieves a -- reduction in computational latency. We evaluate the effectiveness of FHECAP in the real-world application of encrypted control for spacecraft rendezvous. The simulation results show that FHECAP achieves real-time spacecraft rendezvous with negligible accuracy loss.
Last updated:  2025-04-07
Trapdoor one-way functions from tensors
Anand Kumar Narayanan
Weyman and Zelevinsky generalised Vandermonde matrices to higher dimensions, which we call Vandermonde-Weyman-Zelevinsky tensors. We generalise Lagrange interpolation to higher dimensions by devising a nearly linear time algorithm that given a Vandermonde-Weyman-Zelevinsky tensor and a sparse target vector, finds a tuple of vectors that hit the target under tensor evaluation. Tensor evaluation to us means evaluating the usual multilinear form associated with the tensor in all but one chosen dimension. Yet, this interpolation problem phrased with respect to a random tensor appears to be a hard multilinear system. Leveraging this dichotomy, we propose preimage sampleable trapdoor one-way functions in the spirit of Gentry-Peikert-Vaikuntanathan (GPV) lattice trapdoors. We design and analyse ``Hash-and-Sign'' digital signatures from such trapdoor one-way functions, yielding short signatures whose lengths scale nearly linearly in the security parameter. We also describe an encryption scheme. Our trapdoor is a random Vandermonde-Weyman-Zelevinsky tensor over a finite field and a random basis change. We hide the Vandermonde-Weyman-Zelevinsky tensor under the basis change and publish the resulting pseudorandom tensor. The one way function is the tensor evaluation derived from the public tensor, restricted so as to only map to sparse vectors. We then design the domain sampler and preimage sampler demanded by the GPV framework. The former samples inputs that map to uniform images under the one-way function. The latter samples preimages given supplementary knowledge of the trapdoor. Preimage sampling is a randomised version of interpolation and knowing the basis change allows efficient translation between interpolation corresponding to the public and trapdoor tensors. An adversary seeking a preimage must solve a pseudorandom multilinear system, which seems cryptographically hard.
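The dimension-2 special case that the paper generalizes is ordinary Lagrange interpolation over a Vandermonde system. This toy of ours recovers, over a small prime field, the coefficient vector that evaluates to given targets at fixed points:

```python
p = 97                                            # toy prime field

def lagrange_coeffs(xs, ys):
    """Return coefficients f (ascending) with f(xs[i]) = ys[i] (mod p)."""
    n = len(xs)
    coeffs = [0] * n
    for i in range(n):
        # Basis polynomial l_i(x) = prod_{j != i} (x - xs[j]) / (xs[i] - xs[j])
        basis, denom = [1], 1
        for j in range(n):
            if j == i:
                continue
            # Multiply basis by (x - xs[j]): new[k] = old[k-1] - xs[j] * old[k]
            basis = [(a - xs[j] * b) % p
                     for a, b in zip([0] + basis, basis + [0])]
            denom = denom * (xs[i] - xs[j]) % p
        scale = ys[i] * pow(denom, -1, p) % p
        coeffs = [(c + scale * b) % p for c, b in zip(coeffs, basis)]
    return coeffs

xs, ys = [1, 2, 3], [10, 20, 40]
f = lagrange_coeffs(xs, ys)
assert all(sum(c * x**k for k, c in enumerate(f)) % p == y
           for x, y in zip(xs, ys))
```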
Last updated:  2025-04-06
Pacmann: Efficient Private Approximate Nearest Neighbor Search
Mingxun Zhou, Elaine Shi, and Giulia Fanti
We propose a new private Approximate Nearest Neighbor (ANN) search scheme named Pacmann that allows a client to perform ANN search in a vector database without revealing the query vector to the server. Unlike prior constructions that run encrypted search on the server side, Pacmann carefully offloads limited computation and storage to the client, no longer requiring computationally intensive cryptographic techniques. Specifically, clients run a graph-based ANN search, where in each hop on the graph, the client privately retrieves local graph information from the server. To make this efficient, we combine two ideas: (1) we adapt a leading graph-based ANN search algorithm to be compatible with private information retrieval (PIR) for subgraph retrieval; (2) we use a recent class of PIR schemes that trade offline preprocessing for online computational efficiency. Pacmann achieves significantly better search quality than the state-of-the-art private ANN search schemes, showing up to 2.5x better search accuracy on real-world datasets than prior work and reaching 90% of the quality of a state-of-the-art non-private ANN algorithm. Moreover, on large datasets with up to 100 million vectors, Pacmann shows better scalability than prior private ANN schemes, with up to reduction in computation time and reduction in overall latency.
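A sketch of the client-side loop (our pseudocode of the idea, not the authors' code): a greedy graph walk in which each neighbor list is fetched through a private lookup, so the server never learns the query vector; `private_fetch` below is a stand-in for the PIR subprotocol:

```python
def private_fetch(server_graph, node_id):
    # Stand-in for a PIR query: in the real protocol the server learns
    # nothing about `node_id`; here we simply read the adjacency list.
    return server_graph[node_id]

def greedy_ann(server_graph, coords, query, start, steps=50):
    """Greedy graph-based ANN search; one private lookup per hop."""
    dist = lambda u: sum((a - b) ** 2 for a, b in zip(coords[u], query))
    cur = start
    for _ in range(steps):
        nbrs = private_fetch(server_graph, cur)          # one PIR hop
        best = min(nbrs, key=dist, default=cur)
        if dist(best) >= dist(cur):
            return cur                                   # local minimum reached
        cur = best
    return cur

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
coords = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (1, 1)}
print(greedy_ann(graph, coords, query=(0.9, 1.1), start=0))   # -> 3
```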