Papers updated in last 365 days (Page 3 of 2931 results)

Last updated:  2025-02-27
Towards a White-Box Secure Fiat-Shamir Transformation
Gal Arnon and Eylon Yogev
The Fiat–Shamir transformation is a fundamental cryptographic technique widely used to convert public-coin interactive protocols into non-interactive ones. This transformation is crucial in both theoretical and practical applications, particularly in the construction of succinct non-interactive arguments (SNARKs). While its security is well-established in the random oracle model, practical implementations replace the random oracle with a concrete hash function, where security is merely assumed to carry over. A growing body of work has given theoretical examples of protocols that remain secure under the Fiat–Shamir transformation in the random oracle model but become insecure when instantiated with any white-box implementation of the hash function. Recent research has shown how these attacks can be applied to natural cryptographic schemes, including real-world systems. These attacks rely on a general diagonalization technique, where the protocol exploits its access to the white-box implementation of the hash function. These attacks cast serious doubt on the security of cryptographic systems deployed in practice today, leaving their soundness uncertain. We propose a new Fiat–Shamir transformation (XFS) that aims to defend against a broad family of attacks. Our approach is designed to be practical, with minimal impact on the efficiency of the prover and verifier and on the proof length. At a high level, our transformation combines the standard Fiat–Shamir technique with a new type of proof-of-work that we construct. We provide strong evidence for the security of our transformation by proving its security in a relativized random oracle model. Specifically, we show diagonalization attacks on the standard Fiat–Shamir transformation that can be mapped to analogous attacks within this model, meaning they do not rely on a concrete instantiation of the random oracle. In contrast, we prove unconditionally that our XFS variant of the Fiat–Shamir transformation remains secure within this model. Consequently, any successful attack on XFS must deviate from known techniques and exploit aspects not captured by our model. We hope that our transformation will help preserve the security of systems relying on the Fiat–Shamir transformation.
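To make the object under discussion concrete, the following is a minimal sketch of the standard Fiat–Shamir transformation applied to Schnorr's Sigma protocol: the verifier's random challenge is replaced by a hash of the statement and the prover's first message. The group parameters are toy values invented so the example runs; they are not from the paper and offer no security.

```python
# Minimal sketch: Fiat-Shamir applied to Schnorr's Sigma protocol for proving
# knowledge of a discrete logarithm. Toy parameters -- NOT secure sizes.
import hashlib
import secrets

q = 1019                  # order of the subgroup (prime)
p = 2 * q + 1             # p = 2039 is prime, so <g> below has order q
g = 4                     # 4 = 2^2 generates the order-q subgroup of Z_p^*

def hash_to_challenge(*parts):
    """The 'random oracle': hash the statement and the first message."""
    h = hashlib.sha256()
    for part in parts:
        h.update(str(part).encode())
    return int.from_bytes(h.digest(), "big") % q

def prove(x):
    """Non-interactive proof of knowledge of x such that h = g^x mod p."""
    h = pow(g, x, p)
    r = secrets.randbelow(q)          # prover's random nonce
    a = pow(g, r, p)                  # first message (commitment)
    c = hash_to_challenge(g, h, a)    # challenge from the hash, not a verifier
    z = (r + c * x) % q               # response
    return h, (a, z)

def verify(h, proof):
    a, z = proof
    c = hash_to_challenge(g, h, a)    # verifier recomputes the challenge
    return pow(g, z, p) == (a * pow(h, c, p)) % p

x = secrets.randbelow(q)
h, proof = prove(x)
assert verify(h, proof)
```

A white-box attacker, in the sense discussed above, additionally gets the code of `hash_to_challenge` itself and may tailor the statement to it; the paper's XFS transformation is designed to block that avenue.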
Last updated:  2025-02-27
Twist and Shout: Faster memory checking arguments via one-hot addressing and increments
Srinath Setty and Justin Thaler
A memory checking argument enables a prover to prove to a verifier that it is correctly processing reads and writes to memory. They are used widely in modern SNARKs, especially in zkVMs, where the prover proves the correct execution of a CPU, including the correctness of memory operations. We describe a new approach for memory checking, which we call the method of one-hot addressing and increments. We instantiate this method via two different families of protocols, called Twist and Shout. Twist supports read/write memories, while Shout targets read-only memories (also known as lookup arguments). Both Shout and Twist have logarithmic verifier costs. Unlike prior works, these protocols do not invoke "grand product" or "grand sum" arguments. Twist and Shout significantly improve the prover costs of prior works across the full range of realistic memory sizes, from tiny memories (e.g., 32 registers as in RISC-V) to memories so large they cannot be explicitly materialized (e.g., the enormous structured lookup tables that arise in Lasso and the Jolt zkVM). Detailed cost analysis shows that Twist and Shout are well over 10x cheaper for the prover than state-of-the-art memory-checking procedures configured to have logarithmic proof length. Prior memory-checking procedures can also be configured to have larger proofs. Even then, we estimate that Twist and Shout are at least 2--4x faster for the prover in key applications. Finally, using Shout, we provide two fast-prover SNARKs for non-uniform constraint systems, both of which achieve minimal commitment costs (the prover commits only to the witness): (1) SpeedySpartan applies to Plonkish constraints, substantially improving the previous state-of-the-art protocol, BabySpartan; and (2) Spartan++ applies to CCS (a generalization of Plonkish and R1CS), improving prover times over the previous state-of-the-art protocol, Spartan, by 6x.
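As a plain-data illustration (assumed here for exposition; the actual Twist/Shout protocols prove such identities over committed data with sumcheck, which this sketch does not attempt), one-hot addressing turns a memory read into an inner product and a write into an increment:

```python
# Toy illustration of the one-hot-addressing idea on plain data.
MEM_SIZE = 8
memory = [7, 1, 4, 1, 5, 9, 2, 6]

def one_hot(addr, size=MEM_SIZE):
    """Encode an address as a 0/1 vector with a single 1."""
    return [1 if i == addr else 0 for i in range(size)]

def read(addr):
    # A read is the inner product of the one-hot address with the memory
    # vector, a multilinear relation that suits sumcheck-based provers.
    e = one_hot(addr)
    return sum(e_i * m_i for e_i, m_i in zip(e, memory))

def write(addr, new_value):
    # A write is recorded as an increment: add (new - old) at the hot position.
    e = one_hot(addr)
    inc = new_value - memory[addr]
    for i in range(MEM_SIZE):
        memory[i] += e[i] * inc

assert read(4) == 5
write(4, 42)
assert read(4) == 42
```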
Last updated:  2025-02-27
Kleptographic Attacks against Implicit Rejection
Antoine Joux, Julian Loss, and Benedikt Wagner
Given its integral role in modern encryption systems such as CRYSTALS-Kyber, the Fujisaki-Okamoto (FO) transform will soon be at the center of our secure communications infrastructure. An enduring debate surrounding the FO transform is whether to use explicit or implicit rejection when decapsulation fails. Presently, implicit rejection, as implemented in CRYSTALS-Kyber, is supported by a strong set of arguments. Therefore, understanding its security implications in different attacker models is essential. In this work, we study implicit rejection through a novel lens, namely, from the perspective of kleptography. Concretely, we consider an attacker model in which the attacker can subvert the user's code to compromise security while remaining undetectable. In this scenario, we present three attacks that significantly reduce the security level of the FO transform with implicit rejection.
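For readers unfamiliar with the two options, the schematic Python sketch below contrasts explicit and implicit rejection in an FO-style decapsulation. The underlying "PKE" is a toy placeholder invented so the control flow runs (it even sets pk = sk); only the re-encryption check and the two rejection branches reflect the actual FO structure.

```python
# Schematic sketch of FO-style decapsulation with explicit vs. implicit
# rejection. The toy "PKE" has no security; it only exercises the control flow.
import hashlib

def H(tag, *parts):
    h = hashlib.sha256(tag)
    for part in parts:
        h.update(part)
    return h.digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_encrypt(pk, m, r):
    return xor(m, H(b"pad", pk, r)) + r          # deterministic given (m, r)

def toy_decrypt(sk, c):
    ct, r = c[:32], c[32:]
    return xor(ct, H(b"pad", sk, r))

def decaps(sk, z, c, implicit):
    m = toy_decrypt(sk, c)
    if toy_encrypt(sk, m, H(b"G", m)) == c:       # re-encryption check (pk = sk here)
        return H(b"key", m, c)                    # accept: derive the session key
    if implicit:                                  # implicit rejection (as in Kyber):
        return H(b"rej", z, c)                    # pseudorandom key from secret seed z
    return None                                   # explicit rejection: error symbol

sk, z, m = b"\x01" * 32, b"\x02" * 32, b"\x03" * 32
c = toy_encrypt(sk, m, H(b"G", m))
assert decaps(sk, z, c, implicit=True) == decaps(sk, z, c, implicit=False)
tampered = bytes([c[0] ^ 1]) + c[1:]
assert decaps(sk, z, tampered, implicit=False) is None
assert decaps(sk, z, tampered, implicit=True) is not None  # looks like a valid key
```

The point of implicit rejection is exactly the last line: an outside observer cannot tell a rejecting decapsulation from an accepting one, which is also the surface the kleptographic attacks in this paper exploit.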
Last updated:  2025-02-27
Faster FHEW Bootstrapping with Adaptive Key Update
Qi Zhang, Mingqiang Wang, and Xiaopeng Cheng
Lee et al. proposed a new bootstrapping algorithm based on homomorphic automorphisms, which merges the empty sets of ciphertexts by adjusting the window size. This algorithm supports arbitrary secret-key distributions with no additional runtime costs while using small evaluation keys. However, our implementation reveals that once the window size exceeds a certain threshold, the time required for bootstrapping remains relatively constant. This observation prompts the question of how to further reduce the running time. To address this challenge, we introduce a new trick called Adaptive Key Update (AKU). With AKU and automorphism techniques, we propose a new bootstrapping algorithm for Gaussian secret keys that requires only external products and no key switching for blind rotation. Building on this, we employ window-size optimization and key-switching techniques to further improve the algorithm. The improved algorithm provides a useful trade-off between key storage and computational efficiency, depending on the choice of the window size. Compared to the current fastest FHEW bootstrapping method for Gaussian secret keys (the LLWW+ method proposed by Li et al.), our AKU-based algorithm reduces the number of key-switching operations by 76% and decreases the running time of bootstrapping by 20.7%. At this point, the practical runtime for bootstrapping is approximately equal to that of performing only the external products.
Last updated:  2025-02-27
A New Generalized Attack on RSA-like Cryptosystems
Michel SECK, Oumar Niang, and Djiby Sow
Rivest, Shamir, and Adleman published the RSA cryptosystem in 1978, and it has been widely used over the last four decades. The security of RSA is based on the difficulty of factoring large integers $N = pq$, where $p$ and $q$ are prime numbers. The public exponent $e$ and the private exponent $d$ are related by the equation $ed \equiv 1 \pmod{\varphi(N)}$. Recently, Cotan and Teșeleanu (NordSec 2023) introduced a variant of RSA in which the public and private exponents satisfy a generalized key equation parameterized by a positive integer. In this paper, we study a further generalization of this equation with several positive integer parameters. We show that, given the public parameters, one can recover the private parameters and factor the modulus $N$ in polynomial time, under specific conditions on the parameters, by combining continued fractions with Coppersmith's algorithm, which relies on lattice-reduction techniques. Furthermore, we show that if the private exponent in an RSA-like cryptosystem is either too small or too large, then $N$ can be factored in polynomial time. This attack also applies to the standard RSA cryptosystem.
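As classical background (these are the textbook facts the generalized attack extends, not the paper's exact parameter conditions), the standard RSA key equation and Wiener's continued-fraction criterion read as follows:

```latex
% Standard RSA key equation and the classical continued-fraction attack.
% Since \varphi(N) \approx N, the fraction k/d closely approximates e/N;
% Wiener showed that d < (1/3) N^{1/4} suffices for the inequality below,
% so k/d appears among the convergents of e/N and d is found in
% polynomial time.
\[
  N = pq, \qquad ed - k\,\varphi(N) = 1, \qquad
  \Bigl|\frac{e}{N} - \frac{k}{d}\Bigr| < \frac{1}{2d^{2}}
  \ \Longrightarrow\
  \frac{k}{d}\ \text{is a convergent of}\ \frac{e}{N}.
\]
```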
Last updated:  2025-02-27
Protocol for Tight Optimal Symmetric Security
Emanuele Di Giandomenico, Yong Li, and Sven Schäge
We present a new balanced PAKE protocol with optimal communication efficiency: messages are only 160 bits long, and the computational complexity is lower than in all previous approaches. Our protocol is proven secure in the random oracle model, in a strong security model with multiple parties and multiple sessions that allows for generous attack queries, including multiple Test-queries. Moreover, the proof is in the practically relevant single-bit model (which is harder to achieve than the multi-bit model) and tightly reduces to the Strong Square Diffie-Hellman assumption (SSQRDH). This allows for very efficient, theoretically sound instantiations and tight compositions with symmetric primitives.
Last updated:  2025-02-27
A Complete Security Proof of SQIsign
Marius A. Aardal, Andrea Basso, Luca De Feo, Sikhar Patranabis, and Benjamin Wesolowski
SQIsign is the leading isogeny-based digital signature scheme. Despite the many improvements that have appeared in the literature, all its recent variants lack a complete security proof. In this work, we provide the first full security proof of SQIsign, as submitted to the second round of NIST's on-ramp track for digital signatures. To do so, we introduce a new framework, which we call Fiat-Shamir with hints, that captures all those protocols where the simulator needs additional information to simulate a transcript. Using this framework, we show that SQIsign is EUF-CMA-secure in the ROM, assuming the hardness of the One Endomorphism problem with hints, or the hardness of the Full Endomorphism Ring problem with hints together with a hint-indistinguishability assumption; all assumptions, unlike previous ones in the literature, are non-interactive. Along the way, we prove several intermediate results that may be of independent interest.
Last updated:  2025-02-27
Side-Channel and Fault Injection Attacks on VOLEitH Signature Schemes: A Case Study of Masked FAEST
Sönke Jendral and Elena Dubrova
Ongoing efforts to transition to post-quantum secure public-key cryptosystems have created the need for algorithms with a variety of performance characteristics and security assumptions. Among the candidates in NIST's post-quantum standardisation process for additional digital signatures is FAEST, a Vector Oblivious Linear Evaluation in-the-Head (VOLEitH)-based scheme whose security relies on the one-wayness of the Advanced Encryption Standard (AES). The VOLEitH paradigm enables competitive performance and signature sizes under conservative security assumptions. However, since it was introduced recently, in 2023, its resistance to physical attacks has not yet been analysed. In this paper, we present the first security analysis of VOLEitH-based signature schemes in the context of side-channel and fault injection attacks. We demonstrate four practical attacks on a masked implementation of FAEST on an ARM Cortex-M4, capable of recovering the full secret key with high probability (greater than 0.87) from a single signature. These attacks exploit vulnerabilities of components specific to VOLEitH schemes and FAEST, such as the all-but-one vector commitments, VOLE generation, and AES proof generation. Finally, we propose countermeasures to mitigate these attacks and enhance the physical security of VOLEitH-based signature schemes.
Last updated:  2025-02-27
Pseudorandom Obfuscation and Applications
Pedro Branco, Nico Döttling, Abhishek Jain, Giulio Malavolta, Surya Mathialagan, Spencer Peters, and Vinod Vaikuntanathan
We introduce the notion of pseudorandom obfuscation, a way to obfuscate (keyed) pseudorandom functions in an average-case sense. We study several variants of pseudorandom obfuscation and show a number of applications. 1. Applications in the iO World: Our weakest variant of pseudorandom obfuscation, named obfuscation for identical pseudorandom functions (iPRO), is weaker than indistinguishability obfuscation (iO): rather than obfuscating arbitrary circuits as in iO, iPRO only obfuscates circuits computing pseudorandom functions. We show that iPRO already enables several applications of iO, such as unleveled fully homomorphic encryption (without assuming circular security) and succinct randomized encodings. 2. From iPRO to iO: Despite being a weaker notion than iO, we show two transformations from iPRO to iO. Our first (and main) construction builds iO from iPRO plus standard assumptions on cryptographic bilinear maps. Our second construction builds iO, and even ideal obfuscation, from iPRO alone in the pseudorandom oracle model (Jain, Lin, Luo and Wichs, CRYPTO 2023). We also formulate stronger variants of pseudorandom obfuscation that help us go beyond the applications of indistinguishability obfuscation. 3. Applications outside the iO World: We show how to construct a succinct witness encryption scheme from a strong version of PRO, where the size of the ciphertext is independent of the witness size. Such a witness encryption scheme is not known to exist even assuming iO. 4. Construction: We show a *heuristic* construction of the strongest version of PRO, secure under the sub-exponential hardness of the learning with errors (LWE) problem, and the private-coin evasive LWE heuristic (Wee, EUROCRYPT 2022; Tsabary, CRYPTO 2022). Finally, we highlight some barriers in achieving the strongest version of pseudorandom obfuscation.
Last updated:  2025-02-27
HiAE: A High-Throughput Authenticated Encryption Algorithm for Cross-Platform Efficiency
Han Chen, Tao Huang, Phuong Pham, and Shuang Wu
This paper addresses the critical challenges in designing cryptographic algorithms that achieve both high performance and cross-platform efficiency on ARM and x86 architectures, catering to the demanding requirements of next-generation communication systems, such as 6G and GPU/NPU interconnections. We propose HiAE, a high-throughput authenticated encryption algorithm optimized for performance exceeding 100 Gbps and designed to meet the stringent security requirements of future communication networks. HiAE leverages the stream cipher structure, integrating the AES round function for non-linear diffusion. Our design achieves exceptional efficiency, with benchmark results from software implementations across various platforms showing over 180 Gbps on ARM devices in AEAD mode, making it the fastest AEAD solution on ARM chips.
Last updated:  2025-02-27
Another Look at the Quantum Security of the Vectorization Problem with Shifted Inputs
Paul Frixons, Valerie Gilchrist, Péter Kutas, Simon-Philipp Merz, and Christophe Petit
Cryptographic group actions provide simple post-quantum generalizations of many cryptographic protocols based on the discrete logarithm problem (DLP). However, many advanced group action-based protocols do not solely rely on the core group action problem (the so-called vectorization problem), but also on variants of this problem, to either improve efficiency or enable new functionalities. In particular, the security of the CSI-SharK threshold signature protocol relies on the Vectorization Problem with Shifted Inputs, where (in DLP formalism) the adversary receives not only the standard DLP instance, but also shifted variants of it for multiple known shift values. A natural open question is then whether the extra data provided to the adversary in this variant allows for more efficient attacks. In this paper, we revisit the concrete quantum security of this problem. We start from a quantum multiple hidden shift algorithm of Childs and van Dam, which to the best of our knowledge was never applied in cryptography before. We specify algorithms for its subroutines and we provide concrete complexity estimates for both these subroutines and the overall algorithm. We then apply our analysis to the CSI-SharK protocol. In prior analyses based on Kuperberg's algorithms, group action evaluations contributed a significant part of the overall T-gate cost. For CSI-SharK's suggested parameters, our new approach requires significantly fewer calls to the group action evaluation subroutine, leading to significant T-gate complexity improvements overall. We also show that the quantum security of the protocol decreases when the number of public keys increases, and we quantify this degradation. Beyond its direct application to the CSI-SharK protocol, our work more generally questions the quantum security of vectorization problem variants, and it introduces the Childs-van Dam algorithm as a new quantum cryptanalysis tool.
Last updated:  2025-02-27
Traceable Secret Sharing Based on the Chinese Remainder Theorem
Charlotte Hoffmann
Traceable threshold secret sharing schemes, introduced by Goyal, Song and Srinivasan (CRYPTO'21), make it possible to provably trace leaked shares to the parties that leaked them. The authors give the first definition and construction of traceable secret sharing schemes; however, the size of the shares in their construction is quadratic in the size of the secret. Boneh, Partap and Rotem (CRYPTO'24) recently proposed a new definition of traceable secret sharing and the first practical constructions. In their definition, one considers a reconstruction box that contains leaked shares and, on input additional shares, outputs the secret. A scheme is traceable if one can find out which leaked shares are inside the box by only getting black-box access to the box and a private tracing key. Boneh, Partap and Rotem give constructions from Shamir's secret sharing and Blakley's secret sharing. The constructions are efficient, as the size of the secret shares is only twice the size of the secret. In this work we present the first traceable secret sharing scheme based on the Chinese remainder theorem. This was stated as an open problem by Boneh, Partap and Rotem, as it gives rise to traceable secret sharing with weighted threshold access structures. The scheme is based on Mignotte's secret sharing and increases the size of the shares of the standard Mignotte secret sharing scheme only by a small multiplicative factor. We also introduce a new definition of semi-public secret sharing that does not require a private tracing key and give a construction based on the Chinese remainder theorem.
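For readers unfamiliar with the CRT-based base scheme, here is a minimal Python sketch of plain (non-traceable) Mignotte (t, n) threshold secret sharing, the scheme the new construction builds on; the moduli and the secret are toy values chosen only for illustration.

```python
# Minimal Mignotte (t, n) secret sharing over the Chinese remainder theorem.
from math import prod

moduli = [103, 107, 109, 113, 127]       # pairwise coprime, increasing
t, n = 3, 5
# Mignotte condition: the product of the t smallest moduli must exceed the
# product of the t-1 largest, and the secret must lie strictly in between.
lo = prod(sorted(moduli, reverse=True)[: t - 1])   # 127 * 113
hi = prod(sorted(moduli)[:t])                      # 103 * 107 * 109
assert lo < hi

def share(secret):
    assert lo < secret < hi
    return [(secret % m, m) for m in moduli]

def crt(shares):
    """Reconstruct via the Chinese remainder theorem from any t shares."""
    M = prod(m for _, m in shares)
    x = 0
    for s, m in shares:
        Mi = M // m
        x += s * Mi * pow(Mi, -1, m)     # pow(a, -1, m) = modular inverse
    return x % M

secret = 314159
shares = share(secret)
assert crt(shares[:3]) == secret         # any 3 shares reconstruct
assert crt(shares[2:]) == secret
assert crt(shares[:2]) != secret         # 2 shares yield a wrong value
```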
Last updated:  2025-02-27
Watermarkable and Zero-Knowledge Verifiable Delay Functions from any Proof of Exponentiation
Charlotte Hoffmann and Krzysztof Pietrzak
A verifiable delay function (VDF) maps an input $x$ and a time parameter $T$ to an output $y$ together with an efficiently verifiable proof certifying that $y$ was correctly computed. The function runs in $T$ sequential steps, and it should not be possible to compute $y$ much faster than that. The only known practical VDFs use sequential squaring in groups of unknown order as the sequential function, i.e., $y = x^{2^T}$. There are two constructions for the proof of exponentiation (PoE) certifying that $y = x^{2^T}$: the proofs of Wesolowski (Eurocrypt'19) are very short, but they are more expensive to compute and their soundness relies on stronger assumptions than the PoE proposed by Pietrzak (ITCS'19). A recent application of VDFs by Arun, Bonneau and Clark (Asiacrypt'22) are short-lived proofs and signatures, which are proofs and signatures that are only sound for some time $T$, but after that can be forged by anyone. For this they rely on "watermarkable VDFs", where the proof embeds a prover-chosen watermark. To achieve stronger notions of proofs/signatures with reusable forgeability, they rely on "zero-knowledge VDFs", where instead of the output $y$, one just proves knowledge of this output. The existing proposals for watermarkable and zero-knowledge VDFs all build on Wesolowski's PoE; for the watermarkable VDFs there is currently no security proof. In this work we give the first constructions that transform any PoE in hidden-order groups into watermarkable VDFs and into zkVDFs, solving an open question by Arun et al. Unlike our watermarkable VDF, the zkVDF (required for reusable forgeability) is not very practical, as the number of group elements in the proof is a security parameter. To address this, we introduce the notion of zero-knowledge proofs of sequential work (zkPoSW), a notion that relaxes zkVDFs by not requiring that the output be unique. We show that zkPoSW are sufficient to construct proofs or signatures with reusable forgeability, and construct efficient zkPoSW from any PoE, ultimately achieving short-lived proofs and signatures that improve upon Arun et al.'s construction in several dimensions (faster forging times, arguably weaker assumptions). A key idea underlying our constructions is to not directly construct a (watermarked or zk) proof for $y = x^{2^T}$, but instead to give a (watermarked or zk) proof for a more basic exponentiation statement, combined with a normal PoE.
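The sequential function underlying all of this is simple to state. Below is a minimal Python sketch of evaluating $y = x^{2^T}$ by repeated squaring; the toy modulus has a known factorization (unlike a real group of unknown order), and the PoE that certifies the output is omitted.

```python
# Minimal sketch of the sequential part of a squaring-based VDF:
# y = x^(2^T) mod N via T sequential squarings. A real deployment uses a
# group of genuinely unknown order and ships a PoE (Pietrzak- or
# Wesolowski-style) certifying y instead of recomputing it.
def vdf_eval(x, T, N):
    y = x % N
    for _ in range(T):
        y = y * y % N      # each squaring depends on the previous one
    return y

N = 2039 * 2063            # toy composite (factorization known: NOT a real setup)
y = vdf_eval(5, 1000, N)
assert y == pow(5, 2**1000, N)   # sanity check; a verifier would use the PoE
```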
Last updated:  2025-02-27
Evasive LWE: Attacks, Variants & Obfustopia
Shweta Agrawal, Anuja Modi, Anshu Yadav, and Shota Yamada
Evasive LWE (Wee, Eurocrypt 2022 and Tsabary, Crypto 2022) is a recently introduced, popular lattice assumption which has been used to tackle long-standing problems in lattice-based cryptography. In this work, we develop new counter-examples against Evasive LWE, in both the private- and public-coin regimes, propose counter-measures that define safety zones, and finally explore modifications to construct full compact FE/iO. Attacks: Our attacks are summarized as follows. - The recent work by Hsieh, Lin and Luo [HLL23] constructed the first ABE for unbounded-depth circuits by relying on the (public-coin) ''circular'' evasive LWE assumption, which incorporates circularity into the Evasive LWE assumption. We provide a new attack against this assumption by exhibiting a sampler such that the pre-condition is true but the post-condition is false. - We demonstrate a counter-example against public-coin evasive LWE which exploits the freedom to choose the error distributions in the pre- and post-conditions. Our attack crucially relies on the error in the pre-condition being larger than the error in the post-condition. - The recent work by Agrawal, Kumari and Yamada [AKY24a] constructed the first functional encryption scheme for pseudorandom functionalities and extended this to obfuscation for pseudorandom functionalities [AKY24c] by relying on private-coin evasive LWE. We provide a new attack against the stated assumption. - The recent work by Branco et al. [BDJ+24] (concurrent to [AKY24c]) provides a construction of obfuscation for pseudorandom functionalities by relying on private-coin evasive LWE. By adapting the counter-example against [AKY24a], we provide an attack against this assumption. - Branco et al. [BDJ+24] showed that there exist contrived, somehow ''self-referential'', classes of pseudorandom functionalities for which pseudorandom obfuscation cannot exist. We develop an analogous result in the setting of pseudorandom functional encryption. While Evasive LWE was developed specifically to avoid zeroizing attacks as discussed above, our attacks show that in some (contrived) settings, the adversary may nevertheless obtain terms in the zeroizing regime. Counter-measures: Guided by the lessons distilled from the above attacks, we develop counter-measures to protect against them. Our interpretation of the above attacks is that Evasive LWE, as defined, is too general -- we suggest restrictions to identify safe zones for the assumption, using which the broken applications can be recovered. Variants to give full FE and iO: Finally, we show that certain modifications of Evasive LWE, which respect the counter-measures developed above, yield full compact FE in the standard model. We caution that the main goal of presenting these candidates is as targets for cryptanalysis, to further our understanding of this regime of assumptions.
Last updated:  2025-02-27
New Black-Box Separations through Mathematically Structured Primitives
Hart Montgomery and Sikhar Patranabis
We provide a novel view of public-key cryptography by showing full equivalences of certain primitives to "hard" monoid actions. More precisely, we show that key exchange and two-party computation are exactly equivalent to monoid actions with certain structural and hardness properties. To the best of our knowledge, this is the first "natural" characterization of the mathematical structure inherent to any key exchange or two-party computation protocol, and the first explicit proof of the necessity of mathematical structure for public-key cryptography. We then utilize these characterizations to show new black-box separation results. Concretely, we obtain the following results. Two-Party Key Exchange. We show that any two-party non-interactive key exchange protocol is equivalent to the existence of an abelian monoid action equipped with a natural hardness property, namely (distributional) unpredictability. More generally, we show that any $k$-round (two-party) key exchange protocol is essentially equivalent to the existence of a (distributional) unpredictable monoid action with certain commutator-like properties. Rudich (Crypto '91) shows a black-box separation of $k$-round and $(k+1)$-round key exchange for any $k$; we use our generic primitive here to formalize this result and extend it to efficient key exchange protocols. Two-Party Computation. We show that any maliciously secure two-party computation protocol is also equivalent to a monoid action with commutator-like properties and certain hardness guarantees. We then use a generic version of this primitive to show a black-box separation between $k$-round semi-honest secure two-party computation and $k$-round maliciously secure two-party computation. This yields the first black-box separation (to our knowledge) between $k$-round and $(k+1)$-round maliciously secure two-party computation protocols.
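The easy direction of the first equivalence, that an unpredictable abelian action yields non-interactive key exchange, is the familiar Diffie-Hellman pattern. The toy Python sketch below (parameters invented and far too small for security) writes classic DH explicitly as an action of exponents on group elements:

```python
# Toy sketch: non-interactive key exchange from an abelian action.
# Classic Diffie-Hellman is the textbook instance: exponents act on Z_p^*.
import secrets

q = 1019; p = 2 * q + 1; g = 4      # toy order-q subgroup of Z_p^*

def act(a, h):
    """The action a * h := h^a mod p."""
    return pow(h, a, p)

# Commutativity, act(a, act(b, g)) == act(b, act(a, g)), is exactly what
# makes the shared key well-defined without interaction.
alice_sk = secrets.randbelow(q); alice_pk = act(alice_sk, g)
bob_sk = secrets.randbelow(q); bob_pk = act(bob_sk, g)
assert act(alice_sk, bob_pk) == act(bob_sk, alice_pk)   # the shared key
```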
Last updated:  2025-02-26
Split Prover Zero-Knowledge SNARKs
Sanjam Garg, Aarushi Goel, Dimitris Kolonelos, Sina Shiehian, and Rohit Sinha
We initiate the study of split prover zkSNARKs, which allow Alice to offload part of the zkSNARK computation to her assistant, Bob. In scenarios like online transactions (e.g., Zcash), a significant portion of the witness (e.g., membership proofs of input coins) is often available to the prover (Alice) before the transaction begins. This setup offers Alice an opportunity to initiate the proof computation early, even before the entire witness is available. The remaining computation can then be delegated to Bob, who can complete it once the final witness (e.g., the transaction amount) is known. To prevent Bob from generating proofs independently (e.g., initiating unauthorized transactions), it is essential that the data provided to him for the second phase of computation does not reveal the witness used in the first phase. Additionally, the verifier of the zkSNARK should be unable to determine whether the proof was generated solely by Alice or through this two-step process. To achieve this efficiently, we require this two-phase proof generation to only use cryptography in a black-box manner. We propose a split prover zkSNARK based on the Groth16 zkSNARK [Groth, EUROCRYPT 2016], meeting all these requirements. Our solution is also asymptotically tight, meaning it achieves the optimal second-phase proof generation time for Groth16. Importantly, our split prover zkSNARK preserves the verification algorithm of the original Groth16 zkSNARK, enabling seamless integration into existing deployments of Groth16.
Last updated:  2025-02-26
Consensus Under Adversary Majority Done Right
Srivatsan Sridhar, Ertem Nusret Tas, Joachim Neu, Dionysis Zindros, and David Tse
A specter is haunting consensus protocols—the specter of adversary majority. Dolev and Strong in 1983 showed an early possibility for up to 99% adversaries. Yet, other works show impossibility results for adversaries above 50% under synchrony, seemingly the same setting as Dolev and Strong's. What gives? It is high time that we pinpoint a key culprit for this ostensible contradiction: the modeling details of clients. Are the clients sleepy or always-on? Are they silent or communicating? Can validators be sleepy too? We systematize models for consensus across four dimensions (sleepy/always-on clients, silent/communicating clients, sleepy/always-on validators, and synchrony/partial-synchrony), some of which are new, and tightly characterize the achievable safety and liveness resiliences with matching possibilities and impossibilities for each of the sixteen models. To this end, we unify folklore and earlier results, and fill gaps left in the literature with new protocols and impossibility theorems.
Last updated:  2025-02-26
KLPT²: Algebraic Pathfinding in Dimension Two and Applications
Wouter Castryck, Thomas Decru, Péter Kutas, Abel Laval, Christophe Petit, and Yan Bo Ti
Following Ibukiyama, Katsura and Oort, all principally polarized superspecial abelian surfaces over $\overline{\mathbb{F}}_p$ can be represented by a certain type of matrix with entries in the quaternion algebra $B_{p,\infty}$. We present a heuristic polynomial-time algorithm which, upon input of two such matrices, finds a "connecting matrix" representing a polarized isogeny of smooth degree between the corresponding surfaces. Our algorithm should be thought of as a two-dimensional analog of the KLPT algorithm from 2014 due to Kohel, Lauter, Petit and Tignol for finding a connecting ideal of smooth norm between two given maximal orders in $B_{p,\infty}$. The KLPT algorithm has proven to be a versatile tool in isogeny-based cryptography, and our analog has similar applications; we discuss two of them in detail. First, we show that it yields a polynomial-time solution to a two-dimensional analog of the so-called constructive Deuring correspondence: given a matrix representing a superspecial principally polarized abelian surface, realize the latter as the Jacobian of a genus-2 curve (or, exceptionally, as the product of two elliptic curves if it concerns a product polarization). Second, we show that, modulo a plausible assumption, Charles-Goren-Lauter style hash functions from superspecial principally polarized abelian surfaces require a trusted set-up. Concretely, if the matrix associated with the starting surface is known, then collisions can be produced in polynomial time. We deem it plausible that all currently known methods for generating a starting surface indeed reveal the corresponding matrix. As an auxiliary tool, we present an explicit table for converting isogenies into the corresponding connecting matrix, a step for which a previous method by Chu required super-polynomial (but sub-exponential) time.
Last updated:  2025-02-26
Functional Oblivious Transfer with Applications in Privacy-Preserving Machine Learning
Aydin Abadi and Mohammad Naseri
Oblivious Transfer (OT) is a fundamental cryptographic primitive introduced nearly four decades ago. OT allows a receiver to select and learn $t$ out of $n$ private messages held by a sender. It ensures that the sender does not learn which specific messages the receiver has chosen, while the receiver gains no information about the remaining messages. In this work, we introduce the notion of functional OT (FOT) for the first time. FOT adds a layer of security to conventional OT by ensuring that the receiver only learns a function of the selected messages rather than the individual messages themselves. We propose several protocols that realize this concept. In particular, we propose concrete instantiations of FOT when the function to be executed on the selected messages is the mean, mode, addition, or multiplication. The schemes are efficient and unconditionally secure. We also propose a non-trivial protocol that supports arbitrary functions on the selected messages, mainly using fully homomorphic encryption (FHE) and oblivious linear function evaluation, where the number of FHE invocations is constant with respect to $n$. Our asymptotic and concrete cost analyses demonstrate the efficiency of our unconditionally secure FOT protocols. FOT can enhance the security of privacy-preserving machine learning, particularly in (i) K-Nearest Neighbors schemes and (ii) client selection in Federated Learning (FL).
Last updated:  2025-02-26
A Closer Look at Falcon
Phillip Gajland, Jonas Janneck, and Eike Kiltz
Falcon is a winner of NIST's six-year post-quantum cryptography standardisation competition. Based on the celebrated full-domain-hash framework of Gentry, Peikert and Vaikuntanathan (GPV) (STOC'08), Falcon leverages NTRU lattices to achieve the most compact signatures among lattice-based schemes. Its security hinges on a Rényi divergence-based argument for Gaussian samplers, a core element of the scheme. However, the GPV proof, which uses statistical distance to argue closeness of distributions, fails when applied naively to Falcon, whose parameter choices result in statistical distances that are far from negligible. Additional implementation-driven deviations from the GPV framework further invalidate the original proof, leaving Falcon without a security proof despite its selection for standardisation. This work takes a closer look at Falcon and demonstrates that introducing a few minor, conservative modifications allows for the first formal proof of the scheme in the random oracle model. At the heart of our analysis lies an adaptation of the GPV framework to work with the Rényi divergence, along with an optimised method for parameter selection under this measure. Unfortunately, our analysis shows that, despite our modifications to Falcon-512 and Falcon-1024, we do not achieve strong unforgeability for either scheme. For plain unforgeability we are able to show that our modifications to Falcon-512 barely satisfy the claimed 120-bit security target, while for Falcon-1024 we confirm the claimed security level. As such, we recommend revisiting Falcon and its parameters.
Last updated:  2025-02-26
A Framework for Group Action-Based Multi-Signatures and Applications to LESS, MEDS, and ALTEQ
Giuseppe D'Alconzo, Andrea Flamini, Alessio Meneghetti, and Edoardo Signorini
A multi-signature scheme allows a list of signers to sign a common message. Multi-signatures are widely used in scenarios where the same message must be signed and transmitted by many users and, instead of concatenating individual signatures, employing a multi-signature can reduce the data to be sent. In recent years there have been numerous practical proposals in the discrete logarithm setting, such as MuSig2 (CRYPTO'21) for the Schnorr signature. Recently, these attempts have been extended to post-quantum assumptions, with lattice-based proposals such as MuSig-L (CRYPTO'22). Given the growth of group action-based signatures, a natural question is whether a multi-signature can be built on the same models. In this work, we present the first construction of such a primitive relying on group action assumptions. We obtain a 3-round scheme achieving concurrent security in the ROM. Moreover, we instantiate it using the three group action-based candidates in NIST's call for additional post-quantum signatures, namely LESS, MEDS and ALTEQ, obtaining a good compression rate for different parameter sets.
Last updated:  2025-02-26
Practical Key-Extraction Attacks in Leading MPC Wallets
Nikolaos Makriyannis, Oren Yomtov, and Arik Galansky
Multi-Party Computation (MPC) has become a major tool for protecting hundreds of billions of dollars in cryptocurrency wallets. MPC protocols are currently powering the wallets of Coinbase, Binance, Zengo, BitGo, Fireblocks and many other fintech companies servicing thousands of financial institutions and hundreds of millions of end-user consumers. We present four novel key-extraction attacks on popular MPC signing protocols, showing how a single corrupted party may extract the secret key in full during the MPC signing process. Our attacks are highly practical (the practicality of an attack depends on the number of signature-generation ceremonies the attacker participates in before extracting the key). Namely, we show key-extraction attacks against four different threshold-ECDSA protocols/implementations, the strongest of which requires only *one signature*. In addition, we provide proof-of-concept code that implements our attacks.
Last updated:  2025-02-26
Differential Cryptanalysis of the Reduced Pointer Authentication Code Function used in Arm’s FEAT_PACQARMA3 Feature
Roberto Avanzi, Orr Dunkelman, and Shibam Ghosh
The Pointer Authentication Code (PAC) feature in the Arm architecture is used to enforce the Code Flow Integrity of running programs. It does so by generating a short tag, called the PAC, of the return address and some additional context information upon function entry, and checking it upon exit. An attacker who wants to overwrite the stack with manipulated addresses now faces an additional hurdle, as they have to guess, forge, or reuse PAC values. PAC is deployed on billions of devices as a first line of defense to harden system software and complex programs against software exploitation. The original version of the feature uses a 12-round version of the QARMA block cipher. The output is then truncated to between 3 and 32 bits, in order to be inserted into unused bits of 64-bit pointers. A later revision of the specification allows the use of an 8-round version of QARMA. This reduction may introduce vulnerabilities such as high-probability distinguishers, potentially enabling key recovery attacks. The present paper explores this avenue. A cryptanalysis of the PAC computation function entails restricting the inputs to valid virtual addresses, meaning that certain most significant bits are fixed to zero, and considering only the truncated output. Within these constraints, we present practical attacks on various configurations. These attacks, while not presenting an immediate threat to the PAC mechanism, show that some versions of the feature miss the security targets set for the original function. This offers new insights into the practical security of constructing MACs from truncated block ciphers, expanding on the mostly theoretical understanding of creating PRFs from truncated PRPs. We note that the results do not affect the security of QARMA when used with the recommended number of rounds for general-purpose applications.
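The mechanism itself is easy to model. The schematic Python sketch below uses hashlib as a stand-in for QARMA; the names, sizes, and bit positions are illustrative assumptions, not the architected layout.

```python
# Schematic model of pointer authentication: a short tag over (address,
# context) computed by a keyed primitive and truncated into the unused upper
# bits of a 64-bit pointer. hashlib stands in for QARMA here.
import hashlib

PAC_BITS = 16            # the feature allows between 3 and 32 tag bits
VA_BITS = 48             # lower bits carrying the actual virtual address
MASK = (1 << VA_BITS) - 1

def pac(key, ptr, context):
    msg = ptr.to_bytes(8, "little") + context.to_bytes(8, "little")
    tag = int.from_bytes(hashlib.sha256(key + msg).digest()[:4], "little")
    return tag % (1 << PAC_BITS)     # truncate to the available pointer bits

def sign_pointer(key, ptr, context):
    low = ptr & MASK
    return (pac(key, low, context) << VA_BITS) | low

def auth_pointer(key, signed, context):
    ptr = signed & MASK
    if (signed >> VA_BITS) != pac(key, ptr, context):
        return None                  # authentication failure -> fault on use
    return ptr

key = b"k" * 16
sp = sign_pointer(key, 0x0000_7FFF_DEAD_BEEF, 0x42)
assert auth_pointer(key, sp, 0x42) == 0x0000_7FFF_DEAD_BEEF
assert auth_pointer(key, sp ^ (1 << 50), 0x42) is None   # tampered tag rejected
```

The cryptanalytic setting of the paper corresponds to the restrictions visible here: inputs are valid virtual addresses (upper bits zero), and the attacker only ever sees the truncated tag.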
Last updated:  2025-02-26
Strong Existential Unforgeability and BUFF Securities of MPC-in-the-Head Signatures
Mukul Kulkarni and Keita Xagawa
NIST started the standardization of additional post-quantum signatures in 2022. Among the 40 candidates, a few target securities stronger than existential unforgeability, namely strong existential unforgeability and the BUFF (beyond unforgeability features) securities. Recently, Aulbach, Düzlü, Meyer, Struck, and Weishäupl (PQCrypto 2024) examined the BUFF securities of 17 out of the 40 candidates. Unfortunately, for the so-called MPC-in-the-Head (MPCitH) signature schemes, nothing was known about strong existential unforgeability and BUFF securities. This paper studies the strong securities of all nine MPCitH signature candidates: AIMer, Biscuit, FAEST, MIRA, MiRitH, MQOM, PERK, RYDE, and SDitH. We show that the MPCitH signature schemes are strongly existentially unforgeable under chosen message attacks in the (quantum) random oracle model. To do so, we introduce a new property of the underlying multi-pass identification scheme, which we call _non-divergency_. This property can be considered a weakened version of the computational unique response for three-pass identification defined by Kiltz, Lyubashevsky, and Schaffner (EUROCRYPT 2018) and its extension to multi-pass identification defined by Don, Fehr, and Majenz (CRYPTO 2020). In addition, we show that the SSH11 protocol proposed by Sakumoto, Shirai, and Hiwatari (CRYPTO 2011) is _not_ computational unique response, contrary to a claim by Don et al. (CRYPTO 2020). We also survey BUFF securities of the nine MPCitH candidates in the quantum random oracle model. In particular, we show that Biscuit and MiRitH do not have some of the BUFF securities.
Last updated:  2025-02-26
Computationally Efficient Asynchronous MPC with Linear Communication and Low Additive Overhead
Akhil Bandarupalli, Xiaoyu Ji, Aniket Kate, Chen-Da Liu-Zhang, and Yifan Song
We explore the setting of asynchronous multi-party computation (AMPC) with optimal resilience $t < n/3$ and develop an efficient protocol that optimizes both communication and computation. The recent work by Goyal, Liu-Zhang, and Song [Crypto'24] was the first to achieve AMPC with amortized linear communication cost without using computationally heavy public-key cryptography. However, its large additive communication overhead renders it impractical for most real-world applications. It is possible to reduce the communication overhead significantly by leveraging cryptographic tools such as homomorphic commitments, public-key cryptography, or zero-knowledge proofs; however, the corresponding AMPC protocols introduce a computation overhead of public-key cryptographic operations that becomes a bottleneck as $n$ grows. Overall, achieving AMPC with linear communication complexity, low additive communication overhead, and low computation overhead remains an open challenge. In this work, we resolve this efficiency challenge by utilizing the random oracle model. By relying solely on computationally efficient primitives such as random oracle hashes and symmetric-key cryptography, our protocol is not only efficient in terms of computation and communication overhead but also post-quantum secure. Our protocol achieves linear (in the number of parties) communication per multiplication gate with a low additive communication overhead. In terms of computation, our protocol only introduces an additive overhead of hash computations independent of the circuit size.
Last updated:  2025-02-26
Higher Residuosity Attacks on Small RSA Subgroup Decision Problems
Xiaopeng Zhao, Zhenfu Cao, Xiaolei Dong, and Zhusen Liu
Secure two-party comparison, known as Yao's millionaires' problem, has been a fundamental challenge in privacy-preserving computation. It enables two parties to compare their inputs without revealing the exact values of those inputs or relying on any trusted third party. One elegant approach to secure computation is based on homomorphic encryption. Recently, building on this approach, Carlton et al. (CT-RSA 2018) and Bourse et al. (CT-RSA 2020) presented novel solutions for the problem of secure integer comparison. These protocols have demonstrated significantly improved performance compared to the well-known and frequently used DGK protocol (ACISP 2007 and Int. J. Appl. Cryptogr. 1(4), 323–324, 2009). In this paper, we introduce a class of higher residuosity attacks, which can be regarded as an extension of the classical quadratic residuosity attack on the decisional Diffie-Hellman problem. We demonstrate that the small RSA subgroup decision problems, upon which both the CEK and BST protocols are based, are not difficult to solve when the prime base is small. It is precisely under these conditions that the protocols achieve optimal overall performance. Furthermore, we offer recommendations for precluding such attacks, including one approach that does not adversely affect performance. We hope that these attacks can be applied to analyze other number-theoretic hardness assumptions.
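The classical quadratic-residuosity leak that the paper generalizes is easy to demonstrate. In the toy Python sketch below (parameters invented for illustration), working in the full group Z_p^*, the Legendre symbol of g^(ab) is determined by the symbols of g^a and g^b, so real DDH tuples can be distinguished from random ones with noticeable advantage:

```python
# The classical quadratic-residuosity attack on DDH. Toy parameters:
# p = 2039 and g = 7, where 7 generates all of Z_p^* (order p-1 = 2*1019).
import secrets

p, g = 2039, 7

def legendre(x):
    """x^((p-1)/2) mod p: +1 for squares, -1 (i.e., p-1) for non-squares."""
    return -1 if pow(x, (p - 1) // 2, p) == p - 1 else 1

a, b = (secrets.randbelow(p - 1) for _ in range(2))
A, B = pow(g, a, p), pow(g, b, p)
real = pow(g, a * b, p)

# Since g is a non-square, legendre(g^x) = (-1)^x, so for a real DDH tuple
# the symbol of the third element is forced by the first two:
predicted = -1 if (legendre(A) == -1 and legendre(B) == -1) else 1
assert legendre(real) == predicted     # always consistent for real tuples
# For a uniformly random third element the prediction fails half the time,
# which yields the distinguishing advantage.
```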
Last updated:  2025-02-26
Polynomial Secret Sharing Schemes and Algebraic Matroids
Amos Beimel, Oriol Farràs, and Adriana Moya
In a secret sharing scheme with polynomial sharing, the secret is an element of a finite field, and the shares are obtained by evaluating polynomials on the secret and some random field elements, i.e., for every party there is a set of polynomials that computes the share of the party. These schemes generalize the linear ones, adding more expressivity and giving room for more efficient schemes. To identify the access structures for which this efficiency gain is relevant, we need a systematic method to identify the access structure of polynomial schemes; i.e., to identify which sets can reconstruct the secret in the scheme. As a first step, we study ideal polynomial secret sharing schemes where there is a single polynomial for each party. Ideal schemes have optimal share size because the size of each share is the size of the secret. Our goal is to generalize results of linear secret sharing schemes, i.e., schemes in which the shares are computed by applying linear mappings and the linear dependency of these mappings determines their access structures. To achieve this goal, we study the connection between the algebraic dependency of the sharing polynomials and the access structure of the polynomial scheme. Our first result shows that if the degree of the sharing polynomials is not too big compared to the size of the field, then the algebraic dependence of the sharing polynomials determines the access structure of the scheme. This contributes to the characterization of ideal polynomial schemes and establishes a new connection between families of ideal schemes and algebraic matroids. Conversely, we ask the question: If we associate a polynomial with each party and the dealer, can we use these polynomials to realize the access structure determined by the algebraic dependency of the polynomials? Our second result shows that these access structures admit statistical schemes with small shares. Finally, we extend this result to the general case where each party may have more than one polynomial.
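As a concrete anchor for these definitions, here is a toy Python sketch (parameters invented for illustration) of the linear special case the paper generalizes: in Shamir's scheme, each party's share is a fixed polynomial, linear in the secret and the random field elements, evaluated on those inputs.

```python
# Shamir's (t, n) scheme as the simplest polynomial sharing: party i's share
# is  s + r_1*i + r_2*i^2 + ... , linear in the inputs (s, r_1, ..., r_{t-1}).
import secrets

p = 2**61 - 1            # field size (a Mersenne prime)
t, n = 3, 5              # threshold and number of parties

def share(s):
    rs = [secrets.randbelow(p) for _ in range(t - 1)]
    return [(i, (s + sum(r * pow(i, k + 1, p) for k, r in enumerate(rs))) % p)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t shares."""
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        s = (s + yi * num * pow(den, -1, p)) % p
    return s

secret = 123456789
shs = share(secret)
assert reconstruct(shs[:t]) == secret
assert reconstruct(shs[2:]) == secret
```

The paper's question is what happens when the sharing polynomials may be non-linear: then algebraic (rather than linear) dependence of the polynomials is the right lens on the access structure.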
Last updated:  2025-02-26
OCash: Fully Anonymous Payments between Blockchain Light Clients
Adam Blatchley Hansen, Jesper Buus Nielsen, and Mark Simkin
We study blockchain-based provably anonymous payment systems between light clients. Such clients interact with the blockchain through full nodes, which can see what the light clients read and write. The goal of our work is to enable light clients to perform anonymous payments, while maintaining privacy even against the full nodes through which they interact with the blockchain. We formalize the problem in the UC model and present a provably secure solution. We show that a variation of tree ORAM gives obliviousness even when an adversary can follow how its own data elements move in the tree. We use this for anonymity via shuffling of payments on the blockchain, while at the same time allowing the light client to know a few positions among which to find its payment without knowing the current state of the blockchain. In comparison to existing works, we are the first ones that simultaneously provide strong anonymity guarantees, provable security, and anonymity with respect to full nodes. Along the way, we make several contributions that may be of independent interest. We define and construct anonymous-coin friendly encryption schemes and show how they can be used within anonymous payment systems. We define and construct efficient compressible randomness beacons, which produce unpredictable values in regular intervals and allow for storing all published values in a short digest.
Last updated:  2025-02-26
Zeromorph: Zero-Knowledge Multilinear-Evaluation Proofs from Homomorphic Univariate Commitments
Tohru Kohrita and Patrick Towa
A multilinear polynomial is a multivariate polynomial of degree at most one in each variable. This paper introduces a new scheme to commit to multilinear polynomials and to later prove evaluations thereof. The scheme exponentially improves on the added prover costs required to make evaluation proofs zero-knowledge. The construction of the scheme is generic and relies only on the additive homomorphic property of any scheme to commit to univariate polynomials, and on a protocol to prove that committed polynomials satisfy public degree bounds. As the construction requires checking that several committed univariate polynomials do not exceed given, separate bounds, the paper also gives a method to batch executions of any degree-check protocol on homomorphic commitments. For an n-linear polynomial, the instantiation of the scheme with a hiding version of KZG commitments (Kate, Zaverucha and Goldberg, Asiacrypt 2010) leads to a scheme with an evaluation prover that performs only n + 5 extra first-group operations (i.e., compared to the variant of the same scheme that is not zero-knowledge) to achieve the zero-knowledge property. In contrast, previous constructions require an extra multi-scalar multiplication of size 2^n. The instantiation does so without any concessions on the other performance measures compared to the state of the art.
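For readers new to the object being committed to, here is a minimal Python sketch (illustrative only; the commitments and proofs themselves are not shown) of how a multilinear polynomial is determined by its values on the Boolean hypercube and evaluated anywhere via the standard eq-weight identity:

```python
# A multilinear polynomial in n variables is determined by its values on
# {0,1}^n; evaluation at an arbitrary point x uses the tensor identity
#   eq(x, b) = prod_i (x_i*b_i + (1 - x_i)*(1 - b_i)).
from itertools import product

p = 2**61 - 1        # toy prime field

def eval_multilinear(values, x):
    """values[b] for b in {0,1}^n (lexicographic); x is a point in F^n."""
    n = len(x)
    assert len(values) == 1 << n
    total = 0
    for idx, b in enumerate(product((0, 1), repeat=n)):
        weight = 1
        for xi, bi in zip(x, b):
            weight = weight * (xi * bi + (1 - xi) * (1 - bi)) % p
        total = (total + values[idx] * weight) % p
    return total

vals = [3, 1, 4, 1, 5, 9, 2, 6]                   # n = 3
assert eval_multilinear(vals, (0, 1, 0)) == 4     # index 0b010 = 2
assert eval_multilinear(vals, (1, 1, 1)) == 6     # agrees with the table
```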
Last updated:  2025-02-26
Partial Lattice Trapdoors: How to Split Lattice Trapdoors, Literally
Martin R. Albrecht, Russell W. F. Lai, Oleksandra Lapiha, and Ivy K. Y. Woo
Lattice trapdoor algorithms allow us to sample hard random lattices together with their trapdoors, given which short lattice vectors can be sampled efficiently. This enables a wide range of advanced cryptographic primitives. In this work, we ask: can we distribute lattice trapdoor algorithms non-interactively? We study a natural approach to sharing lattice trapdoors: splitting them into partial trapdoors for different lower-rank sublattices which allow the local sampling of short sublattice vectors. Given sufficiently many short sublattice vectors, these can then be combined to yield short vectors in the original lattice. Moreover, this process can be repeated an unbounded polynomial number of times without needing a party holding a full trapdoor to intervene. We further define one-wayness and indistinguishability properties for partial trapdoors. We establish that such objects exist that have non-trivial performance under standard assumptions. Specifically, we prove these properties for a simple construction from the κ-SIS and κ-LWE assumptions, which were previously shown to be implied by the plain SIS and LWE assumptions, respectively. The security proofs extend naturally to the ring or module settings under the respective analogues of these assumptions, which have been conjectured to admit similar reductions. Our partial trapdoors achieve non-trivial efficiency, with relevant parameters sublinear in the number of shareholders. Our construction is algebraic, without resorting to generic tools such as multiparty computation or fully homomorphic encryption. Consequently, a wide range of lattice-trapdoor-based primitives can be thresholdised non-interactively by simply substituting the trapdoor preimage sampling procedure with our partial analogue.
Last updated:  2025-02-26
Enabling Microarchitectural Agility: Taking ML-KEM & ML-DSA from Cortex-M4 to M7 with SLOTHY
Amin Abdulrahman, Matthias J. Kannwischer, and Thing-Han Lim
Highly optimized assembly is commonly used to achieve the best performance for popular cryptographic schemes such as the newly standardized ML-KEM and ML-DSA. The majority of implementations today rely on hand-optimized assembly for the core building blocks to achieve both security and performance. However, recent work by Abdulrahman et al. takes a new approach, writing a readable base assembly implementation first and leaving the bulk of the optimization work to a tool named SLOTHY based on constraint programming. SLOTHY performs instruction scheduling, register allocation, and software pipelining simultaneously, using constraints modeling the architectural and microarchitectural details of the target platform. In this work, we extend SLOTHY and investigate how it can be used to migrate already highly hand-optimized assembly to a different microarchitecture while maximizing performance. As a case study, we optimize state-of-the-art Arm Cortex-M4 implementations of ML-KEM and ML-DSA for the Arm Cortex-M7. Our results suggest that this approach is promising: for the number-theoretic transform (NTT) – the core building block of both ML-DSA and ML-KEM – we achieve significant speed-ups for both schemes. For Keccak – the permutation used by SHA-3 and SHAKE and also used extensively in ML-DSA and ML-KEM – we achieve speed-ups of 30% compared to the M4 code and 5% compared to hand-optimized M7 code. For many other building blocks we achieve similarly significant speed-ups. Overall, this results in 11 to 33% faster code for the entire cryptosystems.
Last updated:  2025-02-26
Lattice-Based Updatable Public-Key Encryption for Group Messaging
Joël Alwen, Georg Fuchsbauer, Marta Mularczyk, and Doreen Riepel
Updatable Public-Key Encryption (UPKE) augments the security of PKE with forward-secrecy properties. While requiring more coordination between parties, UPKE enables much more efficient constructions than full-fledged forward-secret PKE. Alwen, Fuchsbauer and Mularczyk (AFM, Eurocrypt'24) presented the strongest security notion to date. It is the first to meet the needs of UPKE's most important applications: Secure Group Messaging and Continuous Group Key Agreement. The authors provide a very efficient construction meeting their notion with classic security based on the Computational Diffie-Hellman (CDH) assumption in the Random Oracle Model (ROM). In this work we present the first post-quantum secure UPKE construction meeting (a slight relaxation of) the AFM security notion. Based on the Module-LWE assumption, our construction is practically efficient. Moreover, public-key and ciphertext sizes are only a fraction of those of the state-of-the-art lattice-based UPKE scheme in the ROM by Abou Haidar, Passelègue and Stehlé – despite the latter only being shown to satisfy a significantly weaker security notion. As the AFM proof relies on the random self-reducibility of CDH, which has no analogue for lattices, we develop a new proof technique for strong UPKE, identifying the core properties required from the underlying (lattice-based) encryption scheme.
Last updated:  2025-02-26
Traitor Tracing in the Multi-sender Setting (Traceable Multi-client Functional Encryption)
Xuan Thanh Do, Dang Truong Mac, Ky Nguyen, Duong Hieu Phan, and Quoc-Huy Vu
Traitor tracing is a traditional cryptographic primitive designed for scenarios with multiple legitimate receivers. When the plaintext - that is, the output of decryption - is leaked and more than one legitimate receiver exists, it becomes imperative to identify the source of the leakage, a need that has motivated the development of traitor tracing techniques. Recent advances in standard encryption have enabled decryption outcomes to be defined in a fine-grained manner through the introduction of Functional Encryption (FE). Constructing FE schemes is intriguing, and achieving the tracing property adds an additional layer of complexity. Traitor tracing techniques have been actively developed for more than three decades, yet they have always remained within the same framework - a single sender responsible for encrypting all the data. However, fine-grained decryption is particularly useful when data originates from multiple sources, allowing for joint computation on personal data. This leads to the concept of multi-client functional encryption (MCFE), where multiple concurrent senders independently encrypt their data while agreeing on the decryption of a specific function (e.g., a statistical measure) computed on the aggregated data, without revealing any additional information. In the era of cloud computing and big data, privacy-preserving joint computation is crucial, and tracing the source of any breach by dishonest participants becomes essential. Thus, in this paper we take the first step toward addressing the tracing problem in the general context of joint computation with multiple senders. Our contributions are twofold: - We propose the first tracing model in the context of multi-sender encryption, namely traceable multi-client functional encryption, which allows a pirate to extract secret information from both receivers and senders. Our model supports strong and naturally admissible decoders, removing artificial restrictions on the pirate decoder and thus addressing the shortcomings of existing traceable functional encryption schemes designed for the single-sender setting. - To achieve our conceptual objective, we build upon the recently introduced notion of strong admissibility for MCFE. Our main technical contribution is a generic compiler that transforms a large class of MCFE schemes with weak admissibility into schemes with strong admissibility. This compiler not only helps overcome existing challenges but may also be of general interest within the functional encryption domain. Finally, we present a concrete lattice-based scheme for inner-product functionalities that achieves post-quantum security under standard assumptions.
Last updated:  2025-02-26
How to Lose Some Weight - A Practical Template Syndrome Decoding Attack
Sebastian Bitzer, Jeroen Delvaux, Elena Kirshanova, Sebastian Maaßen, Alexander May, and Antonia Wachter-Zeh
We study the hardness of the Syndrome Decoding problem, the basis of most code-based cryptographic schemes, such as Classic McEliece, in the presence of side-channel information. We use ChipWhisperer equipment to perform a template attack on Classic McEliece running on an ARM Cortex-M4 and accurately classify the Hamming weights of consecutive 32-bit blocks of the secret error vector. With these weights at hand, we optimize Information Set Decoding algorithms. Technically, we show how to speed up information set decoding via a dimension reduction, additional parity-check equations, and an improved information set search, all derived from the Hamming weight information. Consequently, using our template attack, we can practically recover an error vector in dimension n=2197 in a matter of seconds. Without side-channel information, such an instance has a complexity of around 88 bits. We also estimate how our template attack affects the security of the proposed McEliece parameter sets. Roughly speaking, even an error-prone leak of our Hamming weight information leads, for n=3488, to a security drop of 89 bits.
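To make the leakage model concrete, here is a toy Python illustration (the error vector and its weight are invented for this sketch) of exactly the information the template attack extracts: the Hamming weight of each consecutive 32-bit block of the secret error vector.

```python
# Toy illustration of the leakage exploited by the attack: per-block (32-bit)
# Hamming weights of the secret error vector. Values here are made up.
import random

n, w = 2197, 37                    # length and total error weight (illustrative)
e = [0] * n
for pos in random.sample(range(n), w):
    e[pos] = 1

block_weights = [sum(e[i:i + 32]) for i in range(0, n, 32)]
assert sum(block_weights) == w
# Every weight-0 block certifies 32 error-free positions (dimension reduction),
# and known block weights act as extra parity-check-style constraints for ISD.
zero_blocks = sum(1 for bw in block_weights if bw == 0)
print(f"{zero_blocks} of {len(block_weights)} blocks carry no errors")
```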
Last updated:  2025-02-26
Black-Box Non-Interactive Zero Knowledge from Vector Trapdoor Hash
Pedro Branco, Arka Rai Choudhuri, Nico Döttling, Abhishek Jain, Giulio Malavolta, and Akshayaram Srinivasan
We present a new approach for constructing non-interactive zero-knowledge (NIZK) proof systems from vector trapdoor hashing (VTDH) -- a generalization of trapdoor hashing [Döttling et al., Crypto'19]. Unlike prior applications of trapdoor hash to NIZKs, we use VTDH to realize the hidden bits model [Feige-Lapidot-Shamir, FOCS'90], leading to black-box constructions of NIZKs. This approach gives us the following new results: - A statistically-sound NIZK proof system based on the hardness of decisional Diffie-Hellman (DDH) and learning parity with noise (LPN) over finite fields with inverse polynomial noise rate. This gives the first statistically sound NIZK proof system that is not based on LWE, bilinear maps, or factoring. - A dual-mode NIZK satisfying statistical zero-knowledge in the common random string mode and statistical soundness in the common reference string mode assuming the hardness of learning with errors (LWE) with polynomial modulus-to-noise ratio. This gives the first black-box construction of such a dual-mode NIZK under LWE. This improves the recent work of Waters (STOC'24), which relied on LWE with super-polynomial modulus-to-noise ratio and required a setup phase with private coins. The above constructions are black-box and satisfy the single-theorem zero-knowledge property. Building on the works of Feige et al. (FOCS'90) and Fischlin and Rohrbach (PKC'21), we upgrade these constructions (under the same assumptions) to satisfy the multi-theorem zero-knowledge property at the expense of making non-black-box use of cryptography.
Last updated:  2025-02-26
The Security of Hash-and-Sign with Retry against Superposition Attacks
Haruhisa Kosuge and Keita Xagawa
When considering security against quantum adversaries, it is important to consider not only the traditional existential unforgeability (EUF-CMA security) but also security against adversaries making quantum queries to the signing oracle: Plus-one security (PO security) and blind unforgeability (BU security), proposed by Boneh and Zhandry (Crypto 2013) and Alagic et al. (EUROCRYPT 2020), respectively. Hash-and-sign is one of the most common paradigms for constructing EUF-CMA-secure signature schemes in the quantum random oracle model, employing a trapdoor function and a hash function. It is known that its derandomized version is PO- and BU-secure. A variant of hash-and-sign, known as hash-and-sign with retry (HSwR), formulated by Kosuge and Xagawa (PKC 2024), is widespread since it allows for weakening the security assumptions of a trapdoor function. Unfortunately, it has not been known whether HSwR can achieve PO and BU security even with derandomization. In this paper, we apply a derandomization with bounded loops to HSwR. We demonstrate that HSwR can achieve PO and BU security through this approach. Since derandomization with bounded loops offers advantages in some implementations, our results support its wider adoption, including in NIST PQC candidates.
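A minimal sketch of the derandomization idea, assuming a toy success predicate in place of real trapdoor inversion (names, parameters, and the predicate are illustrative, not the paper's construction): the per-iteration salt is derived deterministically from the secret key, the message, and a counter, so repeated signing queries on the same message reuse identical randomness, and the retry loop is capped at a fixed bound.

    import hashlib, hmac

    BOUND = 64  # fixed loop bound replacing an unbounded retry

    def prf(key: bytes, data: bytes) -> bytes:
        return hmac.new(key, data, hashlib.sha256).digest()

    def toy_invertible(digest: bytes) -> bool:
        # Stand-in for "trapdoor inversion succeeds"; true w.p. about 1/4.
        return digest[0] < 64

    def sign(sk: bytes, msg: bytes):
        for ctr in range(BOUND):
            salt = prf(sk, msg + ctr.to_bytes(4, "big"))  # derandomized salt
            digest = hashlib.sha256(salt + msg).digest()
            if toy_invertible(digest):
                return salt, digest  # a real scheme outputs the preimage
        return None  # loop exhausted; negligible for a suitable BOUND

    print("signature found:", sign(b"secret key", b"hello") is not None)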
Last updated:  2025-02-26
Universally Composable Non-Interactive Zero-Knowledge from Sigma Protocols via a New Straight-line Compiler
Megan Chen, Pousali Dey, Chaya Ganesh, Pratyay Mukherjee, Pratik Sarkar, and Swagata Sasmal
Non-interactive zero-knowledge proofs (NIZK) are essential building blocks in threshold cryptosystems like multiparty signatures, distributed key generation, and verifiable secret sharing, allowing parties to prove correct behavior without revealing secrets. Furthermore, universally composable (UC) NIZKs enable seamless composition in larger cryptosystems. A popular way to construct NIZKs is to compile interactive protocols using the Fiat-Shamir transform. Unfortunately, a Fiat-Shamir transformed NIZK requires rewinding the adversary and is not straight-line extractable, making it at odds with UC. Using Fischlin's transform gives straight-line extractability, but at the expense of many repetitions of the underlying protocol, leading to poor concrete efficiency and difficulty in setting parameters. In this work, we propose a simple new transform that compiles a Sigma protocol for an algebraic relation into a UC-NIZK protocol. - Given a Sigma protocol for proving m algebraic statements over n witnesses, we construct a compiler to transform it into a straight-line extractable protocol using an additively homomorphic encryption scheme AHE. Our prover executes the Sigma protocol's prover once and computes 2n encryptions. The verification process involves running the Sigma protocol verifier once and then computing n encryptions, which are homomorphically verified against the prover-generated encryptions. - We apply the Fiat-Shamir transform to the above straight-line extractable Sigma protocol to obtain a UC-NIZK. We instantiate AHE using class group-based encryption where the public key of the encryption scheme is obliviously sampled using a suitable hash function. This yields a UC-NIZK protocol in the random oracle model.
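The homomorphic check above relies only on the additive homomorphism of AHE. The toy Paillier-style sketch below (illustrative parameters; the paper instantiates AHE from class groups instead) shows the property being used: ciphertexts multiply, plaintexts add.

    import math, random

    p, q = 1789, 1861                # toy primes; real use needs ~1024+ bits
    n = p * q
    n2 = n * n
    lam = math.lcm(p - 1, q - 1)
    g = n + 1

    def L(u):
        return (u - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)

    def enc(m):
        while True:
            r = random.randrange(1, n)
            if math.gcd(r, n) == 1:
                return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def dec(c):
        return (L(pow(c, lam, n2)) * mu) % n

    a, b = 1234, 5678
    c_sum = (enc(a) * enc(b)) % n2   # homomorphic addition of plaintexts
    assert dec(c_sum) == (a + b) % n
    print("Dec(Enc(a) * Enc(b)) =", dec(c_sum))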
Last updated:  2025-02-26
Adaptively Secure Fully Homomorphic Message Authentication Code with Pre-processable Verification
Jeongsu Kim and Aaram Yun
There has been remarkable progress in fully homomorphic encryption, ever since Gentry's first scheme. In contrast, fully homomorphic authentication primitives have received relatively less attention, despite the existence of some previous constructions. While there exist various schemes with different functionalities for fully homomorphic encryption, there are only a few options for fully homomorphic authentication. Moreover, there are even fewer options when considering two of the most important properties: adaptive security and pre-processable verification. To our knowledge, except for some concurrent works, achieving both properties requires the use of a nested construction, which involves homomorphically authenticating a homomorphic authentication tag of a message, making the scheme costly and complicated. In this work, we propose a dedicated scheme for (leveled) fully homomorphic message authentication code that is adaptively secure and has pre-processable verification. Leveraging the secrecy of the primitive, we demonstrate that a slight modification of a selectively secure (leveled) fully homomorphic signature scheme yields an adaptively secure (leveled) fully homomorphic message authentication code with pre-processable verification. Additionally, we introduce a novel notion and generic transform to enhance the security of a homomorphic message authentication code, which also exploits the secrecy of the primitive.
Last updated:  2025-02-26
: A Unified Framework for Computational Verifiable Secret Sharing
Karim Baghery
An -Verifiable Secret Sharing (VSS) scheme allows a dealer to share a secret among parties, s.t. all the parties can verify the validity of their shares and only a set of them, i.e., more than , can access the secret. In this paper, we present , a unified framework for constructing computational VSS schemes in the honest-majority setting, based on Shamir secret sharing. Notably, it does not rely on homomorphic commitments; instead, it requires a random oracle and any commitment scheme that, in addition to its core attributes of hiding and binding, may be homomorphic and/or Post-Quantum (PQ) secure. - When employing Discrete Logarithm (DL)-based commitments, enables the construction of two novel VSS schemes in the RO model, named and . Compared to the well-known Pedersen and Feldman VSS schemes, both and require (resp. ) exponentiations in the verification (resp. reconstruction) process, as opposed to (resp. ), albeit at the expense of a constant-factor slower sharing and increased communication. - By instantiating with a hash-based commitment, we obtain a novel PQ-secure VSS scheme, labeled . outperforms the recent protocol by Atapoor, Baghery, Cozzo, and Pedersen from Asiacrypt'23 by a constant factor in all metrics. can also be seen as an amplified version of the simple VSS scheme proposed by Gennaro, Rabin, and Rabin at PODC'98. - Building upon , we construct a Publicly VSS (PVSS) scheme, labeled , that can be seen as a new variant of Schoenmakers' scheme from Crypto'99. To this end, we first define the Polynomial Discrete Logarithm (PDL) problem, as a generalization of DL, and then build a variant of the Schnorr Proof of Knowledge (PoK) scheme based on the new hardness assumption. We think the PDL relation and the associated PoK scheme can be of independent interest for Shamir-based threshold protocols. We believe is general enough to be employed in various contexts such as lattices, isogenies, and an extensive array of practical use cases.
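For context, the classic Feldman VSS that the framework is compared against uses a homomorphic (discrete-log) commitment: each party checks its Shamir share against public commitments to the polynomial's coefficients. A toy-parameter sketch (real deployments use cryptographic group sizes):

    import random

    p, q, g = 23, 11, 4   # p = 2q + 1; g generates the order-q subgroup
    t = 2                 # polynomial degree; t + 1 shares reconstruct

    def share(secret, n_parties):
        coeffs = [secret] + [random.randrange(q) for _ in range(t)]
        commits = [pow(g, a, p) for a in coeffs]      # public C_j = g^{a_j}
        shares = {i: sum(a * pow(i, j, q) for j, a in enumerate(coeffs)) % q
                  for i in range(1, n_parties + 1)}
        return shares, commits

    def verify(i, s_i, commits):
        # Check g^{s_i} == prod_j C_j^{i^j}, using the commitments' homomorphism.
        rhs = 1
        for j, C in enumerate(commits):
            rhs = (rhs * pow(C, pow(i, j), p)) % p
        return pow(g, s_i, p) == rhs

    shares, commits = share(secret=7, n_parties=5)
    assert all(verify(i, s, commits) for i, s in shares.items())
    print("all shares verified against the public commitments")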
Last updated:  2025-02-26
The Round Complexity of Black-Box Post-Quantum Secure Computation
Rohit Chatterjee, Xiao Liang, Omkant Pandey, and Takashi Yamakawa
We study the round-complexity of secure multi-party computation (MPC) in the post-quantum regime where honest parties and communication channels are classical but the adversary can be a quantum machine. Our focus is on the black-box setting where both the construction as well as the security reduction are black-box in nature. In this context, Chia, Chung, Liu, and Yamakawa [FOCS'22] demonstrated the infeasibility of achieving standard simulation-based security within constant rounds, unless . This outcome leaves crucial feasibility questions unresolved. Specifically, it remains unknown whether black-box constructions are achievable within polynomial rounds; additionally, the existence of constant-round constructions with respect to -, a relaxed yet useful alternative to the standard simulation notion, remains unestablished. This work provides positive answers to the aforementioned questions. We introduce the first black-box construction for post-quantum MPC in polynomial rounds, from the minimal assumption of post-quantum semi-honest oblivious transfers. In the two-party scenario, our construction requires only rounds. These results have already found application in the oracle separation between classical-communication quantum MPC and in the recent work of Kretschmer, Qian, and Tal [STOC'25]. As for -simulation, Chia, Chung, Liang, and Yamakawa [CRYPTO'22] resolved the issue for the two-party setting, leaving the general multi-party setting as an open question. We complete the picture by presenting the first black-box and constant-round construction in the multi-party setting. Our construction can be instantiated using various standard post-quantum primitives including lossy public-key encryption, linearly homomorphic public-key encryption, or dense cryptosystems. En route, we obtain a black-box and constant-round post-quantum commitment that achieves a weaker version of the standard 1-many non-malleability, from the minimal assumption of post-quantum one-way functions. Besides its utility in our post-quantum MPC construction, this commitment scheme also reduces the assumption used in the lower bound of quantum parallel repetition recently established by Bostanci, Qian, Spooner, and Yuen [STOC'24]. We anticipate that it will find more applications in the future.
Last updated:  2025-02-26
An Efficient Quantum Algorithm for the Traveling Salesman Problem
Anant Sharma, Nupur Deshpande, Sanchita Ghosh, Sreetama Das, and Shibdas Roy
The traveling salesman problem is the problem of finding the shortest route in a network of cities that a salesman needs to travel to cover all the cities, without visiting the same city more than once. This problem is known to be -hard, with a brute-force complexity of or for number of cities. This problem is equivalent to finding the shortest Hamiltonian cycle in a given graph, if at least one Hamiltonian cycle exists in it. Quantum algorithms for this problem typically provide only a quadratic speedup, using Grover's search, thereby having a complexity of or . We present a bounded-error quantum polynomial-time (BQP) algorithm for solving the problem, providing an exponential speedup. The overall complexity of our algorithm is , where the errors are , and is the not-too-large condition number of the matrix encoding all Hamiltonian cycles.
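For reference, the classical brute-force baseline that the quoted complexities refer to enumerates all tours from a fixed start city:

    from itertools import permutations

    def tsp_brute_force(dist):
        # Exhaustively search the (n-1)! tours starting at city 0 and return
        # the shortest Hamiltonian cycle; this is the classical baseline.
        n = len(dist)
        best_tour, best_cost = None, float("inf")
        for perm in permutations(range(1, n)):
            tour = (0,) + perm
            cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
            if cost < best_cost:
                best_tour, best_cost = tour, cost
        return best_tour, best_cost

    dist = [[0, 2, 9, 10],
            [1, 0, 6, 4],
            [15, 7, 0, 8],
            [6, 3, 12, 0]]
    print(tsp_brute_force(dist))  # shortest cycle in the toy graph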
Last updated:  2025-02-26
Universal Computational Extractors and Multi-Bit AIPO from Lattice Assumptions
Yilei Chen and Xinyu Mao
We put forth a new primitive called obliviously programmable function (OPF) to construct two random-oracle-like primitives: • Universal computational extractors (UCEs), introduced by Bellare, Hoang, and Keelveedhi [BHK13], can securely replace random oracles in various applications, including KDM-secure encryption, deterministic encryption, RSA-OAEP, universal hardcore bits, etc. • Multi-bit point obfuscation with auxiliary input (MB-AIPO). It enables upgrading CPA-secure public-key encryption (PKE) into a CCA-secure one [MH14] and serves as a tool to instantiate the random oracles used in the Fujisaki-Okamoto transform for lossy PKEs [MOZ23]. Despite their usefulness, constructing UCEs and MB-AIPO in the standard model is challenging. The existing constructions of both primitives [BM14a, BM14b] use indistinguishability obfuscation (iO) plus point functions with auxiliary input (AIPO). OPF can replace the use of iO in the constructions of UCE and MB-AIPO. We use OPF plus AIPO to construct • UCE with one query for strongly unpredictable sources, • MB-AIPO for strongly unpredictable distributions, and • a PKE scheme that is IND-CPA secure in the presence of computationally uninvertible leakage on the secret key. We then construct OPF for NC1 circuits from lattice assumptions based on the GGH15 encodings [GGH15], without using iO. In sum, we give new constructions of the above three primitives under the following assumptions: (1) LWE with subexponential hardness; (2) private-coin evasive LWE assumption for specific samplers; (3) the existence of AIPO in NC1. As a byproduct, we construct an ‘NC1-universal AIPO’ under assumptions (1) and (2).
Last updated:  2025-02-26
Predicate Encryption from Lattices: Enhanced Compactness and Refined Functionality
Yuejun Wang, Baocang Wang, Qiqi Lai, and Huaxiong Wang
In this work, we explore the field of lattice-based Predicate Encryption (PE), with a focus on enhancing compactness and refining functionality. First, we present a more compact bounded collusion predicate encryption scheme compared to previous constructions, significantly reducing both the per-unit expansion and fixed overhead, while maintaining an optimal linear blow-up proportional to . Next, we propose a Predicate Inner Product Functional Encryption (P-IPFE) scheme based on our constructed predicate encryption scheme. P-IPFE preserves the attribute-hiding property while enabling decryption to reveal only the inner product between the key and message vectors, rather than the entire message as in traditional PE. Our P-IPFE scheme also achieves bounded collusion resistance while inheriting the linear compactness optimized in the underlying PE scheme. Additionally, it supports any polynomial-sized and bounded-depth circuits, thereby extending beyond the inner-product predicate class in prior works. Furthermore, all the proposed schemes achieve selective fully attribute-hiding security in the simulation-based model, and can therefore further attain semi-adaptive security by adopting existing upgrading techniques.
Last updated:  2025-02-26
A Robust Variant of ChaCha20-Poly1305
Tim Beyne, Yu Long Chen, and Michiel Verbauwhede
The ChaCha20-Poly1305 AEAD scheme is widely used as an alternative for AES-GCM on platforms without AES hardware instructions. Although recent analysis by Degabriele et al. shows that ChaCha20-Poly1305 provides adequate security in the conventional multiuser model, the construction is totally broken when a single nonce is repeated – a real-world scenario that can occur due to faulty implementations or the desire to use random nonces. We present a new nonce-misuse resistant and key-committing authenticated encryption scheme, called ChaCha20-Poly1305-PSIV, that is based on carefully combining the ChaCha20-Poly1305 building blocks into the NSIV paradigm proposed by Peyrin and Seurin (CRYPTO 2016) without performance loss. We analyze the security of the underlying mode PSIV in the multi-user faulty-nonce model assuming that the underlying permutation is ideal, and prove its key-committing security in the cmt-1 model. Rust and C implementations are provided, and benchmarks confirm that performance is comparable to the ChaCha20-Poly1305 implementation in libsodium. In terms of security and efficiency (without hardware support), our proposal compares favorably to AES-GCM-SIV. Since we reuse the ChaCha20-Poly1305 building blocks, we expect ChaCha20-Poly1305-PSIV to benefit from existing analysis and to be easy to deploy in practice.
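The nonce-repetition failure mode of the standard scheme is easy to reproduce with the pyca/cryptography package (this demonstrates the problem PSIV is designed to resist, not the new construction itself): reusing a nonce reuses the ChaCha20 keystream, so XORing two ciphertexts reveals the XOR of the plaintexts.

    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    key = ChaCha20Poly1305.generate_key()
    nonce = b"\x00" * 12          # the same nonce used twice: broken!
    aead = ChaCha20Poly1305(key)

    p1 = b"attack at dawn!!"
    p2 = b"retreat at noon!"
    c1 = aead.encrypt(nonce, p1, None)
    c2 = aead.encrypt(nonce, p2, None)

    # Strip the 16-byte Poly1305 tags and XOR the ciphertext bodies.
    xor = bytes(a ^ b for a, b in zip(c1[:-16], c2[:-16]))
    assert xor == bytes(a ^ b for a, b in zip(p1, p2))
    print("c1 XOR c2 == p1 XOR p2:", xor.hex())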
Last updated:  2025-02-25
Vanishing Short Integer Solution, Revisited: Reductions, Trapdoors, Homomorphic Signatures for Low-Degree Polynomials
Kalle Jyrkinen and Russell W. F. Lai
The vanishing short integer solution (vSIS) assumption [Cini-Lai-Malavolta, Crypto'23], in its simplest form, asserts the hardness of finding a polynomial with short coefficients which vanishes at a given random point. While vSIS has proven to be useful in applications such as succinct arguments, not much is known about its theoretical hardness. Furthermore, without the ability to generate a hard instance together with a trapdoor, the applicability of vSIS is significantly limited. We revisit the vSIS assumption focusing on the univariate single-point constant-degree setting, which can be seen as a generalisation of the (search) NTRU problem. In such a setting, we show that the vSIS problem is as hard as finding the shortest vector in certain ideal lattices. We also show how to generate a random vSIS instance together with a trapdoor, under the (decision) NTRU assumption. Interestingly, a vSIS trapdoor allows sampling polynomials with short coefficients which evaluate to any given value at the public point. By exploiting the multiplicativity of the polynomial ring, we use vSIS trapdoors to build a new homomorphic signature scheme for low-degree polynomials.
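In the simplest setting described above, the problem can be stated as follows (the notation R_q, d, and the bound beta are mine, chosen for illustration):

    Let $R_q$ be a polynomial ring with coefficients modulo $q$, and let
    $v \gets R_q$ be a uniformly random point. The univariate single-point
    vSIS problem asks to find a nonzero polynomial
    $f(X) = \sum_{i=0}^{d} f_i X^i$ of constant degree $d$ with short
    coefficients $\|f_i\| \le \beta$ such that
    \[
      f(v) = \sum_{i=0}^{d} f_i \, v^i \equiv 0 \pmod{q}.
    \]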
Last updated:  2025-02-25
A Note on Zero-Knowledge Simulator of the CROSS Identification Protocol
Shai Levin
We point out a flaw in the zero-knowledge simulator of the CROSS identification protocol, , which allows a distinguisher to distinguish real and simulated transcripts given access to the witness. Moreover, we show that the real and simulated transcripts are not statistically indistinguishable, and therefore the protocol can only satisfy weak computational (rather than strong, statistical, or perfect) Honest Verifier Zero-knowledge. This issue is still present in version 2.0, updated on January 31, 2025, which resolves the security losses attained via the attacks of [BLP+25].
Last updated:  2025-02-25
The Complexity of Memory Checking with Covert Security
Elette Boyle, Ilan Komargodski, and Neekon Vafa
A memory checker is an algorithmic tool used to certify the integrity of a database maintained on a remote, unreliable, computationally bounded server. Concretely, it allows a user to issue instructions to the server and after every instruction, obtain either the correct value or a failure (but not an incorrect answer) with high probability. A recent result due to Boyle, Komargodski, and Vafa (BKV, STOC '24) showed a tradeoff between the size of the local storage and the number of queries the memory checker makes to the server upon every logical instruction. Specifically, they show that every non-trivial memory checker construction with inverse-polynomial soundness and local storage at most must make queries, and this is tight up to constant factors given known constructions. However, an intriguing question is whether natural relaxations of the security guarantee could allow for more efficient constructions. We consider and adapt the notion of covert security to the memory checking context, wherein the adversary can effectively cheat while taking the risk of being caught with constant probability. Notably, BKV's lower bound does not apply in this setting. We close this gap and prove that overhead is unavoidable even in the covert security setting. Our lower bound applies to any memory checker construction, including ones that use randomness and adaptivity and ones that rely on cryptographic assumptions and/or the random oracle model, as long as they satisfy a natural "read-only reads" property. This property requires a memory checker not to modify contents of the database or local storage in the execution of a logical read instruction.
Last updated:  2025-02-25
Efficient Permutation Correlations and Batched Random Access for Two-Party Computation
Stanislav Peceny, Srinivasan Raghuraman, Peter Rindal, and Harshal Shah
In this work we formalize the notion of a two-party permutation correlation s.t. for a random permutation of elements and vectors . This correlation can be viewed as an abstraction and generalization of the Chase et al. (Asiacrypt 2020) share translation protocol. We give a systematization of knowledge for how such a permutation correlation can be derandomized to allow the parties to perform a wide range of oblivious permutations of secret-shared data. This systematization immediately enables the translation of various popular honest-majority protocols to be efficiently instantiated in the two-party setting, e.g. collaborative filtering, sorting, database joins, graph algorithms, and many more. We give two novel protocols for efficiently generating a random permutation correlation. The first uses MPC-friendly PRFs to generate a correlation of elements, each of size bits, with bit-OTs, time, communication, and only 3 rounds including setup. Similar asymptotics previously required relatively expensive public-key cryptography, e.g. Paillier or LWE. Our protocol implementation for requires just 7 seconds & bits of communication, a respective 40 & improvement on the LWE solution of Juvekar et al. (CCS 2018). The second protocol is based on pseudo-random correlation generators and achieves an overhead that is sublinear in the string length , i.e. the communication and number of OTs is . The overhead of the latter protocol has larger hidden constants, and therefore is more efficient only when long strings are permuted, e.g. in graph algorithms. Finally, we present a suite of highly efficient protocols based on permutations for performing various batched random access operations. These include the ability to extract a hidden subset of a secret-shared list. More generally, we give ORAM-like protocols for obliviously reading and writing from a list in a batched manner. We argue that this suite of batched random access protocols should be a first-class primitive in the MPC practitioner's toolbox.
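Since the correlation's formula is elided above, the sketch below fixes one plausible form in my own notation (the paper's exact definition may differ): P0 holds a random permutation pi and a mask delta, P1 holds random A and B = pi(A) + delta. Derandomizing to P1's chosen vector x then takes a single masked message and leaves the parties with additive shares of pi(x).

    import random

    MOD = 2**32

    def setup(n):
        # Correlated randomness, as a trusted dealer would sample it.
        pi = list(range(n)); random.shuffle(pi)
        A = [random.randrange(MOD) for _ in range(n)]
        delta = [random.randrange(MOD) for _ in range(n)]
        B = [(A[pi[i]] + delta[i]) % MOD for i in range(n)]  # B = pi(A) + delta
        return (pi, delta), (A, B)   # P0's and P1's halves

    def derandomize(p0, p1, x):
        (pi, delta), (A, B) = p0, p1
        m = [(xi - ai) % MOD for xi, ai in zip(x, A)]  # P1 -> P0, uniformly masked
        y0 = [(m[pi[i]] - delta[i]) % MOD for i in range(len(x))]  # P0's share
        return y0, B                                               # P1's share is B

    n = 5
    p0, p1 = setup(n)
    x = [random.randrange(MOD) for _ in range(n)]
    y0, y1 = derandomize(p0, p1, x)
    pi = p0[0]
    assert [(a + b) % MOD for a, b in zip(y0, y1)] == [x[pi[i]] for i in range(n)]
    print("parties hold additive shares of pi(x)")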
Last updated:  2025-02-25
Random Number Generation from Pulsars
Hayder Tirmazi
Pulsars exhibit signals with precise inter-arrival times that are on the order of milliseconds to seconds depending on the individual pulsar. There is subtle variation in the timing of pulsar signals, primarily due to the presence of gravitational waves, intrinsic variance in the period of the pulsar, and errors in the realization of Terrestrial Time (TT). Traditionally, these variations are dismissed as noise in high-precision timing experiments. In this paper, we show that these variations serve as a natural entropy source for the creation of Random Number Generators (RNG). We also explore the effects of using randomness extractors to increase the entropy of random bits extracted from Pulsar timing data. To evaluate the quality of the Pulsar RNG, we model its entropy as a -source and use well-known cryptographic results to show its closeness to a theoretically ideal uniformly random source. To remain consistent with prior work, we also show that the Pulsar RNG passes well-known statistical tests such as the NIST test suite.
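A minimal sketch of the extract-then-condition pipeline, using simulated residuals and a cryptographic hash as a simple stand-in for the randomness extractors discussed above (all parameters illustrative; real data comes from pulse arrival times):

    import hashlib, random

    def residuals(n, jitter=1e-7):
        # Simulated timing residuals (seconds) around the nominal period.
        return [random.gauss(0, jitter) for _ in range(n)]

    def extract_bits(res, out_bytes=32):
        # Quantize residuals to integers, concatenate, and hash to condense
        # the entropy into a near-uniform string.
        raw = b"".join(int(r * 1e12).to_bytes(8, "big", signed=True)
                       for r in res)
        return hashlib.sha256(raw).digest()[:out_bytes]

    print(extract_bits(residuals(1024)).hex())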
Last updated:  2025-02-25
Lattice-based Proof-Friendly Signatures from Vanishing Short Integer Solutions
Adrien Dubois, Michael Klooß, Russell W. F. Lai, and Ivy K. Y. Woo
Efficient anonymous credentials are typically constructed by combining proof-friendly signature schemes with compatible zero-knowledge proof systems. Inspired by pairing-based proof-friendly signatures such as Boneh-Boyen (BB) and Boneh-Boyen-Shacham (BBS), we propose a wide family of lattice-based proof-friendly signatures based on variants of the vanishing short integer solution (vSIS) assumption [Cini-Lai-Malavolta, Crypto'23]. In particular, we obtain natural lattice-based adaptations of BB and BBS which, similar to their pairing-based counterparts, admit nice algebraic properties. [Bootle-Lyubashevsky-Nguyen-Sorniotti, Crypto'23] (BLNS) recently proposed a framework for constructing lattice-based proof-friendly signatures and anonymous credentials, based on another new lattice assumption called parametrised by a fixed function , with focus on being the binary decomposition. We introduce a generalised framework, called , with a keyed and probabilistic function . For example, picking with key for short ring element leads to algebraic and thus proof-friendly signatures. To better gauge the robustness and proof-friendliness of , we consider what happens when the inputs to are chosen selectively (or even adaptively) by the adversary, and the behaviour under relaxed norm checks. While bit decomposition quickly becomes insecure, our proposed function families seem robust.
Last updated:  2025-02-25
Haven++: Batched and Packed Dual-Threshold Asynchronous Complete Secret Sharing with Applications
Nicolas Alhaddad, Mayank Varia, and Ziling Yang
Asynchronous complete secret sharing (ACSS) is a foundational primitive in the design of distributed algorithms and cryptosystems that require confidentiality. ACSS permits a dealer to distribute a secret to a collection of servers so that everyone holds shares of a polynomial containing the dealer's secret. This work contributes a new ACSS protocol, called Haven++, that uses packing and batching to make asymptotic and concrete advances in the design and application of ACSS for large secrets. Haven++ allows the dealer to pack multiple secrets in a single sharing phase, and to reconstruct either one or all of them later. For even larger secrets, we contribute a batching technique to amortize the cost of proof generation and verification across multiple invocations of our protocol. The result is an asymptotic improvement in the worst-case amortized communication and computation complexity, both for ACSS itself and for its application to asynchronous distributed key generation. Our ADKG based on Haven++ achieves, for the first time, an optimal worst-case amortized communication complexity of without a trusted setup. To show the practicality of Haven++, we implement it and find that it outperforms the work of Yurek et al. (NDSS 2022) by more than an order of magnitude when there are malicious, faulty parties.
Last updated:  2025-02-25
Dynamic Decentralized Functional Encryptions from Pairings in the Standard Model
Duy Nguyen
Dynamic Decentralized Functional Encryption (DDFE), introduced by Chotard et al. (CRYPTO'20), represents a robust generalization of (Multi-Client) Functional Encryption. It allows users to dynamically join and contribute private inputs to individually controlled joint functions without requiring a trusted authority. Recently, Shi and Vanjani (PKC'23) proposed the first Multi-Client Functional Encryption scheme for function-hiding inner products (FH-IP) without relying on random oracles. Unfortunately, their construction still requires a trusted key authority, leaving open the question of whether a full-fledged FH-IP-DDFE can exist in the standard model. In this work, we answer this question affirmatively by introducing Updatable Pseudorandom Zero Sharing, a novel concept that provides both the critical functionality and security properties needed to construct secure DDFE schemes in the standard model. Our second contribution is a novel proof strategy, which preserves adaptive security when transforming any functional encryption scheme for FH-IP into FH-IP-DDFE. Together, these two techniques enable a modular construction of FH-IP-DDFE that is secure against adaptive message and key queries in the standard model. Additionally, our pseudorandom zero-sharing scheme is highly versatile, enabling the first DDFE for attribute-weighted sums in the standard model, complementing the recent ROM-based construction by Agrawal et al. (CRYPTO'23).
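For intuition, a minimal non-updatable pseudorandom zero-sharing can be built from pairwise PRF keys, with each pair's contributions cancelling; the updatable notion introduced in the paper is more involved. A toy sketch (fixed demo keys for brevity; real schemes use secret random keys):

    import hashlib, hmac

    MOD = 2**64

    def prf(key: bytes, label: bytes) -> int:
        return int.from_bytes(
            hmac.new(key, label, hashlib.sha256).digest()[:8], "big") % MOD

    def zero_share(i, n, pair_keys, label):
        # Party i adds +PRF for each partner j > i and -PRF for j < i,
        # so across all parties every pairwise term cancels.
        share = 0
        for j in range(n):
            if j == i:
                continue
            k = pair_keys[frozenset((i, j))]
            share += prf(k, label) if j > i else -prf(k, label)
        return share % MOD

    n = 4
    pair_keys = {frozenset((i, j)): bytes([i, j]) * 16
                 for i in range(n) for j in range(i + 1, n)}
    shares = [zero_share(i, n, pair_keys, b"round-1") for i in range(n)]
    assert sum(shares) % MOD == 0
    print("shares sum to zero mod 2^64, with no interaction")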
Last updated:  2025-02-25
Commit-and-Prove System for Vectors and Applications to Threshold Signing
Anja Lehmann and Cavit Özbay
Multi-signatures allow combining several individual signatures into a compact one and verifying it against a short aggregated key. Compared to threshold signatures, multi-signatures enjoy non-interactive key generation but give up on the threshold setting. Recent works by Das et al. (CCS'23) and Garg et al. (S&P'24) show how multi-signatures can be turned into schemes that enable efficient verification when an ad hoc threshold -- determined only at verification -- is satisfied. This allows keeping the simple key generation of multi-signatures and supporting flexible threshold settings in the signing process later on. Both works use the same idea of combining BLS multi-signatures with inner-product proofs over committed keys. Das et al. give a somewhat generic proof from both building blocks, which we show to be flawed, whereas Garg et al. give a direct proof for the combined construction in the algebraic group model. In this work, we identify the common blueprint used in both works and abstract the proof-based approach through the building block of a commit-and-prove system for vectors (CP). We formally define a flexible set of security properties for the CP system and show how it can be securely combined with a multi-signature to yield a signature with ad hoc thresholds. Our scheme also lifts threshold signatures into the multiverse setting recently introduced by Baird et al. (S&P'23), which allows signers to re-use their long-term keys across several groups. The challenge in the generic construction is to express -- and realize -- the combination of homomorphic proofs and commitments (needed to realize flexible thresholds over fixed group keys) and their simulation extractability (needed in the threshold signature security proof). We finally show that a CP instantiation closely following the ideas of Das et al. can be proven secure, but requires a new flexible-base DL-assumption to do so.
Last updated:  2025-02-25
Towards Leakage-Resilient Ratcheted Key Exchange
Daniel Collins, Simone Colombo, and Sina Schaeffler
Ratcheted key exchange (RKE) is at the heart of modern secure messaging, enabling protocol participants to continuously update their secret material to protect against full state exposure through forward security (protecting past secrets and messages) and post-compromise security (recovering from compromise). However, many practical attacks only provide the adversary with partial access to a party's secret state, an attack vector studied under the umbrella of leakage resilience. Existing models of RKE provide suboptimal guarantees under partial leakage due to inherent limitations in security under full state exposure. In this work, we initiate the study of leakage-resilient ratcheted key exchange that provides typical guarantees under full state exposure and additional guarantees under partial state exposure between ratchets of the protocol. We consider unidirectional ratcheted key exchange (URKE) where one party acts as the sender and the other as receiver. Building on the notions introduced by Balli, Rösler and Vaudenay (ASIACRYPT 2020), we formalise a key indistinguishability game under randomness manipulation and bounded leakage (KIND), which in particular enables the adversary to continually leak a bounded amount of the sender's state between honest send calls. We construct a corresponding protocol from a key-updatable key encapsulation mechanism (kuKEM) and a leakage-resilient one-time MAC. By instantiating this MAC in the random oracle model (ROM), results from Balli, Rösler and Vaudenay imply that in the ROM, kuKEM and KIND-secure URKE are equivalent, i.e., can be built from each other. To address the strong limitations that key indistinguishability imposes on the adversary, we formalise a one-wayness game that also permits leakage on the receiver. We then propose a corresponding construction from leakage-resilient kuKEM, which we introduce, and a leakage-resilient one-time MAC. We further show that leakage-resilient kuKEM and one-way-secure URKE are equivalent in the ROM, highlighting the cost that strong one-way security entails. Our work opens exciting directions for developing leakage-resilient messaging protocols.
Last updated:  2025-02-25
On Linear Equivalence, Canonical Forms, and Digital Signatures
Tung Chou, Edoardo Persichetti, and Paolo Santini
Given two linear codes, the code equivalence problem asks to find an isometry mapping one code into the other. The problem can be described in terms of group actions and, as such, finds a natural application in signatures derived from a Zero-Knowledge Proof system. A recent paper, presented at Asiacrypt 2023, showed how a proof of equivalence can be significantly compressed by describing how the isometry acts only on an information set. Still, the resulting signatures are far from being optimal, as the size for a witness to this relation is still significantly larger than the theoretical lower bound, which is twice the security parameter. In this paper, we fill this gap and propose a new notion of equivalence, which leads to a drastically reduced witness size. For many cases, the resulting size is exactly the optimal one given by the lower bound. We achieve this by introducing the framework of canonical representatives, that is, representatives for classes of codes which are equivalent under some notion of equivalence. We propose new notions of equivalence which encompass and further extend all the existing ones: this allows us to identify broader classes of equivalent codes, for which the equivalence can be proved with a very compact witness. We associate these new notions to a specific problem, called Canonical Form Linear Equivalence Problem (CF-LEP), which we show to be as hard as the original one (when random codes are considered), providing reductions in both directions. As an added consequence, this reduction leads to a new solver for the code equivalence problem, which is the fastest solver when the finite field size is large enough. Finally, we show that our framework yields a remarkable reduction in signature size when compared to the LESS submission. Our variant is able to obtain very compact signatures, around 2 KB or less, which are among the smallest in the code-based setting.
Last updated:  2025-02-25
Delayed-Input Multi-Party Computation
Michele Ciampi, Jure Sternad, and Yu Xia
In this work, we consider the setting where the process of securely evaluating a multi-party functionality is divided into two phases: offline (or preprocessing) and online. The offline phase is independent of the parties’ inputs, whereas the online phase does require the knowledge of the inputs. We consider the problem of minimizing the rounds of communication required in the online phase and propose a round-preserving compiler that can turn a broad class of multi-party computation (MPC) protocols into protocols in which only the last two rounds are input-dependent. Our compiler can be applied to a broad class of MPC protocols, and in particular to all existing round-optimal MPC protocols. All our results assume no setup and are proven in the dishonest majority setting with black-box simulation. As part of our contribution, we propose a new definition we call Multi-Party Computation with Adaptive-Input Selection, which allows the distinguisher to craft the inputs the honest parties should use during the online phase, adaptively based on the offline phase. This new definition is needed to argue that not only are the messages of the offline phase input-independent but also that security holds even in the stronger (and realistic) adversarial setting where the inputs may depend on some of the offline-phase protocol messages. We argue that this is the definition that any protocol should satisfy to be securely used while preprocessing part of the rounds. We are the first to study this definition in a setting where there is no setup and the majority of the parties can be corrupted. Prior definitions have been presented in the Universal Composable framework, which is unfortunately not well suited for our setting (i.e., no setup and dishonest majority). As a corollary, we obtain the first four-round (which is optimal) MPC protocol where the first two rounds can be preprocessed, and whose security holds against adaptive-input selection.
Last updated:  2025-02-25
Stronger Security for Threshold Blind Signatures
Anja Lehmann, Phillip Nazarian, and Cavit Özbay
Blind signatures allow a user to obtain a signature from an issuer in a privacy-preserving way: the issuer neither learns the signed message, nor can link the signature to its issuance. The threshold version of blind signatures further splits the secret key among n issuers, and requires the user to obtain at least t ≤ n signature shares in order to derive the final signature. Security should then hold as long as at most t − 1 issuers are corrupt. Security for blind signatures is expressed through the notion of one-more unforgeability and demands that an adversary must not be able to produce more signatures than what is considered trivial after its interactions with the honest issuer(s). While one-more unforgeability is well understood for the single-issuer setting, the situation is much less clear in the threshold case: due to the blind issuance, counting which interactions can yield a trivial signature is a challenging task. Existing works bypass that challenge by using simplified models that do not fully capture the expectations of the threshold setting. In this work, we study the security of threshold blind signatures and propose a framework of one-more unforgeability notions where the adversary can corrupt c < t issuers. Our model is generic enough to capture both interactive and non-interactive protocols, and it provides a set of natural properties with increasingly stronger guarantees, giving the issuers gradually more control over how their shares can be combined. As a point of comparison, we reconsider the existing threshold blind signature models and show that their security guarantees are weaker and less clearly comprehensible than they seem. We then re-assess the security of existing threshold blind signature schemes – BLS-based and Snowblind – in our framework, and show how to lift them to provide stronger security.
Last updated:  2025-02-25
Practical Zero-Trust Threshold Signatures in Large-Scale Dynamic Asynchronous Networks
Offir Friedman, Avichai Marmor, Dolev Mutzari, Yehonatan Cohen Scaly, and Yuval Spiizer
Threshold signatures have become a critical tool in cryptocurrency systems, offering enhanced security by distributing the signing process among multiple signers. In this work, we distribute this process between a client and a permissionless decentralized blockchain, and present novel protocols for ECDSA and EdDSA/Schnorr signatures in this setting. Typical threshold access architectures used by trusted custodians suffer from the honeypot problem, wherein the more assets the custodian holds, the greater the incentive to compromise it. Implementing threshold signatures over permissionless blockchains poses a few challenges. First, existing networks typically work over an asynchronous reliable broadcast communication channel. Accordingly, our protocol is implemented over such a channel. As a result, it also benefits from identifiable abort, public verifiability, and guaranteed output delivery, and the client benefits from censorship resistance of blockchain systems. Second, upon signing each block, the participating quorum may dynamically change and is post-determined. Therefore, we design a fluid protocol that supports a post-determined dynamic quorum in each communication round, thereby complying with existing broadcast channel implementations. Third, in permissionless networks, parties may join, leave, and change their stake. Therefore, we offer protocols for network reconfiguration, with complexity independent of the number of clients in the system, and our protocol efficiently supports a weighted threshold access structure for the network. Specifically, the complexity of distributed key generation and presign only depends on the number of parties and not on the overall weight, and the amortized cost of sign only depends on the individual weight. Furthermore, our protocol introduces key improvements, including the removal of zero-knowledge proofs towards the client, and presigns with a non-interactive client. For Schnorr, the presigns are client-independent, and can be collected by the blockchain in a common pool, available for all clients in the system. These optimizations reduce communication overhead and improve the system's ability to handle traffic spikes during high-demand periods. Our protocol is UC-secure, and is therefore natively designed for multiple clients to use the system in parallel. Notably, we propose a novel assumption, Slightly-Enhanced ECDSA Unforgeability, offering concrete security for 256-bit elliptic curves for threshold ECDSA with support for parallel execution of presigns. In addition to securing cryptocurrency wallets, we demonstrate how our protocol enables various cross-chain applications, such as decentralized bridges, future transactions, and wallet transfer. Our system is designed for interoperability across multiple blockchains, enhancing security, scalability, and flexibility for decentralized finance (DeFi) ecosystems.
Last updated:  2025-02-25
Efficient NIZK Arguments with Straight-Line Simulation and Extraction
Michele Ciampi and Ivan Visconti
Non-interactive zero-knowledge (NIZK) arguments allow a prover to convince a verifier about the truthfulness of an NP-statement by sending just one message, without disclosing any additional information. In several practical scenarios, the Fiat-Shamir transform is used to convert an efficient constant-round public-coin honest-verifier zero-knowledge proof system into an efficient NIZK argument system. This approach is provably secure in the random oracle model, but it crucially requires the programmability of the random oracle, and extraction works through rewinding. The works of Lindell [TCC 2015] and Ciampi et al. [TCC 2016] proposed efficient NIZK arguments with non-programmable random oracles along with a programmable common reference string. In this work we show an efficient NIZK argument with straight-line simulation and extraction that relies on features that alone are insufficient to construct NIZK arguments (regardless of efficiency). More specifically, we consider the notion of quasi-polynomial time simulation proposed by Pass in [EUROCRYPT 2003] and combine it with simulation and extraction with non-programmable random oracles, thus obtaining a NIZK argument of knowledge where neither the zero-knowledge simulator nor the argument-of-knowledge extractor needs to program the random oracle. Still, both the simulator and the extractor are straight-line. Our construction uses as a building block a modification of Fischlin’s transform [CRYPTO 2005] and combines it with the concept of dense puzzles introduced by Baldimtsi et al. [ASIACRYPT 2016]. We also argue that our NIZK argument system inherits the efficiency features of Fischlin’s transform, which represents the main advantage of Fischlin’s protocol over existing schemes.
Last updated:  2025-02-25
On Active Attack Detection in Messaging with Immediate Decryption
Khashayar Barooti, Daniel Collins, Simone Colombo, Loı̈s Huguenin-Dumittan, and Serge Vaudenay
The widely used Signal protocol provides protection against state exposure attacks through forward security (protecting past messages) and post-compromise security (for restoring security). It supports immediate decryption, allowing messages to be re-ordered or dropped at the protocol level without affecting correctness. In this work, we consider strong active attack detection for secure messaging with immediate decryption, where parties are able to immediately detect active attacks under certain conditions. We first consider in-band active attack detection, where participants who have been actively compromised but are still able to send a single message to their partner can detect the compromise. We propose two complementary notions to capture security, and present a compiler that provides security with respect to both notions. Our notions generalise existing work (RECOVER security) which only supported in-order messaging. We also study the related out-of-band attack detection problem by considering communication over out-of-band, authenticated channels and propose analogous security notions. We prove that one of our two notions in each setting imposes a linear communication overhead in the number of sent messages and security parameter using an information-theoretic argument. This implies that each message must information-theoretically contain all previous messages and that our construction, that essentially attaches the entire message history to every new message, is asymptotically optimal. We then explore ways to bypass this lower bound and highlight the feasibility of practical active attack detection compatible with immediate decryption.
Last updated:  2025-02-25
Thorough Power Analysis on Falcon Gaussian Samplers and Practical Countermeasure
Xiuhan Lin, Shiduo Zhang, Yang Yu, Weijia Wang, Qidi You, Ximing Xu, and Xiaoyun Wang
Falcon is one of the post-quantum signature schemes selected by NIST for standardization. With the deployment underway, its implementation security is of great importance. In this work, we focus on the side-channel security of Falcon and our contributions are threefold. First, by exploiting the symplecticity of NTRU and a recent decoding technique, we dramatically improve the key recovery using power leakages within Falcon Gaussian samplers. Compared to the state of the art (Zhang, Lin, Yu and Wang, EUROCRYPT 2023), the number of traces required by our attack for a full key recovery is reduced by at least 85%. Secondly, we present a complete power analysis for two exposed power leakages within Falcon’s integer Gaussian sampler. We identify new sources of these leakages, which have not been identified by previous works, and conduct detailed security evaluations within the reference implementation of Falcon on ChipWhisperer. Thirdly, we propose effective and easy-to-implement countermeasures against both leakages to protect the whole of Falcon’s integer Gaussian sampler. Configured with our countermeasures, we provide security evaluations on ChipWhisperer and report the performance of the protected implementation. Experimental results highlight that our countermeasures admit a practical trade-off between efficiency and side-channel security.
Last updated:  2025-02-25
Bundled Authenticated Key Exchange: A Concrete Treatment of (Post-Quantum) Signal's Handshake Protocol
Keitaro Hashimoto, Shuichi Katsumata, and Thom Wiggers
The Signal protocol relies on a special handshake protocol, formerly X3DH and now PQXDH, to set up secure conversations. Prior analyses of these protocols (or proposals for post-quantum alternatives) have all used models highly tailored to the individual protocols and generally made ad-hoc adaptations to "standard" AKE definitions, making the concrete security attained unclear and hard to compare between similar protocols. Indeed, we observe that some natural Signal handshake protocols cannot be handled by these tailored models. In this work, we introduce Bundled Authenticated Key Exchange (BAKE), a concrete treatment of the Signal handshake protocol. We formally model prekey bundles and states, enabling us to define various levels of security in a unified model. We analyze Signal's classically secure X3DH and harvest-now-decrypt-later-secure PQXDH, and show that they do not achieve what we call optimal security (as is documented). Next, we introduce RingXKEM, a fully post-quantum Signal handshake protocol achieving optimal security; as RingXKEM shares states among many prekey bundles, it could not have been captured by prior models. Lastly, we provide a security and efficiency comparison of X3DH, PQXDH, and RingXKEM.
Last updated:  2025-02-25
Bootstrapping with RMFE for Fully Homomorphic Encryption
Khin Mi Mi Aung, Enhui Lim, Jun Jie Sim, Benjamin Hong Meng Tan, and Huaxiong Wang
There is a heavy preference towards instantiating BGV and BFV homomorphic encryption schemes where the cyclotomic order is a power of two, as this admits highly efficient fast Fourier transformations. Field Instruction Multiple Data (FIMD) was introduced to increase packing capacity in the case of small primes and improve amortised performance, using reverse multiplication-friendly embeddings (RMFEs) to encode more data into each SIMD slot. However, FIMD currently does not admit bootstrapping. In this work, we achieve bootstrapping for RMFE-packed ciphertexts with low capacity loss. We first adapt the digit extraction algorithm to work over RMFE-packed ciphertexts, by applying the recode map after every evaluation of the lifting polynomial. This allows us to follow the blueprint of thin bootstrapping, performing digit extraction on a single ciphertext. To achieve the low capacity loss, we introduce correction maps to the Halevi-Shoup digit extraction algorithm, to remove all but the final recode of RMFE digit extraction. We implement several workflows for bootstrapping RMFE-packed ciphertexts in HElib, and benchmark them against thin bootstrapping for . Our experiments show that the basic strategy of recoding multiple times in digit extraction yields better data packing, but results in very low remaining capacity and latencies of up to hundreds of seconds. On the other hand, using correction maps gives up to additional multiplicative depth and brings latencies often below seconds, at the cost of lower packing capacity.
Last updated:  2025-02-25
Efficient Distributed Randomness Generation from Minimal Assumptions where PArties Speak Sequentially Once
Chen-Da Liu-Zhang, Elisaweta Masserova, João Ribeiro, Pratik Soni, and Sri AravindaKrishnan Thyagarajan
We study efficient public randomness generation protocols in the PASSO (PArties Speak Sequentially Once) model for multi-party computation (MPC). PASSO is a variation of traditional MPC where parties are executed in sequence and each party ``speaks'' only once, broadcasting and sending secret messages only to parties further down the line. Prior results in this setting include information-theoretic protocols in which the computational complexity scales exponentially with the number of corruptions (CRYPTO 2022), as well as more efficient computationally-secure protocols either assuming a trusted setup phase or DDH (FC 2024). Moreover, these works only consider security against static adversaries. In this work, we focus on computational security against adaptive adversaries and from minimal assumptions, and improve on the works mentioned above in several ways: - Assuming the existence of non-interactive perfectly binding commitments, we design protocols with or parties that are efficient and secure whenever is small compared to the security parameter (e.g., is constant). This improves the resiliency of all previous protocols, even those requiring a trusted setup. It also shows that parties are necessary and sufficient for corruptions in the computational setting, while parties are required for information-theoretic security. - Under the same assumption, we design protocols with or parties (depending on the adversarial network model) which are efficient whenever . This improves on the existing DDH-based protocol both in terms of resiliency and the underlying assumptions. - We design efficient protocols with or parties (depending on the adversarial network model) assuming the existence of one-way functions. We complement these results by studying lower bounds for randomness generation protocols in the computational setting.
Last updated:  2025-02-25
Fine-Grained Complexity in a World without Cryptography
Josh Alman, Yizhi Huang, and Kevin Yeo
The study of fine-grained cryptography has proliferated in recent years due to its allure of potentially relying on weaker assumptions compared to standard cryptography. As fine-grained cryptography only requires polynomial gaps between the adversary and honest parties, it seems plausible to build primitives relying upon popular hardness assumptions about problems in such as - or --. The ultimate hope is that fine-grained cryptography could still be viable even if all current cryptographic assumptions are false, such as if or if we live in Pessiland where one-way functions do not exist. In our work, we consider whether this approach is viable by studying fine-grained complexity when all standard cryptographic assumptions are false. As our main result, we show that many popular fine-grained complexity problems are easy to solve in the average-case when one-way functions do not exist. In other words, many candidate hardness assumptions for building fine-grained cryptography are no longer options in Pessiland. As an example, we prove that the average-case - and -- conjectures are false for sufficiently large constant when no one-way functions exist. The average-case -- assumption was used to build fine-grained key-exchange by Lavigne et al. [CRYPTO'19]. One can also view the contrapositive of our result as providing an explicit construction of one-way functions assuming average-case hardness of - or -- for all constant . We also show that barriers for reductions in fine-grained complexity may be explained by problems in cryptography. First, we show that finding faster algorithms for computing discrete logarithms is equivalent to designing average-case equivalence between - and - (an extension of - to cyclic groups). In particular, finding such a reduction from - to - could potentially lead to breakthrough algorithms for the discrete logarithm, factoring, RSA and quadratic residuosity problems. Finally, we show that discrete logarithms with preprocessing may be reduced to the -- problem, and we present faster algorithms for average-case -- and --.
Last updated:  2025-02-25
Juicebox Protocol: Distributed Storage and Recovery of Secrets Using Simple PIN Authentication
Nora Trapp and Diego Ongaro
Existing secret management techniques demand users memorize complex passwords, store convoluted recovery phrases, or place their trust in a specific service or hardware provider. We have designed a novel protocol that combines existing cryptographic techniques to eliminate these complications and reduce user complexity to recalling a short PIN. Our protocol specifically focuses on a distributed approach to secret storage that leverages Oblivious Pseudorandom Functions (OPRFs) and a Secret-Sharing Scheme (SSS) combined with self-destructing secrets to minimize the trust placed in any singular server. Additionally, our approach allows for servers distributed across organizations, eliminating the need to trust a singular service operator. We have built an open-source implementation of the client and server sides of this new protocol, the latter of which has variants for running on commodity hardware and secure hardware.
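A toy sketch of the OPRF flow at the heart of such designs, over a small Schnorr group (illustrative only; Juicebox's implementation details, such as the concrete groups, may differ): the client blinds a hash of the PIN, the server applies its secret key, and the client unblinds, so the server never sees the PIN.

    import hashlib, random

    p, qq = 1019, 509   # toy safe-prime group: p = 2*509 + 1

    def hash_to_group(data: bytes) -> int:
        h = int.from_bytes(hashlib.sha256(data).digest(), "big") % (p - 1) + 1
        return pow(h, 2, p)   # squaring maps into the order-509 subgroup

    def client_blind(pin: bytes):
        r = random.randrange(1, qq)
        return r, pow(hash_to_group(pin), r, p)   # blinded element to server

    def server_evaluate(k: int, blinded: int) -> int:
        return pow(blinded, k, p)                 # server never sees the PIN

    def client_unblind(r: int, evaluated: int) -> int:
        return pow(evaluated, pow(r, -1, qq), p)  # equals H(pin)^k

    k = random.randrange(1, qq)                   # server's OPRF key
    r, blinded = client_blind(b"1234")
    out = client_unblind(r, server_evaluate(k, blinded))
    assert out == pow(hash_to_group(b"1234"), k, p)
    print("OPRF output:", out)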
Last updated:  2025-02-25
Helix: Scalable Multi-Party Machine Learning Inference against Malicious Adversaries
Yansong Zhang, Xiaojun Chen, Qinghui Zhang, Ye Dong, and Xudong Chen
With the growing emphasis on data privacy, secure multi-party computation has garnered significant attention for its strong security guarantees in developing privacy-preserving machine learning (PPML) schemes. However, only a few works address scenarios with a large number of participants. The state of the art by Liu et al. (LXY24, USENIX Security'24) first achieves a practical PPML protocol for up to 63 parties but is constrained to semi-honest security. Although naive extensions to the malicious setting are possible, they would introduce significant overhead. In this paper, we propose Helix, a scalable framework for maliciously secure PPML in the honest majority setting, aiming to enhance both the scalability and practicality of maliciously secure protocols. In particular, we report a privacy leakage issue in LXY24 during prefix OR operations and introduce a round-optimized alternative based on a single-round vectorized three-layer multiplication protocol. Additionally, by exploiting reusability properties within the computation process, we propose lightweight compression protocols that substantially improve the efficiency of multiplication verification. We also develop a batch check protocol to reduce the computational complexity of revealing operations in the malicious setting. For 63-party neural network inference, compared to the semi-honest LXY24, Helix is only 1.9× (1.1×) slower in the online phase and 1.2× (1.1×) slower in preprocessing under LAN (WAN) in the best case.
Last updated:  2025-02-25
Homomorphic Encryption for Large Integers from Nested Residue Number Systems
Dan Boneh and Jaehyung Kim
Existing fully homomorphic encryption (FHE) schemes primarily support a plaintext space defined over a relatively small prime. However, in some important applications of FHE one needs arithmetic over a large prescribed prime. In this paper we construct a new FHE system that is specifically designed for this purpose. Our system composes three layers of residue systems to enable much better performance than was previously possible. Our experiments show that for arithmetic modulo a 256-bit integer, when compared to the TFHE-rs implementation of 256-bit arithmetic, our new system achieves a factor of a thousand better multiplication throughput and a factor of ten better latency. Moreover, for a 2048-bit prime modulus we achieve far better performance than was previously possible.
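The core idea of computing on large integers via layered residue systems can be previewed with plain, unencrypted RNS arithmetic: represent a big integer by its residues modulo pairwise-coprime moduli, operate componentwise, and reconstruct with the Chinese Remainder Theorem. A minimal sketch, with moduli chosen purely for illustration:

    from math import prod

    MODULI = [2**13 - 1, 2**17 - 1, 2**19 - 1, 2**31 - 1]  # pairwise-coprime Mersenne primes
    M = prod(MODULI)

    def to_rns(x):
        return [x % m for m in MODULI]

    def rns_mul(a, b):
        # multiplication acts independently on each residue channel
        return [(ai * bi) % m for ai, bi, m in zip(a, b, MODULI)]

    def from_rns(residues):
        # CRT reconstruction
        x = 0
        for r, m in zip(residues, MODULI):
            Mi = M // m
            x = (x + r * Mi * pow(Mi, -1, m)) % M
        return x

    a, b = 123456789123456789, 987654321987654321
    assert from_rns(rns_mul(to_rns(a), to_rns(b))) == (a * b) % M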
Last updated:  2025-02-25
Publicly Verifiable Threshold Proxy Re-encryption and Its Application in Data Rights Confirmation
Tao Liu, Liang Zhang, Haibin Kan, and Jiheng Zhang
Proxy re-encryption (PRE) has been regarded as an effective cryptographic primitive in data sharing systems with distributed proxies. However, the existing literature does not consider the honesty of data owners, which is critical in the age of big data. In this paper, we fill the gap by introducing a new proxy re-encryption scheme, called publicly verifiable threshold PRE (PVTPRE). Briefly speaking, we innovatively apply a slightly modified publicly verifiable secret sharing (PVSS) scheme to distribute the re-encryption keys to multiple proxies. Consequently, we achieve public verifiability of data owners non-interactively. The correctness of data users in decryption and the public verifiability of proxies in re-encryption are then guaranteed seamlessly through execution of the PVSS reconstruction algorithms. We further prove that PVTPRE satisfies IND-CPA security. Besides, we put forward a privacy-preserving data rights confirmation framework, based on the PVTPRE scheme and blockchain, that provides clear principles for data ownership and usage. Blockchain plays the role of data bank and smart-contract engine, providing reliable storage and verification for the whole framework. To our knowledge, we are the first to systematically investigate data rights confirmation considering privacy as well as public verifiability, addressing the growing need for robust mechanisms to protect data rights and ensure transparency. Finally, we conduct comprehensive experiments to illustrate the correctness, feasibility and effectiveness of our scheme. The experimental results show that our PVTPRE outperforms other PREs in many aspects.
Last updated:  2025-02-24
Non-Interactive Distributed Point Functions
Elette Boyle, Lalita Devadas, and Sacha Servan-Schreiber
Distributed Point Functions (DPFs) are a useful cryptographic primitive enabling a dealer to distribute short keys to two parties, such that the keys encode additive secret shares of a secret point function. However, in many applications of DPFs, no single dealer entity has full knowledge of the secret point function, necessitating the parties to run an interactive protocol to emulate the setup. Prior works have aimed to minimize complexity metrics of such distributed setup protocols, e.g., round complexity, while remaining black-box in the underlying cryptography. We construct Non-Interactive DPFs (NIDPF), which have a one-round (simultaneous-message, semi-honest) setup protocol, removing the need for a trusted dealer. Specifically, our construction allows each party to publish a special "public key" to a public channel or bulletin board, where the public key encodes the party's secret function parameters. Using the public key of another party, any pair of parties can locally derive a DPF key for the point function parameterized by the two parties' joint inputs. We realize NIDPF from an array of standard assumptions, including DCR, SXDH, QR, and LWE. Each party's public key is sublinear in the size of the point function's domain, which leads to a setup protocol with sublinear communication. The only prior approach to realizing such a non-interactive setup required using multi-key fully homomorphic encryption or indistinguishability obfuscation. As immediate applications of our construction, we get "public-key setups" for several existing constructions of pseudorandom correlation generators and round-efficient protocols for secure comparisons.
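For readers new to DPFs, the interface can be seen in a deliberately naive construction where the point function's truth table is additively shared, so the two evaluations always sum to f(x). Real DPF keys are exponentially smaller, and the paper's contribution is generating them without a trusted dealer; this sketch has both a dealer and linear-size keys, purely to fix the semantics:

    import random

    MOD = 2**32

    def dpf_gen_naive(alpha, beta, domain_size):
        # additively share the truth table of f(x) = beta if x == alpha else 0
        k0 = [random.randrange(MOD) for _ in range(domain_size)]
        k1 = [(-v) % MOD for v in k0]
        k1[alpha] = (k1[alpha] + beta) % MOD
        return k0, k1

    def dpf_eval(key, x):
        return key[x]

    k0, k1 = dpf_gen_naive(alpha=5, beta=77, domain_size=16)
    for x in range(16):
        assert (dpf_eval(k0, x) + dpf_eval(k1, x)) % MOD == (77 if x == 5 else 0)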
Last updated:  2025-02-24
Simultaneous-Message and Succinct Secure Computation
Elette Boyle, Abhishek Jain, Sacha Servan-Schreiber, and Akshayaram Srinivasan
We put forth and instantiate a new primitive we call simultaneous-message and succinct (SMS) secure computation. An SMS scheme enables a minimal communication pattern for secure computation in the following scenario: Alice has a large private input X, Bob has a small private input y, and Charlie wants to learn f(X, y) for some public function f. Given a common reference string (CRS) setup phase, an SMS scheme for a function f is instantiated with two parties holding inputs X and y, and has the following structure: - The parties simultaneously exchange a single message. - Communication is succinct, scaling sublinearly in the size of X and the output f(X, y). - Without further interaction, the parties can locally derive additive secret shares of f(X, y). Indeed, Alice and Bob simultaneously send each other a message using the CRS and their private inputs. Using the transcript and their private state, the parties locally derive additive secret shares of f(X, y), which they can send to Charlie. As such, an SMS scheme incurs a communication cost to Charlie that is only twice that of the function output length. Importantly, the size of Alice's message does not grow with the size of her input X, and both Alice's and Bob's first-round messages grow sublinearly in the size of the output. Additionally, Alice's or Bob's view provides no information about the other party's input besides the output of f, even if colluding with Charlie. We obtain the following results: - Assuming Learning With Errors (LWE), we build an SMS scheme supporting evaluation of depth-d circuits, where Alice's and Bob's messages grow sublinearly in the output size, up to poly(λ, d) factors, where λ is the security parameter. We can further extend this to support all functions by assuming the circular security of LWE. - Assuming sub-exponentially secure indistinguishability obfuscation, in conjunction with other standard assumptions, we build an SMS scheme supporting arbitrary polynomial-sized batch functions consisting of L parallel evaluations of a function f. The size of Alice's and Bob's messages in this construction is poly(λ) and poly(λ, |f|, log L), respectively. We show that SMS schemes have several immediate applications. An SMS scheme gives: - A direct construction of trapdoor hash functions (TDH) (Döttling et al., Crypto'19) for the same class of functions as the one supported by the SMS scheme. - A simple and generic compiler for obtaining compact, rate-1 fully homomorphic encryption (FHE) from any non-compact FHE scheme. - A simple and generic compiler for obtaining correlation-intractable (CI) hash functions that are secure against all efficiently-searchable relations. In turn, under the LWE assumption, we obtain the first construction of TDH for all functions and generic approaches for obtaining rate-1 FHE and CI hashing. We also show that our iO-based construction gives an alternative approach for two-round secure computation with communication succinctness in the output length (Hubáček and Wichs, ITCS'15).
Last updated:  2025-02-24
Multi-Key Homomorphic Secret Sharing
Geoffroy Couteau, Lalita Devadas, Aditya Hegde, Abhishek Jain, and Sacha Servan-Schreiber
Homomorphic secret sharing (HSS) is a distributed analogue of fully homomorphic encryption (FHE) where, following an input-sharing phase, two or more parties can locally compute a function over their private inputs to obtain shares of the function output. Over the last decade, HSS schemes have been constructed from an array of different assumptions. However, all existing HSS schemes, except ones based on assumptions known to imply multi-key FHE, require a public-key infrastructure (PKI) or a correlated setup between parties. This limitation carries over to many applications of HSS. In this work, we construct multi-key homomorphic secret sharing (MKHSS), where given only a common reference string (CRS), two parties can secret share their inputs to each other and then perform local computations as in HSS. We present the first MKHSS schemes supporting all NC1 computations from either the decisional Diffie-Hellman (DDH), decisional composite residuosity (DCR), or class group assumptions. Our constructions imply the following applications in the CRS model: - Succinct two-round secure computation. Under the same assumptions as our MKHSS schemes, we construct succinct, two-round secure two-party computation for NC1 circuits. Previously, such a result was only known from the learning with errors assumption. - Attribute-based NIKE. Under DCR or class group assumptions, we construct non-interactive key exchange (NIKE) protocols where two parties agree on a key if and only if their secret attributes satisfy a public NC1 predicate. This significantly generalizes the existing notion of password-based NIKE. - Public-key PCFs. Under DCR or class group assumptions, we construct public-key pseudorandom correlation functions (PCFs) for any NC1 correlation. This yields the first public-key PCFs for Beaver triples (and more) from non-lattice assumptions. - Silent MPC. Under DCR or class group assumptions, we construct a p-party secure computation protocol in the silent preprocessing model where the preprocessing phase has communication O(p), ignoring polynomial factors. All prior protocols that do not rely on spooky encryption require communication quadratic in p.
Last updated:  2025-02-24
Homomorphic Encryption with Authority
Joohee Lee and Joon-Woo Lee
Fully homomorphic encryption (FHE) enables computations over encrypted data, which allows privacy-preserving services to be held between a server and a client. However, real-world applications demand practical considerations, especially concerning public safety and legal investigations. Existing FHE schemes focus solely on privacy, neglecting the societal risks posed by criminal activities that misuse privacy-preserving services. This paper introduces Homomorphic Encryption with Authority (HEwA), a novel framework that balances data privacy with public safety by incorporating an "authority" party. The proposed HEwA system operates in two phases: a normal phase, where client data privacy is protected, and an investigative phase, where the authority, a legally authorized entity such as a government agency, exerts the right to recover a suspicious client's data. We formalize the security model for HEwA, ensuring that client privacy is protected during the normal phase while enabling authorities to recover encrypted data in the investigative phase. As a concrete example, we design an efficient HEwA system based solely on the CKKS homomorphic encryption scheme, which supports approximate computations over real-number data, making it highly suitable for applications in AI such as secure genomic analysis. We further provide rigorous security proofs. This new approach addresses the tension between privacy and public safety in cloud services, paving the way for responsible use of homomorphic encryption in practice.
Last updated:  2025-02-24
Protecting Cryptographic Code Against Spectre-RSB
Santiago Arranz Olmos, Gilles Barthe, Chitchanok Chuengsatiansup, Benjamin Grégoire, Vincent Laporte, Tiago Oliveira, Peter Schwabe, Yuval Yarom, and Zhiyuan Zhang
It is fundamental that executing cryptographic software must not leak secrets through side channels. For software-visible side channels, it was long believed that "constant-time" programming would be sufficient as a systematic countermeasure. However, this belief was shattered in 2018 by attacks exploiting speculative execution, so-called Spectre attacks. Recent work shows that language support suffices to protect cryptographic code with minimal overhead against one class of such attacks, Spectre v1, but leaves an open question of whether this result can be extended to also cover other classes of Spectre attacks. In this paper, we answer this question in the affirmative: we design, validate, implement, and verify an approach to protect cryptographic implementations against all known classes of Spectre attacks. The main challenge in this endeavor is attacks exploiting the return stack buffer, which are known as Spectre-RSB. Our approach combines a new value-dependent information-flow type system that enforces speculative constant-time in an idealized model of transient execution and a compiler transformation that realizes this idealized model on the generated low-level code. Using the Coq proof assistant, we prove that the type system is sound with respect to the idealized semantics and that the compiler transformation preserves speculative constant-time. We implement our approach in the Jasmin framework for high-assurance cryptography and demonstrate that the overhead incurred by full Spectre protections is below 2% for most cryptographic primitives and reaches only about 5-7% for the more complex post-quantum key-encapsulation mechanism Kyber.
Last updated:  2025-02-24
Good Things Come to Those Who Wait: Dishonest-Majority Coin-Flipping Requires Delay Functions
Joseph Bonneau, Benedikt Bünz, Miranda Christ, and Yuval Efron
We reconsider Cleve's famous 1986 impossibility result on coin-flipping without an honest majority. Recently proposed constructions have circumvented this limit by using cryptographic delay functions. We show that this is necessary: a (weak) notion of delay functions is in fact implied by the existence of a protocol circumventing Cleve's impossibility. However, such delay functions are weaker than those used in existing constructions. We complete our result by showing an equivalence: these weaker delay functions are also sufficient to construct not just fair dishonest-majority coin-flipping protocols, but also the stronger notion of a distributed randomness beacon. We also show that this is possible in a weaker communication model than previously considered, without the assumption of reliable broadcast or a public bulletin board.
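To see why a delay function helps, consider the classic commit-reveal failure: the last party to reveal can abort after learning the outcome. If the coin is instead derived by slow sequential work on everyone's published seeds, seeing the seeds does not reveal the outcome in time for a useful abort. A toy sketch with iterated hashing follows; it is illustrative only, and the paper's (weaker) notion of delay function is more careful than this:

    import hashlib, secrets

    def delay(seed: bytes, t: int) -> bytes:
        # t sequential hash iterations; believed hard to parallelize
        h = seed
        for _ in range(t):
            h = hashlib.sha256(h).digest()
        return h

    T = 100_000
    # each party publishes a fresh seed; nobody can predict the coin
    # before doing the sequential work, so rushing does not help
    seeds = [secrets.token_bytes(16) for _ in range(3)]
    coin = delay(b"".join(seeds), T)[0] & 1
    print("coin:", coin)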
Last updated:  2025-02-24
Revisiting Keyed-Verification Anonymous Credentials
Michele Orrù
Keyed-verification anonymous credentials (KVACs) have demonstrated their practicality through large-scale deployments in privacy-critical systems like Signal and Tor. Despite their widespread adoption, the theoretical framework underlying KVACs lacks the flexibility needed to support diverse applications, which in general require different security properties. For instance, rate-limiting credentials only need a weaker unforgeability notion (one-more unforgeability), yet the framework cannot easily accommodate this relaxation. Similarly, identity-based applications require properties stronger than unforgeability, specifically extractability for security proofs when adversaries can observe other users' credentials. In this work, we address these limitations, introducing new notions of extractability and one-more unforgeability. We improve two foundational works in the space: - The scheme by Chase et al. (CCS 2014), commonly referred to as CMZ or PS MAC, can be made statistically anonymous and its issuance cost reduced. We update the proof of Chase et al. in the algebraic group model. - The scheme by Barki et al. (SAC 2016), known as BBDT or BBS MAC, can be issued more efficiently (one less group element). Finally, we note that for KVACs, designated-verifier proofs suffice, since the verifier is known in advance. We introduce designated-verifier polynomial commitment schemes and instantiate a variant of the popular KZG commitment scheme without pairings. Any interactive oracle proof can be used in tandem with it, leading to designated-verifier fully-succinct zk-SNARKs without pairings for algebraic groups. Our model can improve the deployment of larger protocols relying on KVACs. We show this with some examples that benefit from our approach.
Last updated:  2025-02-24
Parallel Execution Fee Mechanisms
Abdoulaye Ndiaye
This paper investigates how pricing schemes can achieve efficient allocations in blockchain systems featuring multiple transaction queues under a global capacity constraint. I model a capacity-constrained blockchain where users submit transactions to different queues—each representing a submarket with unique demand characteristics—and decide to participate based on posted prices and expected delays. I find that revenue maximization tends to allocate capacity to the highest-paying queue, whereas welfare maximization generally serves all queues. Optimal relative pricing of different queues depends on factors such as market size, demand elasticity, and the balance between local and global congestion. My results have implications for the implementation of local congestion pricing for evolving blockchain architectures, including parallel transaction execution, directed acyclic graph (DAG)-based systems, and multiple concurrent proposers.
Last updated:  2025-02-24
Privacy-Preserving Epidemiological Modeling on Mobile Graphs
Daniel Günther, Marco Holz, Benjamin Judkewitz, Helen Möllering, Benny Pinkas, Thomas Schneider, and Ajith Suresh
The COVID-19 pandemic led governments worldwide to deploy various containment measures to control its spread, such as contact tracing, social-distancing regulations, and curfews. Epidemiological simulations are commonly used to assess the impact of those policies before they are implemented. Unfortunately, the scarcity of relevant empirical data, specifically detailed social contact graphs, hampered their predictive accuracy. As this data is inherently privacy-critical, a method is urgently needed to perform powerful epidemiological simulations on real-world contact graphs without disclosing any sensitive information. In this work, we present RIPPLE, a privacy-preserving epidemiological modeling framework that enables standard models of infectious disease to run on a population's real contact graph while keeping all contact information locally on the participants' devices. As a building block of independent interest, we present PIR-SUM, a novel extension to private information retrieval for the secure download of element sums from a database. Our protocols are supported by a proof-of-concept implementation, demonstrating a 2-week simulation over half a million participants completed in 7 minutes, with each participant communicating less than 50 KB.
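The aggregation flavor of PIR-SUM can be previewed with plain two-server additive secret sharing: each participant splits its private value into two random shares, and the servers only ever see sums of shares. This sketch omits the PIR component entirely and is not the PIR-SUM protocol; it only shows why the reconstructed value reveals nothing beyond the total:

    import random

    MOD = 2**61 - 1

    def share(x):
        s0 = random.randrange(MOD)
        return s0, (x - s0) % MOD

    # each participant shares its private count between two non-colluding servers
    values = [3, 0, 7, 1]
    shares = [share(v) for v in values]
    server0 = sum(s0 for s0, _ in shares) % MOD   # looks uniformly random
    server1 = sum(s1 for _, s1 in shares) % MOD   # looks uniformly random
    assert (server0 + server1) % MOD == sum(values)  # only the sum is revealed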
Last updated:  2025-02-24
Tight Multi-challenge Security Reductions for Key Encapsulation Mechanisms
Lewis Glabush, Kathrin Hövelmanns, and Douglas Stebila
A key encapsulation mechanism (KEM) allows two parties to establish a shared secret key using only public communication. For post-quantum KEMs, the most widespread approach is to design a passively secure public-key encryption (PKE) scheme and then apply the Fujisaki–Okamoto (FO) transform that turns any such PKE scheme into an IND-CCA secure KEM. While the base security requirement for KEMs is typically IND-CCA security, adversaries in practice can sometimes observe and attack many public keys and/or ciphertexts; security against such adversaries is referred to as multi-challenge security. FO does not necessarily guarantee multi-challenge security: for example, FrodoKEM, a Round 3 alternate in NIST's post-quantum project, used FO to achieve IND-CCA security, but was subsequently shown to be vulnerable to attackers that can target multiple ciphertexts. To avert this multi-ciphertext attack, the FrodoKEM team added a salt to the encapsulation procedure and proved that this does not degrade (single-ciphertext) IND-CCA security. The formal analysis of whether this indeed averts multi-ciphertext attacks, however, was left open, which we address in this work. Firstly, we formalize FrodoKEM's approach as a new variant of the FO transform, called the salted FO transform. Secondly, we give tight reductions from multi-challenge security of the resulting KEM to multi-challenge security of the underlying public-key encryption scheme, in both the random oracle model (ROM) and the quantum-accessible ROM (QROM). Together these results justify the multi-ciphertext security of the salted FrodoKEM scheme, and can also be used generically by other schemes requiring multi-ciphertext security.
Last updated:  2025-02-24
Polynomial Time Cryptanalytic Extraction of Deep Neural Networks in the Hard-Label Setting
Nicholas Carlini, Jorge Chávez-Saab, Anna Hambitzer, Francisco Rodríguez-Henríquez, and Adi Shamir
Deep neural networks (DNNs) are valuable assets, yet their public accessibility raises security concerns about parameter extraction by malicious actors. Recent work by Carlini et al. (Crypto'20) and Canales-Martínez et al. (Eurocrypt'24) has drawn parallels between this issue and block cipher key extraction via chosen plaintext attacks. Leveraging differential cryptanalysis, they demonstrated that all the weights and biases of black-box ReLU-based DNNs could be inferred using a polynomial number of queries and computational time. However, their attacks relied on the availability of the exact numeric value of output logits, which allowed the calculation of their derivatives. To overcome this limitation, Chen et al. (Asiacrypt'24) tackled the more realistic hard-label scenario, where only the final classification label (e.g., "dog" or "car") is accessible to the attacker. They proposed an extraction method requiring a polynomial number of queries but an exponential execution time. In addition, their approach was applicable only to a restricted set of architectures, could deal only with binary classifiers, and was demonstrated only on tiny neural networks with up to four neurons split among up to two hidden layers. This paper introduces new techniques that, for the first time, achieve cryptanalytic extraction of DNN parameters in the most challenging hard-label setting, using both a polynomial number of queries and polynomial time. We validate our approach by extracting nearly one million parameters from a DNN trained on the CIFAR-10 dataset, comprising 832 neurons in four hidden layers. Our results reveal the surprising fact that all the weights of a ReLU-based DNN can be efficiently determined by analyzing only the geometric shape of its decision boundaries.
Last updated:  2025-02-24
Traceable Threshold Encryption without Trusted Dealer
Jan Bormet, Jonas Hofmann, and Hussien Othman
The fundamental assumption in t-out-of-n threshold encryption is that the adversary can only corrupt fewer than t parties. Unfortunately, it may be unfounded in practical scenarios where shareholders could be incentivized to collude. Boneh, Partap, and Rotem (Crypto'24) recently addressed the setting where t or more shareholders work together to decrypt illegally. Inspired by the well-established notion of traitor tracing in broadcast encryption, they added a traceability mechanism that guarantees identifying at least one of the colluders. They provide several constructions that enable traceability, all of which require a trusted dealer to distribute the secret shares. While the trusted dealer can be replaced with a DKG for conventional threshold encryption, it is unclear how to do so without compromising traceability. As thresholdizing is meant to mitigate a single point of failure, a natural question that remains is: Can we construct an efficient traceable threshold encryption scheme that does not rely on a trusted party to distribute the secret shares? In this paper, we achieve two dealerless traceable threshold encryption constructions with different merits by extending the PLBE primitive of Boneh et al. (Eurocrypt'06) and combining it with the silent-setup threshold encryption construction of Garg et al. (Crypto'24). Our first construction achieves constant amortized ciphertext size when sufficiently many ciphertexts are produced. Our second construction achieves constant ciphertext size even in the worst case but requires a less efficient preprocessing phase as a tradeoff. Both our constructions enjoy a constant secret key size and do not require any interaction between the parties. An additional restriction in the constructions of Boneh et al. is that they can only guarantee to find at least one colluder, leaving techniques to identify more traitors as an open problem. In this paper, we take a first step towards solving this question by formalizing a technique and applying it to our first construction. Namely, our first construction enables tracing multiple traitors.
Last updated:  2025-02-24
CCA-Secure Traceable Threshold (ID-based) Encryption and Application
Rishiraj Bhattacharyya, Jan Bormet, Sebastian Faust, Pratyay Mukherjee, and Hussien Othman
A recent work by Boneh, Partap, and Rotem [Crypto'24] introduced the concept of traceable threshold encryption, in which, if t or more parties collude to construct a decryption box that performs decryptions, then at least one party's identity can be traced by making a few black-box queries to the box. This has important applications, e.g., in blockchain mempool privacy, where collusion yields high financial gain through MEVs without any consequence; the possibility of tracing discourages collusion. Nevertheless, their definitions leave room for exploitation, as they only achieve CPA security and do not consider inconsistency in decryption via different participating sets. This paper proposes stronger definitions of traceable threshold encryption, which support CCA security and consistency. Our main approach considers identity-based variants of traceable encryption (which we also define). It converts them to a CCA-secure construction by adapting two generic transformations, first using a one-time signature and then a fingerprinting code. We put forward two efficient instantiations of our identity-based scheme with different merits: our first construction is based on Boneh-Franklin IBE [Crypto'01] and has constant-size ciphertexts but quadratic-size public keys; this is proven secure based on XDH and BDDH. Our second construction is based on Boneh-Boyen IBE [Eurocrypt'04]. It supports both constant-size ciphertexts and constant-size public keys; this is proven secure based on a variant of the uber assumption over bilinear pairings. Our concrete analysis shows that the first construction's ciphertext is much (~6x) smaller than the second construction's. Finally, we extend the definitions to support consistency and achieve it by adjoining an efficient, non-interactive proof of correct encryption.
Last updated:  2025-02-24
Hollow LWE: A New Spin, Unbounded Updatable Encryption from LWE and PCE
Martin R. Albrecht, Benjamin Benčina, and Russell W. F. Lai
Updatable public-key encryption (UPKE) allows anyone to update a public key while simultaneously producing an update token, given which the secret key holder could consistently update the secret key. Furthermore, ciphertexts encrypted under the old public key remain secure even if the updated secret key is leaked -- a property much desired in secure messaging. All existing lattice-based constructions of UPKE update keys by a noisy linear shift. As the noise accumulates, these schemes either require super-polynomial-size moduli or an a priori bounded number of updates to maintain decryption correctness. Inspired by recent works on cryptography based on the lattice isomorphism problem, we propose an alternative way to update keys in lattice-based UPKE. Instead of shifting, we rotate them. As rotations do not induce norm growth, our construction supports an unbounded number of updates with a polynomial-size modulus. The security of our scheme is based on the LWE assumption over hollow matrices -- matrices which generate linear codes with non-trivial hull -- and the hardness of permutation code equivalence. Along the way, we also show that LWE over hollow matrices is as hard as LWE over uniform matrices, and that a leftover hash lemma holds for hollow matrices.
Last updated:  2025-02-24
Secret Sharing with Publicly Verifiable Deletion
Jonathan Katz and Ben Sela
Certified deletion, an inherently quantum capability, allows a party holding a quantum state to prove that they have deleted the information contained in that state. Bartusek and Raizes (Crypto 2024) recently studied certified deletion in the context of secret sharing schemes, and showed constructions with privately verifiable proofs of deletion that can be verified only by the dealer who generated the shares. We give two constructions of secret sharing schemes with publicly verifiable certified deletion. Our first construction is based on the post-quantum security of the LWE problem, and each share requires a number of qubits that is linear in the size of an underlying classical secret sharing scheme for the same set of authorized parties. Our second construction is based on a more general assumption, the existence of post-quantum one-way functions, but requires an asymptotically larger number of qubits relative to the share size of the underlying classical scheme.
Last updated:  2025-02-24
Key-Homomorphic Computations for RAM: Fully Succinct Randomised Encodings and More
Damiano Abram, Giulio Malavolta, and Lawrence Roy
We propose a new method to construct a public-key encryption scheme where one can homomorphically transform a ciphertext encrypted under a key k into a ciphertext under f(k), for any polynomial-time RAM program f with bounded runtime and memory. Combined with other lattice techniques, this allows us to construct: 1) Fully succinct randomised encodings for RAM programs, with succinct encoder complexity and rate-1 encodings. 2) Laconic function evaluation for RAM programs, with succinct encoder runtime and rate-1 encodings. 3) Key-policy attribute-based encryption for RAM programs, with succinct ciphertexts. The same scheme can be converted to the registered setting, obtaining linear CRS size in the number of parties. All of our schemes rely on the hardness of the decomposed learning with errors (LWE) problem, along with other standard computational assumptions on lattices. The decomposed LWE problem can be interpreted as postulating the circular-security of a natural lattice-based public-key encryption scheme. To gain confidence in the assumption, we show that it is implied by the hardness of the succinct LWE problem of Wee (CRYPTO'24).
Last updated:  2025-02-24
CT-LLVM: Automatic Large-Scale Constant-Time Analysis
Zhiyuan Zhang and Gilles Barthe
Constant-time (CT) is a popular programming discipline to protect cryptographic libraries against micro-architectural timing attacks. One appeal of the CT discipline lies in its conceptual simplicity: a program is CT iff it has no secret-dependent data flow, control flow, or variable-timing operation. Thanks to its simplicity, the CT discipline is supported by dozens of analysis tools. However, a recent user study demonstrates that these tools are seldom used due to poor usability and maintainability (Jancar et al. IEEE SP 2022). In this paper, we introduce CT-LLVM, a CT analysis tool designed for usability, maintainability and automatic large-scale analysis. Concretely, CT-LLVM is packaged as an LLVM plugin and is built as a thin layer on top of two standard LLVM analyses: def-use and alias analysis. Besides confirming known CT violations, we demonstrate the usability and scalability of CT-LLVM by automatically analyzing nine cryptographic libraries. On average, CT-LLVM can automatically and soundly analyze 36% of the functions in these libraries, proving that 61% of them are CT. In addition, the large-scale automatic analysis also reveals new vulnerabilities in these libraries. Finally, we demonstrate that CT-LLVM helps systematically mitigate compiler-introduced CT violations, which has been a long-standing issue in CT analysis.
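The constant-time discipline checked by such tools rules out secret-dependent branches, memory addresses, and variable-time instructions. The standard branchless selection pattern below illustrates the data flow a CT analysis accepts; it is written in Python for readability, and Python itself makes no timing guarantees, so this only shows the dependence structure being reasoned about:

    def ct_select(bit: int, a: int, b: int, width: int = 64) -> int:
        # returns a if bit == 1 else b, with no branch on the secret bit
        ones = (1 << width) - 1
        mask = (-bit) & ones          # all-ones if bit == 1, all-zeros if bit == 0
        return (a & mask) | (b & ~mask & ones)

    assert ct_select(1, 0xAAAA, 0x5555) == 0xAAAA
    assert ct_select(0, 0xAAAA, 0x5555) == 0x5555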
Last updated:  2025-02-24
Efficient IP Masking with Generic Security Guarantees under Minimum Assumptions
Sebastian Faust, Loïc Masure, Elena Micheli, Hai Hoang Nguyen, Maximilian Orlt, and François-Xavier Standaert
Leakage-resilient secret sharing schemes are a fundamental building block for secure computation in the presence of leakage. As a result, there is a strong interest in building secret sharing schemes that combine resilience in practical leakage scenarios with potential for efficient computation. In this work, we revisit the inner-product framework, where a secret s is encoded by two vectors L and R such that their inner product ⟨L, R⟩ is equal to s. So far, the most efficient inner-product masking schemes (in which L is public but random) are provably secure with the same security notions (e.g., in the abstract probing model) as additive, Boolean masking, yet at the cost of a slightly more expensive implementation. Hence, their advantage in terms of theoretical security guarantees remains unclear, also raising doubts about their practical relevance. We address this question by showing the leakage resilience of inner-product masking schemes in the bounded leakage threat model, which depicts well implementation contexts where the physical noise is negligible. In this threat model, we show that if a bounded number of bits is leaked from the shares of the encoding, then with high probability over the choice of L, the scheme is leakage resilient. Furthermore, this result holds without assuming independent leakage from the shares, which may be challenging to enforce in practice. We additionally show that in large Mersenne-prime fields, a wise choice of the public coefficients can yield leakage resilience even in the case where one physical bit from each share is revealed to the adversary. The exponential rate of the leakage resilience we put forward significantly improves upon previous bounds for additive masking, where the past literature exhibited a constant exponential rate only.
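A minimal sketch of an inner-product encoding over a toy prime field: a public random vector L is fixed, and a secret s is encoded as a share vector R with ⟨L, R⟩ = s. The field size, share count, and variable names here are ours, chosen only for illustration:

    import random

    P = 2**13 - 1   # toy Mersenne-prime field GF(P)
    n = 4           # number of shares

    def sample_public():
        # nonzero public coefficients, so the last one is invertible
        return [random.randrange(1, P) for _ in range(n)]

    def encode(secret, L):
        R = [random.randrange(P) for _ in range(n - 1)]
        partial = sum(l * r for l, r in zip(L, R)) % P
        # choose the last share so that <L, R> = secret
        R.append((secret - partial) * pow(L[-1], -1, P) % P)
        return R

    def decode(L, R):
        return sum(l * r for l, r in zip(L, R)) % P

    L = sample_public()
    R = encode(secret=1234, L=L)
    assert decode(L, R) == 1234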
Last updated:  2025-02-24
Succinct Oblivious Tensor Evaluation and Applications: Adaptively-Secure Laconic Function Evaluation and Trapdoor Hashing for All Circuits
Damiano Abram, Giulio Malavolta, and Lawrence Roy
We propose the notion of succinct oblivious tensor evaluation (OTE), where two parties compute an additive secret sharing of the tensor product x ⊗ y of two vectors x and y, exchanging two simultaneous messages. Crucially, the size of both messages and of the CRS is independent of the dimension of the vectors. We present a construction of OTE with optimal complexity from the standard learning with errors (LWE) problem. Then we show how this new technical tool enables a host of cryptographic primitives, all with security reducible to LWE, such as: 1) Adaptively secure laconic function evaluation for depth-d functions, with succinct communication. 2) A trapdoor hash function for all functions. 3) An (optimally) succinct homomorphic secret sharing for all functions. 4) A rate-1 laconic oblivious transfer for batch messages, which is best possible. In particular, we obtain the first laconic function evaluation scheme that is adaptively secure from the standard LWE assumption, improving upon Quach, Wee, and Wichs (FOCS 2018). As a key technical ingredient, we introduce a new notion of adaptive lattice encodings, which may be of independent interest.
Last updated:  2025-02-24
uKNIT: Breaking Round-alignment for Cipher Design -- Featuring uKNIT-BC, an Ultra Low-Latency Block Cipher
Kai Hu, Mustafa Khairallah, Thomas Peyrin, and Quan Quan Tan
Automated cryptanalysis has seen a lot of attention and success in the past decade, leading to new distinguishers or key-recovery attacks against various ciphers. We argue that the improved efficiency and usability of these new tools have been undervalued, especially for design processes. In this article, we break for the first time the classical iterative design paradigm for symmetric-key primitives, where constructions are built around the repetition of a round function. We propose instead a new design framework, called uKNIT, that allows a round-by-round, optimization-led, automated construction of primitives, where each round can be entirely different from the others (the security/performance trade-off actually benefiting from this non-alignment). As this new design framework is non-trivial to instantiate, we further propose a method for SPN ciphers using a genetic algorithm and leveraging advances in automated cryptanalysis: given a pool of good cipher candidates on r rounds, our algorithm automatically generates and selects (r+1)-round candidates by evaluating their security and performance, as sketched below. We emphasize that our design pipeline is also the first to propose a fully automated design process, with completely integrated implementation and security analysis. We finally exemplify our new design strategy on the important use-case of low-latency cryptography, by proposing the uKNIT-BC block cipher, together with a complete security analysis and benchmarks. Compared to the state of the art in low-latency ciphers (PRINCEv2), uKNIT-BC improves on all crucial security and performance directions at the same time, reducing latency by 10%, while increasing resistance against classical differential/linear cryptanalysis by more than 10%. It also reduces area by 17% and energy consumption by 44% when fixing the latency of both ciphers. As a contribution of independent interest, we discovered a generalization of the Superposition-Tweakey (STK) construction for key schedules, unlocking its application to bit-oriented ciphers. We also discuss the benefits of uKNIT for many other possible use-cases.
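The round-by-round search can be pictured as a standard genetic loop: keep a pool of good r-round candidates, extend each with differently designed candidate rounds, then score and prune. The skeleton below is generic, and its fitness function is a placeholder where the actual pipeline would plug in automated-cryptanalysis bounds and hardware cost estimates:

    import random

    def random_round():
        # placeholder for one round's design choices (e.g., S-box wiring, rotations)
        return {"rot": random.randrange(32), "xor_mask": random.randrange(2**8)}

    def fitness(candidate):
        # stand-in score; a real pipeline scores differential/linear bounds
        # plus latency/area estimates of the candidate cipher
        return -abs(sum(r["rot"] for r in candidate) - 64)

    def extend_pool(pool, pool_size=20, offspring=5):
        grown = [cand + [random_round()] for cand in pool for _ in range(offspring)]
        grown.sort(key=fitness, reverse=True)
        return grown[:pool_size]

    pool = [[random_round()] for _ in range(20)]   # 1-round candidates
    for _ in range(4):                             # grow to 5-round candidates
        pool = extend_pool(pool)
    print("best 5-round score:", fitness(pool[0]))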
Last updated:  2025-02-24
Privacy-Preserving Multi-Signatures: Generic Techniques and Constructions Without Pairings
Calvin Abou Haidar, Dipayan Das, Anja Lehmann, Cavit Özbay, and Octavio Perez Kempner
Multi-signatures allow a set of parties to produce a single signature for a common message by combining their individual signatures. The result can be verified using the aggregated public key that represents the group of signers. Very recent work by Lehmann and Özbay (PKC '24) studied the use of multi-signatures for ad-hoc privacy-preserving group signing, formalizing the notion of multi-signatures with probabilistic yet verifiable key aggregation. Moreover, they proposed new BLS-type multi-signatures, allowing users holding a long-term key pair to engage with different groups, without the aggregated key leaking anything about the corresponding group. This enables key reuse across different groups in a privacy-preserving way. Unfortunately, their technique cannot be applied to Schnorr-type multi-signatures, preventing state-of-the-art multi-signatures from benefiting from those privacy features. In this work, we revisit the privacy framework from Lehmann and Özbay. Our first contribution is a generic lift that adds privacy to any multi-signature with deterministic key aggregation. As our second contribution, we study two concrete multi-signatures, and give dedicated transforms that take advantage of the underlying structures for improved efficiency. The first one is a slight modification of the popular MuSig2 scheme, achieving the strongest privacy property for free compared to the original scheme. The second is a variant of the lattice-based multi-signature scheme DualMS, making our construction the first post-quantum secure multi-signature for ad-hoc privacy-preserving group signing. The light overhead incurred by the modifications in our DualMS variant still allows us to benefit from the competitiveness of the original scheme.
Last updated:  2025-02-24
OPPID: Single Sign-On with Oblivious Pairwise Pseudonyms
Maximilian Kroschewski, Anja Lehmann, and Cavit Özbay
Single Sign-On (SSO) allows users to conveniently authenticate to many Relying Parties (RPs) through a central Identity Provider (IdP). SSO supports unlinkable authentication towards the RPs via pairwise pseudonyms, where the IdP assigns the user an RP-specific pseudonym. This feature has been rolled out prominently within Apple's SSO service. While establishing unlinkable identities provides privacy towards RPs, it actually emphasizes the main privacy problem of SSO: with every authentication request, the IdP learns the RP that the user wants to access. Solutions to overcome this limitation exist, but either assume users to behave honestly or require them to manage long-term cryptographic keys. In this work, we propose the first SSO system that can provide such pseudonymous authentication in an unobservable yet strongly secure and convenient manner. That is, the IdP blindly derives the user's pairwise pseudonym for the targeted RP without learning the RP's identity and without requiring key material handled by the user. We formally define the desired security and privacy properties for such unlinkable, unobservable, and strongly secure SSO. In particular, our model includes the often neglected RP authentication: the IdP typically wants to limit its services to registered RPs only and thus must be able to (blindly) verify that it issues the token and pseudonym to such a registered RP. We propose a simple construction that combines signatures with efficient proofs-of-knowledge with a blind, yet verifiable, evaluation of the Hashed-Diffie-Hellman PRF. We prove the security of our construction and demonstrate its efficiency through a prototypical implementation, which requires a running time of 2-12ms per involved party.
Last updated:  2025-02-24
POKÉ: A Compact and Efficient PKE from Higher-dimensional Isogenies
Andrea Basso and Luciano Maino
We introduce a new PKE protocol, POKÉ, based on isogenies of unknown degree. The protocol relies on two new techniques: the first constructs an SIDH square while also working with higher-dimensional representations, whereas the second allows us to obtain a shared secret even when all curves in the commutative diagram are known. The resulting protocol is compact and extremely efficient. We provide a proof-of-concept implementation in SageMath of POKÉ that shows encryption and decryption taking about a hundred milliseconds at security level I: POKÉ is thus the most efficient encryption protocol from isogenies, and it outperforms existing protocols by more than an order of magnitude.
Last updated:  2025-02-24
How to Share an NP Statement or Combiners for Zero-Knowledge Proofs
Benny Applebaum and Eliran Kachlon
In Crypto'19, Goyal, Jain, and Sahai (GJS) introduced the elegant notion of *secret-sharing of an NP statement* (NPSS). Roughly speaking, a t-out-of-n secret sharing of an NP statement is a reduction that maps an instance-witness pair to n instance-witness pairs, such that any subset of t-1 of them reveals no information about the original witness, while any subset of t allows full recovery of the original witness. Although the notion was formulated for general thresholds, the only existing construction (due to GJS) applies solely to the case where t = n and provides only computational privacy. In this paper, we further explore NPSS and present the following contributions. 1. **Definition.** We introduce a refined definition of information-theoretically secure NPSS. This notion can be seen as a cryptographic variant of standard NP-reductions and can be compiled into the GJS definition using any one-way function. 2. **Construction.** We construct information-theoretic t-out-of-n NPSS for any values of t and n, with complexity polynomial in n. Along the way, we present a new notion of secure multiparty computation that may be of independent interest. 3. **Applications.** Our NPSS framework enables the *non-interactive combination* of n instances of zero-knowledge proofs, where only t_s of them are sound and only t_zk are zero-knowledge, provided that t_s + t_zk > n. Our combiner preserves various desirable properties, such as the succinctness of the proof. Building on this, we establish the following results under the minimal assumption of one-way functions: (i) *Standard NIZK implies NIZK in the Multi-String Model* (Groth and Ostrovsky, J. Cryptology, 2014), where security holds as long as a majority of the common reference strings were honestly generated. Previously, such a transformation was only known in the common random string model, where the reference string is uniformly distributed. (ii) A *Designated-Prover NIZK in the Multi-String Model*, achieving a strong form of two-round Multi-Verifier Zero-Knowledge in the honest-majority setting. (iii) A *three-round secure multiparty computation protocol* for general functions in the honest-majority setting. The round complexity of this protocol is optimal, resolving a line of research that previously relied on stronger assumptions (Aharonov et al., Eurocrypt'12; Gordon et al., Crypto'15; Ananth et al., Crypto'18; Badrinarayanan et al., Asiacrypt'20; Applebaum et al., TCC'22).
Last updated:  2025-02-24
Leap: A Fast, Lattice-based OPRF With Application to Private Set Intersection
Lena Heimberger, Daniel Kales, Riccardo Lolato, Omid Mir, Sebastian Ramacher, and Christian Rechberger
Oblivious pseudorandom functions (OPRFs) are an important primitive in privacy-preserving cryptographic protocols. The growing interest in OPRFs, both in theory and practice, has led to the development of numerous constructions and variations. However, most of these constructions rely on classical assumptions. Potential future quantum attacks may limit the practicality of those OPRFs for real-world applications. To close this gap, we introduce Leap, a novel OPRF based on heuristic lattice assumptions. Fundamentally, Leap builds upon the Spring [BBL+15] pseudorandom function (PRF), which relies on the learning with rounding assumption, and integrates techniques from multi-party computation, specifically Oblivious Transfer (OT) and Oblivious Linear Evaluation (OLE). With this combination of oblivious protocols, we construct an OPRF that evaluates in less than a millisecond on a modern computer. Efficiency-wise, our prototype implementation achieves computation times of just 11 microseconds for the client and 750 microseconds for the server, excluding some base OT preprocessing overhead. Moreover, Leap requires an online communication cost of 23 kB per evaluation, where the client only has to send around 380 bytes online. To demonstrate the practical applicability of Leap, we present an efficient private set intersection (PSI) protocol built on top of Leap. This application highlights the potential for the integration of Leap into various privacy-preserving applications: We can compute an unbalanced set intersection with set sizes of 2^24 and 2^15 in under a minute of online time and just over two minutes overall.
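To illustrate how an OPRF yields PSI, here is a toy sketch that deliberately swaps Leap's lattice-based OPRF for a classical Diffie-Hellman-style one with tiny, insecure parameters, so that only the PSI layer is shown: the server publishes PRF tags of its set, the client obtains PRF tags of its own items obliviously, and the intersection falls out by comparing tags:

    import hashlib, random

    p, q = 2039, 1019   # toy safe-prime group (squares mod p); illustration only

    def H(x: str) -> int:
        h = int.from_bytes(hashlib.sha256(x.encode()).digest(), "big")
        return pow(h % p, 2, p)

    def oprf_server_direct(k, x):          # server evaluates on its own inputs
        return pow(H(x), k, p)

    def oprf_client(k, x):                 # blinded evaluation on a client input
        r = random.randrange(1, q)
        blinded = pow(H(x), r, p)          # client -> server: a random group element
        resp = pow(blinded, k, p)          # server -> client
        return pow(resp, pow(r, -1, q), p) # client unblinds: H(x)^k

    k = random.randrange(1, q)             # server's OPRF key
    server_set = {"alice@example.com", "bob@example.com"}
    client_set = {"bob@example.com", "carol@example.com"}

    server_tags = {oprf_server_direct(k, x) for x in server_set}
    matches = {y for y in client_set if oprf_client(k, y) in server_tags}
    print(matches)  # {'bob@example.com'}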
Last updated:  2025-02-24
Multiple Group Action Dlogs with(out) Precomputation
Alexander May and Massimo Ostuzzi
Let ⋆ be the action of a group G of size N on a set X. Let y = g ⋆ x be a group action dlog instance, where our goal is to compute the unknown group element g from the known set elements x and y. The Galbraith-Hess-Smart (GHS) collision-finding algorithm solves the group action dlog in about √N steps with polynomial memory. We show that group action dlogs are suitable for precomputation attacks. More precisely, our precomputation algorithm computes a hint that allows solving any group action dlog instance in an online phase substantially faster than GHS, with a smooth tradeoff between precomputation time, hint size, and online time. Moreover, we show that solving multiple group action dlog instances allows for speedups. Namely, our collision-finding algorithm solves m group action dlogs in about √(mN) steps, instead of the straightforward m·√N steps required for running GHS m times. Interestingly, our multi-instance algorithm (with precomputation) can be seen as a special case of our precomputation algorithm. Our multiple-instance approach can be freely combined with our precomputations, allowing for a variety of tradeoffs. Technically, our precomputation and multiple-instance group action dlog attacks are adaptations of the techniques from the standard dlog setting in abelian groups. While such an adaptation seems natural, it is per se unclear which techniques transfer from the dlog to the more general group action dlog setting, for which X does not offer a group structure. Our algorithms have direct implications for all group-action-based cryptosystems, such as CSIDH and its variants. We provide experimental evidence that our techniques work well in the CSIDH setting.
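The GHS collision-finding idea can be demonstrated on a toy action, here modular addition of Z_N on Z_N, chosen only because it is cheap to evaluate. This instance is of course trivially solvable directly; the code treats the action as a black box and uses only about √N evaluations per side:

    import math, random

    N = 1_000_003
    def act(g, s):                 # toy action of Z_N on Z_N; stands in for, e.g., CSIDH
        return (g + s) % N

    x = 42
    g_secret = random.randrange(N)
    y = act(g_secret, x)           # public instance: find g with g * x = y

    table = {}                     # birthday search: random actions on each side
    for _ in range(2 * math.isqrt(N)):
        a = random.randrange(N)
        table[act(a, x)] = a
    while True:
        b = random.randrange(N)
        hit = table.get(act(b, y))
        if hit is not None:
            g = (hit - b) % N      # a + x = b + g + x  =>  g = a - b in this toy group
            break
    assert act(g, x) == y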
Last updated:  2025-02-24
Black-box Collision Attacks on Widely Deployed Perceptual Hash Functions
Diane Leblanc-Albarel and Bart Preneel
Perceptual hash functions identify multimedia content by mapping similar inputs to similar outputs. They are widely used for detecting copyright violations and illegal content but lack transparency, as their design details are typically kept secret. Governments are considering extending the application of these functions to Client-Side Scanning (CSS) for end-to-end encrypted services: multimedia content would be verified against known illegal content before applying encryption. In 2021, Apple presented a detailed proposal for CSS based on the NeuralHash perceptual hash function. After strong criticism pointing out privacy and security concerns, Apple has withdrawn the proposal, but the NeuralHash software is still present on Apple devices. Brute-force collisions for NeuralHash (with a 96-bit result) require about 2^48 evaluations by the birthday bound. Shortly after the publication of NeuralHash, it was demonstrated that it is easy to craft two colliding inputs for NeuralHash that are perceptually dissimilar. In the context of CSS, this means that it is easy to falsely incriminate someone by sending an innocent picture with the same hash value as illegal content. This work shows a more serious weakness: when inputs are restricted to a set of human faces, random collisions are highly likely to occur already for input sets of practically achievable size. Unlike the targeted attack, our attacks are black-box attacks: they do not require knowledge of the design of the perceptual hash functions. In addition, we show that the false negative rate is high as well. We demonstrate the generality of our approach by applying a similar attack to PhotoDNA, a widely deployed perceptual hash function proposed by Microsoft with a hash result of 1152 bits. Here we show that specific small input sets result in near-collisions, with similar impact. These results imply that the current designs of perceptual hash functions are completely unsuitable for large-scale client scanning, as they would result in an unacceptably high false positive rate. This work underscores the need to reassess the security and feasibility of perceptual hash functions, particularly for large-scale applications where privacy risks and false positives have serious consequences.
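The quoted brute-force cost is just the birthday bound, and the attacks above succeed at far smaller set sizes precisely because perceptual hashes of natural inputs are highly non-uniform. For reference, the ideal-case arithmetic (ours, not the paper's data):

    import math

    def collision_probability(num_inputs: int, hash_bits: int) -> float:
        # birthday approximation: 1 - exp(-m(m-1) / 2^(bits+1))
        m = num_inputs
        return 1.0 - math.exp(-m * (m - 1) / (2.0 ** (hash_bits + 1)))

    print(collision_probability(2**48, 96))  # ~0.39: an ideal 96-bit hash needs ~2^48 inputs
    print(collision_probability(2**20, 96))  # 0.0 at double precision: negligible for an
                                             # ideal hash, yet real perceptual hashes
                                             # collide at such set sizes in practice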
Last updated:  2025-02-24
Improved Cryptanalysis of SNOVA
Ward Beullens
SNOVA is a multivariate signature scheme submitted to the NIST project for additional signature schemes by Cho, Ding, Kuan, Li, Tseng, Tseng, and Wang. With small key and signature sizes and good performance, SNOVA is one of the more efficient schemes in the competition, which makes SNOVA an important target for cryptanalysis. In this paper, we observe that SNOVA implicitly uses a structured version of the "whipping" technique developed for the MAYO signature scheme. We show that the extra structure makes the construction vulnerable to new forgery attacks. Concretely, we formulate new attacks that reduce the security margin of the proposed SNOVA parameter sets by a substantial factor. Furthermore, we show that large fractions of public keys are vulnerable to more efficient versions of our attack. For example, for SNOVA-37-17-2, a parameter set targeting NIST's first security level, we show that a noticeable fraction of public keys is vulnerable to a universal forgery attack with significantly reduced bit complexity, and that some public keys are even breakable in practice within a few minutes.
Last updated:  2025-02-24
Committing Authenticated Encryption: Generic Transforms with Hash Functions
Shan Chen and Vukašin Karadžić
Recent applications and attacks have highlighted the need for authenticated encryption (AE) schemes to achieve so-called committing security beyond privacy and authenticity. As a result, several generic solutions have been proposed to transform a non-committing AE scheme into a committing one, for both basic unique-nonce security and advanced misuse-resistant (MR) security. We observe that all existing practical generic transforms are subject to at least one of the following limitations: (i) not committing to the entire encryption context, (ii) involving non-standard primitives, (iii) not being a black-box transform, (iv) providing limited committing security. Furthermore, so far, there has been no generic transform that can directly elevate a basic AE scheme to a committing AE scheme that offers MR security. Our work fills these gaps by developing black-box generic transforms that crucially rely on hash functions, which are well standardized and widely deployed. First, we construct three basic transforms that combine AE with a single hash function. They all guarantee strong security, and can be applied to both AE and basic privacy-only encryption schemes. Next, for MR security, we propose two advanced hash-based transforms: an MRAE-preserving transform that adds committing security to an MR-secure AE scheme, and an MRAE-lifting transform, the first generic transform that can directly elevate basic AE to one with both committing and MR security; the latter also works with arbitrary privacy-only encryption schemes. Both feature a simple design and ensure strong security. For performance evaluation, we compare our transforms to similar existing ones, both in theory and through practical implementations. The results show that our transforms achieve the highest practical efficiency among basic and MRAE-preserving transforms, respectively, while our MRAE-lifting transform demonstrates performance comparable to MRAE-preserving ones and surpasses them for sufficiently long messages, where it even outperforms the non-committing standardized benchmark.
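The hash-based recipe can be sketched generically: take any AE scheme and append a hash commitment over the encryption context, so that a ciphertext cannot be "explained" under a second key without a hash collision. The sketch below shows the general shape only, with a toy encrypt-then-MAC AE standing in for the real scheme; it is not one of the paper's named transforms:

    import hashlib, hmac, secrets

    def prf_stream(key: bytes, nonce: bytes, n: int) -> bytes:
        out, ctr = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
            ctr += 1
        return out[:n]

    def ae_encrypt(key, nonce, ad, msg):
        # toy AE: stream-XOR + HMAC (encrypt-then-MAC); a stand-in for any AE scheme
        body = bytes(m ^ k for m, k in zip(msg, prf_stream(key, nonce, len(msg))))
        tag = hmac.new(key, nonce + ad + body, hashlib.sha256).digest()
        return body + tag

    def committing_encrypt(key, nonce, ad, msg):
        # committing wrapper: append a hash commitment to the encryption context
        commitment = hashlib.sha256(b"commit" + key + nonce + ad).digest()
        return ae_encrypt(key, nonce, ad, msg) + commitment

    def committing_decrypt(key, nonce, ad, blob):
        ct, commitment = blob[:-32], blob[-32:]
        expected = hashlib.sha256(b"commit" + key + nonce + ad).digest()
        if not hmac.compare_digest(commitment, expected):
            raise ValueError("context commitment mismatch (wrong key/nonce/AD)")
        body, tag = ct[:-32], ct[-32:]
        if not hmac.compare_digest(tag, hmac.new(key, nonce + ad + body, hashlib.sha256).digest()):
            raise ValueError("bad authentication tag")
        return bytes(c ^ k for c, k in zip(body, prf_stream(key, nonce, len(body))))

    key, nonce, ad = secrets.token_bytes(32), secrets.token_bytes(12), b"header"
    blob = committing_encrypt(key, nonce, ad, b"hello")
    assert committing_decrypt(key, nonce, ad, blob) == b"hello"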
Last updated:  2025-02-24
Efficient Instances of Docked Double Decker With AES, and Application to Authenticated Encryption
Christoph Dobraunig, Krystian Matusiewicz, Bart Mennink, and Alexander Tereschenko
A tweakable wide blockcipher is a construction which behaves in the same way as a tweakable blockcipher, with the difference that the actual block size is flexible. Due to this feature, a tweakable wide blockcipher can be directly used as a strong encryption scheme that provides full diffusion when encrypting plaintexts to ciphertexts and vice versa. Furthermore, it can be the basis of authenticated encryption schemes fulfilling the strongest security notions. In this paper, we present three instantiations of the docked double decker tweakable wide blockcipher. These instances exclusively use building blocks similar to those of AES-GCM (AES and finite field multiplication), are designed for maximal parallelism, and hence can make efficient use of existing hardware accelerators. The first instance is birthday-bound secure, and the second is an immediate generalization that allows for variable-length tweaks. The third achieves security beyond the birthday bound provided that the same tweak is not used too often; moreover, it builds upon a novel conditionally beyond-birthday-bound secure pseudorandom function, a tweakable variant of the XOR of permutations, accommodating the need to include a tweak in the AES evaluations without sacrificing flexibility in docked double decker. We furthermore introduce an authenticated encryption mode specifically tailored to be instantiated with the first and third instances, where special attention is given to how the nonce and associated data can be processed. We prove that this mode is secure in the nonce-respecting setting, in the nonce-misuse setting, as well as in the setting where random nonces are used.
Last updated:  2025-02-24
Partial and Fully Homomorphic Matching of IP Addresses Against Blacklists for Threat Analysis
William J Buchanan and Hisham Ali
In many areas of cybersecurity, we require access to Personally Identifiable Information (PII), such as names, postal addresses and email addresses. Unfortunately, this can lead to data breaches, especially in relation to data compliance regulations such as GDPR. An IP address is a typical identifier which is used to map a network address to a person. Thus, in applications which are privacy-aware, we may aim to hide the IP address while determining whether the address appears on a blacklist. One solution is to use homomorphic encryption to match an encrypted version of an IP address against an encrypted version of a blacklisted network list. In this paper, we use the OpenFHE library to implement this matching under the BFV homomorphic encryption scheme. In order to assess the performance impact of BFV, we compare this implementation against the partially homomorphic methods of Paillier, Damgard-Jurik, Okamoto-Uchiyama, Naccache-Stern and Benaloh. The main finding is that the BFV method compares favourably against the partially homomorphic methods in most cases.
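As a concrete picture of the partially homomorphic route, here is a toy Paillier-based equality test between an encrypted IP address and a blacklist entry: subtract homomorphically, blind with a random exponent, and decrypt; the result is 0 exactly on a match. The primes are tiny and the protocol is simplified (who holds the decryption key, and randomness hygiene, are glossed over):

    import math, random

    # toy Paillier keypair; primes far too small for real use
    p_, q_ = 65537, 65539
    n = p_ * q_
    n2 = n * n
    lam = math.lcm(p_ - 1, q_ - 1)
    mu = pow(lam, -1, n)                       # valid because we use g = n + 1

    def enc(m):
        while True:
            r = random.randrange(1, n)
            if math.gcd(r, n) == 1:            # r must be a unit mod n
                return pow(n + 1, m, n2) * pow(r, n, n2) % n2

    def dec(c):
        return (pow(c, lam, n2) - 1) // n * mu % n

    def ip_to_int(ip):
        a, b, c, d = map(int, ip.split("."))
        return (a << 24) | (b << 16) | (c << 8) | d

    def blinded_difference(c_ip, entry):
        # E(ip) * E(entry)^-1 = E(ip - entry); raising to a random power gives
        # E(r * (ip - entry)), which decrypts to 0 iff ip == entry
        c_diff = c_ip * pow(enc(entry), -1, n2) % n2
        return pow(c_diff, random.randrange(1, n), n2)

    c = enc(ip_to_int("203.0.113.7"))
    for entry in ("198.51.100.1", "203.0.113.7"):
        print(entry, "->", dec(blinded_difference(c, ip_to_int(entry))) == 0)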