Committing Authenticated Encryption: Generic Transforms with Hash Functions
Recent applications and attacks have highlighted the need for authenticated encryption (AE) schemes to achieve the so-called committing security beyond privacy and authenticity. As a result, several generic solutions have been proposed to transform a non-committing AE scheme to a committing one, for both basic unique-nonce security and advanced misuse-resistant (MR) security. We observe that all existing practical generic transforms are subject to at least one of the following limitations: (i) not committing to the entire encryption context, (ii) involving non-standard primitives, (iii) not being a black-box transform, (iv) providing limited committing security. Furthermore, so far, there has been no generic transform that can directly elevate a basic AE scheme to a committing AE scheme that offers MR security. Our work fills these gaps by developing black-box generic transforms that crucially rely on hash functions, which are well standardized and widely deployed.
First, we construct three basic transforms that combine AE with a single hash function, which we call and . They all guarantee strong security, and can be applied to both AE and basic privacy-only encryption schemes. Next, for MR security, we propose two advanced hash-based transforms that we call and . is an MRAE-preserving transform that adds committing security to an MR-secure AE scheme. is the first generic transform that can directly elevate basic AE to one with both committing and MR security; moreover, also works with arbitrary privacy-only encryption schemes. Both of them feature a simple design and ensure strong security.
For performance evaluation, we compare our transforms to similar existing ones, both in theory and through practical implementations. The results show that our achieves the highest practical efficiency among basic transforms, while excels in MRAE-preserving transforms. Our MRAE-lifting transform demonstrates comparable performance to MRAE-preserving ones and surpasses them for messages larger than approximately bytes; for longer messages, it even outperforms the benchmark, non-committing standardized .
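The general flavor of a hash-based committing transform can be illustrated with a toy sketch (this is not any of the paper's transforms): append a keyed hash over the entire encryption context — key, nonce, associated data, ciphertext — so the ciphertext commits to all of them. The SHA-256 counter-mode cipher below is a throwaway stand-in for the underlying AE scheme, used purely for illustration:

```python
import hashlib
import hmac

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy CTR-style keystream from SHA-256; illustrative only, not a vetted cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def committing_encrypt(key: bytes, nonce: bytes, ad: bytes, msg: bytes):
    ct = bytes(m ^ k for m, k in zip(msg, _keystream(key, nonce, len(msg))))
    # The tag commits to the *entire* encryption context: key, nonce, AD, ciphertext.
    tag = hmac.new(key, nonce + ad + ct, hashlib.sha256).digest()
    return ct, tag

def committing_decrypt(key: bytes, nonce: bytes, ad: bytes, ct: bytes, tag: bytes) -> bytes:
    expected = hmac.new(key, nonce + ad + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("commitment/authentication failure")
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))
```

Because the tag is a hash binding all inputs, two distinct contexts cannot explain the same (ciphertext, tag) pair without a hash collision — the property the paper's transforms achieve with provable guarantees.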
Efficient Instances of Docked Double Decker With AES, and Application to Authenticated Encryption
A tweakable wide blockcipher is a construction that behaves in the same way as a tweakable blockcipher, with the difference that the actual block size is flexible. Due to this feature, a tweakable wide blockcipher can be directly used as a strong encryption scheme that provides full diffusion when encrypting plaintexts to ciphertexts and vice versa. Furthermore, it can be the basis of authenticated encryption schemes fulfilling the strongest security notions. In this paper, we present three instantiations of the docked double decker tweakable wide blockcipher: , , and . These instances exclusively use building blocks similar to those of AES-GCM (AES and finite field multiplication), are designed for maximal parallelism, and hence can make efficient use of existing hardware accelerators. is a birthday bound secure scheme, and is an immediate generalization that allows for variable length tweaks. achieves security beyond the birthday bound provided that the same tweak is not used too often. Moreover, builds upon a novel conditionally beyond birthday bound secure pseudorandom function, a tweakable variant of the XOR of permutations, accommodating the need to include a tweak in the AES evaluations without sacrificing flexibility in docked double decker. We furthermore introduce an authenticated encryption mode specifically tailored to be instantiated with and , with special attention given to how the nonce and associated data can be processed. We prove that this mode is secure in the nonce-respecting setting, in the nonce-misuse setting, and in the setting where random nonces are used.
Partial and Fully Homomorphic Matching of IP Addresses Against Blacklists for Threat Analysis
In many areas of cybersecurity, we require access to Personally Identifiable Information (PII), such as names, postal addresses and email addresses. Unfortunately, this can lead to data breaches, especially in relation to data compliance regulations such as GDPR. An IP address is a typical identifier which is used to map a network address to a person. Thus, in privacy-aware applications, we may aim to hide the IP address while still determining whether the address appears on a blacklist. One solution is to use homomorphic encryption to match an encrypted version of an IP address against an encrypted version of a blacklisted network list. In this paper, we use the OpenFHE library to encrypt network addresses with the BFV homomorphic encryption scheme. To assess the performance impact of BFV, we implement a matching method using the OpenFHE library and compare it against the partial homomorphic methods of Paillier, Damgård-Jurik, Okamoto-Uchiyama, Naccache-Stern and Benaloh. The main findings are that the BFV method compares favourably against the partial homomorphic methods in most cases.
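The partial-homomorphic side of this comparison can be sketched with a textbook Paillier implementation (toy parameters, not the paper's OpenFHE/BFV code): subtract two encrypted addresses homomorphically, blind the difference with a random exponent, and check whether the decryption is zero:

```python
import ipaddress
import random
from math import gcd

# Toy Paillier parameters (far too small for real use, but > 2^32 so a
# full IPv4 address fits in one plaintext).
p, q = 104729, 1299709                # two distinct primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while gcd(r, n) != 1:             # r must be a unit mod n
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c: int) -> int:
    return (pow(c, lam, n2) - 1) // n * mu % n

def blinded_difference(c1: int, c2: int) -> int:
    # E(m1) * E(m2)^(n-1) encrypts m1 - m2; raising to a random t blinds
    # the difference so only "equal / not equal" leaks on decryption.
    t = random.randrange(1, n)
    return pow(c1 * pow(c2, n - 1, n2) % n2, t, n2)

def matches(c_addr: int, c_black: int) -> bool:
    return decrypt(blinded_difference(c_addr, c_black)) == 0

def ip_int(s: str) -> int:
    return int(ipaddress.IPv4Address(s))
```

In a real deployment the decryption key would be held by the querying party (or split), so the matcher learns nothing beyond the blinded zero/nonzero outcome.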
Private Multi-Party Neural Network Training over via Galois Rings
Secret-sharing-based multi-party computation provides effective solutions for privacy-preserving machine learning. In this paper, we present novel protocols for privacy-preserving neural network training using the Shamir secret sharing scheme over Galois rings. The specific Galois ring we use is , which contains as a subring. The algebraic structure of enables us to benefit from the Shamir scheme while performing modulo operations only on instead of a prime number, making our protocols more compatible with modern computer architectures. We achieve parallel processing of training data by embedding different training samples into the different coefficients of the polynomial representing a single Galois ring element, and we show that this embedding can be performed with no additional communication overhead compared to processing only one sample at a time. To evaluate our methods, we conduct private training of neural networks on the MNIST dataset between different numbers of participants. The experimental results indicate the advantages of our protocols compared to existing -based implementations in this domain.
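The coefficient-packing idea can be sketched minimally (with hypothetical parameters k = 32 and extension degree d = 4, not necessarily those of the paper): a Galois ring element is a degree-below-d polynomial with coefficients mod 2^k, and since ring addition and base-ring scalar multiplication act coefficient-wise, one ring operation processes d packed samples at once:

```python
K = 2 ** 32   # base ring Z_{2^32}; k = 32 is an assumed parameter
D = 4         # assumed extension degree: one ring element packs D samples

def pack(samples):
    # Embed D samples as the coefficients of one Galois ring element.
    assert len(samples) == D
    return [s % K for s in samples]

def add(x, y):
    # Ring addition is coefficient-wise, i.e. SIMD over the packed samples.
    return [(a + b) % K for a, b in zip(x, y)]

def scalar_mul(c, x):
    # Multiplication by a base-ring scalar is also coefficient-wise.
    return [(c * a) % K for a in x]
```

Full ring multiplication would additionally reduce modulo the defining polynomial and therefore mixes coefficients; handling that inside Shamir-based MPC is where the paper's protocols do the real work.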
Honest Majority MPC with Communication in Minicrypt
In this work, we consider the communication complexity of MPC protocols in the honest majority setting achieving malicious security, in both the information-theoretic setting and the computational setting. On the one hand, we study the possibility of efficiently basing honest majority MPC protocols on the oblivious linear evaluation (OLE)-hybrid model with information-theoretic security. More precisely, we instantiate the preprocessing phase of the recent work Sharing Transformation (Goyal, Polychroniadou, and Song, CRYPTO 2022) assuming random OLE correlations. Notably, we are able to prepare packed Beaver triples with malicious security, achieving amortized communication of field elements plus a number of OLE correlations per packed Beaver triple, which is the best known result. To further prepare random OLE correlations efficiently, we resort to IKNP-style OT extension protocols (Ishai et al., CRYPTO 2003) in the random oracle model.
On the other hand, we derive a communication lower bound for preparing OLE correlations in the information-theoretic setting based on negative results due to Damgård, Larsen, and Nielsen (CRYPTO 2019).
Combining our positive result with the work of Goyal, Polychroniadou, and Song (CRYPTO 2022), we derive an MPC protocol with amortized communication of elements per gate in the random oracle model achieving malicious security, where denotes the length of a field element and is the security parameter.
SNARKs for Virtual Machines are Non-Malleable
Cryptographic proof systems have a plethora of applications: from building other cryptographic tools (e.g., malicious security for MPC protocols) to concrete settings such as private transactions or rollups. In several settings it is important for proof systems to be non-malleable: an adversary should not be able to modify a proof they have observed into another for a statement for which they do not know the witness.
Proof systems that have been deployed in practice should arguably satisfy this notion: it is crucial in settings such as transaction systems and in order to securely compose proofs with other cryptographic protocols. As a consequence, results on non-malleability should keep up with designs of proofs being deployed.
Recently, Arun et al. proposed (Eurocrypt 2024), arguably the first efficient proof system whose architecture is based on the lookup singularity approach (Barry Whitehat, 2022). This approach consists of representing a general computation as a series of table lookups. The final result is a SNARK for a Virtual Machine execution (or SNARK VM). Both SNARK VMs and lookup-singularity SNARKs are architectures with enormous potential and will likely see increasing adoption in the coming years (indeed, adoption has already begun).
As of today, however, there is no literature regarding the non-malleability of SNARK VMs. The goal of this work is to fill this gap by providing both concrete non-malleability results and a set of technical tools for a more general study of SNARK VM security (as well as of "modular" SNARKs in general). As a concrete result, we study the non-malleability of (an idealized version of) and its fundamental building block, the lookup argument . While connecting our new result on the non-malleability of to that of , we develop a set of tools that enable the composition of non-malleable SNARKs. We believe this toolbox to be valuable in its own right.
(Multi-Input) FE for Randomized Functionalities, Revisited
Randomized functional encryption (rFE) generalizes functional encryption (FE) by incorporating randomized functionalities. Randomized multi-input functional encryption (rMIFE) extends rFE to accommodate multi-input randomized functionalities.
In this paper, we reassess the framework of rFE/rMIFE enhancing our understanding of this primitive and laying the groundwork for more secure and flexible constructions in this field. Specifically, we make three key contributions:
- New definition: We identify a critical gap in the existing indistinguishability-based (IND) security definition for rFE/rMIFE. Notably, the current definition fails to adequately address security against malicious encryptors—a crucial requirement for rFE/rMIFE since their introduction. We propose a novel, robust IND security definition that not only addresses threats from malicious decryptors but also effectively captures security against malicious encryptors.
- Counterexample: To illustrate the importance of this definitional gap, we provide a counterexample of an insecure rFE scheme that meets IND security under the previous definition but explicitly fails in a natural setting (and where this failure would be precluded by our enhanced definition). Our counterexample scheme is non-trivial and meticulously designed using standard cryptographic tools, namely FE for deterministic functions, pseudorandom function (PRF), public key encryption (PKE), and simulation-sound non-interactive zero-knowledge (NIZK) proof systems.
- Adaptive unbounded-message secure construction: The only viable prior construction of rMIFE by Goldwasser et al. [EUROCRYPT 2014] (which uses indistinguishability obfuscation (iO) and other standard assumptions) has significant limitations: it permits only a pre-defined number of messages per encryption slot and operates under selective-security constraints, requiring adversaries to declare challenge ciphertext queries and "corrupted" encryption keys in advance. We address these shortcomings by employing sub-exponentially secure iO. Technically, we build on and adapt methods developed by Goyal et al. [ASIACRYPT 2016] for deterministic MIFE.
Malleable SNARKs and Their Applications
Succinct non-interactive arguments of knowledge (SNARKs) are variants of non-interactive zero-knowledge proofs (NIZKs) in which complex statements can be proven in a compact way. SNARKs have had tremendous impact in several areas of cryptography, including verifiable computing, blockchains, and anonymous communication. A recurring concept in many applications is the concept of recursive SNARKs, in which a proof references a previous proof to show an evolved statement.
In this work, we investigate malleable SNARKs, a generalization of this concept of recursion. An adaptation of the existing concept of malleable NIZKs, malleable SNARKs allow one to modify SNARK proofs to show related statements, such that the mauled proofs are indistinguishable from “properly generated” fresh proofs of the related statement. We show how to instantiate malleable SNARKs for universal languages and relations, and give a number of applications: the first post-quantum RCCA-secure rerandomizable and updatable encryption schemes, a generic construction of reverse firewalls, and an unlinkable (i.e., computation-hiding) targeted malleable homomorphic encryption scheme.
Technically, our malleable SNARK construction relies on recursive proofs, but with a twist: in order to support the strong indistinguishability properties of mauled and fresh SNARK proofs, we need to allow an unbounded recursion depth. To still allow for a reasonable notion of extractability in this setting (and in particular to guarantee that extraction eventually finishes with a “proper” witness that does not refer to a previous SNARK proof), we rely on a new and generic computational primitive called adversarial one-way function (AOWF) that may be of independent interest. We give an AOWF candidate and prove it secure in the random oracle model.
Weakly Super-Invertible Matrices and Constant Communication Dishonest Majority MPC
In recent years, there has been tremendous progress in improving the concrete communication complexity of dishonest majority MPC. In the sub-optimal corruption threshold setting where for some constant , Sharing Transformation (Goyal et al., CRYPTO'22) and SuperPack (Escudero et al., EUROCRYPT'23) presented protocols with information-theoretic online phases requiring field elements of total communication per multiplication gate. However, Sharing Transformation assumes that their offline phase is instantiated by a trusted party, while SuperPack instantiates their offline phase with large communication of per multiplication gate assuming oblivious linear evaluation (OLE) correlations. The main bottleneck in instantiating the offline phases of both protocols is generating random packed Beaver triples of the form , for random , and , where is the .
To address this bottleneck, our main technical contribution is introducing and constructing weakly super-invertible matrices, a relaxation of super-invertible matrices in which sub-matrices have high (but not necessarily full) rank. This relaxation allows for matrices with only non-zero entries, enabling a first step towards generating packed Beaver triples with total communication per underlying triple, assuming OLE correlations. As the second (and final) step, we use the efficient protocol of (Choudhury and Patra, Trans. Inform. Theory '17).
We also implement our packed Beaver triple protocol and provide experimental results. Our new protocol obtains up to 38% smaller communication and a 9% reduction in runtime compared to SuperPack's triple protocol. Additionally, by instantiating SuperPack's offline phase with our new protocol, we obtain up to 16% communication reductions.
Finally, we use our packed Beaver triple protocol to instantiate the offline phase of Sharing Transformation, yielding a dishonest majority MPC protocol with total communication across both the offline and online phases.
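For readers unfamiliar with the primitive being generated, a plain (unpacked) Beaver triple lets parties multiply additively shared secrets by opening only masked values. The sketch below shows the online use of one triple over a small prime field; the paper's contribution is generating such triples cheaply, which this sketch sidesteps with a trusted dealer:

```python
import random

P = 2 ** 31 - 1  # a Mersenne prime field, chosen only for illustration
N = 3            # number of parties

def share(x):
    # Additive secret sharing: N random-looking shares summing to x mod P.
    shares = [random.randrange(P) for _ in range(N - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

def beaver_multiply(x_sh, y_sh, a_sh, b_sh, c_sh):
    # Open the masked values d = x - a and e = y - b (this is all that is
    # ever revealed; a and b mask x and y perfectly).
    d = reconstruct([(x - a) % P for x, a in zip(x_sh, a_sh)])
    e = reconstruct([(y - b) % P for y, b in zip(y_sh, b_sh)])
    # Each party computes z_i = c_i + d*b_i + e*a_i; one party adds d*e.
    z_sh = [(c + d * b + e * a) % P for a, b, c in zip(a_sh, b_sh, c_sh)]
    z_sh[0] = (z_sh[0] + d * e) % P
    return z_sh

def dealer_triple():
    # Trusted-dealer triple generation: the expensive step the paper optimizes.
    a, b = random.randrange(P), random.randrange(P)
    return share(a), share(b), share(a * b % P)
```

Expanding c + d·b + e·a + d·e with d = x − a and e = y − b telescopes to x·y, which is why one precomputed triple suffices per multiplication gate.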
Fully Asymmetric Anamorphic Homomorphic Encryption from LWE
As introduced by Persiano et al. (Eurocrypt'22), anamorphic encryption (AE) is a primitive enabling private communications against a dictator that forces users to surrender their decryption keys. In its fully asymmetric flavor (defined by Catalano et al., Eurocrypt'24), anamorphic channels can work as hidden public-key mechanisms in the sense that anamorphic encryptors are not necessarily able to decrypt anamorphic ciphertexts. Unfortunately, fully asymmetric AE is hard to come by and even impossible to obtain from ordinary public-key encryption via black-box constructions. So far, only three schemes are known to rely on well-established assumptions. In this paper, we exhibit constructions from the standard LWE assumption based on Regev's cryptosystem and its dual version. In both cases, we retain the additive homomorphism of the schemes. We additionally show that dual Regev is public-key anamorphic in the sense of Persiano et al. (Crypto'24). In the FHE setting, we show that the dual GSW system provides fully asymmetric AE (while preserving its leveled homomorphism) when instantiated with binary/ternary secret keys. Along the way, we discuss the extent to which our schemes satisfy a generalization of Banfi et al.'s notion of robustness (Eurocrypt'24) to the case of homomorphically evaluated ciphertexts.
Unconditional foundations for supersingular isogeny-based cryptography
In this paper, we prove that the supersingular isogeny problem (Isogeny), endomorphism ring problem (EndRing) and maximal order problem (MaxOrder) are equivalent under probabilistic polynomial time reductions, unconditionally.
Isogeny-based cryptography is founded on the presumed hardness of these problems, and their interconnection is at the heart of the design and analysis of cryptosystems like the SQIsign digital signature scheme. Previously known reductions relied on unproven assumptions such as the generalized Riemann hypothesis. In this work, we present unconditional reductions, and extend this network of equivalences to the problem of computing the lattice of all isogenies between two supersingular elliptic curves (HomModule).
For cryptographic applications, one requires computational problems to be hard on average for random instances. It is well-known that if Isogeny is hard (in the worst case), then it is hard for random instances. We extend this result by proving that if any of the above-mentioned classical problems is hard in the worst case, then all of them are hard on average. In particular, if there exist hard instances of Isogeny, then all of Isogeny, EndRing, MaxOrder and HomModule are hard on average.
Real-world Universal zkSNARKs are non-malleable
Simulation extractability is a strong security notion of zkSNARKs that guarantees that an attacker who produces a valid proof must know the corresponding witness, even if the attacker had prior access to proofs generated by other users. Notably, simulation extractability implies that proofs are non-malleable and is of fundamental importance for applications of zkSNARKs in distributed systems. In this work, we study sufficient and necessary conditions for constructing simulation-extractable universal zkSNARKs via the popular design approach based on compiling polynomial interactive oracle proofs (PIOPs). Our main result is the first security proof that popular universal zkSNARKs, such as PLONK and Marlin, as deployed in the real world, are simulation-extractable. Our result fills a gap left by previous work (Faonio et al., TCC’23, and Kohlweiss et al., TCC’23), which could only prove the simulation extractability of the “textbook” versions of these schemes and did not capture their optimized variants, with all the popular optimization tricks in place, that are eventually implemented and deployed in software libraries.
Improved Stock Market Structure Using Cryptography
The stock market has two primary functions: providing liquidity and price discovery. While historically the market micro-structure was mostly ignored or assumed to function ideally for the purpose of asset pricing, O’Hara (Journal of Finance, 2003) established that both liquidity and price discovery affect asset pricing, and consequently asset returns. In this work, we extend the analysis of Easley and O’Hara (Journal of Finance, 2004) to study common stock market mechanisms, including open-bid and sealed-bid auctions, as well as the possibility that the auctioneer may not be trustworthy. Our analysis shows that open-bid auctions dis-incentivize stock research, leading to higher volatility. On the other hand, we also show that in sealed-bid auctions, the price and volume of an auction reveal only partial private research information, thus retaining the incentive for traders to do research. We also show how secure computation can be used to build auctions with a distributed auctioneer, where the clearing and partial-fulfillment information of the auction does not reveal unfilled orders of others, thus maintaining private information.
Bulletproofs for R1CS: Bridging the Completeness-Soundness Gap and a ZK Extension
Bulletproofs, introduced by Bünz, Bootle, Boneh, Poelstra, Wuille and Maxwell (IEEE S&P, 2018), is a highly efficient non-interactive argument system that does not require a trusted setup. Recently, Bünz (PhD Thesis, 2023) extended Bulletproofs to support arguments for rank-1 constraint satisfaction (R1CS) systems, a widely-used representation for arithmetic satisfiability problems. Although the argument system constructed by Bünz preserves the attractive properties of Bulletproofs, it presents a gap between its completeness and soundness guarantees: the system is complete for a restricted set of instances, but sound only for a significantly broader set. While argument systems for such gap relations still provide clear and concrete guarantees, the gaps they introduce may lead to various inconsistencies or undesirable gaps within proofs of security, especially when used as building blocks within larger systems.
In this work we show that the argument system presented by Bünz can be extended to bridge the gap between its completeness and soundness, and to additionally provide honest-verifier zero-knowledge. For the extended argument system, we introduce a refined R1CS relation that captures the precise set of instances for which both completeness and soundness hold without resorting to a gap formulation. The extended argument system preserves the performance guarantees of the argument system presented by Bünz, and yields a non-interactive argument system using the Fiat-Shamir transform.
On the Adaptive Security of Free-XOR-based Garbling Schemes in the Plain Model
A garbling scheme is a fundamental cryptographic primitive, with numerous theoretical and practical applications. Since its inception by Yao (FOCS'82, '86), optimizing the communication and computation complexities of securely garbling circuits has been an area of active research. One such optimization, and perhaps the most fundamental, is the `Free-XOR' technique (Kolesnikov and Schneider, ICALP'08), which allows the XOR gates of a garbled function to require no representation, and therefore no communication.
Since then, several works have designed and analysed the security of schemes that adopt the Free-XOR optimisation. In particular: (1) Applebaum (JoC'16) proved that this can be securely instantiated assuming symmetric-key encryption satisfying a notion called RK-KDM security; and (2) Zahur, Rosulek and Evans (Eurocrypt'15) proposed the so-called `Half Gates' scheme, and proved that it can be instantiated assuming hash functions satisfying a notion called CCR security. Although both schemes have been proven selectively secure, prior work leaves open whether they satisfy a stronger security notion -- adaptive security -- in the plain model.
In this work, we formally show that the selective security of these two schemes cannot be lifted to adaptive security under the same assumptions. To establish these barriers, we adopt techniques from the work of Kamath et al (Crypto'21), who proved similar negative results for Yao's garbling. We use that as a starting point and introduce new techniques tailored towards addressing Free-XOR-based schemes.
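To make the Free-XOR mechanism at the center of these results concrete: every wire carries a pair of labels offset by a single global secret Δ, so an XOR gate needs no garbled table — the evaluator simply XORs the labels it holds. A minimal sketch:

```python
import secrets

LABEL_BYTES = 16

def bxor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Global offset Delta, secret to the garbler. (In practice its low bit is
# forced to 1 for point-and-permute; omitted here for simplicity.)
delta = secrets.token_bytes(LABEL_BYTES)

def new_wire():
    # Returns (label for 0, label for 1); the 1-label is the 0-label XOR Delta.
    w0 = secrets.token_bytes(LABEL_BYTES)
    return (w0, bxor(w0, delta))

def xor_gate_output_wire(wa, wb):
    # Free-XOR: the output 0-label is just the XOR of the input 0-labels,
    # so the gate needs no ciphertexts at all.
    c0 = bxor(wa[0], wb[0])
    return (c0, bxor(c0, delta))

def evaluate_xor(label_a: bytes, label_b: bytes) -> bytes:
    # The evaluator XORs whatever two labels it holds.
    return bxor(label_a, label_b)
```

The evaluator's result is c0 ⊕ (va ⊕ vb)·Δ, because the two copies of Δ cancel exactly when both inputs are 1 — the correlation that the RK-KDM and CCR assumptions discussed above are needed to keep secure.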
PKE and ABE with Collusion-Resistant Secure Key Leasing
Secure key leasing (SKL) is an advanced encryption functionality that allows a secret key holder to generate a quantum decryption key and securely lease it to a user. Once the user returns the quantum decryption key (or provides a classical certificate confirming its deletion), they lose their decryption capability. Previous works on public key encryption with SKL (PKE-SKL) have only considered the single-key security model, where the adversary receives at most one quantum decryption key. However, this model does not accurately reflect real-world applications of PKE-SKL. To address this limitation, we introduce collusion-resistant security for PKE-SKL (denoted as PKE-CR-SKL). In this model, the adversary can adaptively obtain multiple quantum decryption keys and access a verification oracle which validates the correctness of queried quantum decryption keys. Importantly, the size of the public key and ciphertexts must remain independent of the total number of generated quantum decryption keys. We present the following constructions:
- A PKE-CR-SKL scheme based on the learning with errors (LWE) assumption.
- An attribute-based encryption scheme with collusion-resistant SKL (ABE-CR-SKL), also based on the LWE assumption.
- An ABE-CR-SKL scheme with classical certificates, relying on multi-input ABE with polynomial arity.
Stateless Deterministic Multi-Party EdDSA Signatures with Low Communication
EdDSA is a standardized signing algorithm, by both the IRTF and NIST, that is widely used in blockchain, e.g., Hyperledger, Cardano, Zcash, etc. It is a variant of the well-known Schnorr signature scheme that leverages Edwards curves. It features stateless and deterministic nonce generation, meaning it does not rely on a reliable source of randomness or state continuity. Recently, NIST issued a call for multi-party threshold EdDSA signatures, with one approach verifying nonce generation through zero-knowledge (ZK) proofs. In this paper, we propose a new stateless and deterministic multi-party EdDSA protocol in the full-threshold setting, capable of tolerating all-but-one malicious corruption. Compared to the state-of-the-art multi-party EdDSA protocol by Garillot et al. (Crypto’21), our protocol reduces communication cost by a factor of 56x while maintaining the same three-round structure, albeit with a roughly 2.25x increase in computational cost. We utilize information-theoretic message authentication codes (IT-MACs) in a multi-verifier setting to authenticate values and transform them from the Boolean domain to the arithmetic domain by refining multi-verifier extended doubly-authenticated bits (mv-edabits). Additionally, we employ pseudorandom correlation functions (PCF) to generate IT-MACs in a stateless and deterministic manner. Combining these elements, we design a multi-verifier zero-knowledge (MVZK) protocol for stateless and deterministic nonce generation. Our protocol can be used to build secure blockchain wallets and custody solutions, enhancing key protection.
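The stateless, deterministic nonce generation that the protocol must reproduce under MPC is, in single-party Ed25519 (RFC 8032), just a hash of a secret prefix and the message. A sketch of that derivation:

```python
import hashlib

# Ed25519 group order, per RFC 8032.
L = 2 ** 252 + 27742317777372353535851937790883648493

def ed25519_nonce(seed32: bytes, message: bytes) -> int:
    # Expand the 32-byte private seed with SHA-512; the second half of the
    # digest is the secret nonce prefix (the first half becomes the scalar).
    h = hashlib.sha512(seed32).digest()
    prefix = h[32:]
    # Deterministic nonce: no RNG and no state, just a hash of prefix || M,
    # reduced modulo the group order (little-endian, per the RFC).
    r = int.from_bytes(hashlib.sha512(prefix + message).digest(), "little")
    return r % L
```

The multi-party challenge the paper addresses is evaluating exactly this hash on secret-shared inputs and proving it was done correctly, which is what its MVZK nonce-generation protocol provides.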
On Quantum Money and Evasive Obfuscation
We show a black box barrier against constructing public key quantum money from obfuscation for evasive functions. As current post-quantum obfuscators based on standard assumptions are all evasive, this shows a fundamental barrier to achieving public key quantum money from standard tools. Our impossibility applies to black box schemes where (1) obfuscation queries made by the mint are classical, and (2) the verifier only makes (possibly quantum) evaluation queries, but no obfuscation queries. This class seems to capture any natural method of using obfuscation to build quantum money.
HELM: Navigating Homomorphic Encryption through Gates and Lookup Tables
As cloud computing continues to gain widespread adoption, safeguarding the confidentiality of data entrusted to third-party cloud service providers becomes a critical concern. While traditional encryption methods offer protection for data at rest and in transit, they fall short where it matters most, i.e., during data processing.
To address this limitation, we present HELM, a framework for privacy-preserving data processing using homomorphic encryption. HELM automatically transforms arbitrary programs expressed in a Hardware Description Language (HDL), such as Verilog, into equivalent homomorphic circuits, which can then be efficiently evaluated using encrypted inputs. HELM features two modes of encrypted evaluation: a) a gate mode that consists of standard Boolean gates, and b) a lookup table mode which significantly reduces the size of the circuit by combining multiple gates into lookup tables. Finally, HELM introduces a scheduler that enables embarrassingly parallel processing in the encrypted domain. We evaluate HELM with the ISCAS'85 and ISCAS'89 benchmark suites as well as real-world applications such as AES and image filtering. Our results outperform prior works by up to .
Diamond iO: A Straightforward Construction of Indistinguishability Obfuscation from Lattices
Indistinguishability obfuscation (iO) has seen remarkable theoretical progress, yet it remains impractical due to its high complexity and inefficiency. A common bottleneck in recent iO schemes is the reliance on bootstrapping techniques from functional encryption (FE) into iO, which requires recursively invoking the FE encryption algorithm for each input bit—creating a significant barrier to practical iO schemes.
In this work, we propose diamond iO, a new lattice-based iO construction that replaces the costly recursive encryption process with lightweight matrix operations. Our construction is proven secure under the learning with errors (LWE) and evasive LWE assumptions, as well as our new assumption—all-product LWE—in the pseudorandom oracle model. By leveraging the FE scheme for pseudorandom functionalities introduced by Agrawal et al. (ePrint’24) in a non-black-box manner, we remove the reliance on prior FE-to-iO bootstrapping techniques and thereby significantly reduce complexity. A remaining challenge is to reduce our new assumption to standard assumptions such as LWE, further advancing the goal of a practical and sound iO construction.
A note on adding zero-knowledge to STARKs
We discuss zero-knowledge in the context of univariate argument systems which use the FRI proximity test for Reed-Solomon codes as polynomial commitment scheme.
We confine ourselves to small-field STARKs, i.e., arguments with an arithmetization over a small finite field (the basefield), and we dwell on two techniques widely used in practice: randomization by polynomials over the basefield, and decomposing the overall quotient into polynomials of smaller degree. In particular, the latter is a source of mistakes, both in the literature and in software implementations.
The current, updated version further includes a separate discussion on perfect zero-knowledge in permutation arguments.
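The first technique mentioned — randomization by a basefield polynomial — can be sketched as follows: add a random multiple of the vanishing polynomial Z_H of the trace domain H, so that evaluations on H (and hence all constraints) are unchanged while evaluations on the disjoint FRI evaluation domain are randomized. Toy parameters, for illustration only:

```python
import random

P = 97            # toy basefield; real STARKs use much larger fields
H = [1, 2, 3, 4]  # toy trace domain (in practice a multiplicative subgroup/coset)

def poly_eval(coeffs, x):
    # Horner evaluation; coefficients given low-to-high.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def vanishing_eval(x):
    # Z_H(x) = prod_{h in H} (x - h): zero exactly on the trace domain H.
    acc = 1
    for h in H:
        acc = acc * (x - h) % P
    return acc

def mask(trace_coeffs, mask_degree=2):
    # w'(x) = w(x) + r(x) * Z_H(x) for a random basefield polynomial r.
    r = [random.randrange(P) for _ in range(mask_degree)]
    def masked_eval(x):
        return (poly_eval(trace_coeffs, x) + poly_eval(r, x) * vanishing_eval(x)) % P
    return masked_eval
```

The masking raises the degree of the committed polynomial, which interacts with the quotient-decomposition technique — precisely the interaction the note identifies as error-prone.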
A New Public Key Cryptosystem Based on the Cubic Pell Curve
Since its invention in 1978 by Rivest, Shamir and Adleman, the public key cryptosystem RSA has become a widely popular and widely useful scheme in cryptography. Its security is related to the difficulty of factoring large integers which are the product of two large prime numbers. For various reasons, several variants of RSA have been proposed, some using different arithmetic such as elliptic and singular cubic curves. In 2018, Murru and Saettone proposed another variant of RSA based on the cubic Pell curve with a modulus of the form . In this paper, we present a new public key cryptosystem based on the arithmetic of the cubic Pell curve with a modulus of the form . Its security is based on the hardness of factoring composite integers and on Rabin's trapdoor one-way function. In the new scheme, the arithmetic operations are performed on a cubic Pell curve which is known only to the sender and the recipient of a plaintext.
Bounded CCA2 Secure Proxy Re-encryption Based on Kyber
Proxy re-encryption (PRE) allows a semi-honest party (called a proxy) to convert ciphertexts under a public key into ciphertexts under another public key. Due to this functionality, there are various applications such as encrypted email forwarding, key escrow, and secure distributed file systems. On the other hand, post-quantum cryptography (PQC) is one of the most important research areas. However, there is no post-quantum PRE scheme with security against adaptive chosen ciphertext attacks (denoted by security), even though many PRE schemes have been proposed so far.
In this paper, we propose a bounded secure PRE scheme based on CRYSTALS-Kyber (Kyber, for short), a selected algorithm in the NIST PQC competition. To this end, we present generic constructions of bounded secure PRE. Our generic constructions start from PRE with (a variant of) security against chosen plaintext attacks (denoted by security) and a new property of PRE introduced in this paper. In order to instantiate our generic constructions, we present a Kyber-based PRE scheme with the required property. As a result, we can construct a bounded secure PRE scheme from Kyber.
A Generic Approach to Adaptively-Secure Broadcast Encryption in the Plain Model
Broadcast encryption allows a user to encrypt a message to recipients with a ciphertext whose size scales sublinearly with . The natural security notion for broadcast encryption is adaptive security which allows an adversary to choose the set of recipients after seeing the public parameters. Achieving adaptive security in broadcast encryption is challenging, and in the plain model, the primary technique is the celebrated dual-systems approach, which can be implemented over groups with bilinear maps. Unfortunately, it has been challenging to replicate the dual-systems approach in other settings (e.g., with lattices or witness encryption). Moreover, even if we focus on pairing-based constructions, the dual-systems framework critically relies on decisional (and source-group) assumptions. We do not have constructions of adaptively-secure broadcast encryption from search (or target-group) assumptions in the plain model.
Gentry and Waters (EUROCRYPT 2009) described a compiler that takes any semi-statically-secure broadcast encryption scheme and transforms it into an adaptively-secure scheme in the random oracle model. While semi-static security is easier to achieve and constructions are known from witness encryption as well as search (and target-group) assumptions on pairing groups, the transformed scheme relies on random oracles. In this work, we show that using publicly-sampleable projective PRGs, we can achieve adaptive security in the plain model. We then show how to build publicly-sampleable projective PRGs from many standard number-theoretic assumptions (e.g., CDH, LWE, RSA).
Our compiler yields the first adaptively-secure broadcast encryption scheme from search assumptions as well as the first such scheme from witness encryption in the plain model. We also obtain the first adaptively-secure pairing-based scheme in the plain model with -size public keys and -size ciphertexts (where suppresses polynomial factors in the security parameter ). Previous adaptively-secure pairing-based schemes in the plain model with -size ciphertexts required -size public keys.
Post-Quantum Blind Signatures from Matrix Code Equivalence
We construct a novel code-based blind signature scheme, using the Matrix Equivalence Digital Signature (MEDS) group action. The scheme is built using similar ideas to the Schnorr blind signature scheme and CSI-Otter, but uses additional public key and commitment information to overcome the difficulties that the MEDS group action faces: lack of module structure (present in Schnorr), lack of a quadratic twist (present in CSI-Otter), and non-commutativity of the acting group. We address security concerns related to public key validation, and prove the security of our protocol in the random oracle model, using the security framework of Kastner, Loss, and Xu, under a variant of the Inverse Matrix Code Equivalence problem and a mild heuristic assumption.
Arctic: Lightweight and Stateless Threshold Schnorr Signatures
Threshold Schnorr signatures are seeing increased adoption in practice, and offer practical defenses against single points of failure. However, one challenge with existing randomized threshold Schnorr signature schemes is that signers must carefully maintain secret state across signing rounds, while also ensuring that state is deleted after a signing session is completed. Failure to do so will result in a fatal key-recovery attack by re-use of nonces.
While deterministic threshold Schnorr signatures that mitigate this issue exist in the literature, all prior schemes incur high complexity and performance overhead in comparison to their randomized equivalents. In this work, we seek the best of both worlds; a deterministic and stateless threshold Schnorr signature scheme that is also simple and efficient.
Towards this goal, we present Arctic, a lightweight two-round threshold Schnorr signature that is deterministic, and therefore does not require participants to maintain state between signing rounds. As a building block, we formalize the notion of a Verifiable Pseudorandom Secret Sharing (VPSS) scheme, and define VPSS1, an efficient VPSS construction. VPSS1 is secure when the total number of participants is at least 2t−1 and the adversary is assumed to corrupt at most t−1; i.e., in the honest majority model.
We prove that Arctic is secure under the discrete logarithm assumption in the random oracle model, similarly assuming at least 2t−1 signers and a corruption threshold of at most t−1. For moderately sized groups (i.e., when n ≤ 20), Arctic is more than an order of magnitude more efficient than prior deterministic threshold Schnorr signatures in the literature. For small groups where n ≤ 10, Arctic is three orders of magnitude more efficient.
The Malice of ELFs: Practical Anamorphic-Resistant Encryption without Random Oracles
The concept of Anamorphic Encryption (Persiano, Phan and Yung, Eurocrypt '22), aims to enable private communication in settings where the usage of encryption is heavily controlled by a central authority (henceforth called the dictator) who can obtain users' secret keys.
Since then, various works have improved our understanding of AE in several aspects, including its limitations. In this regard, two recent works constructed various Anamorphic-Resistant Encryption (ARE) schemes, i.e., schemes admitting at most bits of covert communication.
However, those results are still unsatisfactory, each coming with at least one of the following issues: (1) use of cryptographic heavy hammers such as indistinguishability obfuscation (iO); (2) abuse of the original definition to define overly powerful dictators; (3) reliance on the Random Oracle Model (ROM). In particular, proofs in the ROM are controversial as they fail to account for anamorphic schemes making non-black-box usage of the hash function used to instantiate the Random Oracle.
In this work, we overcome all of these limitations.
First, we describe an anamorphic-resistant encryption (ARE) scheme approaching practicality by relying only on public-key encryption and Extremely Lossy Functions (ELFs), both known from the (exponential) DDH assumption. Moreover, further assuming Unique NIZKs (known from iO), we provide another construction, which we later use to realize the first ARE; that is, a scheme that achieves the strongest level of anamorphic resistance against each of the possible levels of anamorphic security.
Anamorphic Resistant Encryption: the Good, the Bad and the Ugly
Anamorphic encryption (AE), introduced by Persiano, Phan and Yung at Eurocrypt '22, allows establishing secure communication in scenarios where users might be forced to hand over their decryption keys to some hostile authority. Over the last few years, several works have improved our understanding of the primitive by proposing novel realizations, new security notions, and studying inherent limitations.
This work makes progress, mainly, on this last line of research.
We show concrete realizations of public key encryption schemes that, provably, cannot be turned anamorphic. These were called Anamorphic Resistant Encryption (ARE, for short) in a recent work of Dodis and Goldin.
We also show that, under certain conditions, anamorphic encryption is equivalent to algorithm substitution attacks. This allows us to positively reinterpret our AREs as PKE schemes provably resistant to subversion attacks. To the best of our knowledge, these seem to be the first IND-CPA secure schemes achieving subversion resistance without trust assumptions or non-black-box decomposition techniques.
Our two AREs rely heavily, among other things, on a direct usage of extremely lossy functions: here the lossiness property is used in the constructions, rather than just in the proofs. The first construction is in the public parameters model and also requires iO. The second construction eliminates the need for both public parameters and iO, but is in the random oracle model and relies on the novel concept of robust extremely lossy functions with group structure, a primitive that we define and show how to realize in this paper.
Single Trace Side-Channel Vulnerabilities Discovery Using Statistical Leakage Simulator
This paper presents a novel single-trace side-channel attack on FALCON—a lattice-based post-quantum digital signature protocol recently approved for standardization by NIST. We target the discrete Gaussian sampling operation within the FALCON key generation scheme and use a single power measurement trace to succeed. Notably, negating the ‘shift right 63-bit’ operation (for 64-bit values) leaks critical information about the ‘-1’ vs. ‘0’ assignments to intermediate coefficients. These leaks enable full recovery of the generated secret keys. The proposed attack is simulated on the ELMO simulator running both reference and optimized software implementations from FALCON’s NIST Round 3 package. Statistical analysis with 20k tests reveals a full key-recovery success rate of 100% for FALCON-512. This work highlights the vulnerability of current software solutions to single-trace attacks and underscores the urgent need to develop single-trace resilient software for embedded systems in the pre-silicon phase.
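The leakage target described above can be illustrated with a toy model (a sketch of the underlying arithmetic, not FALCON's actual sampler code): an arithmetic shift right by 63 of a 64-bit two's-complement word yields either an all-zero or an all-one word, whose Hamming weights of 0 and 64 sit at opposite extremes of a power side channel.

```python
MASK64 = (1 << 64) - 1

def sar63(v):
    """Arithmetic shift right by 63 of a 64-bit two's-complement word.

    Returns 0x000...0 for a non-negative input and 0xFFF...F (i.e. -1)
    for a negative one -- the '-1' vs '0' assignment mentioned above.
    """
    v &= MASK64
    return MASK64 if (v >> 63) else 0

def hamming_weight(x):
    return bin(x).count("1")

# The two possible results have Hamming weights 0 and 64: an extreme
# difference that a single power trace can distinguish.
zero_case = sar63(5)             # non-negative coefficient -> all zeros
neg_case  = sar63(-1 & MASK64)   # negative coefficient     -> all ones
```

The 64-fold Hamming-weight gap between the two outcomes is what makes a single trace sufficient, with no averaging over many measurements.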
Traceable Verifiable Secret Sharing and Applications
A secret sharing scheme allows a trusted dealer to divide a secret among multiple parties so that a sufficient number of them can recover the secret, while a smaller group cannot. In CRYPTO'21, Goyal, Song, and Srinivasan introduced Traceable Secret Sharing (TSS), which enhances traditional secret sharing by enabling the identification of parties involved in secret reconstruction, deterring malicious behavior like selling shares. Recently, Boneh, Partap, and Rotem (CRYPTO'24) presented two more efficient TSS schemes. However, these existing TSS schemes assume that all distributed shares are valid and shareholders act honestly during the secret reconstruction phase. In this paper, we introduce Traceable Verifiable Secret Sharing (TVSS), a concept designed to ensure both traceability and verifiability in the face of malicious actions by either the dealer or shareholders. We propose a general strategy for transforming a Shamir-based, computationally secure Verifiable Secret Sharing (VSS) scheme into an efficient TVSS scheme. Building on this strategy, we construct two practical TVSS schemes in the honest-majority setting, based on well-known VSS schemes proposed by Feldman (SFCS'87) and Pedersen (CRYPTO'91). Our proposed TVSS schemes retain public shareholder indexes, enhancing flexibility in designing accountable threshold protocols (e.g., Distributed Key Generation protocols) using TVSS. Compared to the original VSS schemes, the individual share size in the new TVSS schemes increases by only a single field element and is just two or three times the size of the main secret.
Motivated by a recent study on Accountable Threshold Cryptosystems (ATCs) by Boneh, Partap, and Rotem (CRYPTO'24), and by leveraging our proposed Feldman-based TVSS scheme, we also introduce an efficient ATC based on ElGamal cryptosystem. This new ATC enables a tracer to uniquely identify the parties involved in the decryption process while introducing minimal overhead to existing actively secure (and/or robust) threshold protocols built on the ElGamal cryptosystem.
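The Feldman-style verifiability that the TVSS constructions above build on can be sketched in a few lines: the dealer Shamir-shares the secret and broadcasts commitments to the polynomial coefficients, so each shareholder can check their own share. The parameters below are toy-sized for illustration only; this is generic Feldman VSS, not the paper's traceable extension.

```python
import random

# Toy parameters: g generates the order-q subgroup of Z_p^* (p = 2q + 1).
# Cryptographic deployments need large p, q.
p, q, g = 23, 11, 2

def share(secret, t, n):
    """Shamir-share `secret` with threshold t among n parties, plus
    Feldman commitments to the polynomial coefficients."""
    coeffs = [secret % q] + [random.randrange(q) for _ in range(t - 1)]
    shares = {i: sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q
              for i in range(1, n + 1)}
    commits = [pow(g, c, p) for c in coeffs]   # broadcast by the dealer
    return shares, commits

def verify(i, s_i, commits):
    """Check g^{s_i} == prod_j C_j^{i^j} (mod p)."""
    rhs = 1
    for j, C in enumerate(commits):
        rhs = rhs * pow(C, pow(i, j, q), p) % p
    return pow(g, s_i, p) == rhs

def reconstruct(points):
    """Lagrange interpolation at 0 over Z_q from a dict {i: f(i)}."""
    secret = 0
    for i, s in points.items():
        num, den = 1, 1
        for j in points:
            if j != i:
                num = num * (-j) % q
                den = den * (i - j) % q
        secret = (secret + s * num * pow(den, -1, q)) % q
    return secret
```

A dishonest dealer's inconsistent share fails `verify`, which is exactly the hook the verifiability (and hence accountability) of the TVSS schemes attaches to.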
MPC with Publicly Identifiable Abort from Pseudorandomness and Homomorphic Encryption
Publicly identifiable abort is a critical feature for ensuring accountability in outsourced computations using secure multiparty computation (MPC). Despite its importance, no prior work has specifically addressed identifiable abort in the context of outsourced computations. In this paper, we present the first MPC protocol that supports publicly identifiable abort with minimal overhead for external clients. Our approach minimizes client-side computation by requiring only a few pseudorandom function evaluations per input. On the server side, the verification process involves lightweight linear function evaluations using homomorphic encryption. This results in verification times of a few nanoseconds per operation for servers, with client overhead being approximately two orders of magnitude lower. Additionally, the public verifiability of our protocol reduces client input/output costs compared to SPDZ-based protocols, on which we base our protocol. For example, in secure aggregation use cases, our protocol achieves over twice the efficiency during the offline phase and up to an 18 % speedup in the online phase, significantly outperforming SPDZ.
Accelerating the Delfs-Galbraith algorithm with fast subfield root detection
We give a new algorithm for finding an isogeny from a given supersingular elliptic curve to a subfield elliptic curve , which is the bottleneck step of the Delfs-Galbraith algorithm for the general supersingular isogeny problem. Our core ingredient is a novel method of rapidly determining whether a polynomial has any roots in a subfield , while crucially avoiding expensive root-finding algorithms. In the special case when , i.e. when is the -th modular polynomial evaluated at a supersingular -invariant, this provides a means of efficiently determining whether there is an -isogeny connecting the corresponding elliptic curve to a subfield curve. Together with the traditional Delfs-Galbraith walk, inspecting many -isogenous neighbours in this way allows us to search through a larger proportion of the supersingular set per unit of time. Though the asymptotic complexity of our improved algorithm remains unchanged from that of the original Delfs-Galbraith algorithm, our theoretical analysis and practical implementation both show a significant reduction in the runtime of the subfield search. This sheds new light on the concrete hardness of the general supersingular isogeny problem, the foundational problem underlying isogeny-based cryptography.
Minicrypt PIR for Big Batches
We present PIR protocols for the offline/online two-server setting, where a client wants to privately retrieve a batch of entries from a database of size by interacting with two servers. The client interacts with one server ahead of time, which is assumed not to collude with the other. We present simple protocols based on one-way functions that substantially improve on the query complexity or runtime of existing works. Concrete instantiations of our general paradigm lead to batch PIR protocols with the following parameters:
- A protocol for batches of , where , and each spend a total of work and exchange bits of communication. This yields an amortized complexity of work and communication per query in the batch.
- A more balanced protocol for batches of size in which spends a total of work, and spend work, and the total communication is of size .
Our protocols have immediate applications such as Private Set Intersection (PSI) in the two-server setting with preprocessing and unbalanced set sizes.
Chosen-Ciphertext Security for Functional Encryption with Multiple Users: Definitions and Generic Concrete Constructions
Functional Encryption (FE) is a powerful cryptographic primitive that allows for fine-grained computation over encrypted data. In the age of modern computing in complex environments, where data comes from multiple independent sources to be later jointly analysed in a fine-grained manner, the notion of multi-user functional encryption is becoming increasingly important. In particular, since their introduction (Goldwasser et al. at Eurocrypt’14; Chotard et al. at Asiacrypt’18), Multi-Client and Multi-Input FE have become the subject of a plethora of works, which study concrete function classes, improved security, and more. Among many properties, one of the most important security properties for Multi-Client/Multi-Input FE is the confidentiality of users’ encrypted data. Due to the complexity of these primitives, modeling a strong security notion while at the same time providing efficient constructions is a challenging task.
However, all security notions considered so far for Multi-Client/Multi-Input FE are in the chosen-plaintext setting, whereas it is long settled that the chosen-ciphertext setting is the most relevant for practical security in classical public-key encryption. For FE, the only known works are in the single-user context, namely by Benhamouda et al. (PKC’17), Gay (PKC’20), and Castagnos et al. (TCS’22). This leaves open the questions, both conceptually and constructively, of attaining chosen-ciphertext security in the multi-user setting, notably for Multi-Client and Multi-Input FE.
This work tackles the above questions of chosen-ciphertext security in the multi-user context for FE for the first time:
- We propose a new security notion for Multi-Client FE and Multi-Input FE in the chosen-ciphertext setting. Our notions extend the single-user notion studied in previous works and are robust against strong adversaries.
- For the class computing inner products, we demonstrate the feasibility of our new notions by providing novel generic constructions for Multi-Client FE and Multi-Input FE. Surprisingly, our construction for Multi-Input FE attains the same efficiency as the public-key single-client setting of previous works, and can be instantiated from the Decisional Diffie-Hellman or Decision Composite Residuosity assumptions. On the other hand, our construction for Multi-Client FE relies on an original toolkit of techniques developed to bootstrap an MCFE with chosen-plaintext security to chosen-ciphertext security in its secret-key setting, and can be instantiated from the Symmetric eXternal Diffie-Hellman and Decision Linear assumptions.
Revisiting Leakage-Resilient MACs and Succinctly-Committing AEAD: More Applications of Pseudo-Random Injections
Pseudo-Random Injections (PRIs) have been used in several applications in symmetric-key cryptography, such as in the idealization of Authenticated Encryption with Associated Data (AEAD) schemes, building robust AEAD, and, recently, in converting a committing AEAD scheme into a succinctly committing AEAD scheme. In Crypto 2024, Bellare and Hoang showed that if an AEAD scheme is already committing, it can be transformed into a succinctly committing scheme by encrypting part of the plaintext using a PRI. In this paper, we revisit the applications of PRIs in building Message Authentication Codes (MACs) and AEAD schemes. First, we look at some of the properties and definitions of PRIs, such as collision resistance and unforgeability when used as a MAC with small plaintext space, under different leakage models. Next, we show how they can be combined with collision-resistant hash functions to build a MAC for long plaintexts, offering flexible security depending on how the PRI and equality check are implemented. If both the PRI and equality check are leak-free, the MAC provides almost optimal security, but the security degrades only a little if the equality check is merely leakage-resilient (rather than leak-free). If the equality check has unbounded leakage, the security drops to a baseline level, rather than being completely insecure. Next, we show how to use PRIs to build a succinctly committing online AEAD scheme, dubbed scoAE, from scratch that achieves succinct CMT-4 security, privacy, and Ciphertext Integrity with Misuse and Leakage (CIML2) security. Last but not least, we show how to build a succinct nonce Misuse-Resistant (MRAE) AEAD scheme, dubbed scMRAE. The construction combines the SIV paradigm with PRI-based encryption (e.g. the Encode-then-Encipher (EtE) framework).
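The SIV paradigm invoked at the end can be sketched generically: a PRF over the associated data and message produces a synthetic IV, which both authenticates and drives an IV-based encryption layer. The snippet below uses HMAC-SHA256 as the PRF and a hash-derived keystream as the encryption layer; these are illustrative stand-ins, not the paper's PRI-based construction, and the sketch omits details such as length encoding in the PRF input.

```python
import hashlib, hmac

def _keystream(key, iv, n):
    """Hash-based keystream; a stand-in for an IV-based cipher."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + iv + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def siv_encrypt(k1, k2, ad, msg):
    # Synthetic IV = PRF(K1, AD || M): deterministic, so no nonce to misuse.
    iv = hmac.new(k1, ad + msg, hashlib.sha256).digest()[:16]
    ct = bytes(m ^ s for m, s in zip(msg, _keystream(k2, iv, len(msg))))
    return iv + ct

def siv_decrypt(k1, k2, ad, blob):
    iv, ct = blob[:16], blob[16:]
    msg = bytes(c ^ s for c, s in zip(ct, _keystream(k2, iv, len(ct))))
    # Recompute the IV from the decrypted message and check it.
    if not hmac.compare_digest(
            hmac.new(k1, ad + msg, hashlib.sha256).digest()[:16], iv):
        raise ValueError("authentication failed")
    return msg
```

Because the IV is a deterministic function of (AD, M), repeating an encryption leaks only equality of inputs, which is the misuse-resistance guarantee SIV-style schemes target.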
Randomness Generation for Secure Hardware Masking - Unrolled Trivium to the Rescue
Masking is a prominent strategy to protect cryptographic implementations against side-channel analysis. Its popularity arises from the exponential security gains that can be achieved for (approximately) quadratic resource utilization. Many variants of the countermeasure tailored for different optimization goals have been proposed. The common denominator among all of them is the implicit demand for robust and high-entropy randomness. Simply assuming that uniformly distributed random bits are available, without taking the cost of their generation into account, leads to a poor understanding of the efficiency vs. security tradeoff of masked implementations. This is especially relevant in the case of hardware masking schemes, which are known to consume large amounts of random bits per cycle due to parallelism. Currently, there seems to be no consensus on how to most efficiently derive many pseudo-random bits per clock cycle from an initial seed and with properties suitable for masked hardware implementations.
In this work, we evaluate a number of building blocks for this purpose and find that hardware-oriented stream ciphers like Trivium and its reduced-security variant Bivium B outperform most competitors when implemented in an \emph{unrolled} fashion. Unrolled implementations of these primitives enable the flexible generation of many bits per cycle, which is crucial for satisfying the large randomness demands of state-of-the-art masking schemes.
According to our analysis, only Linear Feedback Shift Registers (LFSRs), when also unrolled, are capable of producing long non-repetitive sequences of random-looking bits at a higher rate per cycle for the same or lower cost as Trivium and Bivium B. Yet, these instances do not provide black-box security as they generate only linear outputs. We experimentally demonstrate that using multiple output bits from an LFSR in the same masked implementation can violate probing security and even lead to harmful randomness cancellations. Circumventing these problems, and enabling an independent analysis of randomness generation and masking, requires the use of cryptographically stronger primitives like stream ciphers.
As a result of our studies, we provide an evidence-based estimate for the cost of securely generating fresh random bits per cycle. Depending on the desired level of black-box security and operating frequency, this cost can be as low as to ASIC gate equivalent (GE) or to FPGA look-up tables (LUTs), where is the number of random bits required. Our results demonstrate that the cost per bit is (sometimes significantly) lower than estimated in previous works, incentivizing parallelism whenever exploitable. This provides further motivation to potentially move low randomness usage from a primary to a secondary design goal in hardware masking research.
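The LFSR pitfall noted above, that its outputs are linear functions of the state, can be demonstrated in a few lines. The toy 16-bit Fibonacci LFSR below (tap positions chosen for illustration) emits a batch of bits per call, as an unrolled hardware instance would per clock cycle; because every output bit is an XOR of initial-state bits, keystreams from two seeds XOR to the keystream of the XORed seed, which is the kind of algebraic cancellation that can violate probing security.

```python
WIDTH = 16
TAPS = (0, 2, 3, 5)   # illustrative tap set for a Fibonacci LFSR

def lfsr_keystream(state, nbits):
    """Clock the LFSR nbits times and return the output bits.

    In hardware, unrolling computes such a batch of feedback and output
    bits combinationally within a single clock cycle."""
    out = []
    for _ in range(nbits):
        out.append(state & 1)
        fb = 0
        for t in TAPS:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (WIDTH - 1))
    return out

# Linearity over GF(2): ks(s1) XOR ks(s2) == ks(s1 XOR s2), for any seeds.
s1, s2 = 0x1234, 0x0F0F
k1 = lfsr_keystream(s1, 64)
k2 = lfsr_keystream(s2, 64)
k3 = lfsr_keystream(s1 ^ s2, 64)
```

A nonlinear primitive such as Trivium has no such relation between its keystreams, which is why the text argues for cryptographically stronger generators despite the LFSR's superior rate per cost.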
Making Protocol FSU Revocable
This paper examines whether a revocation function can be added to protocol FSU, which has been adopted as an international standard, ISO/IEC 11770-3. Protocol FSU is an IB-AKE protocol based on a mathematical problem, the asymmetric gap bilinear Diffie--Hellman (GBDH) problem.
To make protocol FSU revocable, we apply a generic technique that converts an identity-based encryption scheme into a revocable identity-based encryption scheme by introducing a symmetric-key encryption scheme. In addition, to make the converted RIB-AKE protocol efficient, we reduce the ephemeral information exchanged in the protocol and introduce an additional parameter into the master public key, where the secret information corresponding to this parameter does not need to be included in the master secret key.
We discuss the security of the resultant protocol, and prove that it is rid-eCK secure under the asymmetric GBDH assumption.
Cryptanalysis of Full SCARF
SCARF is a tweakable block cipher dedicated to cache address randomization, proposed at the USENIX Security conference. It has a 10-bit block, 48-bit tweak, and 240-bit key. SCARF is aggressively optimized to meet the harsh latency constraints of cache address randomization, and uses a dedicated model for its security claim.
The full version of SCARF has 8 rounds, and its designers claim security up to queries and computations. In this work we present a distinguisher against 6-round SCARF under the collision model with time and query complexity , and a key-recovery attack against the full 8-round SCARF under the encryption-decryption model with queries and time . As part of the attack, we present a novel method to compute the minimal number of right pairs following a differential characteristic when the input pairs are restricted to a subspace of the domain of the primitive.
Towards Optimally Secure Deterministic Authenticated Encryption Schemes
The public comments received during the review process for NIST SP 800-38A pointed out two important issues that most companies face: (1) the limited security that AES can provide due to its 128-bit block size and (2) the problem of nonce misuse in practice. In this paper, we provide an alternative solution to these problems by introducing two optimally secure deterministic authenticated encryption (DAE) schemes, denoted DENC1 and DENC2 respectively. We show that our proposed constructions improve the state-of-the-art in terms of security and efficiency. Specifically, DENC1 achieves a robust security level of , while DENC2 attains a near-optimal security level of , where is the total number of blocks, is the maximum number of blocks in each query, and is a user-defined parameter closely related to the rate of the construction. Our research centers on the development of two IV-based encryption schemes, referred to as IV1 and IV2, which respectively offer security levels of and . Notably, both of our DAE proposals are nearly rate 1/2 constructions. In terms of efficiency, our proposals compare favorably with state-of-the-art AE modes on contemporary microprocessors.
New Methods for Bounding the Length of Impossible Differentials of SPN Block Ciphers
How to evaluate the security of Substitution-Permutation Network (SPN) block ciphers against impossible differential (ID) cryptanalysis is a valuable problem. In this paper, a series of methods for bounding the length of IDs of SPN block ciphers are proposed. Firstly, we propose the definitions of minimal representative set and partition table. Therefore, an improved partition-first implementation strategy for bounding the length of IDs is given. Secondly, we introduce a new definition of ladder and propose the ladder-first implementation strategy for bounding the length of IDs. In order to be able to apply ladder-first implementation strategy in practice, the methods for determining ladders and integrating a ladder into searching models are given. Thirdly, a heuristic algorithm called dynamic-ladder-partition implementation strategy is proposed. According to our experimental results, dynamic-ladder-partition implementation strategy is more suitable for SPN ciphers whose number of elements in partition tables is little. Fourthly, rotation-equivalence ID sets of ciphers are explored to reduce the number of models that need to be considered. As applications, we show that 9-round PRESENT, 5-round AES, 6-round Rijndael-160, 7-round Rijndael-192, 7-round Rijndael-224 and 7-round Rijndael-256 do not have any ID under the sole assumption that the round keys are uniformly random. What's more, we obtain that 8-round GIFT-64, 12-round GIFT-128 and 14-round SKINNY-128 do not have any ID under the assumptions that GIFT and SKINNY are Markov ciphers and the round keys are uniformly random. Our methods fill crucial gaps on bounding the length of IDs with the differential properties of S-boxes considered. They enhance our confidence in the security and are valuable, especially for designers.
Impossible Boomerang Distinguishers Revisited
The Impossible Boomerang Attack (IBA) has shown significant power in evaluating the security of block ciphers such as AES. However, current studies still lack a foundational theory, user guidance, and a universal method for constructing impossible boomerang distinguishers (IBDs). This paper addresses these gaps through comprehensive research. Theoretically, we establish a new framework for constructing a series of IBDs via differential propagation, state propagation, and generalized boomerang tables. We rigorously prove their inclusion relations, resulting in a complete theory and a hierarchical application strategy for both single-key and related-key settings. We further analyze IBD constructions in two types of related-key settings, two related keys with arbitrary schedules and four related keys with linear schedules, structurally in a unified way. Technically, we develop a scheduling algorithm and a general SAT-based method to search for IBDs across various block cipher designs, including SPN, Feistel, and ARX. Additionally, we propose several strategies to enhance the search process. As applications, we derive (RK-)IBDs for 10 block ciphers, almost all for the first time. Compared to impossible differentials, our IBDs are at least as effective, for example on DES and PRESENT. Notably, we achieve 1 more round on PRINTcipher48 in the single-key setting; 2 more rounds on AES-128, and 1 or 2 more rounds on SPECK variants, in the two-related-keys setting; and 1, 4, and 2 more rounds on GIFT-64, CHAM-64/128 and CHAM-128/256 in the four-related-keys setting. We also obtain full-round RK-IBDs on GOST. Compared to current IBDs, we achieve 1 more round on each of SKINNY-64/192 and SKINNYee. Furthermore, as an applied case of the derived IBDs, we present a 31-round IBA on SKINNYee, which is the first 31-round attack on SKINNYee and the best result to date.
Traceable Verifiable Random Functions
A threshold verifiable random function (threshold VRF) is a VRF where the evaluation key is secret shared among parties, and a quorum of parties is needed to evaluate the VRF. Threshold VRFs are used widely in practice in applications such as randomness beacons and deterministic wallets. Despite their long history, the question of accountability for leaking key shares in a threshold VRF has not been studied. Specifically, consider a set of parties who use their key shares to create an evaluation box that lets anyone evaluate the VRF at any point in the domain of the VRF. When is less than the threshold , this box must also take as input additional evaluation shares. Our goal is to design a threshold VRF where there is a tracing algorithm that can trace any such box to the coalition of parties that created it, using only black-box access to . The risk of tracing should deter the coalition from selling such a box. Questions in this vein were previously explored in the context of threshold decryption and secret sharing. Here we define and study traceability for a threshold VRF.
Our traceable threshold VRF is built from a VRF based on Paillier encryption. The starting point for our tracing algorithm is the tracing technique of Boneh-Partap-Rotem (Crypto 2024) designed for tracing leaks in the context of secret sharing. However, there are multiple technical challenges in making this approach work, and we develop the necessary tools to overcome all these challenges. The end result is a threshold VRF with a provably secure tracing algorithm.
Minimal Symmetric PAKE and 1-out-of-N OT from Programmable-Once Public Functions
Symmetric password-authenticated key exchange (sPAKE) can be seen as an extension of traditional key exchange where two parties agree on a shared key if and only if they share a common secret (possibly low-entropy) password. We present the first sPAKE protocol to simultaneously achieve the following properties:
- only two exponentiations per party, the same as plain unauthenticated Diffie-Hellman key agreement (and likely optimal);
- optimal round complexity: a single flow (one message from each party that can be sent in parallel) to achieve implicit authentication, or two flows to achieve explicit mutual authentication;
- security in the random oracle model, rather than ideal cipher or generic group model;
- UC security, rather than game-based.
Our protocol is a generalization of the seminal EKE protocol of Bellovin & Merritt (S&P 1992).
We also present a UC-secure 1-out-of- oblivious transfer (OT) protocol, for random payloads. Its communication complexity is independent of , meaning that can even be exponential in the security parameter. Such a protocol can also be considered a kind of oblivious PRF (OPRF). Our protocol improves over the leading UC-secure 1-out-of- OT construction of Masny & Rindal (CCS 2019) for all , and has essentially the same cost for .
The new technique underlying these results is a primitive we call programmable-once public function (POPF). Intuitively, a POPF is a function whose output can be programmed by one party on exactly one point. All other outputs of the function are outside of any party's control, in a provable sense.
Update: dos Santos et al. (Eurocrypt 2023) showed that our POPF definition is not strong enough to prove security of EKE against a man-in-the-middle. See Januzelli et al. (Eurocrypt 2025) for a fixed POPF abstraction and EKE security proof.
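For orientation, a toy sketch of the underlying EKE idea (Bellovin & Merritt): run plain Diffie-Hellman, but blind each flow with a pad derived from the shared password, so only a party knowing the password can unblind the peer's share. This is NOT the paper's POPF-based protocol and is NOT secure (tiny group, ad-hoc pad, none of the UC subtleties); it only illustrates the "two exponentiations per party" shape, with the group parameters and pad derivation chosen here purely for the example.

```python
import hashlib, secrets

p, g = 23, 5          # toy group; a real instantiation needs a large safe prime
q = (p - 1) // 2

def pad(password: bytes, role: bytes) -> int:
    """Password-derived blinding factor, mapped into Z_p^* (ad-hoc toy KDF)."""
    d = hashlib.sha256(password + role).digest()
    return int.from_bytes(d, "big") % (p - 1) + 1

def send(password: bytes, role: bytes):
    x = secrets.randbelow(q - 1) + 1                      # exponentiation 1: g^x
    masked = (pow(g, x, p) * pad(password, role)) % p     # blind with password pad
    return x, masked

def receive(password: bytes, peer_role: bytes, x: int, masked: int) -> int:
    peer = (masked * pow(pad(password, peer_role), -1, p)) % p  # unblind
    return pow(peer, x, p)                                # exponentiation 2: key

pw = b"correct horse"
xa, ma = send(pw, b"A")
xb, mb = send(pw, b"B")
ka = receive(pw, b"B", xa, mb)
kb = receive(pw, b"A", xb, ma)
```

Each party sends one message and computes exactly two modular exponentiations, matching the cost profile the abstract claims is likely optimal.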
Juggernaut: Efficient Crypto-Agnostic Byzantine Agreement
It is well known that a trusted setup allows one to solve the Byzantine agreement problem in the presence of corruptions, bypassing the setup-free barrier. Alas, the overwhelming majority of protocols in the literature have the caveat that their security crucially hinges on the security of the cryptography and setup, to the point where if the cryptography is broken, even a single corrupted party can violate the security of the protocol. Thus these protocols provide higher corruption resilience ( instead of ) for the price of increased assumptions. Is this trade-off necessary?
We further the study of crypto-agnostic Byzantine agreement among parties that answers this question in the negative. Specifically, let and denote two parameters such that (1) , and (2) . Crypto-agnostic Byzantine agreement ensures agreement among honest parties if (1) the adversary is computationally bounded and corrupts up to parties, or (2) the adversary is computationally unbounded and corrupts up to parties, and is moreover given all secrets of all parties established during the setup. We propose a compiler that transforms any pair of resilience-optimal Byzantine agreement protocols in the authenticated and information-theoretic setting into one that is crypto-agnostic. Our compiler has several attractive qualities, including using only bits over the two underlying Byzantine agreement protocols, and preserving round and communication complexity in the authenticated setting. In particular, our results improve the state-of-the-art in bit complexity by at least two factors of and provide either early stopping (deterministic) or expected constant round complexity (randomized). We therefore provide fallback security for authenticated Byzantine agreement for free for .
Non-Interactive Key Exchange: New Notions, New Constructions, and Forward Security
Non-interactive key exchange (NIKE) is a simple and elegant cryptographic primitive that allows two or more users to agree on a secret shared key without any interaction. NIKE schemes have been formalized in different scenarios (such as the public-key, or the identity-based setting), and have found many applications in cryptography.
In this work, we propose a NIKE variant that generalizes public-key and identity-based NIKE: a multi-authority identity-based NIKE (MA-ID-NIKE) is defined like an identity-based NIKE, only with several identity domains (i.e., several instances of an identity-based NIKE), and such that users from different identity domains can compute shared keys. This makes MA-ID-NIKE schemes more versatile than existing NIKE or identity-based NIKE schemes, for instance, in an application in which users from different (centrally managed) companies need to compute shared keys.
We show several results for MA-ID-NIKE schemes:
- We show that MA-ID-NIKE schemes generically imply public-key NIKEs, identity-based NIKEs, as well as forward-secure NIKE schemes, the latter of which are notoriously hard to construct.
- We propose two simple constructions of MA-ID-NIKE schemes from indistinguishability obfuscation (iO) and multilinear maps, respectively. These constructions achieve only selective security, but can be leveraged to adaptive security for small groups of users (that want to be able to agree on a joint shared key) in the random oracle model.
- We give a simple and elegant construction of MA-ID-NIKEs from identity-based encryption (IBE) and universal samplers. This construction achieves adaptive security also for large groups of users based on the adaptive security of the used universal samplers. Universal samplers, in turn, are known to be achievable using iO in the random oracle model. As a nice feature, the same construction yields hierarchical MA-ID-NIKEs or public-key NIKEs when instantiated with hierarchical IBE or public-key encryption instead of IBE schemes.
While these results are clearly only feasibility results, they do demonstrate the achievability of a concept that itself has very practical use cases.
K-Linkable Ring Signatures and Applications in Generalized Voting
A Tight Analysis of GHOST Consistency
The GHOST protocol was proposed as an improvement to the Nakamoto consensus mechanism that underlies Bitcoin. In contrast to the Nakamoto fork-choice rule, the GHOST rule justifies selection of a chain with weights computed over subtrees rather than individual paths. This fork-choice rule has been adopted by a variety of consensus protocols and is featured in the currently deployed protocol supporting Ethereum.
We establish an exact characterization of the consistency region of the GHOST protocol, identifying the relationship between network delay, the rate of honest block production, and the rate of adversarial block production that guarantees that the protocol reaches consensus. In contrast to the closely related Nakamoto consensus protocol, we find that the region depends on the convention used by the protocol for tiebreaking: we establish tight results for both adversarial tiebreaking, in which ties are broken adversarially in order to frustrate consensus, and deterministic tiebreaking, in which ties between pairs of blocks are broken consistently throughout an execution. We provide explicit attacks for both conventions which stall consensus outside of the consistency region.
Our results conclude that the consistency region of GHOST can be strictly improved by incorporating a tiebreaking mechanism; in either case, however, the final region of consistency is inferior to the region of Nakamoto consensus.
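A minimal sketch of the fork-choice rule under analysis, assuming uniform block weight (one unit per block): descend from genesis into the child whose entire subtree is heaviest, with the tie-breaking convention left as a parameter, mirroring the deterministic-vs-adversarial distinction the paper studies.

```python
def ghost(children, root, tiebreak=min):
    """GHOST fork choice. children: dict mapping block id -> list of child ids."""
    def subtree_weight(b):
        # uniform block weight; recomputed naively, fine for a sketch
        return 1 + sum(subtree_weight(c) for c in children.get(b, []))
    head = root
    while children.get(head):
        best = max(subtree_weight(c) for c in children[head])
        # among the heaviest children, apply the supplied tie-breaking convention
        head = tiebreak(c for c in children[head] if subtree_weight(c) == best)
    return head

# A fork where GHOST leaves the longest path:
#   genesis - a - b - c            (path of 3 blocks below genesis)
#           \ d - e - g
#               \ f                (subtree of d: 4 blocks)
tree = {"genesis": ["a", "d"], "a": ["b"], "b": ["c"], "d": ["e", "f"], "e": ["g"]}
```

Here the subtree rooted at `d` outweighs the one rooted at `a`, so GHOST selects the head `g` even though the competing branch is equally long as a path.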
ChiLow and ChiChi: New Constructions for Code Encryption
We study the problem of embedded code encryption, i.e., encryption for binary software code for a secure microcontroller that is stored in an insecure external memory. As every single instruction must be decrypted before it can be executed, this scenario requires an extremely low latency decryption. We present a formal treatment of embedded code encryption security definitions, propose three constructions, namely ACE1, ACE2 and ACE3, and analyze their security. Further, we present ChiLow, a family of tweakable block ciphers and a related PRF specifically designed for embedded code encryption. At the core of ChiLow, there is ChiChi, a new family of non-linear layers of even dimension based on the well-known χ function. Our fully unrolled hardware implementation of ChiLow, using the Nangate 15nm Open Cell Library, achieves a decryption latency of less than 280 picoseconds.
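For background, the χ map that ChiChi builds on is the well-known nonlinear layer from Keccak: each bit is XORed with the AND of the complemented next bit and the bit after that. The sketch below is plain χ on an odd-length bit vector (χ is a permutation only for odd lengths; ChiChi itself, the paper's new even-dimension construction, is not reproduced here).

```python
def chi(bits):
    """Classic chi nonlinear layer: b[i] ^= (~b[i+1]) & b[i+2], indices cyclic."""
    n = len(bits)
    return [bits[i] ^ ((bits[(i + 1) % n] ^ 1) & bits[(i + 2) % n])
            for i in range(n)]
```

For odd n the map is invertible, which is what makes it usable as a cipher component; the test below checks bijectivity exhaustively for n = 5.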
Quasi-Linear Indistinguishability Obfuscation via Mathematical Proofs of Equivalence and Applications
Indistinguishability obfuscation (\iO) is a powerful cryptographic primitive
and has been quoted as the ``swiss army-knife of modern cryptography''. Most prior works on \iO focused on theoretical feasibility, and paid less attention to the efficiency of the constructions. As a result, all prior constructions stopped at achieving polynomial efficiency without worrying about how large the polynomial is.
In fact, it has even been conjectured that a polynomial dependence on the input length is necessary.
In this work, we show that if the two circuits to be obfuscated enjoy a succinct propositional logic proof of equivalence, then we can
create obfuscated versions of these programs that are computationally indistinguishable; and importantly, the obfuscated program's efficiency is quasi-linear in the circuit size and proof size. We show that our quasi-linear \iO construction also leads to new applications. Specifically, we show how to achieve quasi-linear efficiency for 1) \iO for Turing Machines with unbounded inputs, and 2) multi-input functional encryption, also assuming succinct proofs of equivalence.
Khatam: Reducing the Communication Complexity of Code-Based SNARKs
Every linear code satisfies the property of ``correlated agreement", meaning that if are two vectors in and if is close in Hamming distance to some codeword in , then and each agree with a codeword in in positions indexed by elements of . In this work, we prove something stronger -- that if is close to , then and all agree with codewords at positions indexed by elements of , except with negligible probability over . Our result holds as long as , and with failure probability smaller than . Furthermore, our results extend to any finite field and any linear code.
We use this result to prove that BaseFold (Crypto 2024), an efficient multilinear polynomial commitment scheme, is secure in the \textit{list decoding regime}, which significantly reduces its communication complexity. Our result is agnostic with respect to both the field and the code, and can therefore be used to reduce the communication complexity of a wide class of code-based multilinear polynomial commitment schemes.
Dimensional eROSion: Improving the ROS Attack with Decomposition in Higher Bases
We revisit the polynomial attack on the ROS problem from [BLLOR22]. Our new algorithm achieves a polynomial-time solution in dimension , extending the range of dimensions for which a polynomial attack is known beyond the previous bound of .
We also combine our new algorithm with Wagner's attack to improve the general attack complexity for some of the dimensions where a polynomial solution is still not known.
We implement our polynomial attack and break the one-more unforgeability of blind Schnorr signatures over 256-bit elliptic curves in a few seconds with 192 concurrent sessions.
Lattice-based Cryptography: A survey on the security of the lattice-based NIST finalists
This survey, mostly written in the years 2022-2023, aims to give as short as possible a description of the current state-of-the-art lattice attacks on lattice-based cryptosystems, without losing the essence of the matter.
The main focus is the security of the NIST finalists and alternatives that are based on lattices, namely CRYSTALS-Kyber, CRYSTALS-Dilithium and Falcon. Instead of going through these cryptosystems case by case, this survey considers attacks on the underlying hardness assumptions: in the case of the mentioned lattice-based schemes, these are (variants of) LWE (Learning With Errors) and NTRU.
Halving differential additions on Kummer lines
We study differential addition formulas on Kummer lines that factorize through a degree isogeny . We call the resulting formulas half differential additions: from the knowledge of and , the half differential addition allows one to recover . We explain how Mumford's theta group theory allows one, in any model of Kummer lines, to find a basis of the half differential relations. This involves studying the dimension isogeny .
We then use the half differential addition formulas to build a new type of Montgomery ladder, called the half-ladder, using a time-memory trade-off. On a Montgomery curve with full rational -torsion, our half-ladder first builds a succession of isogeny images , which depend only on the base point and not on the scalar , for a pre-computation cost of per bit. Then we use half doublings and half differential additions to compute any scalar multiplication , for a cost of per bit. The total cost is then , even when the base point is not normalized. By contrast, the usual Montgomery ladder costs per bit for a normalized point.
In the appendix, we extend our approach to higher dimensional ladders in theta coordinates or twisted theta coordinates. In dimension , after a pre-computation step which depends on the base point , our half ladder only costs , compared to for the standard ladder.
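For context, a sketch of the standard x-only Montgomery ladder that the half-ladder is compared against: one differential addition (xADD) and one doubling (xDBL) per scalar bit, maintaining the invariant R1 - R0 = P so that x(P - Q) is always the known base x-coordinate. Curve25519 parameters are used for concreteness; the paper's half-ladder itself is not reproduced.

```python
p = 2**255 - 19
A = 486662
a24 = (A + 2) // 4

def xadd(P, Q, diff):
    """Differential addition: x(P+Q) from x(P), x(Q) and x(P-Q), in X:Z coords."""
    (XP, ZP), (XQ, ZQ), (Xd, Zd) = P, Q, diff
    U = (XP - ZP) * (XQ + ZQ) % p
    V = (XP + ZP) * (XQ - ZQ) % p
    return Zd * (U + V) ** 2 % p, Xd * (U - V) ** 2 % p

def xdbl(P):
    """x-only doubling in X:Z coordinates."""
    XP, ZP = P
    S, D = (XP + ZP) ** 2 % p, (XP - ZP) ** 2 % p
    E = (S - D) % p
    return S * D % p, E * (D + a24 * E) % p

def ladder(k, x):
    """x-coordinate of [k]P for the point P with x-coordinate x (k >= 1)."""
    R0, R1 = (x, 1), xdbl((x, 1))          # R1 - R0 = P throughout
    for i in reversed(range(k.bit_length() - 1)):
        if (k >> i) & 1:
            R0, R1 = xadd(R0, R1, (x, 1)), xdbl(R1)
        else:
            R0, R1 = xdbl(R0), xadd(R0, R1, (x, 1))
    X, Z = R0
    return X * pow(Z, p - 2, p) % p        # normalize to affine x
```

One xADD plus one xDBL per bit is exactly the per-bit cost the abstract cites for the classical ladder; the half-ladder trades pre-computation for cheaper per-bit steps. A handy consistency check: x-only scalar multiplication composes, so x([6]P) must equal the ladder applied twice with scalars 2 and 3.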
Asynchronous Algorand: Reaching Agreement with Near Linear Communication and Constant Expected Time
The celebrated Algorand protocol solves validated byzantine agreement in a scalable manner in the synchronous setting. In this paper, we study the feasibility of similar solutions in the asynchronous setting. Our main result is an asynchronous validated byzantine agreement protocol that we call Asynchronous Algorand. As with Algorand, it terminates in an expected constant number of rounds, and honest parties send an expected bits, where is the number of parties. The protocol is resilient to a fully-asynchronous weakly-adaptive adversary that can corrupt a near-optimal number of parties ( ) and requires just a VRF setup and secure erasures.
A key innovation in Asynchronous Algorand is a rather simple but surprisingly effective method to do \textit{committee-based role assignment} for asynchronous verifiable secret sharing in the YOSO (You Only Speak Once) model. This method achieves near-optimal resilience and near-linear communication complexity while relying solely on a verifiable random function (VRF) setup and secure erasures.
FHE-SNARK vs. SNARK-FHE: From Analysis to Practical Verifiable Computation
Verifiable Computation over encrypted data (VC) faces a critical dilemma between two competing paradigms: SNARK-FHE (applying SNARKs to prove FHE operations) and FHE-SNARK (homomorphically evaluating SNARK proofs). Two interesting questions remain open in resolving this dilemma: 1) Are the two paradigms identical in terms of security? 2) How practically efficient can we get? This work answers these questions through the following results:
1) We establish a formal security analysis between VC and secure two-party computation (2PC). We investigate VC with server inputs and show the following: a) VC with server input has an exact 1-bit security loss compared to 2PC; b) SNARK-FHE aligns with 2PC while FHE-SNARK naturally falls into the VC category; c) existing FHE-SNARK works are vulnerable in the VC-with-server-input setting, for which we formalize an input-dependent attack.
2) We design an FHE-friendly SNARK with: a) 3× lower multiplicative depth than FRI-based SNARKs; b) compatibility with FHE SIMD operations. Based on this novel SNARK, we construct an FHE-SNARK scheme that has: a) stronger security: resistance against input-dependent attacks; b) 8× speedup: 3.6-hour proof generation for -gate circuits on a single-core CPU (vs. 29 hours in the state of the art); c) practical verification: 65.3 MB proofs with 2.7-second verification (single core).
Single-Input Functionality against a Dishonest Majority: Practical and Round-Optimal
In this work, we focus on Single-Input Functionality (SIF), which can be viewed as a special case of MPC. In a SIF, only one distinguished party, called the dealer, holds a private input. SIF allows the dealer to perform a computation task with other parties without revealing any additional information about the private input. SIF has diverse applications, including multiple-verifier zero-knowledge and verifiable relation sharing.
As our main contribution, we propose the first 1-round SIF protocol against a dishonest majority in the preprocessing model, which is highly efficient. Prior works either require at least 2 rounds of online communication (Yang and Wang, Asiacrypt 2022; Baum et al., CCS 2022; Zhou et al., EuroS&P 2024) or are only feasibility results (Lepinski et al., TCC 2005; Applebaum et al., Crypto 2022). We show the necessity of broadcast channels by formally proving that 1-round SIF is impossible to achieve in the preprocessing model if no broadcast channels are available. We implement our protocol and conduct extensive experiments to illustrate its practical efficiency.
Pseudorandom Functions with Weak Programming Privacy and Applications to Private Information Retrieval
Although privately programmable pseudorandom functions (PPPRFs) are known to have numerous applications, so far, the only known constructions rely on Learning with Errors (LWE) or indistinguishability obfuscation. We show how to construct a relaxed PPPRF from one-way functions (OWF) alone. The resulting PPPRF satisfies security and works for polynomially sized input domains. Using the resulting PPPRF, we obtain new results for preprocessing Private Information Retrieval (PIR) that improve the state of the art. Specifically, we show that, relying only on OWF, we can get a 2-server preprocessing PIR with polylogarithmic bandwidth while consuming client space and server space for an arbitrarily small constant . In the 1-server setting, we get a preprocessing PIR from OWF that achieves polylogarithmic online bandwidth and offline bandwidth, while preserving the same client and server space as before. Our result, in combination with the lower bound of Ishai, Shi, and Wichs (CRYPTO'24), establishes a tight understanding of the bandwidth and client-space tradeoff for 1-server preprocessing PIR from Minicrypt assumptions. Interestingly, we are also the first to show non-trivial ways to combine client-side and server-side preprocessing to get improved results for PIR.
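As background, the classic GGM tree already gives a *puncturable* PRF from any one-way function (here SHA-256 stands in for the length-doubling PRG); the punctured key is the co-path of sibling seeds. The paper's contribution, programming the punctured point with (weak) privacy, is a strictly stronger property not captured by this plain sketch.

```python
import hashlib

def prg(seed: bytes):
    """Length-doubling PRG stand-in: 32-byte seed -> two 32-byte children."""
    return (hashlib.sha256(seed + b"0").digest(),
            hashlib.sha256(seed + b"1").digest())

def evaluate(key: bytes, x: int, n: int) -> bytes:
    """F(key, x) for an n-bit input: walk the GGM tree along the bits of x."""
    node = key
    for i in reversed(range(n)):
        node = prg(node)[(x >> i) & 1]
    return node

def puncture(key: bytes, x: int, n: int):
    """Co-path keys: enough to evaluate everywhere except at x."""
    copath, node = [], key
    for i in reversed(range(n)):
        bit = (x >> i) & 1
        children = prg(node)
        copath.append((i, 1 - bit, children[1 - bit]))  # sibling off the path
        node = children[bit]
    return copath

def eval_punctured(copath, x_star: int, x: int, n: int) -> bytes:
    assert x != x_star, "cannot evaluate at the punctured point"
    for i, bit, node in copath:
        # the unique level where x leaves the path to x_star
        if (x >> i) & 1 == bit and (x >> (i + 1)) == (x_star >> (i + 1)):
            for j in reversed(range(i)):        # descend with remaining bits
                node = prg(node)[(x >> j) & 1]
            return node
    raise ValueError("unreachable for x != x_star")
```

The punctured key reveals nothing about F(key, x*) beyond what the PRG hides, which is the starting point that programming privacy then strengthens.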
Breaking and Fixing Anonymous Credentials for the Cloud
In an attribute-based credential (ABC) system, users obtain a digital certificate on their personal attributes, and can later prove possession of such a certificate in an unlinkable way, thereby selectively disclosing chosen attributes to the service provider. Recently, the concept of encrypted ABCs (EABCs) was introduced by Krenn et al. at CANS 2017, where virtually all computation is outsourced to a semi-trusted cloud-provider called wallet, thereby overcoming existing efficiency limitations on the user’s side, and for the first time enabling “privacy-preserving identity management as a service”.
While their approach is highly relevant for bringing ABCs into the real world, we present a simple attack allowing the wallet to learn a user's attributes when colluding with another user -- a scenario which is not covered by their modeling but which needs to be considered in practice. We then revise the model and construction of Krenn et al. in various ways, such that the above attack is no longer possible. Furthermore, we also remove existing non-collusion assumptions between wallet and service provider or issuer from their construction. Our protocols are still highly efficient in the sense that the computational effort on the end user side consists of a single exponentiation only, and otherwise efficiency is comparable to the original work of Krenn et al.
Bypassing the characteristic bound in logUp
In this informal note, we describe how to bypass the characteristic bound in logUp [eprint 2022/1530] by abstracting the notion of (pole) multiplicity. The method applies as well to the GKR-variant from Papini and Haböck [eprint 2023/1284], and it moreover unlocks fractional decomposition lookups over binary fields.
Basefold in the List Decoding Regime
In this writeup we discuss the soundness of the Basefold multilinear polynomial commitment scheme [Zeilberger, Chen, Fisch 23] applied to Reed-Solomon codes, and run with proximity parameters up to the Johnson list decoding bound.
Our security analysis relies on a generalization of the celebrated correlated agreement theorem from [Ben-Sasson, et al., 20] to linear subcodes of Reed-Solomon codes, which turns out to be a by-product of the Guruswami-Sudan list decoder analysis.
We further highlight a non-linear variant of the subcode correlated agreement theorem, which is flexible enough to apply to Basefold-like protocols such as recent optimizations of FRI-Binius [Diamond, Posen 24], and which we believe is sufficient for proving the security of a recent multilinear version of STIR [Arnon, Chiesa, Fenzi, Yogev 24] in the list-decoding regime.
A note on the G-FFT
For primes with being smooth, the G-FFT from Li and Xing [LX23] is an algebraic FFT, which at first glance seems equivalent to the circle FFT from [IACR eprint 2024/278]: It also uses the circle curve over (in other words the projective line) as underlying domain, and interpolates by low-degree functions with poles over the same set of points. However, their approach to control the degree of the FFT basis is fundamentally different.
The G-FFT makes use of punctured Riemann-Roch spaces, and the construction works with the group doubling map only, no projection onto the -axis involved.
In this note we give an elementary description of the G-FFT without using abstract algebra. We describe a variant which uses a simpler, and in our opinion more natural, function space, and which treats the exceptional point of the domain (the group identity) differently. In comparison to the circle FFT, the G-FFT (both the original and our variant) has the following downsides: interpolation and domain evaluation cost double the number of multiplications (the twiddle is not an ``odd'' function), and the function space is not invariant under the group action, causing additional overhead when applied in STARKs.
Circle STARKs
Traditional STARKs require a cyclic group of smooth order in the field. This allows efficient interpolation of points using the FFT algorithm, and writing constraints that involve neighboring rows. The Elliptic Curve FFT (ECFFT, Parts I and II) introduced a way to make efficient STARKs for any finite field, by using a cyclic group of an elliptic curve. We show a simpler construction along the lines of ECFFT, over the circle curve . When is divisible by a large power of , this construction is as efficient as traditional STARKs and ECFFT. Applied to the Mersenne prime , which has recently been advertised in IACR eprint 2023/824, our preliminary benchmarks indicate a speed-up by a factor of compared to a traditional STARK using the Babybear prime .
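The underlying observation can be made concrete: over the Mersenne prime p = 2^31 - 1, the unit circle x^2 + y^2 = 1 carries a group law (angle addition) and has exactly p + 1 = 2^31 points, a perfectly smooth order even though p - 1 itself is not smooth. A minimal sketch of the group law:

```python
p = 2**31 - 1  # the Mersenne prime M31

def mul(P, Q):
    """Circle group law, i.e. (cos a, sin a) * (cos b, sin b) = angle addition."""
    (x1, y1), (x2, y2) = P, Q
    return ((x1 * x2 - y1 * y2) % p, (x1 * y2 + y1 * x2) % p)

def scalar(k, P):
    """Scalar multiplication by square-and-multiply in the circle group."""
    R = (1, 0)                       # the group identity
    while k:
        if k & 1:
            R = mul(R, P)
        P = mul(P, P)
        k >>= 1
    return R

def point_with_x(x):
    """A circle point with given x, if 1 - x^2 is a square (p = 3 mod 4)."""
    v = (1 - x * x) % p
    s = pow(v, (p + 1) // 4, p)      # candidate square root
    return (x, s) if s * s % p == v else None
```

Since every point's order divides the group order, scalar multiplication by p + 1 = 2^31 must return the identity for any point on the circle, which the test below checks.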
Improving logarithmic derivative lookups using GKR
In this informal note, we instantiate the Goldwasser-Kalai-Rothblum (GKR) protocol to prove fractional sumchecks as present in lookup arguments based on logarithmic derivatives, with the following impact on the prover cost of logUp (IACR eprint 2022/1530):
When looking up columns in a (for the sake of simplicity) single column table, the prover has to commit only to a single extra column, i.e. the multiplicities of the table entries.
In order to carry over the GKR fractional sumcheck to the univariate setting, we furthermore introduce a simple, yet (as far as we know) novel transformation for turning a univariate polynomial commitment scheme into a multilinear one. The transformation complements existing approaches and might be of independent interest for its elegant way to prove arbitrary powers of the lexicographic shift over the Boolean hypercube.
Multivariate lookups based on logarithmic derivatives
Logarithmic derivatives translate products of linear factors into sums of their reciprocals, turning zeroes into simple poles of the same multiplicity. Based on this simple fact, we construct an interactive oracle proof for multi-column lookups over the Boolean hypercube, which makes use of a single multiplicity function instead of working with a rearranged union of table and witnesses. For single-column lookups the performance is comparable to the well-known Plookup strategy used by Hyperplonk+. However, the real power of our argument unfolds in the case of batch lookups, when multiple columns are subject to a single-table lookup: while the number of field operations is comparable to the Hyperplonk+ lookup (extended to multiple columns), the oracles provided by our prover are much less expensive. For example, for columns of length 2^12, paper-and-pencil operation counts indicate that the logarithmic derivative lookup is between 1.5 and 4 times faster, depending on the number of columns.
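The core identity can be checked directly: the witness values w_i all lie in the table (t_j) if and only if, for the multiplicity function m, sum_i 1/(X - w_i) = sum_j m_j/(X - t_j) as rational functions. A toy field sketch, tested at one random point in Schwartz-Zippel style (a real protocol proves this sum inside an IOP rather than opening it in the clear):

```python
import random

p = 2**61 - 1  # toy prime field modulus, chosen here just for the example

def lookup_check(witness, table, x):
    """Check the logarithmic-derivative lookup identity at the point x."""
    inv = lambda a: pow(a % p, p - 2, p)           # field inverse via Fermat
    m = {t: witness.count(t) for t in table}       # multiplicity of each entry
    lhs = sum(inv(x - w) for w in witness) % p
    rhs = sum(mult * inv(x - t) for t, mult in m.items()) % p
    return lhs == rhs
```

A witness value outside the table contributes a pole the right-hand side cannot match, so the check fails at a random point with overwhelming probability.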
A summary on the FRI low degree test
This document is an informal summary on the FRI low degree test [BSBHR18a], [BSCI+20], and DEEP algebraic linking from [BSGKS20]. Based on its most recent soundness analysis [BSCI+20], we discuss parameter settings for practical security levels, how FRI is turned into a polynomial commitment scheme, and the soundness of DEEP sampling in the list decoding regime. In particular, we illustrate the DEEP method applied to proving satisfiability of algebraic intermediate representations and prove a soundness error bound which slightly improves the one in [Sta21].
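The basic folding step that the summary builds on: split f into even and odd parts, f(x) = e(x^2) + x*o(x^2), and fold with a verifier challenge beta into g(y) = e(y) + beta*o(y), halving the degree. Pointwise, g(x^2) = (f(x) + f(-x))/2 + beta*(f(x) - f(-x))/(2x). A coefficient-level sketch over a toy prime field:

```python
from itertools import zip_longest

p = 2**31 - 1  # toy field modulus for the example

def ev(coeffs, x):
    """Horner evaluation of a coefficient list (lowest degree first) mod p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def fold(coeffs, beta):
    """One FRI fold: combine even and odd coefficient halves with beta."""
    even, odd = coeffs[0::2], coeffs[1::2]
    return [(e + beta * o) % p for e, o in zip_longest(even, odd, fillvalue=0)]
```

The verifier never sees the coefficients; it checks the pointwise relation above between the evaluations of f at x and -x and the evaluation of the folded g at x^2.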
Darlin: Recursive Proofs using Marlin
This document describes Darlin, a succinct zero-knowledge argument of knowledge based on the Marlin SNARK (Chiesa et al., Eurocrypt 2020) and the `dlog' polynomial commitment scheme of Bootle et al. (Eurocrypt 2016).
Darlin addresses recursive proofs by integrating the amortization technique from Halo (IACR eprint 2019/099) for the non-succinct parts of the dlog verifier, and we adapt their strategy for bivariate circuit-encoding polynomials to aggregate Marlin's inner sumchecks across the nodes of the recursive scheme.
We estimate the performance gain from inner sumcheck aggregation at about 30% in a tree-like scheme of in-degree 2, and more when applied to linear recursion.
(Un)breakable curses - re-encryption in the Fujisaki-Okamoto transform
The Fujisaki-Okamoto transform (FO) is the go-to method for achieving chosen-ciphertext (CCA) security for post-quantum key encapsulation mechanisms (KEMs). An important step in FO is augmenting the decryption/decapsulation algorithm with a re-encryption step -- the decrypted message is re-encrypted to check whether the correct encryption randomness was used. While solving a security problem (ciphertext malleability), re-encryption has turned out to introduce side-channel vulnerabilities and is computationally expensive, which has led designers to search for alternatives. In this work, we perform a comprehensive study of such alternatives. We formalize a central security property, computational rigidity, and show that it is sufficient for obtaining CCA security. We present a framework for analyzing algorithms that can replace re-encryption and still achieve rigidity, and analyze existing proposals in this framework.
Along the way, we pick up a novel QROM security statement for explicitly rejecting KEMs based on deterministic PKE schemes, something that was previously only possible when requiring a hard-to-ensure quantum property of the base PKE scheme.
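The re-encryption step under discussion, made concrete with a toy explicitly rejecting FO-style KEM: encryption randomness is derived from the message, and decapsulation re-encrypts the recovered message and rejects any ciphertext that does not re-encrypt to itself. The "PKE" here is a textbook RSA-KEM-style hybrid with tiny parameters, purely to make the control flow visible; it is insecure and not any standardized scheme.

```python
import hashlib, secrets

# toy RSA trapdoor (DO NOT USE: tiny primes, textbook RSA)
P_, Q_ = 61043, 65537
N, E = P_ * Q_, 3
D = pow(E, -1, (P_ - 1) * (Q_ - 1))

def G(m: bytes) -> int:        # encryption randomness derived from the message
    return int.from_bytes(hashlib.sha256(b"G" + m).digest(), "big") % N

def Hmask(r: int) -> bytes:    # mask for the message part
    return hashlib.sha256(b"H" + str(r).encode()).digest()

def pke_enc(m: bytes, r: int):
    c1 = pow(r, E, N)                                   # encapsulate r
    c2 = bytes(a ^ b for a, b in zip(m, Hmask(r)))      # mask the message
    return c1, c2

def pke_dec(c1: int, c2: bytes) -> bytes:
    r = pow(c1, D, N)
    return bytes(a ^ b for a, b in zip(c2, Hmask(r)))

def encaps():
    m = secrets.token_bytes(32)
    c = pke_enc(m, G(m))                                # derandomized encryption
    return c, hashlib.sha256(b"K" + m + repr(c).encode()).digest()

def decaps(c):
    m = pke_dec(*c)
    if pke_enc(m, G(m)) != c:                           # the re-encryption check
        return None                                     # explicit rejection
    return hashlib.sha256(b"K" + m + repr(c).encode()).digest()
```

Tampering with the ciphertext changes the recovered message, hence the re-derived randomness, hence the re-encryption, so the check fails; this double encryption is exactly the cost and side-channel surface that the alternatives studied in the paper try to avoid.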
Stateless Hash-Based Signatures for Post-Quantum Security Keys
The U.S. National Institute of Standards and Technology recently standardized the first set of post-quantum cryptography algorithms. These algorithms address the quantum threat, but also present new challenges due to their larger memory and computational footprint. Three of the four standardized algorithms are lattice based, offering good performance but posing challenges due to complex implementation and intricate security assumptions. A more conservative choice for quantum-safe authentication is hash-based signature systems. However, due to large signature sizes and low signing speeds, hash-based systems have only found use in niche applications. The first NIST-standardized, stateless hash-based signature system is the SPHINCS+-based SLH-DSA.
In this work we combine different approaches to show that SPHINCS+ can be optimized in its parameters and implementation to be high performing, even when signing in an embedded setting. We demonstrate this in the context of user authentication using hardware security keys within FIDO. Our SPHINCS+-based implementation can even outperform lattice-based solutions while remaining highly portable. Due to conservative security assumptions, our solution does not require a hybrid construction and can perform authentication on current security keys. For reproducibility and to encourage further research we publish our Cortex M4-based implementation.
Rondo: Scalable and Reconfiguration-Friendly Randomness Beacon
We present Rondo, a scalable and reconfiguration-friendly distributed randomness beacon (DRB) protocol in the partially synchronous model. Rondo is the first DRB protocol that is built from batched asynchronous verifiable secret sharing (bAVSS) while avoiding the high message cost, where is the number of nodes. Our key contribution lies in the introduction of a new variant of bAVSS called batched asynchronous verifiable secret sharing with partial output (bAVSS-PO). bAVSS-PO is a weaker primitive than bAVSS but allows us to build a secure and more scalable DRB protocol. We propose a bAVSS-PO protocol, Breeze. Breeze achieves the optimal messages for the sharing stage and allows Rondo to offer better scalability than prior DRB protocols.
Additionally, to support reconfiguration, we introduce Rondo-BFT, a dynamic and partially synchronous Byzantine fault-tolerant protocol inspired by Dyno (S&P 2022). Unlike Dyno, Rondo-BFT provides a communication pattern that generates randomness beacon output periodically, making it well-suited for DRB applications.
We implement our protocols and evaluate their performance on Amazon EC2 using up to 91 instances. Our evaluation results show that Rondo achieves higher throughput than existing works while offering better scalability: performance does not degrade as significantly as grows.
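Underlying any (batched) AVSS is plain Shamir secret sharing: the secret is the constant term of a random degree-t polynomial, and any t+1 shares reconstruct it by Lagrange interpolation at zero. A minimal sketch over a toy prime field (no verifiability, no asynchrony handling, no batching):

```python
import random

p = 2**61 - 1  # toy prime field modulus

def share(secret, n, t, rng=random):
    """Shares of `secret` for n parties with threshold t (t+1 reconstruct)."""
    coeffs = [secret] + [rng.randrange(p) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t+1 distinct shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, p - 2, p)) % p
    return secret
```

bAVSS-style protocols add commitments so parties can verify their shares, and amortize many secrets per sharing; the arithmetic core stays this simple.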
Lattice-based Zero-knowledge Proofs for Blockchain Confidential Transactions
We propose new zero-knowledge proofs for efficient and post-quantum ring confidential transaction (RingCT) protocols based on lattice assumptions in blockchain systems. First, we introduce an inner-product based linear equation satisfiability approach for balance proofs with a wide range (e.g., 64-bit precision). Unlike existing balance proofs (MatRiCT and MatRiCT+) that require additional proofs for some "corrector values", our approach avoids the corrector values for better efficiency. Furthermore, we design a ring signature scheme to efficiently hide a user's identity in large anonymity sets. Different from existing approaches that adopt a one-out-of-many proof (MatRiCT and MatRiCT+), we show that a linear sum proof suffices in ring signatures, which avoids the costly binary proof part. We further use the idea of "unbalanced" relations to build a logarithmic-size ring signature scheme. Finally, we show how to adopt these techniques in RingCT protocols and implement a prototype to compare the performance with existing approaches. The results show our solutions can reduce proof size by up to 50% and 20%, proving time by up to 30% and 20%, and verification time by up to 20% and 20%, compared to MatRiCT and MatRiCT+, respectively. We also believe our techniques are of independent interest for other applications and are applicable in a generic setting.
Efficient Error-tolerant Side-channel Attacks on GPV Signatures Based on Ordinary Least Squares Regression
The Gentry-Peikert-Vaikuntanathan (GPV) framework is used to construct practical digital signatures and is proven secure in both the classical and quantum random-oracle models. Falcon is one such signature scheme, recognized as among the most compact and efficient of the NIST-standardized signature schemes. Recently, Guerreau et al. (CHES 2022) and Zhang et al. (Eurocrypt 2023) proposed secret-key recovery attacks on Falcon that utilize signatures filtered by simple power analysis (SPA). However, these attacks, which exploit conditional signature distributions, require a large number of SPA measurements to obtain the filtered signatures. Furthermore, no existing attack considers general GPV signatures, despite the importance of the GPV framework in modern digital signatures. We address these problems as follows.
First, we introduce, for the first time, the concept of vulnerable partial information of GPV signatures and propose a non-filtering secret-key recovery attack, called the OLS attack, which effectively utilizes partial information without filtering. The OLS attack is a linear estimator whose computational complexity scales linearly with the number of samples, making it highly practical. Furthermore, we prove that the secret key recovered by the OLS attack converges in probability to the real secret key as the number of samples increases.
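As a toy illustration of why a least-squares estimator recovers a secret from many noisy linear observations (our own sketch with made-up parameters, not the paper's Falcon-specific attack), consider estimating a single hidden coefficient from samples y = s*x + e:

```python
# Toy sketch (ours, not the Falcon-specific attack): ordinary least squares
# recovering a hidden coefficient s from noisy linear observations
# y = s*x + e, with a running-sum estimator that costs O(1) per sample.
import random

random.seed(1)
s = 7                                  # the "secret" to recover
n = 10_000                             # number of noisy samples

num = den = 0.0
for _ in range(n):
    x = random.uniform(-1, 1)          # known partial information
    e = random.uniform(-0.5, 0.5)      # zero-mean measurement noise
    y = s * x + e                      # observed leakage
    num += x * y                       # accumulating these two sums is all
    den += x * x                       # the estimator needs: linear time
s_hat = num / den                      # closed-form OLS estimate
print(round(s_hat))                    # converges to s as n grows
```

The estimate s_hat = Σxy / Σx² deviates from s by Σxe / Σx², which vanishes in probability as the sample count grows — the same convergence phenomenon the abstract invokes, here in one dimension.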
Second, we leverage SPA to extract Gaussian leakage, which serves as the partial information for the OLS attack on Falcon. As a result, the OLS attack achieves a significantly higher success rate with fewer samples than state-of-the-art attacks. Furthermore, by incorporating the DDGR attack, the OLS attack can recover the secret key from far fewer samples with a success rate close to 100%. Moreover, we propose an OLS attack specialized for Falcon, which further reduces the number of required side-channel measurements.
Third, we propose an error-tolerant power analysis attack using MAP decoding, which effectively corrects errors in the samples to properly estimate the Gaussian leakage. For concrete experimental validation, we generate noisy power traces with the ELMO simulator and measure power traces from an STM32F415 target with ChipWhisperer. The proposed MAP decoding is highly effective at estimating Gaussian leakage, particularly when applied to power traces collected with the low-resolution ChipWhisperer. In conclusion, studying countermeasures against OLS attacks is an important direction for future work.
DFS: Delegation-friendly zkSNARK and Private Delegation of Provers
Zero-Knowledge Succinct Non-interactive Arguments of Knowledge (zkSNARKs) yield proofs that can be verified succinctly but require huge computational resources to generate. Prior systems outsource proof generation either through public delegation, which reveals the witness to the third party, or, preferably, private delegation, which keeps the witness hidden using multiparty computation (MPC). However, current private delegation schemes struggle with scalability and efficiency due to MPC inefficiencies, poor resource utilization, and suboptimal design of zkSNARK protocols.
In this paper, we introduce DFS, a new zkSNARK that is delegation-friendly for both public and private scenarios. Prior work focused on optimizing the MPC protocols for existing zkSNARKs, while DFS co-designs the MPC and the zkSNARK so that the protocol is efficient both for distributed computing and for MPC. In particular, DFS achieves linear prover time and logarithmic verification cost in the non-delegated setting. For private delegation, DFS introduces a scheme with zero communication overhead in MPC and achieves malicious security for free, which results in logarithmic overall communication, whereas prior work required linear communication. Our evaluation shows that DFS is as efficient as state-of-the-art zkSNARKs in public delegation; when used for private delegation, it scales better than previous work. In particular, the total communication of DFS is on the order of kilobytes, while prior work incurs gigabytes of communication, which is linear in the circuit size. Additionally, we identify and address a security flaw in prior work, EOS (USENIX'23).
Stationary Syndrome Decoding for Improved PCGs
Syndrome decoding (SD), or equivalently Learning Parity with Noise (LPN), is a fundamental problem in cryptography: for a field, a compressing public matrix, and a secret sparse vector sampled from some noise distribution, the resulting syndrome is indistinguishable from uniform. Recently, SD has gained significant interest due to its use in pseudorandom correlation generators (PCGs).
In pursuit of better efficiency, we propose a new assumption called Stationary Syndrome Decoding (SSD). In SSD, we consider correlated noise vectors across the associated instances: the noise vectors are restricted to having their non-zero entries in the same small subset of positions. That is, at the positions of this common subset the noise entries are uniformly random, while all other entries are zero.
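A minimal sketch of the SSD noise model described above (the parameters q, n, weight, and t are our own toy choices, not from the paper): every instance draws fresh values, but all instances share one secret support.

```python
# Toy sketch of the Stationary Syndrome Decoding noise model (illustrative
# parameters of our own choosing): all t noise vectors share one secret
# support S, carry fresh uniform values on S, and are zero elsewhere.
import random

random.seed(0)
q, n, weight, t = 97, 32, 4, 3        # field size, length, sparsity, #instances

S = random.sample(range(n), weight)   # the common support, chosen once
noise = []
for _ in range(t):
    e = [0] * n
    for j in S:
        e[j] = random.randrange(1, q)  # fresh nonzero value at each support position
    noise.append(e)

supports = [{j for j, v in enumerate(e) if v != 0} for e in noise]
print(all(sup == set(S) for sup in supports))   # True: all supports coincide
```

The point of the assumption is that, unlike naively reusing one noise vector, the fresh per-instance values on the shared support defeat the Gaussian-elimination attack mentioned in the abstract.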
Although naively reusing the noise vector renders SD and LPN insecure via simple Gaussian elimination, we observe that known attacks do not extend to our correlated noise. We show SSD is unconditionally secure against so-called linear attacks, e.g., advanced information set decoding and representation techniques (Esser and Santini, Crypto 2024). We further adapt the state-of-the-art nonlinear attack (Briaud and Oygarden, Eurocrypt 2023) to SSD and demonstrate, both theoretically and experimentally, resistance to the attack.
We apply SSD to PCGs to amortize the cost of the noise-generation protocol. For OT and VOLE generation, the per-instance communication is reduced. For suggested parameters, we observe an improvement in running time or a reduction in communication by a factor of 6 or more. For Beaver triple generation using Ring-LPN, our techniques have the potential for substantial amortization due to the high concrete overhead of Ring-LPN noise generation.
Improved Secure Two-party Computation from a Geometric Perspective
Multiplication and other non-linear operations are widely recognized as the most costly components of secure two-party computation (2PC) based on linear secret sharing. Moreover, the comparison protocol is essential for various operations such as truncation, signed extension, and signed non-uniform multiplication. This paper optimizes these protocols by avoiding invocations of the costly comparison protocol, thereby improving their efficiency.
We propose a novel approach that studies 2PC from a geometric perspective. Specifically, we interpret the two shares of a secret as the horizontal and vertical coordinates of a point in a Cartesian coordinate system, with the secret itself represented by the corresponding point. This reformulation allows us to reduce the comparison problem to determining the region in which the point lies. Furthermore, we identify scenarios where the costly comparison protocol can be replaced by more efficient AND-gate evaluation protocols within a constrained range. Using this method, we improve the protocols for truncation, signed extension, and signed non-uniform multiplication, all of which are fundamental to 2PC. In particular, for the one-bit-error truncation protocol and the signed extension protocols, we reduce the state-of-the-art communication complexities of Cheetah (USENIX '22) and SirNN (S&P '21) in two rounds, as functions of the input length and the security parameter. For signed multiplication with non-uniform bit-widths, we reduce the communication cost of SirNN by 40% to 60%.
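The wrap-around intuition behind the geometric view can be sketched in a few lines. This is our own toy example over the integers modulo 2^k with illustrative parameters, not the paper's protocol:

```python
# Toy sketch (ours, not the paper's protocol): additive shares (x0, x1) of
# x = (x0 + x1) mod 2^k are a point in the plane. Whether the point lies
# above or below the line x0 + x1 = 2^k (the "wrap") is exactly what
# comparison and truncation protocols need to determine.
k, f = 8, 3                      # bit-width and truncation amount (toy values)
M, Mf = 1 << k, 1 << (k - f)

def wraps(x0, x1):
    # Geometric region test: which side of the line x0 + x1 = 2^k?
    return int(x0 + x1 >= M)

x = 200
x0 = 150
x1 = (x - x0) % M                # the point (150, 50) encodes the secret 200
w = wraps(x0, x1)

# Locally truncating each share differs from truncating the secret by the
# wrap term and a one-bit carry c -- the "one-bit error" of fast truncation.
local = ((x0 >> f) + (x1 >> f)) % Mf
true_trunc = (x >> f) % Mf
c = (true_trunc - local + w * Mf) % Mf
print(w, c)                      # the carry c is always 0 or 1
assert c in (0, 1)
```

Once the wrap is known from the region test, local truncation of the shares is correct up to the unavoidable one-bit carry, which is the error bound the truncation protocols in this line of work accept.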
Neo: Lattice-based folding scheme for CCS over small fields and pay-per-bit commitments
This paper introduces Neo, a new lattice-based folding scheme for CCS, an NP-complete relation that generalizes R1CS, Plonkish, and AIR. Neo's folding scheme can be viewed as adapting the folding scheme in HyperNova (CRYPTO'24), which assumes elliptic-curve based linearly homomorphic commitments, to the lattice setting. Unlike HyperNova, Neo can use “small” prime fields (e.g., over the Goldilocks prime). Additionally, Neo provides plausible post-quantum security.
Prior to Neo, folding schemes in the lattice setting, notably LatticeFold (ePrint 2024/257), worked with constraint systems defined over a cyclotomic polynomial ring. This structure allows packing a fixed batch of constraint systems over a small prime field into a single constraint system over a polynomial ring. However, it introduces significant overheads, both for committing to witnesses (e.g., the commitment scheme cannot take advantage of the bit-width of values) and within the folding protocol itself (e.g., the sum-check protocol is run over cyclotomic polynomial rings). Additionally, the required ring structure places restrictions on the choice of primes (e.g., LatticeFold is not compatible with the Goldilocks field).
Neo addresses these problems by drawing inspiration from both HyperNova and LatticeFold. A key contribution is a folding-friendly instantiation of Ajtai's commitments with "pay-per-bit" commitment costs, i.e., the commitment cost scales with the bit-width of the scalars (e.g., committing to a vector of bits is cheaper than committing to a vector of wider values). This scheme commits to vectors over a small prime field. It does so by transforming the provided vector into a matrix and committing to that matrix. We prove that this commitment scheme provides the desired linear homomorphism for building a folding scheme. Additionally, like HyperNova, Neo runs a single invocation of the sum-check protocol, whereas in HyperNova it is over the scalar field of an elliptic curve and in Neo it is over an extension of a small prime field.
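The linear homomorphism that makes Ajtai-style commitments usable for folding can be sketched directly; the matrix, modulus, and dimensions below are toy choices of ours, not Neo's parameters:

```python
# Minimal sketch of the linear homomorphism behind Ajtai-style commitments,
# com(x) = A*x mod q, which is what a folding scheme needs:
# com(x) + com(y) = com(x + y). Toy parameters, not Neo's.
import random

random.seed(7)
q, rows, n = 257, 4, 8

A = [[random.randrange(q) for _ in range(n)] for _ in range(rows)]

def com(x):
    # One inner product per output row, all arithmetic mod q
    return [sum(a * v for a, v in zip(row, x)) % q for row in A]

x = [random.randrange(2) for _ in range(n)]   # a vector of bits ...
y = [random.randrange(q) for _ in range(n)]   # ... vs. full-range scalars

folded = [(u + v) % q for u, v in zip(com(x), com(y))]
print(folded == com([(a + b) % q for a, b in zip(x, y)]))   # True: com is linear
```

The "pay-per-bit" idea layers on top of this: narrow entries (like the bit vector x above) can be packed more densely into the committed matrix than full-width scalars, so the commitment cost tracks the bit-width.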
Anamorphic-Resistant Encryption; Or Why the Encryption Debate is Still Alive
Ever since the introduction of encryption, society has been divided over whether the government (or law-enforcement agencies) should have the capability to decrypt private messages of its citizens (with or without a warrant). From a technical viewpoint, the folklore belief is that semantic security always enables some form of steganography. Thus, adding backdoors to semantically secure schemes is pointless: it only weakens the security of the “good guys”, while the “bad guys” can easily circumvent censorship, even if forced to hand over their decryption keys.
In this paper we put a dent in this folklore. We formalize three worlds: Dictatoria (“dictator wins”: no convenient steganography, no user cooperation needed), Warrantland (“checks-and-balances”: no convenient steganography, but the user's cooperation is needed) and Privatopia (“privacy wins”: built-in, high-rate steganography, even when giving away the decryption key). We give strong evidence that all these worlds are possible, thus reopening the encryption debate on a technical level.
Our main novelty is the definition and design of special encryption schemes we call anamorphic-resistant (AR). In contrast to the so-called “anamorphic schemes” studied in the literature, which form the basis of Privatopia, any attempt to communicate steganographically over an AR-encryption scheme will be either impossible or extremely slow (depending on the definitional details).
Tight Lower Bounds and New Upper Bounds For Evolving CDS
Komargodski et al. defined evolving secret-sharing schemes with an unbounded number of parties. In this model, parties arrive one after the other, and the number of parties that will arrive is not known in advance.
Another cryptographic primitive related to secret sharing is conditional disclosure of secrets (CDS) protocols, defined by Gertner et al.
A CDS protocol for a Boolean function involves several servers and a referee. Each server holds a common secret, a common random string, and a private input; using these, each server locally computes one message and sends it to the referee. The referee, knowing the inputs and the messages, should be able to compute the secret if the function evaluates to 1 on the inputs; otherwise, the referee should learn no information about the secret. In a sense, this is a non-monotone version of secret-sharing schemes.
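For concreteness, here is a folklore two-server CDS for the equality predicate, as our own toy instance of the template above (the field size and variable names are illustrative, not from the paper):

```python
# Folklore two-server CDS for the equality predicate f(x1, x2) = [x1 == x2]
# over a prime field -- a toy instance, not a construction from the paper.
# Both servers share the randomness (r, z) and the secret; the referee
# sees only the two messages.
import random

random.seed(3)
p = 101                                # toy prime field

def servers(x1, x2, secret):
    r = random.randrange(p)            # common random string (r, z)
    z = random.randrange(p)
    m1 = (r * x1 + z) % p              # server 1's single message
    m2 = (r * x2 + z + secret) % p     # server 2's single message
    return m1, m2

def referee(m1, m2):
    return (m2 - m1) % p               # equals secret + r*(x2 - x1)

s = 42
print(referee(*servers(7, 7, s)))      # equal inputs: the referee recovers 42
# For unequal inputs, the uniform term r*(x2 - x1) masks s completely.
```

When x1 = x2, the mask cancels and the referee outputs the secret regardless of the randomness; when x1 ≠ x2, the output is uniform over the field, so the secret is perfectly hidden — exactly the two cases of the CDS definition.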
Peter (ISC '23) defines evolving CDS, implementing in particular evolving predicates (he handles somewhat more general predicates over larger input domains, but generalizing to other input domains is not hard, and we focus on Boolean predicates). In this setting, the number of parties is unbounded and the parties arrive in sequential order. Each party, upon arrival, sends one random message to the referee that depends on its input, the secret, and a common randomness. He also devises evolving CDS protocols for a general evolving predicate via a black-box reduction to an evolving secret-sharing scheme for a related access structure.
He analyzes this construction for general access structures, as well as for other classes of functions, which yields his message-complexity bound for the worst predicates.
In this work we provide new upper and lower bounds for evolving CDS.
Observing that CDS has the potential for improved efficiency, as it is not restricted to monotone operations, we devise a new evolving general CDS construction.
In particular, our construction relies on representing the evolving predicate via an infinite branching program (LINBP), generalizing the monotone infinite-branching-program-based construction of evolving secret-sharing schemes by Alon et al.
We obtain nontrivial message complexity for branching programs of larger width than in Alon et al.'s construction (even when restricting attention to monotone predicates), as well as improving on Peter's construction for certain (but not all) parameter regimes.
Indeed, we prove that our construction, as well as Peter's (ISC '23), is tight for a certain evolving predicate: as for evolving secret sharing, (so-called strong) evolving CDS also requires share complexity of this order. This is unlike the state of affairs in the finite setting, where the best known CDS constructions are much more efficient than the best known secret-sharing schemes (for the hardest monotone functions).
The latter bound is proved via an adaptation of Mazor's lower bound (in turn based on Csirmaz's lower-bounding technique) to the CDS setting. It relies on a reduction from secret sharing for a certain class of infinite access structures, the so-called partite access structures, to evolving CDS for a related (not necessarily monotone) function. Then, a partite evolving access structure is crafted using a Csirmaz-type argument.
A Note on Adaptive Security in Hierarchical Identity-Based Encryption
We present the first construction of adaptively secure HIBE that does not rely on bilinear pairings or random-oracle heuristics. Notably, we design an adaptively secure HIBE from any selectively secure IBE system in the standard model. Combined with known results, this gives the first adaptively secure HIBE system from a wide variety of standard assumptions such as CDH/Factoring/LWE/LPN. We also extend our adaptively secure HIBE system to satisfy full anonymity, giving the first adaptively secure anonymous HIBE under the CDH/LWE assumption. All our HIBE systems support unbounded-length identities as well as an unbounded number of recursive delegation operations.
zk-promises: Anonymous Moderation, Reputation, and Blocking from Anonymous Credentials with Callbacks
Anonymity is essential for free speech and expressing dissent, but platform moderators need ways to police bad actors. For anonymous clients, this may involve banning their accounts, docking their reputation, or updating their state in a complex access control scheme. Frequently, these operations happen asynchronously when some violation, e.g., a forum post, is found well after the offending action occurred. Malicious clients, naturally, wish to evade this asynchronous negative feedback. This raises a challenge: how can multiple parties interact with private state stored by an anonymous client while ensuring state integrity and supporting oblivious updates?
We propose zk-promises, a framework supporting stateful anonymous credentials where the state machines are Turing-complete and support asynchronous callbacks. Client state is stored in what we call a zk-object held by the client, zero-knowledge proofs ensure the object can only be updated as programmed, and callbacks allow third-party updates even for anonymous clients, e.g., for downvotes or banning. Clients scan for callbacks periodically and update their state. When clients authenticate, they anonymously assert some predicate on their state and that they have scanned recently (e.g., within the past 24 hours).
zk-promises allows us to build a privacy-preserving account model. State that would normally be stored on a trusted server can be privately outsourced to the client while preserving the server’s ability to update the account.
To demonstrate the feasibility of our approach, we design, implement, and benchmark an anonymous reputation system with better-than-state-of-the-art performance and features, supporting asynchronous reputation updates, banning, and reputation-dependent rate limiting to better protect against Sybil attacks.
ETK: External-Operations TreeKEM and the Security of MLS in RFC 9420
The Messaging Layer Security protocol MLS is standardized in IETF's RFC 9420 and allows a group of parties to securely establish and evolve group keys even if the servers are malicious. Its core mechanism is based on the TreeKEM protocol, which gained many additional features and modifications during the development of the MLS standard. Over the last few years, several partial security analyses of incomplete drafts of the protocol have appeared. One of the major additions to the TreeKEM design in MLS RFC 9420 (the final version of the standard) are the external operations, i.e., external commits and proposals, which interact deeply with the core TreeKEM protocol. These operations have not been considered in any previous security analysis, leaving their impact on the protocol's overall security unclear.
In this work, we formalize ETK, External-Operations TreeKEM, which includes external commits and proposals. We develop a corresponding ideal functionality and prove that ETK indeed realizes it.
Our work is the first cryptographic analysis that considers both the final changes to the standard's version of TreeKEM and the external proposals and external commits. Compared to previous works that considered MLS draft versions, our ETK protocol is by far the closest to the final MLS RFC 9420 standard. Our analysis implies that the core of MLS's TreeKEM variant as defined in RFC 9420 is an ETK protocol that realizes our ideal functionality when used with an SUF-CMA-secure signature scheme, such as the IETF variant of Ed25519. We show that, contrary to previous claims, MLS does not realize the functionality of [Crypto2022] when used with signature schemes that only guarantee EUF-CMA, such as ECDSA.
Moreover, we show that the security of the protocol can be further strengthened by adding a functionality to insert PSKs, allowing another form of healing, and we give a corresponding construction, ETK-PSK, with its ideal functionality.
Multi-Client Functional Encryption with Public Inputs and Strong Security
Recent years have witnessed significant development of functional encryption (FE) in the multi-user setting, particularly multi-client functional encryption (MCFE). The challenge becomes more important when combined with access control, such as attribute-based encryption (ABE), which was covered neither syntactically by the public-key FE framework nor semantically by the secret-key MCFE framework. On the other hand, as with other complex primitives, many works have studied the admissibility of adversaries to ensure that the security model encompasses all realistic attacks. Our contributions are as follows.
1. At a conceptual level, by adding a public input to FE/MCFE, we cover many previous primitives, notably attribute-based function classes. Furthermore, with the strongest admissibility for inner-product functionality, our framework is quite versatile, as it encrypts multiple sub-vectors, allows repetitions and corruptions, and eventually also encompasses public-key FE and classical ABE, bridging the setting of MCFE with the setting of FE and ABE.
2. Finally, we propose an MCFE with public inputs for the class of functions that combines inner products (on private inputs) and attribute-based access control (on public inputs) for LSSS policies. We achieve the first AB-MCFE for inner products with strong admissibility (from Nguyen et al., ACNS'23) and with adaptive security.
In the end, our concrete MCFE leads to MIFE for inner products, public-key single-input inner-product FE with LSSS key-policy, and KP-ABE for LSSS, all with adaptive security. Previous AB-MCFE constructions are either restricted to weaker admissibility (Nguyen et al., ASIACRYPT'22) or consider the slightly larger functionality of attribute-weighted sums but achieve only selective security (Agrawal et al., CRYPTO'23).
Dynamic Decentralized Functional Encryption: Generic Constructions with Strong Security
Dynamic Decentralized Functional Encryption (DDFE) is a generalization of functional encryption which allows multiple users to join the system dynamically, without interaction and without relying on a trusted third party. Users can independently encrypt their inputs for a joint evaluation under functions embedded in functional decryption keys, and they retain control over these functions, as all of them have to contribute to the generation of the functional keys.
In this work, we present new generic compilers which, when instantiated with existing schemes from the literature, improve over the state-of-the-art in terms of security, computational assumptions and functionality. Specifically, we obtain the first adaptively secure DDFE schemes for inner products in both the standard and the stronger function-hiding setting which guarantees privacy not only for messages but also for the evaluated functions. Furthermore, we present the first DDFE for inner products whose security can be proven under the LWE assumption in the standard model. Finally, we give the first construction of a DDFE for the attribute-weighted sums functionality with attribute-based access control (with some limitations). All prior constructions guarantee only selective security, rely on group-based assumptions on pairings, and cannot provide access control.
Protection Against Subversion Corruptions via Reverse Firewalls in the Plain Universal Composability Framework
While many modern cryptographic primitives have stood the test of time, attackers have started to expand beyond classic cryptanalysis by targeting implementations. Subversion attacks, where the attacker replaces the implementation of the cryptographic primitive to leak sensitive information about the user during a protocol execution, are among the most dangerous of such attacks. The Snowden revelations have shown that these attacks are deployed by intelligence services. A very promising countermeasure uses cryptographic reverse firewalls that actively remove the covert channel leaking the secret. Chakraborty et al. (EUROCRYPT'22) presented the first model of such firewalls in the universal composability (UC) framework. However, using such a firewall also provides a possible new target for the attacker, and in the case that an honest party uses a corrupted firewall, they were not able to prove any security guarantees. Furthermore, their model is quite complex and does not fit into the plain UC model, as they restrict the environment. Hence, the authors needed to reprove fundamental theorems such as the composition theorem as well as the security of the underlying protocol. In this paper, we consider a slightly different model of subversion attacks that replace the used randomness, inspired by Dodis et al. (CRYPTO'16), which captures all known subversion attacks. Considering these realistic attacks allows us to use existing UC-secure protocols without the need to reprove their security. We also introduce additional notions of firewall properties, allowing us to reason about corrupted firewalls while maintaining strong security guarantees. To show the versatility of our model, we apply it to commitments and oblivious transfer. This demonstrates the usefulness of our plain UC model, as the only previously known subversion-resilient OT, recently provided by Chakraborty et al. (ASIACRYPT'24), is much more complicated and involved, and was also proven in the non-plain UC model.
Structure-Preserving Compressing Primitives: Vector Commitments and Accumulators
Compressing primitives, such as accumulators and vector commitments, allow one to represent large data sets with a compact, ideally constant-sized value. Moreover, they support operations like proving membership or non-membership with minimal, ideally also constant-sized, storage and communication overhead. In recent years, these primitives have found numerous practical applications, with many constructions based on various hardness assumptions. So far, however, it has been elusive to construct these primitives in a strictly structure-preserving setting, i.e., in a bilinear group in a way that messages, commitments and openings are all elements of the two source groups. Interestingly, backed by existing impossibility results, not even conventional commitments with such constraints are known in this setting. However, in many practical applications it would be convenient or even required to be structure-preserving, e.g., to commit to or accumulate group elements. In this paper we investigate whether strictly structure-preserving compressing primitives can be realized. We close this gap by presenting the first strictly structure-preserving commitment that is shrinking (and in particular constant-size). We circumvent existing impossibility results by employing a more structured message space, i.e., a variant of the Diffie-Hellman message space. Our main results are constructions of structure-preserving vector commitments as well as structure-preserving accumulators. We first discuss generic constructions and then present concrete constructions under the Diffie-Hellman Exponent assumption. To demonstrate the usefulness of our constructions, we discuss various applications.
Network agnostic consensus in constant time
Network agnostic protocols (Blum, Katz, Loss, TCC '19) are consensus or MPC protocols that strike a balance between purely synchronous and purely asynchronous protocols. Given suitably constrained corruption thresholds, they have the unique property of remaining secure against an adversary that either (1) corrupts up to the larger threshold of parties in a synchronous execution, where all messages are delivered within a known bound, or (2) corrupts up to the smaller threshold in an asynchronous execution, where messages can be delayed arbitrarily. All existing network agnostic protocols follow a design pattern which first attempts to run a synchronous path, and then switches to an asynchronous path as a fallback option if the synchronous path times out because the network is asynchronous. Unfortunately, this timeout has to be set conservatively to account for the possibility that the synchronous path might take an unusually long time even when the network is synchronous. As a result, for various basic tasks including Byzantine Agreement and MPC, no existing network agnostic protocol is able to terminate for all honest parties within constant expected time in \emph{all possible executions}.
In this work, we introduce a new paradigm for constructing network agnostic consensus that, for the first time, overcomes this barrier. Using this new design pattern, we first present simple protocols for reliable broadcast (RB) and binary agreement (BA) that are \emph{responsive} when no more than the smaller threshold of parties is corrupted and run in expected constant time regardless of the network conditions.\footnote{The latter was already possible for RB, which by design requires only constantly many rounds, but is not guaranteed to terminate for all parties.} We then extend our results to asynchronous common subset (ACS) and MPC. Notably, our approach \emph{reverses the order} of the synchronous and asynchronous paths by designing protocols that are first and foremost asynchronous and only fall back to the synchronous execution path when more than the smaller threshold of parties is corrupted.
Significantly Improved Cryptanalysis of Salsa20 With Two-Round Criteria
Over the past decade and a half, cryptanalytic techniques for Salsa20 have been increasingly refined, largely following the overarching concept of Probabilistically Neutral Bits (PNBs) by Aumasson et al. (FSE 2008). In this paper, we present a novel criterion for choosing key pairs using certain 2-round criteria and connect it with clever tweaks of existing techniques related to Probabilistically Independent bits (earlier used for ARX ciphers, but not for Salsa20) and well-studied PNBs. Through a detailed examination of the matrix after the initial rounds of Salsa20, we introduce the first-ever cryptanalysis of Salsa20 exceeding the previously analyzed number of rounds. Further, the sharpness of our attack is highlighted by a significant improvement over the best-known result of Coutinho et al. (Journal of Cryptology, 2023) in both time and data. Here, the refinements related to backward biases for PNBs are also instrumental in achieving the improvements. We also provide instances of how these ideas improve the cryptanalysis of reduced-key-size versions. In the process, a few critical points are raised about some existing state-of-the-art works in this direction, and in those cases their estimates of time and data are revisited and corrected.
Verifiable random function from the Deuring correspondence and higher dimensional isogenies
In this paper, we introduce a new Verifiable Unpredictable Function (VUF) protocol based on isogenies between supersingular curves.
The most interesting application of this VUF is a post-quantum Verifiable Random Function (VRF). The main advantage of this new scheme is its compactness, with a combined public-key and proof size of roughly 450 bytes, which is orders of magnitude smaller than other general-purpose post-quantum VRF constructions. This scheme is also the first post-quantum VRF satisfying unconditional uniqueness. We show that the scheme is practical by providing a first non-optimized C implementation that runs in roughly 20 ms for verification and 175 ms for evaluation.
The function at the heart of our construction is the one that computes the codomain of an isogeny of big prime degree from its kernel. The evaluation can be performed efficiently with knowledge of the endomorphism ring, using a new ideal-to-isogeny algorithm introduced recently by Basso, Dartois, De Feo, Leroux, Maino, Pope, Robert and Wesolowski, which uses computations of higher-dimensional isogenies between elliptic products to effectively translate any ideal through the Deuring correspondence. On the other hand, without knowledge of the endomorphism ring, this computation appears to be hard. The security of our scheme holds under a new assumption called the one-more isogeny problem (OMIP).
Another application of this VUF is the first hash-and-sign signature based on isogenies in the standard model. While we do not expect the signature itself to outperform the recent variants of SQIsign, it remains very competitive in both compactness and efficiency while providing a new framework for building isogeny-based signatures that could lead to interesting new applications.
We also introduce several new algorithms for the effective Deuring correspondence. In particular, we introduce an algorithm that translates an ideal whose norm is a large power of a small prime into the corresponding isogeny, using isogenies between higher-dimensional abelian varieties as a tool.
This algorithm can be used to improve the SQIsign signature scheme.
How to Securely Implement Cryptography in Deep Neural Networks
The wide adoption of deep neural networks (DNNs) raises the question of how we can equip them with a desired cryptographic functionality (e.g., to decrypt an encrypted input, to verify that this input is authorized, or to hide a secure watermark in the output). The problem is that cryptographic primitives are typically designed to run on digital computers that use Boolean gates to map sequences of bits to sequences of bits, whereas DNNs are a special type of analog computer that uses linear mappings and ReLUs to map vectors of real numbers to vectors of real numbers. This discrepancy between the discrete and continuous computational models raises the question of what is the best way to implement standard cryptographic primitives as DNNs, and whether DNN implementations of secure cryptosystems remain secure in the new setting, in which an attacker can ask the DNN to process a message whose "bits" are arbitrary real numbers.
In this paper we lay the foundations of this new theory, defining the meaning of correctness and security for implementations of cryptographic primitives as ReLU-based DNNs. We then show that the natural implementations of block ciphers as DNNs can be broken in linear time by using such nonstandard inputs. We tested our attack on full-round AES-128 and succeeded in finding randomly chosen keys. Finally, we develop a new method for implementing any desired cryptographic functionality as a standard ReLU-based DNN in a provably secure and correct way. Our protective technique has very low overhead (a constant number of additional layers and a linear number of additional neurons), and is completely practical.
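As a toy illustration of why real-valued "bits" are dangerous, and of the flavor of defense such protective layers rely on, the following sketch (our own simplification, not the paper's construction) builds a bit-snapping layer out of nothing but affine maps and ReLUs:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hard_step(x, steepness=1000.0):
    # Approximate a 0/1 step using only affine maps and ReLUs:
    # clamp(t) = ReLU(t) - ReLU(t - 1) confines t to [0, 1], and a steep
    # affine pre-scaling turns the clamp into a near-threshold at x = 0.5.
    t = steepness * (x - 0.5)
    return relu(t) - relu(t - 1.0)

# "Bits" arriving as arbitrary real numbers are snapped to {0, 1}
# before they reach the cryptographic layers of the network.
noisy_bits = np.array([-3.2, 0.1, 0.499, 0.501, 0.9, 7.0])
print(hard_step(noisy_bits))  # values at or very near 0 or 1
```

Prepending such a layer pushes every attacker-supplied real number toward a valid bit; the provably secure method in the paper is more involved than this sketch.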
On Algebraic Homomorphic Encryption and its Applications to Doubly-Efficient PIR
The Doubly-Efficient Private Information Retrieval (DEPIR) protocol of Lin, Mook, and Wichs (STOC'23) relies on a Homomorphic Encryption (HE) scheme that is algebraic, i.e., whose ciphertext space has a ring structure that matches the homomorphic operations. While early HE schemes had this property, modern schemes introduced techniques to manage noise growth. This made the resulting schemes much more efficient, but also destroyed the algebraic property.
In this work, we study the properties of algebraic HE and try to make progress on this problem. We first prove a lower bound of for the ciphertext ring size of algebraic HE schemes (in terms of the depth of the evaluated circuit), which demonstrates a gap between optimal algebraic HE and existing schemes, which have a ciphertext ring size of . As we are unable to bridge this gap directly, we instead slightly relax the notion of being algebraic. This allows us to construct a practically more efficient relaxed-algebraic HE scheme. We then show that this also leads to a more efficient instantiation and implementation of DEPIR.
We experimentally demonstrate run-time improvements of more than x and a reduction in memory queries of more than x compared to prior work. Notably, our relaxed-algebraic HE scheme relies on a new variant of the Ring Learning with Errors (RLWE) problem that we call -CRT RLWE. We give a formal security reduction to standard RLWE, and estimate its concrete security. Both the -CRT RLWE problem and the techniques used for the reduction may be of independent interest.
Tighter Security Notions for a Modular Approach to Private Circuits
To counteract side-channel attacks, a masking scheme splits each intermediate variable into shares and transforms each elementary operation (e.g., field addition or multiplication) into a masked counterpart called a gadget, such that intrinsic noise in the leakages renders secret recovery infeasible in practice. A simple and efficient security notion is the probing model, which ensures that any shares are distributed independently of the secret input. One requirement of the probing model is that the noise in the leakages should increase with the number of shares, which largely restricts side-channel security in low-noise scenarios. Another security notion for masking, called the random probing model, allows each variable to leak with some probability . While this model reflects the physical reality of side channels much better, it brings significant overhead. At Crypto 2018, Ananth et al. proposed a modular approach that can provide random probing security for any security level by recursively expanding small base gadgets with shares, such that the tolerable leakage probability decreases with while the security increases exponentially with the recursion depth of the expansion. At Crypto 2020, Belaïd et al. then provided a formal security definition called Random Probing Expandability (RPE) and an explicit framework using the modular approach to construct masking schemes.
In this paper, we investigate how to tighten the RPE definition by allowing dependent failure probabilities of multiple inputs, which results in a new definition called related RPE. It can be directly used for the expansion of multiplication gates, reducing the complexity of the base multiplication gadget from (as proposed at Asiacrypt 2021) to while maintaining the same security level. Furthermore, we describe a method to expand arbitrary gates (rather than only multiplication) with the related RPE gadgets.
Besides, we introduce another new RPE definition, called multiple-inputs RPE, used for the expansion of multiple-input gates composed of arbitrary gates. Utilizing these methods, we reduce the complexity of the 3-share circuit compiler to , where is the size of the unprotected circuit and the protection failure probability of the global circuit is . In comparison, the complexity of the state-of-the-art work, proposed at Eurocrypt 2021, is for the same value of . Additionally, we provide the construction of a 5-share circuit compiler with complexity .
A reduction from Hawk to the principal ideal problem in a quaternion algebra
In this article we present a non-uniform reduction from rank-2 module-LIP over Complex Multiplication fields to a variant of the Principal Ideal Problem in a fitting quaternion algebra. This reduction is classical, deterministic, and polynomial-time in the size of the inputs. The quaternion algebra in which we need to solve the variant of the principal ideal problem depends on the parameters of the module-LIP problem, but not on the problem's instance. Our reduction requires the knowledge of some special elements of this quaternion algebra, which is why it is non-uniform.
In some particular cases, these elements can be computed in polynomial time, making the reduction uniform. This is the case for the Hawk signature scheme: we show that breaking Hawk is no harder than solving a variant of the principal ideal problem in a fixed quaternion algebra (and this reduction is uniform).
Verifiable Computation for Approximate Homomorphic Encryption Schemes
We address the problem of proving the validity of computation on ciphertexts of homomorphic encryption (HE) schemes, a feature that enables outsourcing of data and computation while ensuring both data privacy and integrity.
We propose a new solution that handles computations in RingLWE-based schemes, particularly the CKKS scheme for approximate arithmetic. Our approach efficiently handles ciphertext arithmetic in the polynomial ring without emulation overhead, and manages ciphertext maintenance operations, such as modulus switching, key switching, and rescaling, at small cost.
Our main result is a succinct argument that efficiently handles arithmetic computations and range checks over the ring . To build this argument system, we construct new polynomial interactive oracle proofs (PIOPs) and multilinear polynomial commitments supporting polynomials over , unlike prior work, which focused on finite fields. We validate the concrete complexity of our approach through implementation and experimentation. Compared to the current state of the art on verifiable HE for RNS schemes, we achieve similar performance for small circuits while efficiently scaling to larger ones, which was a major challenge for previous constructions as it requires verifying procedures such as relinearization.
Generic Anamorphic Encryption, Revisited: New Limitations and Constructions
The notion of Anamorphic Encryption (Persiano et al., Eurocrypt 2022) aims at establishing private communication against an adversary who can access secret decryption keys and influence the chosen messages. Persiano et al. gave a simple, black-box, rejection-sampling-based technique to send anamorphic bits using any IND-CPA-secure scheme as the underlying PKE.
In this paper, however, we provide evidence that their solution is not as general as claimed: indeed, there exists a (contrived yet secure) PKE which leads to insecure anamorphic instantiations. In fact, our result implies that such stateless black-box realizations of AE are impossible to achieve, unless weaker notions are targeted or extra assumptions are made on the PKE. Even worse, this holds true even if one resorts to powerful non-black-box techniques, such as NIZKs or garbling.
From a constructive perspective, we shed light on the required assumptions. Specifically, we show that one can bypass (to some extent) our impossibility by either considering a weaker (but meaningful) notion of AE or by assuming the underlying PKE to (always) produce high min-entropy ciphertexts.
Finally, we prove that, for the case of Fully-Asymmetric AE, such tools can actually be used to overcome existing impossibility barriers.
We show how to use them to build Fully-Asymmetric AE (with a small anamorphic message space) generically from any IND-CPA-secure PKE with sufficiently high min-entropy ciphertexts.
Put together, our results provide a clearer picture of what black-box constructions can and cannot achieve.
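The rejection-sampling technique of Persiano et al., which the impossibility result above targets, can be sketched as follows (a toy model in which a stand-in plays the role of the randomized PKE; all names and parameters are illustrative):

```python
import hashlib
import hmac
import os

def anamorphic_bit(double_key: bytes, ct: bytes) -> int:
    # The covert bit is the low bit of a PRF (HMAC here) of the ciphertext,
    # recoverable only by holders of the shared "double key".
    return hmac.new(double_key, ct, hashlib.sha256).digest()[-1] & 1

def encrypt_with_covert_bit(encrypt, double_key: bytes, msg: bytes, bit: int) -> bytes:
    # Rejection sampling: re-encrypt with fresh randomness until the PRF of
    # the ciphertext encodes the desired covert bit (2 tries expected).
    while True:
        ct = encrypt(msg)
        if anamorphic_bit(double_key, ct) == bit:
            return ct

# Toy stand-in for any randomized IND-CPA encryption: the "ciphertext" is
# the message padded with fresh randomness. This is NOT secure encryption;
# it only models a ciphertext that changes on each call.
toy_encrypt = lambda m: m + os.urandom(16)

dk = b"shared-double-key"
ct = encrypt_with_covert_bit(toy_encrypt, dk, b"innocent message", 1)
assert anamorphic_bit(dk, ct) == 1
```

The counterexample PKE in the paper is contrived precisely so that this kind of resampling leaks; the sketch only shows why the technique works for "well-behaved" randomized schemes.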
Crypto Dark Matter on the Torus: Oblivious PRFs from shallow PRFs and FHE
Partially Oblivious Pseudorandom Functions (POPRFs) are 2-party protocols that allow a client to learn pseudorandom function (PRF) evaluations on inputs of its choice from a server. The client submits two inputs, one public and one private. The security properties ensure that the server cannot learn the private input, and the client cannot learn more than one evaluation per POPRF query. POPRFs have many applications, including password-based key exchange and privacy-preserving authentication mechanisms. However, most constructions are based on classical assumptions, and those with post-quantum security suffer from large efficiency drawbacks.
In this work, we construct a novel POPRF from lattice assumptions and the "Crypto Dark Matter" PRF candidate (TCC'18) in the random oracle model. At a conceptual level, our scheme exploits the alignment of this family of PRF candidates, which relies on mixed-modulus computations, with programmable bootstrapping in the torus fully homomorphic encryption scheme (TFHE). We show that our construction achieves malicious client security based on circuit-private FHE, and client privacy from the semantic security of the FHE scheme. We further explore a heuristic approach to extend our scheme to support verifiability, based on the difficulty of computing cheating circuits in low depth. This would yield a verifiable (P)OPRF. We provide a proof-of-concept implementation and preliminary benchmarks of our construction. For the core online OPRF functionality, we require an amortised 10.0 KB of communication per evaluation and a one-time per-client setup communication of 2.5 MB.
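The mixed-modulus structure exploited above can be sketched with the basic Mod-2/Mod-3 weak-PRF candidate from the "Crypto Dark Matter" family, here in a toy NumPy version with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 256  # toy dimensions, chosen for illustration only

# Secret key: a random binary matrix A in Z_2^{m x n}.
A = rng.integers(0, 2, size=(m, n), dtype=np.int64)

def dark_matter_prf(A, x):
    # Mixed-modulus evaluation: a linear map over Z_2 followed by a sum
    # over the non-matching modulus 3 -- the switch between the two moduli
    # is the candidate's only source of non-linearity.
    y = (A @ x) % 2          # mod-2 layer
    return int(y.sum() % 3)  # mod-3 layer

x = rng.integers(0, 2, size=n, dtype=np.int64)
print(dark_matter_prf(A, x))  # an element of Z_3
```

In the POPRF, the mod-2 layer aligns with TFHE's native arithmetic while programmable bootstrapping evaluates the modulus switch homomorphically; this sketch only shows the plaintext function.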
S2DV: Scalable and Secure DAO Voting
A Decentralized Autonomous Organization (DAO) operates without a central entity, being owned and governed collectively by its members. In such an organization, decisions on routine tasks are carried out automatically through smart contracts, while members vote on unforeseen issues. Scalability in decision-making through voting on proposals is essential to accommodate a growing number of members without sacrificing security. This paper addresses this challenge by introducing a scalable and secure DAO voting system that ensures security through Groth16 zk-SNARKs and the exponential ElGamal encryption algorithm, while achieving scalability by verifiably delegating heavy computations to untrusted entities. Off-chain computation over the exponential ElGamal homomorphic encryption reduces the computational cost on the blockchain, while Groth16 proofs maintain robust off-chain calculation without revealing any further details. Specifically, the Groth16 proof guarantees that (i) the encrypted votes accurately reflect the voter's voting power, ensuring no unauthorized weight manipulation; (ii) only valid non-negative vote values are encrypted, preventing unintended or malicious vote tampering; and (iii) the homomorphic summation is performed correctly. The implementation shows that the proofs are verified remarkably fast, making the S2DV protocol highly suitable for scalable DAO voting, while preserving the security of the election.
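The additively homomorphic tallying at the core of such a system can be sketched with exponential ElGamal over a toy group (a real deployment would use a large prime-order group and attach a Groth16 proof to each ciphertext):

```python
import random

# Toy exponential ElGamal over Z_23* (illustration only).
p, g = 23, 5                      # 5 generates Z_23* (order 22)
sk = random.randrange(1, p - 1)   # secret tallying key
pk = pow(g, sk, p)                # public encryption key

def enc(vote):
    r = random.randrange(1, p - 1)
    # The vote lives in the exponent, so ciphertexts add homomorphically.
    return (pow(g, r, p), pow(g, vote, p) * pow(pk, r, p) % p)

def add(c1, c2):
    # Component-wise product = encryption of the sum of the votes.
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

def dec(c):
    gm = c[1] * pow(c[0], (p - 1 - sk) % (p - 1), p) % p
    # Recover the tally by brute-force discrete log (tallies are small).
    return next(m for m in range(p) if pow(g, m, p) == gm)

tally = enc(0)
for vote in [1, 0, 1, 1]:
    tally = add(tally, enc(vote))
assert dec(tally) == 3
```

Summing ciphertexts off-chain and decrypting only the final tally is what keeps the on-chain cost independent of the number of voters; the SNARK then certifies that the off-chain summation was performed correctly.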
Search and Verify Isogeny-Based Quantum Money with Rational Points
Quantum money is a cryptographic application of the quantum no-cloning theorem. It has recently been instantiated by Montgomery and Sharif (Asiacrypt'24) from class group actions on elliptic curves. In this work, we propose a novel method to forge a quantum banknote by leveraging the efficiency of evaluating division polynomials with the coordinates of rational points, offering a more efficient alternative to a brute-force attack. Since our attack still requires exponential time, it remains impractical to forge a quantum banknote. Interestingly, due to the inherent properties of quantum money, our attack method also yields a more efficient verification procedure. Our algorithm leverages the properties of quadratic twists to utilize rational points in verifying the cardinality of the superposition of elliptic curves. We expect this approach to contribute to future research on elliptic-curve-based quantum cryptography.
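The idea of checking a curve's cardinality via rational points can be sketched as follows (a toy curve over a tiny field, with the true order obtained by enumeration; the paper's verification additionally exploits quadratic twists and division polynomials, neither of which is shown here):

```python
# Toy check that a claimed group order N of E/F_p annihilates the curve:
# N*P = O must hold for every rational point P.
p, a, b = 97, 2, 3  # illustrative curve y^2 = x^3 + 2x + 3 over F_97

def add(P, Q):
    # Affine short-Weierstrass addition; None represents the point at infinity.
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
    if P == Q:
        lam = (3 * P[0] ** 2 + a) * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam ** 2 - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def mul(k, P):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P); k >>= 1
    return R

# Enumerate the group to get the true order N (feasible only for a toy p).
points = [None] + [(x, y) for x in range(p) for y in range(p)
                   if (y * y - x ** 3 - a * x - b) % p == 0]
N = len(points)
assert all(mul(N, P) is None for P in points[1:])    # N kills every point
assert abs(N - (p + 1)) <= int(2 * p ** 0.5) + 1     # Hasse bound sanity check
```

For cryptographic sizes the order cannot be enumerated, which is where division polynomials and the twist relation (the curve and its quadratic twist have orders summing to 2(p+1)) make the check efficient.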
Transparent SNARKs over Galois Rings
Recently, there has been a growing need for SNARKs to operate over a broader range of algebraic structures, and one important such structure is the Galois ring. We present transparent SNARK schemes over arbitrary Galois rings. Compared with the Rinocchio scheme of Ganesh et al. (J Cryptol 2023), our SNARK schemes do not require a trusted third party to establish a structured reference string (SRS).
In this paper, we construct an expander code over arbitrary Galois rings, which can be encoded in time. Using this expander code, we then extend the Brakedown commitment scheme of Golovnev et al. (CRYPTO 2023) to Galois rings. By combining it with the Libra framework of Xie et al. (CRYPTO 2019), we obtain a transparent SNARK for log-space uniform circuits over Galois rings, achieving prover time, proof size, and verifier time. By combining it with HyperPlonk of Chen et al. (EUROCRYPT 2023), we obtain a transparent SNARK for NP circuits over Galois rings, with prover time, proof size, and verifier time.
Grafting: Complementing RNS in CKKS
The Residue Number System (RNS) variant of the Cheon-Kim-Kim-Song (CKKS) scheme (SAC 2018) has been widely implemented due to its computational efficiency. However, state-of-the-art implementations fail to use the machine word size tightly, creating inefficiency. As rescaling moduli are chosen to be approximately equal to the scaling factors, the machine's computation budget can be wasted when the scaling factors are not close to the machine's word size.
To solve this problem, we present a novel moduli management system called Grafting, which fills the ciphertext moduli with word-sized factors while allowing arbitrary rescaling. Grafting speeds up computations, reduces memory footprints, and yields universal parameters that can be used regardless of target precision. The key ingredient of Grafting is the sprout, which can be repeatedly used as a part of the ciphertext modulus. In general, using an arbitrary ciphertext modulus requires several copies of the evaluation keys in various moduli. A sprout, however, allows rescaling by various amounts with the same keys, and can be extended to a universal sprout that accommodates rescaling by an arbitrary amount. By breaking the dependency among the rescaling amount, scaling factors, and RNS moduli, Grafting removes restrictions present in previous RNS-CKKS implementations. We also revisit polynomial approximation and multiplication techniques to reduce modulus consumption and improve performance, respectively.
We grafted a parameter preset from the HEaaN library and implemented grafted and non-grafted RNS-CKKS libraries to measure the improvements in concrete experiments. Benchmark results demonstrate the significance of Grafting, showing a speed-up of for homomorphic multiplications and for bootstrapping compared to the non-grafted variant. The ciphertext and evaluation-key sizes are reduced at a similar rate, while the precision remains the same.
Proof of Time: A Method for Verifiable Temporal Commitments Without Timestamp Disclosure
This paper introduces a cryptographic method that enables users to prove that an event occurred in the past and that a specified amount of time has since elapsed, without disclosing the exact timestamp of the event. The method leverages zero-knowledge proofs and an on-chain Incremental Merkle Tree to store hash commitments. By utilizing the Poseidon hash function and implementing zero-knowledge circuits in Noir, this approach ensures both the integrity and confidentiality of temporal information.
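An incremental Merkle tree keeps only one "frontier" node per level, which is what makes on-chain appends cheap. The following sketch uses SHA-256 as a stand-in for the SNARK-friendly Poseidon hash used on-chain (depth and leaves are illustrative):

```python
import hashlib

DEPTH = 4  # toy depth; production trees are much deeper

def H(l: bytes, r: bytes) -> bytes:
    # SHA-256 stands in for Poseidon, used in practice inside ZK circuits.
    return hashlib.sha256(l + r).digest()

# Precomputed roots of all-empty subtrees, one per level.
ZERO = [b"\x00" * 32]
for _ in range(DEPTH):
    ZERO.append(H(ZERO[-1], ZERO[-1]))

class IncrementalMerkleTree:
    """Append-only Merkle tree storing one frontier node per level."""
    def __init__(self):
        self.frontier = [None] * DEPTH
        self.count = 0
        self.root = ZERO[DEPTH]

    def append(self, leaf: bytes) -> None:
        idx, node = self.count, leaf
        for lvl in range(DEPTH):
            if idx % 2 == 0:
                # Left child: remember it, pad the right side with zeros.
                self.frontier[lvl] = node
                node = H(node, ZERO[lvl])
            else:
                # Right child: combine with the stored left sibling.
                node = H(self.frontier[lvl], node)
            idx //= 2
        self.count += 1
        self.root = node

tree = IncrementalMerkleTree()
leaves = [hashlib.sha256(bytes([i])).digest() for i in range(3)]
for leaf in leaves:
    tree.append(leaf)
print(tree.root.hex())
```

Each hash commitment is appended once; a later zero-knowledge proof can then show membership of a commitment under an old root without revealing which leaf (and hence which timestamp) it is.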
Speeding Up Multi-Scalar Multiplications for Pairing-Based zkSNARKs
Multi-scalar multiplication (MSM) is one of the core components of many zero-knowledge proof systems, and a primary performance bottleneck for proof generation in these schemes. One major strategy to accelerate MSM is precomputation, and several algorithms (e.g., Pippenger and BGMW) and their variants have been proposed in this direction. In this paper, we revisit the recent precomputation-based MSM calculation method proposed by Luo, Fu and Gong at CHES 2023 and generalize their approach. In particular, we present a general construction of optimal buckets. This generalization leads to significant performance improvements, which are verified by both theoretical analysis and experiments.