Papers updated in the last 183 days (Page 14 of 1850 results)
Haven++: Batched and Packed Dual-Threshold Asynchronous Complete Secret Sharing with Applications
Asynchronous complete secret sharing (ACSS) is a foundational primitive in the design of distributed algorithms and cryptosystems that require confidentiality. ACSS permits a dealer to distribute a secret to a collection of servers so that everyone holds shares of a polynomial containing the dealer's secret.
This work contributes a new ACSS protocol, called Haven++, that uses packing and batching to make asymptotic and concrete advances in the design and application of ACSS for large secrets. Haven++ allows the dealer to pack multiple secrets in a single sharing phase, and to reconstruct either one or all of them later. For even larger secrets, we contribute a batching technique to amortize the cost of proof generation and verification across multiple invocations of our protocol.
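To make the packing idea concrete, here is a minimal sketch (toy field and hypothetical parameter choices; Haven++ itself is an asynchronous, verifiable protocol with proofs, none of which is modeled here): several secrets are embedded as fixed evaluations of one polynomial, random evaluations pad the degree, each server's share is one further evaluation, and any deg+1 shares recover all packed secrets at once.

```python
# Toy packed secret sharing over GF(p): several secrets in one polynomial.
# Illustrative only -- Haven++ adds asynchrony, verifiability, and batching.
import random

P = 2**61 - 1  # a Mersenne prime, used as a toy field modulus

def lagrange_interpolate(points, x, p=P):
    """Evaluate at x the unique polynomial through the given (xi, yi) points."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x - xj) % p) % p
                den = den * ((xi - xj) % p) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

def share_packed(secrets, n, deg):
    """Pack len(secrets) values into one degree-`deg` polynomial.

    Secrets sit at points P-1, P-2, ...; random points pad the degree;
    shares are the evaluations at points 1..n.  Any deg+1-k shares reveal
    nothing about the k secrets (not proven here).
    """
    k = len(secrets)
    assert deg + 1 >= k and n > deg
    base = [((P - 1 - i), s) for i, s in enumerate(secrets)]
    pad = [(P - k - 1 - i, random.randrange(P)) for i in range(deg + 1 - k)]
    points = base + pad
    return [(x, lagrange_interpolate(points, x)) for x in range(1, n + 1)]

def reconstruct_packed(shares, k):
    """Recover all k packed secrets from any deg+1 shares."""
    return [lagrange_interpolate(shares, P - 1 - i) for i in range(k)]

secrets = [11, 22, 33]
shares = share_packed(secrets, n=7, deg=4)
assert reconstruct_packed(shares[:5], 3) == secrets
```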
The result is an asymptotic improvement in the worst-case amortized communication and computation complexity, both for ACSS itself and for its application to asynchronous distributed key generation (ADKG). Our ADKG based on Haven++ achieves, for the first time, optimal worst-case amortized communication complexity without a trusted setup. To show the practicality of Haven++, we implement it and find that it outperforms the work of Yurek et al. (NDSS 2022) by more than an order of magnitude when there are malicious, faulty parties.
Dynamic Decentralized Functional Encryption from Pairings in the Standard Model
Dynamic Decentralized Functional Encryption (DDFE), introduced by Chotard et al. (CRYPTO'20), represents a robust generalization of (Multi-Client) Functional Encryption. It allows users to dynamically join and contribute private inputs to individually controlled joint functions without requiring a trusted authority.
Recently, Shi and Vanjani (PKC'23) proposed the first Multi-Client Functional Encryption scheme for function-hiding inner products (FH-IP) without relying on random oracles. Unfortunately, their construction still requires a trusted key authority, leaving open the question of whether a full-fledged FH-IP-DDFE can exist in the standard model.
In this work, we answer this question affirmatively by introducing Updatable Pseudorandom Zero Sharing, a novel concept that provides both the critical functionality and security properties needed to construct secure DDFE schemes in the standard model.
Our second contribution is a novel proof strategy, which preserves adaptive security when transforming any functional encryption scheme for FH-IP into FH-IP-DDFE. Together, these two techniques enable a modular construction of FH-IP-DDFE that is secure against adaptive message and key queries in the standard model.
Additionally, our pseudorandom zero-sharing scheme is highly versatile, enabling the first DDFE for attribute-weighted sums in the standard model, complementing the recent ROM-based construction by Agrawal et al. (CRYPTO'23).
Commit-and-Prove System for Vectors and Applications to Threshold Signing
Multi-signatures allow combining several individual signatures into a compact one and verifying it against a short aggregated key. Compared to threshold signatures, multi-signatures enjoy non-interactive key generation but give up the threshold setting. Recent works by Das et al. (CCS'23) and Garg et al. (S&P'24) show how multi-signatures can be turned into schemes that enable efficient verification when an ad hoc threshold -- determined only at verification -- is satisfied. This keeps the simple key generation of multi-signatures while supporting flexible threshold settings later in the signing process. Both works use the same idea of combining BLS multi-signatures with inner-product proofs over committed keys. Das et al. give a somewhat generic proof from both building blocks, which we show to be flawed, whereas Garg et al. give a direct proof for the combined construction in the algebraic group model.
In this work, we identify the common blueprint used in both works and abstract the proof-based approach through the building block of a commit-and-prove system for vectors (CP). We formally define a flexible set of security properties for the CP system and show how it can be securely combined with a multi-signature to yield a signature with ad hoc thresholds. Our scheme also lifts threshold signatures into the multiverse setting recently introduced by Baird et al. (S&P'23), which allows signers to re-use their long-term keys across several groups. The challenge in the generic construction is to express -- and realize -- the combination of homomorphic proofs and commitments (needed to realize flexible thresholds over fixed group keys) and their simulation extractability (needed in the threshold signature security proof). We finally show that a CP instantiation closely following the ideas of Das et al. can be proven secure, but that it requires a new flexible-base DL assumption to do so.
Towards Leakage-Resilient Ratcheted Key Exchange
Ratcheted key exchange (RKE) is at the heart of modern secure messaging, enabling protocol participants to continuously update their secret material to protect against full state exposure through forward security (protecting past secrets and messages) and post-compromise security (recovering from compromise). However, many practical attacks only provide the adversary with partial access to a party's secret state, an attack vector studied under the umbrella of leakage resilience. Existing models of RKE provide suboptimal guarantees under partial leakage due to inherent limitations in security under full state exposure.
In this work, we initiate the study of leakage-resilient ratcheted key exchange that provides typical guarantees under full state exposure and additional guarantees under partial state exposure between ratchets of the protocol. We consider unidirectional ratcheted key exchange (URKE) where one party acts as the sender and the other as receiver. Building on the notions introduced by Balli, Rösler and Vaudenay (ASIACRYPT 2020), we formalise a key indistinguishability game under randomness manipulation and bounded leakage (KIND), which in particular enables the adversary to continually leak a bounded amount of the sender's state between honest send calls. We construct a corresponding protocol from a key-updatable key encapsulation mechanism (kuKEM) and a leakage-resilient one-time MAC. By instantiating this MAC in the random oracle model (ROM), results from Balli, Rösler and Vaudenay imply that in the ROM, kuKEM and KIND-secure URKE are equivalent, i.e., can be built from each other. To address the strong limitations that key indistinguishability imposes on the adversary, we formalise a one-wayness game that also permits leakage on the receiver. We then propose a corresponding construction from leakage-resilient kuKEM, which we introduce, and a leakage-resilient one-time MAC. We further show that leakage-resilient kuKEM and one-way-secure URKE are equivalent in the ROM, highlighting the cost that strong one-way security entails. Our work opens exciting directions for developing leakage-resilient messaging protocols.
On Linear Equivalence, Canonical Forms, and Digital Signatures
Given two linear codes, the code equivalence problem asks to find an isometry mapping one code into the other.
The problem can be described in terms of group actions and, as such, finds a natural application in signatures derived from a Zero-Knowledge Proof system.
A recent paper, presented at Asiacrypt 2023, showed how a proof of equivalence can be significantly compressed by describing how the isometry acts only on an information set. Still, the resulting signatures are far from optimal, as the size of a witness for this relation is still significantly larger than the theoretical lower bound, which is twice the security parameter.
In this paper, we fill this gap and propose a new notion of equivalence, which leads to a drastically reduced witness size. In many cases, the resulting size is exactly the optimal one given by the lower bound. We achieve this by introducing the framework of canonical representatives, that is, representatives for classes of codes which are equivalent under some notion of equivalence. We propose new notions of equivalence which encompass and further extend all the existing ones: this allows us to identify broader classes of equivalent codes, for which the equivalence can be proved with a very compact witness. We associate these new notions to a specific problem, called the Canonical Form Linear Equivalence Problem (CF-LEP), which we show to be as hard as the original one (when random codes are considered), providing reductions in both directions. As an added consequence, this reduction leads to a new solver for the code equivalence problem, which is the fastest known solver when the finite field size is large enough. Finally, we show that our framework yields a remarkable reduction in signature size when compared to the LESS submission.
Our variant is able to obtain very compact signatures, around 2 KB or less, which are among the smallest in the code-based setting.
Delayed-Input Multi-Party Computation
In this work, we consider the setting where the process of securely evaluating a multi-party functionality is divided into two phases: offline (or preprocessing) and online. The offline phase is independent of the parties' inputs, whereas the online phase does require knowledge of the inputs. We consider the problem of minimizing the number of communication rounds required in the online phase and propose a round-preserving compiler that turns a large class of multi-party computation (MPC) protocols -- in particular, all existing round-optimal MPC protocols -- into protocols in which only the last two rounds are input-dependent. All our results assume no setup and are proven in the dishonest-majority setting with black-box simulation. As part of our contribution, we propose a new definition we call Multi-Party Computation with Adaptive-Input Selection, which allows the distinguisher to craft the inputs the honest parties should use during the online phase, adaptively based on the offline phase. This new definition is needed to argue that not only are the messages of the offline phase input-independent, but also that security holds even in the stronger (and realistic) adversarial setting where the inputs may depend on some of the offline-phase protocol messages. We argue that this is the definition any protocol should satisfy to be securely used while preprocessing part of the rounds. We are the first to study this definition in a setting with no setup where the majority of the parties can be corrupted. Prior definitions were presented in the Universal Composability framework, which is unfortunately not well suited for our setting (i.e., no setup and dishonest majority). As a corollary, we obtain the first four-round (which is optimal) MPC protocol in which the first two rounds can be preprocessed and whose security holds against adaptive-input selection.
Stronger Security for Threshold Blind Signatures
Blind signatures allow a user to obtain a signature from an issuer in a privacy-preserving way: the issuer neither learns the signed message, nor can it link the signature to its issuance. The threshold version of blind signatures further splits the secret key among n issuers and requires the user to obtain at least t ≤ n signature shares in order to derive the final signature. Security should then hold as long as at most t − 1 issuers are corrupt. Security for blind signatures is expressed through the notion of one-more unforgeability, which demands that an adversary must not be able to produce more signatures than what is considered trivial after its interactions with the honest issuer(s). While one-more unforgeability is well understood for the single-issuer setting, the situation is much less clear in the threshold case: due to the blind issuance, counting which interactions can yield a trivial signature is a challenging task. Existing works bypass that challenge by using simplified models that do not fully capture the expectations of the threshold setting. In this work, we study the security of threshold blind signatures and propose a framework of one-more unforgeability notions where the adversary can corrupt c < t issuers. Our model is generic enough to capture both interactive and non-interactive protocols, and it provides a set of natural properties with increasingly stronger guarantees, giving the issuers gradually more control over how their shares can be combined. As a point of comparison, we reconsider the existing threshold blind signature models and show that their security guarantees are weaker and less clear than they may seem. We then re-assess the security of existing threshold blind signature schemes – BLS-based and Snowblind – in our framework, and show how to lift them to provide stronger security.
Efficient NIZK Arguments with Straight-Line Simulation and Extraction
Non-interactive zero-knowledge (NIZK) arguments allow a prover to convince a verifier of the truthfulness of an NP-statement by sending just one message, without disclosing any additional information. In several practical scenarios, the Fiat-Shamir transform is used to convert an efficient constant-round public-coin honest-verifier zero-knowledge proof system into an efficient NIZK argument system. This approach is provably secure in the random oracle model, but it crucially requires the programmability of the random oracle, and extraction works through rewinding. The works of Lindell [TCC 2015] and Ciampi et al. [TCC 2016] proposed efficient NIZK arguments with non-programmable random oracles along with a programmable common reference string. In this work we show an efficient NIZK argument with straight-line simulation and extraction that relies on features that alone are insufficient to construct NIZK arguments (regardless of efficiency). More specifically, we consider the notion of quasi-polynomial time simulation proposed by Pass [EUROCRYPT 2003] and combine it with simulation and extraction with non-programmable random oracles, thus obtaining a NIZK argument of knowledge where neither the zero-knowledge simulator nor the argument-of-knowledge extractor needs to program the random oracle. Still, both the simulator and the extractor are straight-line. Our construction uses as a building block a modification of Fischlin's transform [CRYPTO 2005] and combines it with the concept of dense puzzles introduced by Baldimtsi et al. [ASIACRYPT 2016]. We also argue that our NIZK argument system inherits the efficiency features of Fischlin's transform, which represents the main advantage of Fischlin's protocol over existing schemes.
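For intuition, here is a minimal sketch of the grinding idea behind Fischlin's transform (toy parameters, plain Schnorr as the underlying sigma protocol; this is not the paper's modified transform): instead of deriving the challenge from a hash as in Fiat-Shamir, the prover searches for challenge/response pairs whose hash vanishes, so a witness can be extracted straight-line from the prover's hash queries without rewinding.

```python
# Toy Fischlin-style NIZK for knowledge of x with y = g^x mod p.
# Straight-line extraction: two accepting (e, z) pairs for the same
# commitment reveal x.  Toy group and parameters -- not secure.
import hashlib, random

q = 1019; p = 2 * q + 1  # p, q prime: order-q subgroup mod p
g = 4                    # generator of the order-q subgroup
B = 2 ** 4               # challenge space per repetition
REPS = 8                 # number of parallel repetitions

def H(*args):
    h = hashlib.sha256(repr(args).encode()).digest()
    return int.from_bytes(h, "big") % B  # accept iff this is 0

def prove(x, y):
    while True:                            # retry if grinding fails
        rs = [random.randrange(q) for _ in range(REPS)]
        coms = tuple(pow(g, r, p) for r in rs)
        proof = []
        for i in range(REPS):
            for e in range(B * 8):         # grind the challenge
                z = (rs[i] + e * x) % q
                if H(y, coms, i, e, z) == 0:
                    proof.append((coms[i], e, z))
                    break
        if len(proof) == REPS:
            return proof

def verify(y, proof):
    coms = tuple(a for a, _, _ in proof)
    for i, (a, e, z) in enumerate(proof):
        if H(y, coms, i, e, z) != 0:               # hash must vanish
            return False
        if pow(g, z, p) != a * pow(y, e, p) % p:   # Schnorr verification
            return False
    return True

x = random.randrange(q); y = pow(g, x, p)
assert verify(y, prove(x, y))
```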
On Active Attack Detection in Messaging with Immediate Decryption
The widely used Signal protocol provides protection against state exposure attacks through forward security (protecting past messages) and post-compromise security (for restoring security). It supports immediate decryption, allowing messages to be re-ordered or dropped at the protocol level without affecting correctness. In this work, we consider strong active attack detection for secure messaging with immediate decryption, where parties are able to immediately detect active attacks under certain conditions. We first consider in-band active attack detection, where participants who have been actively compromised but are still able to send a single message to their partner can detect the compromise. We propose two complementary notions to capture security, and present a compiler that provides security with respect to both notions. Our notions generalise existing work (RECOVER security) which only supported in-order messaging. We also study the related out-of-band attack detection problem by considering communication over out-of-band, authenticated channels and propose analogous security notions. We prove that one of our two notions in each setting imposes a linear communication overhead in the number of sent messages and security parameter using an information-theoretic argument. This implies that each message must information-theoretically contain all previous messages and that our construction, that essentially attaches the entire message history to every new message, is asymptotically optimal. We then explore ways to bypass this lower bound and highlight the feasibility of practical active attack detection compatible with immediate decryption.
Thorough Power Analysis on Falcon Gaussian Samplers and Practical Countermeasure
Falcon is one of the post-quantum signature schemes selected by NIST for standardization. With its deployment underway, its implementation security is of great importance. In this work, we focus on the side-channel security of Falcon, and our contributions are threefold.
First, by exploiting the symplecticity of NTRU and a recent decoding technique, we dramatically improve key recovery from the power leakage of Falcon's Gaussian samplers. Compared to the state of the art (Zhang, Lin, Yu and Wang, EUROCRYPT 2023), the number of traces required by our attack for a full key recovery is reduced by at least 85%.
Secondly, we present a complete power analysis of two exposed power leakages within Falcon's integer Gaussian sampler. We identify new sources of these leakages that previous works had not identified, and conduct detailed security evaluations of the reference implementation of Falcon on ChipWhisperer.
Thirdly, we propose effective and easy-to-implement countermeasures against both leakages to protect the whole integer Gaussian sampler of Falcon. Configured with our countermeasures, we provide security evaluations on ChipWhisperer and report the performance of the protected implementation. Experimental results highlight that our countermeasures admit a practical trade-off between efficiency and side-channel security.
Bundled Authenticated Key Exchange: A Concrete Treatment of (Post-Quantum) Signal's Handshake Protocol
The Signal protocol relies on a special handshake protocol, formerly X3DH and now PQXDH, to set up secure conversations. Prior analyses of these protocols (or proposals for post-quantum alternatives) have all used models highly tailored to the individual protocols and have generally made ad-hoc adaptations to "standard" AKE definitions, making the concrete security attained unclear and hard to compare between similar protocols. Indeed, we observe that some natural Signal handshake protocols cannot be handled by these tailored models. In this work, we introduce Bundled Authenticated Key Exchange (BAKE), a concrete treatment of the Signal handshake protocol. We formally model prekey bundles and states, enabling us to define various levels of security in a unified model. We analyze Signal's classically secure X3DH and harvest-now-decrypt-later-secure PQXDH, and show that they do not achieve what we call optimal security (as is documented). Next, we introduce RingXKEM, a fully post-quantum Signal handshake protocol achieving optimal security; as RingXKEM shares states among many prekey bundles, it could not have been captured by prior models. Lastly, we provide a security and efficiency comparison of X3DH, PQXDH, and RingXKEM.
Bootstrapping with RMFE for Fully Homomorphic Encryption
There is a heavy preference towards instantiating BGV and BFV homomorphic encryption schemes where the cyclotomic order is a power of two, as this admits highly efficient fast Fourier transformations. Field Instruction Multiple Data (FIMD) was introduced to increase packing capacity in the case of small primes and improve amortised performance, using reverse multiplication-friendly embeddings (RMFEs) to encode more data into each SIMD slot. However, FIMD currently does not admit bootstrapping.
In this work, we achieve bootstrapping for RMFE-packed ciphertexts with low capacity loss. We first adapt the digit extraction algorithm to work over RMFE-packed ciphertexts by applying the recode map after every evaluation of the lifting polynomial. This allows us to follow the blueprint of thin bootstrapping, performing digit extraction on a single ciphertext. To achieve low capacity loss, we introduce correction maps into the Halevi-Shoup digit extraction algorithm to remove all but the final recode of RMFE digit extraction.
We implement several workflows for bootstrapping RMFE-packed ciphertexts in HElib and benchmark them against thin bootstrapping. Our experiments show that the basic strategy of recoding multiple times in digit extraction yields better data packing, but results in very low remaining capacity and latencies of up to hundreds of seconds. On the other hand, using correction maps preserves additional multiplicative depth and brings latencies down substantially, at the cost of lower packing capacity.
Efficient Distributed Randomness Generation from Minimal Assumptions where PArties Speak Sequentially Once
We study efficient public randomness generation protocols in the PASSO (PArties Speak Sequentially Once) model for multi-party computation (MPC). PASSO is a variation of traditional MPC where parties are executed in sequence and each party "speaks" only once, broadcasting and sending secret messages only to parties further down the line. Prior results in this setting include information-theoretic protocols in which the computational complexity scales exponentially with the number of corruptions (CRYPTO 2022), as well as more efficient computationally secure protocols assuming either a trusted setup phase or DDH (FC 2024). Moreover, these works only consider security against static adversaries.
In this work, we focus on computational security against adaptive adversaries and from minimal assumptions, and improve on the works mentioned above in several ways:
- Assuming the existence of non-interactive perfectly binding commitments, we design protocols that are efficient and secure whenever the number of corruptions is small compared to the security parameter (e.g., constant). This improves the resiliency of all previous protocols, even those requiring a trusted setup, and it pins down the number of parties that is necessary and sufficient for a given number of corruptions in the computational setting, which is strictly smaller than what information-theoretic security requires.
- Under the same assumption, we design protocols (whose number of parties depends on the adversarial network model) that remain efficient under a weaker restriction on the number of corruptions. This improves on the existing DDH-based protocol both in terms of resiliency and of the underlying assumptions.
- We design efficient protocols (again with a number of parties depending on the adversarial network model) assuming only the existence of one-way functions.
We complement these results by studying lower bounds for randomness generation protocols in the computational setting.
Fine-Grained Complexity in a World without Cryptography
The study of fine-grained cryptography has proliferated in recent years due to its allure of potentially relying on weaker assumptions compared to standard cryptography. As fine-grained cryptography only requires polynomial gaps between the adversary and honest parties, it seems plausible to build primitives relying upon popular hardness assumptions about problems in P such as k-SUM or Zero-k-Clique. The ultimate hope is that fine-grained cryptography could still be viable even if all current cryptographic assumptions are false, such as if P = NP or if we live in Pessiland, where one-way functions do not exist.
In our work, we consider whether this approach is viable by studying fine-grained complexity when all standard cryptographic assumptions are false. As our main result, we show that many popular fine-grained complexity problems are easy to solve in the average case when one-way functions do not exist. In other words, many candidate hardness assumptions for building fine-grained cryptography are no longer options in Pessiland. As an example, we prove that the average-case k-SUM and Zero-k-Clique conjectures are false for sufficiently large constant k when no one-way functions exist. The average-case Zero-k-Clique assumption was used to build fine-grained key-exchange by Lavigne et al. [CRYPTO'19]. One can also view the contrapositive of our result as providing an explicit construction of one-way functions assuming average-case hardness of k-SUM or Zero-k-Clique for all constant k.
We also show that barriers for reductions in fine-grained complexity may be explained by problems in cryptography. First, we show that finding faster algorithms for computing discrete logarithms is equivalent to designing an average-case equivalence between k-SUM and k-CYC (an extension of k-SUM to cyclic groups). In particular, finding such a reduction between the two problems could potentially lead to breakthrough algorithms for the discrete logarithm, factoring, RSA and quadratic residuosity problems. Finally, we show that discrete logarithms with preprocessing may be reduced to a preprocessing variant of k-CYC, and we present faster algorithms for the average-case preprocessing variants of k-SUM and k-CYC.
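For reference, here is the k-SUM problem the conjectures above concern, in its simplest brute-force form (a definition sketch, not an attack from the paper): decide whether some k of n given integers sum to zero. Exhaustive search takes O(n^k) time, and the fine-grained conjecture is, roughly, that no algorithm beats the known n^⌈k/2⌉-style meet-in-the-middle bounds.

```python
# Brute-force k-SUM: does some k-subset of xs sum to 0 (mod m, if given)?
# O(n^k) time; the k-SUM conjecture rules out substantially faster
# algorithms than the known n^ceil(k/2) meet-in-the-middle approach.
from itertools import combinations

def k_sum(xs, k, m=None):
    for combo in combinations(xs, k):
        s = sum(combo)
        if (s % m if m else s) == 0:
            return combo  # a witness k-subset summing to zero
    return None

print(k_sum([8, -3, 4, 1, 2, -6], 3))  # -> (-3, 1, 2)
```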
Juicebox Protocol: Distributed Storage and Recovery of Secrets Using Simple PIN Authentication
Existing secret management techniques demand users memorize complex passwords, store convoluted recovery phrases, or place their trust in a specific service or hardware provider. We have designed a novel protocol that combines existing cryptographic techniques to eliminate these complications and reduce user complexity to recalling a short PIN. Our protocol specifically focuses on a distributed approach to secret storage that leverages Oblivious Pseudorandom Functions (OPRFs) and a Secret-Sharing Scheme (SSS) combined with self-destructing secrets to minimize the trust placed in any singular server. Additionally, our approach allows for servers distributed across organizations, eliminating the need to trust a singular service operator. We have built an open-source implementation of the client and server sides of this new protocol, the latter of which has variants for running on commodity hardware and secure hardware.
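A minimal sketch of the general pattern the abstract describes (a hypothetical simplification, with our own names and parameters; the real protocol uses an interactive blind OPRF, registration, self-destructing secrets, and authentication): each server's OPRF output on the PIN masks one Shamir share of the secret, so recovery requires the PIN plus a threshold of servers.

```python
# Sketch: PIN-hardened distributed secret recovery (toy simplification).
# The OPRF is simulated locally with HMAC; a real deployment uses a *blind*
# OPRF so that servers never see the PIN, plus self-destruct counters.
import hashlib, hmac, random

P = 2**61 - 1  # toy prime field

def shamir_split(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return {i: sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)}

def shamir_recover(points):
    total = 0
    for i, yi in points.items():          # Lagrange interpolation at 0
        li = 1
        for j in points:
            if j != i:
                li = li * j % P * pow(j - i, -1, P) % P
        total = (total + yi * li) % P
    return total

def oprf(server_key, pin):                # stand-in for a blind OPRF
    return int.from_bytes(
        hmac.new(server_key, pin.encode(), hashlib.sha256).digest(), "big") % P

server_keys = {i: hashlib.sha256(b"server" + bytes([i])).digest()
               for i in range(1, 6)}
secret, pin = 123456789, "2468"
shares = shamir_split(secret, t=3, n=5)
masked = {i: (shares[i] + oprf(server_keys[i], pin)) % P for i in shares}

# Recovery: the PIN plus any t = 3 servers unmask enough shares.
subset = {i: (masked[i] - oprf(server_keys[i], pin)) % P for i in (1, 3, 5)}
assert shamir_recover(subset) == secret
```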
Publicly Verifiable Threshold Proxy Re-encryption and Its Application in Data Rights Confirmation
Proxy re-encryption (PRE) has been regarded as an effective cryptographic primitive in data sharing systems with distributed proxies. However, no prior work considers the honesty of data owners, which is critical in the age of big data. In this paper, we fill the gap by introducing a new proxy re-encryption scheme, called publicly verifiable threshold PRE (PVTPRE). Briefly speaking, we innovatively apply a slightly modified publicly verifiable secret sharing (PVSS) scheme to distribute the re-encryption keys to multiple proxies. Consequently, we achieve public verifiability for data owners non-interactively. Then, the correctness of data users in decryption and the public verifiability of proxies in re-encryption are guaranteed seamlessly through execution of the PVSS reconstruction algorithms. We further prove that PVTPRE satisfies IND-CPA security. Besides, we put forward a privacy-preserving data rights confirmation framework, based on the PVTPRE scheme and blockchain, that provides clear principles for data ownership and usage. The blockchain plays the role of data bank and smart contract engine, providing reliable storage and verification for the whole framework. To our knowledge, we are the first to systematically investigate data rights confirmation with both privacy and public verifiability, addressing the growing need for robust mechanisms to protect data rights and ensure transparency. Finally, we conduct comprehensive experiments to illustrate the correctness, feasibility, and effectiveness of our scheme. The experimental results show that PVTPRE outperforms other PRE schemes in many aspects.
Non-Interactive Distributed Point Functions
Distributed Point Functions (DPFs) are a useful cryptographic primitive enabling a dealer to distribute short keys to two parties, such that the keys encode additive secret shares of a secret point function. However, in many applications of DPFs, no single dealer entity has full knowledge of the secret point function, necessitating the parties to run an interactive protocol to emulate the setup. Prior works have aimed to minimize complexity metrics of such distributed setup protocols, e.g., round complexity, while remaining black-box in the underlying cryptography.
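As a functionality-only illustration (our toy example, not a real DPF construction): two keys that are additive shares of the point function's truth table satisfy the DPF correctness property, but have size linear in the domain, which is exactly what genuine DPFs compress to short keys.

```python
# Naive "DPF": keys are additive shares of the truth table of the point
# function f_{alpha,beta}(x) = beta if x == alpha else 0.  Functionally
# correct but not compact -- real DPFs shrink keys to ~polylog(domain).
import random

MOD = 2**32

def naive_gen(alpha, beta, domain):
    table = [beta if x == alpha else 0 for x in range(domain)]
    k0 = [random.randrange(MOD) for _ in range(domain)]
    k1 = [(t - r) % MOD for t, r in zip(table, k0)]
    return k0, k1

def eval_share(key, x):
    return key[x]  # each party outputs one additive share of f(x)

k0, k1 = naive_gen(alpha=5, beta=42, domain=16)
assert all((eval_share(k0, x) + eval_share(k1, x)) % MOD
           == (42 if x == 5 else 0) for x in range(16))
```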
We construct Non-Interactive DPFs (NIDPF), which have a one-round (simultaneous-message, semi-honest) setup protocol, removing the need for a trusted dealer. Specifically, our construction allows each party to publish a special "public key" to a public channel or bulletin board, where the public key encodes the party's secret function parameters. Using the public key of another party, any pair of parties can locally derive a DPF key for the point function parameterized by the two parties' joint inputs.
We realize NIDPF from an array of standard assumptions, including DCR, SXDH, QR, and LWE. Each party's public key is sublinear in the size of the point function's domain, which leads to a setup protocol with sublinear communication. The only prior approaches to realizing such a non-interactive setup required multi-key fully-homomorphic encryption or indistinguishability obfuscation.
As immediate applications of our construction, we get "public-key setups" for several existing constructions of pseudorandom correlation generators and round-efficient protocols for secure comparisons.
Simultaneous-Message and Succinct Secure Computation
We put forth and instantiate a new primitive we call simultaneous-message and succinct (SMS) secure computation. An SMS scheme enables a minimal communication pattern for secure computation in the following scenario: Alice has a large private input X, Bob has a small private input y, and Charlie wants to learn f(X, y) for some public function f.
Given a common reference string (CRS) setup phase, an SMS scheme for a function f is instantiated with two parties holding inputs X and y, and has the following structure:
- The parties simultaneously exchange a single message.
- Communication is succinct, scaling sublinearly in the size of X and the output f(X, y).
- Without further interaction, the parties can locally derive additive secret shares of f(X, y).
Indeed, Alice and Bob simultaneously send each other a message using the CRS and their private inputs. Using the transcript and their private state, the parties locally derive additive secret shares of f(X, y), which they can send to Charlie. As such, an SMS scheme incurs a communication cost to Charlie that is only twice the function output length. Importantly, the size of Alice's message does not grow with the size of her input X, and both Alice's and Bob's first-round messages grow sublinearly in the size of the output. Additionally, Alice's or Bob's view provides no information about the other party's input besides the output of f, even if colluding with Charlie.
We obtain the following results:
- Assuming Learning With Errors (LWE), we build an SMS scheme supporting the evaluation of depth-d circuits, where Alice's and Bob's messages are succinct in the above sense up to poly(λ, d) factors, and λ is the security parameter. We can further extend this to support all functions by assuming the circular security of LWE.
- Assuming sub-exponentially secure indistinguishability obfuscation, in conjunction with other standard assumptions, we build an SMS scheme supporting arbitrary polynomial-sized batch functions consisting of L parallel evaluations of a function f. The size of Alice's and Bob's messages in this construction is poly(λ) and poly(λ, |f|, log L), respectively.
We show that SMS schemes have several immediate applications. An SMS scheme gives:
- A direct construction of trapdoor hash functions (TDH) (Döttling et al., Crypto'19) for the same class of functions as the one supported by the SMS scheme.
- A simple and generic compiler for obtaining compact, rate-1 fully homomorphic encryption (FHE) from any non-compact FHE scheme.
- A simple and generic compiler for obtaining correlation-intractable (CI) hash functions that are secure against all efficiently-searchable relations.
In turn, under the LWE assumption, we obtain the first construction of TDH for all functions and generic approaches for obtaining rate-1 FHE and CI hashing. We also show that our iO-based construction gives an alternative approach for two-round secure computation with communication succinctness in the output length (Hubáček and Wichs, ITCS'15).
Homomorphic Encryption with Authority
Fully homomorphic encryption (FHE) enables computations over encrypted data, which allows privacy-preserving services to be held between a server and a client. However, real-world applications demand practical considerations, especially concerning public safety and legal investigations. Existing FHE schemes focus solely on privacy, neglecting the societal risks posed by criminal activities utilizing privacy-preserving services. This paper introduces Homomorphic Encryption with Authority (HEwA), a novel framework that balances data privacy with public safety by incorporating an "authority" party. The proposed HEwA system operates in two phases: a normal phase, where client data privacy is protected, and an investigative phase, where the authority (a legally authorized entity such as a government agency) exerts the right to recover a suspicious client's data. We formalize the security model for HEwA, ensuring that client privacy is protected during the normal phase while enabling authorities to recover encrypted data in the investigative phase. As a concrete example, we design an efficient HEwA system solely based on the CKKS homomorphic encryption scheme, which supports approximate computations over real-number data, making it highly suitable for applications such as privacy-preserving AI and secure genomic analysis. We further provide rigorous security proofs. This new approach addresses the tension between privacy and public safety in cloud services, paving the way for the responsible use of homomorphic encryption in practice.
Protecting Cryptographic Code Against Spectre-RSB
It is fundamental that executing cryptographic software must not leak secrets through side channels. For software-visible side channels, it was long believed that "constant-time" programming would be sufficient as a systematic countermeasure. However, this belief was shattered in 2018 by attacks exploiting speculative execution—so-called Spectre attacks.
Recent work shows that language support suffices to protect cryptographic code with minimal overhead against one class of such attacks, Spectre v1, but leaves an open question of whether this result can be extended to also cover other classes of Spectre attacks.
In this paper, we answer this question in the affirmative: We design, validate, implement, and verify an approach to protect cryptographic implementations against all known classes of Spectre attacks—the main challenge in this endeavor is attacks exploiting the return stack buffer, which are known as Spectre-RSB. Our approach combines a new value-dependent information-flow type system that enforces speculative constant-time in an idealized model of transient execution and a compiler transformation that realizes this idealized model on the generated low-level code. Using the Coq proof assistant, we prove that the type system is sound with respect to the idealized semantics and that the compiler transformation preserves speculative constant-time.
We implement our approach in the Jasmin framework for high-assurance cryptography and demonstrate that the overhead incurred by full Spectre protections is below 2% for most cryptographic primitives and reaches only about 5–7% for the more complex post-quantum key-encapsulation mechanism Kyber.
Good Things Come to Those Who Wait: Dishonest-Majority Coin-Flipping Requires Delay Functions
We reconsider Cleve's famous 1986 impossibility result on coin-flipping without an honest majority. Recently proposed constructions have circumvented this limit by using cryptographic delay functions. We show that this is necessary: a (weak) notion of delay functions is in fact implied by the existence of any protocol circumventing Cleve's impossibility. Such delay functions are, however, weaker than those used in existing constructions. We complete the picture by showing an equivalence: these weaker delay functions are also sufficient to construct not just fair dishonest-majority coin-flipping protocols, but also the stronger notion of a distributed randomness beacon. We also show that this is possible in a weaker communication model than previously considered, without assuming reliable broadcast or a public bulletin board.
Revisiting Keyed-Verification Anonymous Credentials
Keyed-verification anonymous credentials (KVACs) have demonstrated their practicality through large-scale deployments in privacy-critical systems like Signal and Tor. Despite their widespread adoption, the theoretical framework underlying KVACs lacks the flexibility needed to support diverse applications, which in general require different security properties. For instance, rate-limiting credentials only need a weaker unforgeability notion (one-more unforgeability), yet the framework cannot easily accommodate this relaxation. Similarly, identity-based applications require properties stronger than unforgeability, specifically extractability, for security proofs when adversaries can observe other users' credentials.
In this work, we address these limitations, introducing new notions of extractability and one-more unforgeability. We improve two foundational works in the space:
- The scheme by Chase et al. (CCS 2014), commonly referred to as CMZ or PS MAC, can be made statistically anonymous and its issuance cost reduced. We update the proof of Chase et al. in the algebraic group model.
- The scheme by Barki et al. (SAC 2016), known as BBDT or BBS MAC, can be issued more efficiently (one less group element).
Finally, we note that for KVACs, designated-verifier proofs suffice, since the verifier is known in advance. We introduce designated-verifier polynomial commitment schemes and instantiate a variant of the popular KZG commitment scheme without pairings. Any interactive oracle proof can be used in tandem with it, leading to designated-verifier, fully-succinct zk-SNARKs without pairings for algebraic groups.
Our model can improve the deployment of larger protocols relying on KVACs. We show this with some examples that benefit from our approach.
Parallel Execution Fee Mechanisms
This paper investigates how pricing schemes can achieve efficient allocations in blockchain systems featuring multiple transaction queues under a global capacity constraint. I model a capacity-constrained blockchain where users submit transactions to different queues—each representing a submarket with unique demand characteristics—and decide to participate based on posted prices and expected delays. I find that revenue maximization tends to allocate capacity to the highest-paying queue, whereas welfare maximization generally serves all queues. Optimal relative pricing of different queues depends on factors such as market size, demand elasticity, and the balance between local and global congestion. My results have implications for the implementation of local congestion pricing for evolving blockchain architectures, including parallel transaction execution, directed acyclic graph (DAG)-based systems, and multiple concurrent proposers.
Privacy-Preserving Epidemiological Modeling on Mobile Graphs
The COVID-19 pandemic led governments worldwide to adopt various containment measures to control its spread, such as contact tracing, social-distancing regulations, and curfews. Epidemiological simulations are commonly used to assess the impact of such policies before they are implemented. Unfortunately, the scarcity of relevant empirical data, specifically detailed social contact graphs, has hampered their predictive accuracy. As this data is inherently privacy-critical, a method is urgently needed to perform powerful epidemiological simulations on real-world contact graphs without disclosing any sensitive information.
In this work, we present RIPPLE, a privacy-preserving epidemiological modeling framework that enables standard infectious-disease models to run on a population's real contact graph while keeping all contact information locally on the participants' devices. As a building block of independent interest, we present PIR-SUM, a novel extension to private information retrieval for the secure download of element sums from a database. Our protocols are supported by a proof-of-concept implementation, demonstrating a 2-week simulation over half a million participants completed in 7 minutes, with each participant communicating less than 50 KB.
Tight Multi-challenge Security Reductions for Key Encapsulation Mechanisms
A key encapsulation mechanism (KEM) allows two parties to establish a shared secret key using only public communication. For post-quantum KEMs, the most widespread approach is to design a passively secure public-key encryption (PKE) scheme and then apply the Fujisaki–Okamoto (FO) transform, which turns any such PKE scheme into an IND-CCA secure KEM. While the base security requirement for KEMs is typically IND-CCA security, adversaries in practice can sometimes observe and attack many public keys and/or ciphertexts; security in this setting is referred to as multi-challenge security. FO does not necessarily guarantee multi-challenge security: for example, FrodoKEM, a Round 3 alternate in NIST's post-quantum project, used FO to achieve IND-CCA security, but was subsequently shown to be vulnerable to attackers that can target multiple ciphertexts. To avert this multi-ciphertext attack, the FrodoKEM team added a salt to the encapsulation procedure and proved that this does not degrade (single-ciphertext) IND-CCA security. The formal analysis of whether this indeed averts multi-ciphertext attacks, however, was left open, which we address in this work.
Firstly, we formalize FrodoKEM's approach as a new variant of the FO transform, called the salted FO transform. Secondly, we give tight reductions from multi-challenge security of the resulting KEM to multi-challenge security of the underlying public key encryption scheme, in both the random oracle model (ROM) and the quantum-accessible ROM (QROM). Together these results justify the multi-ciphertext security of the salted FrodoKEM scheme, and can also be used generically by other schemes requiring multi-ciphertext security.
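The sketch below conveys the shape of the salted transform as the abstract describes it (toy ElGamal stands in for the lattice-based PKE; all function names and parameters are ours, not FrodoKEM's): the encapsulator samples a random salt alongside the message, derives the encryption randomness from both, and feeds the salt into the re-encryption check and the final key derivation.

```python
# Salted FO sketch: KEM from a derandomized toy PKE (ElGamal mod p).
# The salt enters both the derived randomness G(m, salt) and the key
# H(m, c, salt).  Toy parameters -- illustrative, not secure.
import hashlib, secrets

q = 1019; p = 2 * q + 1; g = 4          # toy group of prime order q

def Hint(tag, *parts):                  # hash to a large integer
    data = repr((tag, parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def enc(pk, m, r):                      # ElGamal with explicit randomness r
    return (pow(g, r, p), m * pow(pk, r, p) % p)

def encaps(pk):
    m = secrets.randbelow(p - 1) + 1    # random "message"
    salt = secrets.token_hex(16)
    r = Hint("G", m, salt) % q          # derandomized: r depends on m, salt
    c = enc(pk, m, r)
    return (c, salt), Hint("H", m, c, salt)

def decaps(sk, pk, ct):
    c, salt = ct
    m = c[1] * pow(pow(c[0], sk, p), -1, p) % p   # decrypt
    r = Hint("G", m, salt) % q
    if enc(pk, m, r) != c:              # FO re-encryption check
        # implicit rejection (real schemes hash a secret seed here)
        return Hint("H", "reject", c, salt)
    return Hint("H", m, c, salt)

sk = secrets.randbelow(q - 1) + 1
pk = pow(g, sk, p)
ct, key = encaps(pk)
assert decaps(sk, pk, ct) == key
```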
Polynomial Time Cryptanalytic Extraction of Deep Neural Networks in the Hard-Label Setting
Deep neural networks (DNNs) are valuable assets, yet their public accessibility raises security concerns about parameter extraction by malicious actors. Recent work by Carlini et al. (Crypto'20) and Canales-Martínez et al. (Eurocrypt'24) has drawn parallels between this issue and block cipher key extraction via chosen plaintext attacks. Leveraging differential cryptanalysis, they demonstrated that all the weights and biases of black-box ReLU-based DNNs could be inferred using a polynomial number of queries and computational time. However, their attacks relied on the availability of the exact numeric value of output logits, which allowed the calculation of their derivatives. To overcome this limitation, Chen et al. (Asiacrypt'24) tackled the more realistic hard-label scenario, where only the final classification label (e.g., "dog" or "car") is accessible to the attacker. They proposed an extraction method requiring a polynomial number of queries but an exponential execution time. In addition, their approach was applicable only to a restricted set of architectures, could deal only with binary classifiers, and was demonstrated only on tiny neural networks with up to four neurons split among up to two hidden layers.
This paper introduces new techniques that, for the first time, achieve cryptanalytic extraction of DNN parameters in the most challenging hard-label setting, using both a polynomial number of queries and polynomial time. We validate our approach by extracting nearly one million parameters from a DNN trained on the CIFAR-10 dataset, comprising 832 neurons in four hidden layers. Our results reveal the surprising fact that all the weights of a ReLU-based DNN can be efficiently determined by analyzing only the geometric shape of its decision boundaries.
Traceable Threshold Encryption without Trusted Dealer
The fundamental assumption in t-out-of-n threshold encryption is that the adversary can only corrupt fewer than t parties. Unfortunately, it may be unfounded in practical scenarios where shareholders could be incentivized to collude. Boneh, Partap, and Rotem (Crypto'24) recently addressed the setting where t or more shareholders work together to decrypt illegally. Inspired by the well-established notion of traitor tracing in broadcast encryption, they added a traceability mechanism that guarantees identifying at least one of the colluders. They provide several constructions that enable traceability, all of which require a trusted dealer to distribute the secret shares. While the trusted dealer can be replaced with a DKG for conventional threshold encryption, it is unclear how to do so without compromising traceability. As thresholdizing is meant to mitigate a single point of failure, a natural question remains: can we construct an efficient traceable threshold encryption scheme that does not rely on a trusted party to distribute the secret shares?
In this paper, we achieve two dealerless traceable threshold encryption constructions with different merits by extending the PLBE primitive of Boneh et al. (Eurocrypt'06) and combining it with the silent-setup threshold encryption construction of Garg et al. (Crypto'24). Our first construction achieves small amortized ciphertext size when sufficiently many ciphertexts are batched. Our second construction achieves constant ciphertext size even in the worst case, but requires a less efficient preprocessing phase as a tradeoff. Both our constructions enjoy a constant secret key size and do not require any interaction between the parties.
An additional restriction in the constructions of Boneh et al. is that they can only guarantee to find at least one colluder, leaving techniques to identify more traitors as an open problem. In this paper, we take a first step towards solving this question by formalizing a technique and applying it to our first construction, which then enables tracing multiple traitors.
CCA-Secure Traceable Threshold (ID-based) Encryption and Application
A recent work by Boneh, Partap, and Rotem [Crypto'24] introduced the concept of traceable threshold encryption: if t or more parties collude to construct a decryption box that performs decryptions, then at least one party's identity can be traced by making a few black-box queries to the box. This has important applications, e.g., in blockchain mempool privacy, where collusion yields high financial gain through MEV without any consequence -- the possibility of tracing discourages collusion.
Nevertheless, their definitions leave room for exploitation as they only achieve CPA security and do not consider inconsistency in decryption via different participating sets.
This paper proposes stronger definitions of traceable threshold encryption that support CCA security and consistency. Our main approach considers identity-based variants of traceable encryption (which we also define), and converts them into CCA-secure constructions by adapting two generic transformations: first using a one-time signature and then a fingerprinting code.
We put forward two efficient instantiations of our identity-based scheme with different merits: our first construction is based on Boneh-Franklin IBE [Crypto'01] and has constant-size ciphertexts but quadratic-size public keys - this is proven secure based on XDH and BDDH. Our second construction is based on Boneh-Boyen IBE [Eurocrypt'04]. It supports both constant-size ciphertexts and constant-size public keys - this is proven secure based on a variant of the uber assumption over bilinear pairings. Our concrete analysis shows that the first construction's ciphertexts are much (~6x) smaller than the second's. Finally, we extend the definitions to support consistency and achieve it by adjoining an efficient, non-interactive proof of correct encryption.
Hollow LWE: A New Spin, Unbounded Updatable Encryption from LWE and PCE
Updatable public-key encryption (UPKE) allows anyone to update a public key while simultaneously producing an update token, given which the secret key holder could consistently update the secret key. Furthermore, ciphertexts encrypted under the old public key remain secure even if the updated secret key is leaked -- a property much desired in secure messaging. All existing lattice-based constructions of UPKE update keys by a noisy linear shift. As the noise accumulates, these schemes either require super-polynomial-size moduli or an a priori bounded number of updates to maintain decryption correctness.
Inspired by recent works on cryptography based on the lattice isomorphism problem, we propose an alternative way to update keys in lattice-based UPKE. Instead of shifting, we rotate them. As rotations do not induce norm growth, our construction supports an unbounded number of updates with a polynomial-size modulus. The security of our scheme is based on the LWE assumption over hollow matrices -- matrices which generate linear codes with non-trivial hull -- and the hardness of permutation code equivalence. Along the way, we also show that LWE over hollow matrices is as hard as LWE over uniform matrices, and that a leftover hash lemma holds for hollow matrices.
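A toy numeric illustration of the difference (our example, not the paper's actual scheme; a signed permutation stands in crudely for the norm-preserving rotations): adding fresh noise at each update makes the secret's norm grow with the number of updates, while a norm-preserving map keeps it bounded forever, which is what enables unboundedly many updates with a polynomial-size modulus.

```python
# Norm growth under additive (shift) updates vs. norm preservation under
# signed-permutation (rotation-like) updates.  Toy illustration only.
import random

def norm2(v):
    return sum(x * x for x in v)

def shift_update(v):
    return [x + random.choice([-1, 0, 1]) for x in v]   # noisy shift

def rotate_update(v):
    perm = random.sample(range(len(v)), len(v))         # signed permutation
    return [random.choice([-1, 1]) * v[i] for i in perm]

s_shift = s_rot = [random.choice([-1, 0, 1]) for _ in range(64)]
for _ in range(1000):
    s_shift = shift_update(s_shift)
    s_rot = rotate_update(s_rot)

print(norm2(s_shift))  # grows roughly linearly in the number of updates
print(norm2(s_rot))    # unchanged: rotations induce no norm growth
```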
Secret Sharing with Publicly Verifiable Deletion
Certified deletion, an inherently quantum capability, allows a party holding a quantum state to prove that they have deleted the information contained in that state. Bartusek and Raizes (Crypto 2024) recently studied certified deletion in the context of secret sharing schemes, and showed constructions with privately verifiable proofs of deletion that can be verified only by the dealer who generated the shares. We give two constructions of secret sharing schemes with publicly verifiable certified deletion. Our first construction is based on the post-quantum security of the LWE problem, and each share requires a number of qubits that is linear in the size of an underlying classical secret sharing scheme for the same set of authorized parties. Our second construction is based on a more general assumption—the existence of post-quantum one-way functions—but requires an asymptotically larger number of qubits relative to the share size of the underlying classical scheme.
CT-LLVM: Automatic Large-Scale Constant-Time Analysis
Constant-time (CT) is a popular programming discipline to protect cryptographic libraries against micro-architectural timing attacks. One appeal of the CT discipline lies in its conceptual simplicity: a program is CT iff it has no secret-dependent data flow, control flow, or variable-timing operation. Thanks to this simplicity, the CT discipline is supported by dozens of analysis tools. However, a recent user study demonstrates that these tools are seldom used due to poor usability and maintainability (Jancar et al. IEEE SP 2022).
In this paper, we introduce CT-LLVM, a CT analysis tool designed for usability, maintainability, and automatic large-scale analysis. Concretely, CT-LLVM is packaged as an LLVM plugin and is built as a thin layer on top of two standard LLVM analyses: def-use and alias analysis. Besides confirming known CT violations, we demonstrate the usability and scalability of CT-LLVM by automatically analyzing nine cryptographic libraries. On average, CT-LLVM can automatically and soundly analyze 36% of the functions in these libraries, proving that 61% of them are CT. In addition, the large-scale automatic analysis reveals new vulnerabilities in these libraries. Finally, we demonstrate that CT-LLVM helps systematically mitigate compiler-introduced CT violations, a long-standing issue in CT analysis.
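To make the discipline concrete, here is a language-agnostic illustration (written in Python for readability; CPython itself gives no timing guarantees, and real CT must be enforced on compiled code, which is exactly why tools like CT-LLVM analyze LLVM IR): the first comparison leaks through secret-dependent control flow, while the second computes the same result with data flow only.

```python
# The CT discipline, illustrated: no secret-dependent branches.

def compare_leaky(secret: bytes, guess: bytes) -> bool:
    for s, c in zip(secret, guess):
        if s != c:           # early exit: timing depends on the secret
            return False
    return len(secret) == len(guess)

def compare_ct(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):   # lengths assumed public
        return False
    acc = 0
    for s, c in zip(secret, guess):
        acc |= s ^ c         # accumulate differences; no secret branch
    return acc == 0

assert compare_ct(b"hunter2", b"hunter2")
assert not compare_ct(b"hunter2", b"hunter!")
```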
Succinct Oblivious Tensor Evaluation and Applications: Adaptively-Secure Laconic Function Evaluation and Trapdoor Hashing for All Circuits
We propose the notion of succinct oblivious tensor evaluation (OTE), where two parties compute an additive secret sharing of the tensor product of two vectors while exchanging only two simultaneous messages. Crucially, the size of both messages and of the CRS is independent of the dimensions of the vectors.
We present a construction of OTE with optimal complexity from the standard learning with errors (LWE) problem. Then we show how this new technical tool enables a host of cryptographic primitives, all with security reducible to LWE, such as:
1) Adaptively secure laconic function evaluation for bounded-depth functions with succinct communication.
2) A trapdoor hash function for all functions.
3) An (optimally) succinct homomorphic secret sharing for all functions.
4) A rate-1 laconic oblivious transfer for batch messages, which is best possible.
In particular, we obtain the first laconic function evaluation scheme that is adaptively secure from the standard LWE assumption, improving upon Quach, Wee, and Wichs (FOCS 2018). As a key technical ingredient, we introduce a new notion of adaptive lattice encodings, which may be of independent interest.
Privacy-Preserving Multi-Signatures: Generic Techniques and Constructions Without Pairings
Multi-signatures allow a set of parties to produce a single signature for a common message by combining their individual signatures. The result can be verified using the aggregated public key that represents the group of signers. Very recent work by Lehmann and Özbay (PKC '24) studied the use of multi-signatures for ad-hoc privacy-preserving group signing, formalizing the notion of multi-signatures with probabilistic yet verifiable key aggregation. Moreover, they proposed new BLS-type multi-signatures, allowing users holding a long-term key pair to engage with different groups without the aggregated key leaking anything about the corresponding group. This enables key reuse across different groups in a privacy-preserving way. Unfortunately, their technique cannot be applied to Schnorr-type multi-signatures, preventing state-of-the-art multi-signatures from benefiting from those privacy features.
In this work, we revisit the privacy framework of Lehmann and Özbay. Our first contribution is a generic lift that adds privacy to any multi-signature with deterministic key aggregation, as sketched below. As our second contribution, we study two concrete multi-signatures and give dedicated transforms that take advantage of their underlying structures for improved efficiency. The first is a slight modification of the popular MuSig2 scheme, achieving the strongest privacy property for free compared to the original scheme. The second is a variant of the lattice-based multi-signature scheme DualMS, making our construction the first post-quantum secure multi-signature for ad-hoc privacy-preserving group signing. The light overhead incurred by the modifications in our DualMS variant still allows us to benefit from the competitiveness of the original scheme.
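The following sketch shows the kind of deterministic key aggregation the generic lift starts from (MuSig2-style hash coefficients over a toy group; a hypothetical simplification, not the paper's transform): the aggregated key is publicly recomputable from the ordered key list, which is precisely what links a group key to its members and what the privacy lift randomizes away.

```python
# Deterministic key aggregation (MuSig2-style), over a toy group mod p.
# agg_pk depends only on the key list, so anyone can recompute it --
# and hence link the aggregated key to the group of signers.
import hashlib, random

q = 1019; p = 2 * q + 1; g = 4   # toy prime-order group

def coeff(pks, pk):              # per-signer aggregation coefficient
    data = repr((tuple(pks), pk)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def aggregate(pks):
    agg = 1
    for pk in pks:
        agg = agg * pow(pk, coeff(pks, pk), p) % p
    return agg

sks = [random.randrange(1, q) for _ in range(3)]
pks = [pow(g, sk, p) for sk in sks]
# The matching aggregated secret key is sum of sk_i * coeff_i (mod q),
# so signatures under agg_sk verify against the aggregated public key.
agg_sk = sum(sk * coeff(pks, pk) for sk, pk in zip(sks, pks)) % q
assert aggregate(pks) == pow(g, agg_sk, p)
```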
OPPID: Single Sign-On with Oblivious Pairwise Pseudonyms
Single Sign-On (SSO) allows users to conveniently authenticate to many Relying Parties (RPs) through a central Identity Provider (IdP). SSO supports unlinkable authentication towards the RPs via pairwise pseudonyms, where the IdP assigns the user an RP-specific pseudonym. This feature has been rolled out prominently within Apple's SSO service. While establishing unlinkable identities provides privacy towards RPs, it actually emphasizes the main privacy problem of SSO: with every authentication request, the IdP learns the RP that the user wants to access. Solutions to overcome this limitation exist, but either assume users to behave honestly or require them to manage long-term cryptographic keys.
In this work, we propose the first SSO system that can provide such pseudonymous authentication in an unobservable yet strongly secure and convenient manner. That is, the IdP blindly derives the user's pairwise pseudonym for the targeted RP without learning the RP's identity and without requiring key material handled by the user. We formally define the desired security and privacy properties for such unlinkable, unobservable, and strongly secure SSO. In particular, our model includes the often neglected RP authentication: the IdP typically wants to limit its services to registered RPs only and thus must be able to (blindly) verify that it issues the token and pseudonym to such a registered RP. We propose a simple construction that combines signatures with efficient proofs-of-knowledge with a blind, yet verifiable, evaluation of the Hashed-Diffie-Hellman PRF. We prove the security of our construction and demonstrate its efficiency through a prototypical implementation, which requires a running time of 2-12ms per involved party.
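A sketch of the blind Hashed-DH PRF evaluation at the core of the construction (toy group and simplified hash-to-group; our illustrative code, omitting the verifiability proof, token issuance, and RP authentication): the user blinds the RP identity with a random exponent, the IdP applies its PRF key to the blinded element, and the user unblinds, so the IdP never learns which RP is involved.

```python
# Blind evaluation of the DH-style PRF  F(k, rp) = H1(rp)^k  (toy parameters).
# The IdP sees only a blinded group element, never the RP identity.
import hashlib, secrets

q = 1019; p = 2 * q + 1; g = 4

def h1(rp_id):                   # toy hash-to-group: g^{H(rp_id)} mod p
    e = int.from_bytes(hashlib.sha256(rp_id.encode()).digest(), "big") % q
    return pow(g, e, p)

# User: blind the RP identity with a random exponent r.
r = secrets.randbelow(q - 1) + 1
blinded = pow(h1("https://rp.example"), r, p)

# IdP: apply its long-term PRF key k to the blinded element.
k = secrets.randbelow(q - 1) + 1
evaluated = pow(blinded, k, p)

# User: unblind to obtain F(k, rp) = H1(rp)^k, the pseudonym basis.
unblinded = pow(evaluated, pow(r, -1, q), p)
assert unblinded == pow(h1("https://rp.example"), k, p)
```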
POKÉ: A Compact and Efficient PKE from Higher-dimensional Isogenies
We introduce a new PKE protocol, POKÉ, based on isogenies of unknown degree. The protocol relies on two new techniques: the first constructs an SIDH square while also working with higher-dimensional representations, whereas the second allows us to obtain a shared secret even when all curves in the commutative diagram are known.
The resulting protocol is compact and extremely efficient. We provide a proof-of-concept implementation in SageMath of POKÉ that shows encryption and decryption taking about a hundred milliseconds at security level I: POKÉ is thus the most efficient encryption protocol from isogenies, and it outperforms existing protocols by more than an order of magnitude.
Multiple Group Action Dlogs with(out) Precomputation
Let $\star : G \times X \to X$ be the action of a group $G$ of size $N = |G|$ on a set $X$. Let $y = g \star x$ be a group action dlog instance, where our goal is to compute the unknown group element $g$ from the known set elements $x, y \in X$.
The Galbraith-Hess-Smart (GHS) collision finding algorithm solves the group action dlog in $\mathcal{O}(\sqrt{N})$ steps with polynomial memory.
We show that group action dlogs are suitable for precomputation attacks. More precisely, for any $s, t$ with $s \cdot t^2 = N$, our precomputation algorithm computes within $N/t$ steps a hint of space complexity $s$, which allows to solve any group action dlog in an online phase within $t$ steps. A typical instantiation is $s = t = N^{1/3}$, which gives precomputation time $N^{2/3}$ and space $N^{1/3}$, and online time only $N^{1/3}$.
Moreover, we show that solving multiple group action dlog instances allows for speedups. Namely, our collision finding algorithm solves $m$ group action dlogs in $\mathcal{O}(\sqrt{mN})$ steps, instead of the straightforward $m \cdot \mathcal{O}(\sqrt{N})$ steps required for running GHS $m$ times.
Interestingly, our multi-instance algorithm (with precomputation) can be seen as a special case of our precomputation algorithm.
Our multiple instance approach can be freely combined with our precomputations, allowing for a variety of tradeoffs.
Technically, our precomputation and multiple instance group action dlog attacks are adaptations of the techniques from the standard dlog setting in abelian groups. While such an adaptation seems natural, it is a priori unclear which techniques transfer from the dlog to the more general group action dlog setting, for which $X$ does not offer a group structure.
Our algorithms have direct implications for all group action based cryptosystems, such as CSIDH and its variants. We provide experimental evidence that our techniques work well in the CSIDH setting.
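To make the collision-finding idea concrete, the following toy Python sketch solves a group action dlog by a birthday-style meet-in-the-middle, using only black-box evaluations of the action; unlike GHS it stores all visited points, so its memory is on the order of sqrt(N) rather than polynomial. The toy action (a translation action on a subgroup of Z_p^*) is our own illustrative choice, not a CSIDH-style action.

import secrets

# toy free group action: Z_q acts on the order-q subgroup of Z_p^* (p = 2q + 1)
p, q = 10007, 5003
g = 4                                         # generator of the subgroup

def act(a: int, x: int) -> int:
    return x * pow(g, a, p) % p

x = g                                         # public base point
s = secrets.randbelow(q)                      # secret group element
y = act(s, x)                                 # instance: recover s from (x, y)

seen_from_x, seen_from_y = {}, {}
while True:
    a = secrets.randbelow(q)
    u = act(a, x)
    if u in seen_from_y:                      # a * x == b * y  =>  s = a - b
        s_rec = (a - seen_from_y[u]) % q
        break
    seen_from_x[u] = a
    b = secrets.randbelow(q)
    v = act(b, y)
    if v in seen_from_x:
        s_rec = (seen_from_x[v] - b) % q
        break
    seen_from_y[v] = b

assert act(s_rec, x) == y                     # succeeds after ~sqrt(q) samples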
Black-box Collision Attacks on Widely Deployed Perceptual Hash Functions
Perceptual hash functions identify multimedia content by mapping similar inputs to similar outputs. They are widely used for detecting copyright violations and illegal content but lack transparency, as their design details are typically kept secret.
Governments are considering extending the application of these functions to Client-Side Scanning (CSS) for end-to-end encrypted services: multimedia content would be verified against known illegal content before applying encryption.
In 2021, Apple presented a detailed proposal for CSS based on the NeuralHash perceptual hash function. After strong criticism pointing out privacy and security concerns, Apple has withdrawn the proposal, but the NeuralHash software is still present on Apple devices.
Brute force collisions for NeuralHash (with a 96-bit result) require about $2^{48}$ evaluations. Shortly after the publication of NeuralHash, it was demonstrated that it is easy to craft two colliding inputs for NeuralHash that are perceptually dissimilar. In the context of CSS, this means that it is easy to falsely incriminate someone by sending an innocent picture with the same hash value as illegal content. This work shows a more serious weakness: when inputs are restricted to a set of human faces, random collisions are highly likely to occur in input sets of size . Unlike the targeted attack, our attacks are black-box attacks: they do not require knowledge of the design of the perceptual hash functions. In addition, we show that the false negative rate is high as well. We demonstrate the generality of our approach by applying a similar attack to PhotoDNA, a widely deployed perceptual hash function proposed by Microsoft with a hash result of 1152 bits. Here we show that specific small input sets result in near-collisions, with similar impact. These results imply that the current designs of perceptual hash functions are completely unsuitable for large-scale client-side scanning, as they would result in an unacceptably high false positive rate. This work underscores the need to reassess the security and feasibility of perceptual hash functions, particularly for large-scale applications where privacy risks and false positives have serious consequences.
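The $2^{48}$ baseline follows from the birthday bound, which a few lines of Python make concrete (this illustrates the generic bound for an ideal hash, not the paper's attacks):

import math

def collision_probability(m: float, bits: int) -> float:
    # birthday bound: Pr[collision among m random values] ~ 1 - exp(-m^2 / 2^(bits+1))
    return 1.0 - math.exp(-m * m / 2.0 ** (bits + 1))

print(collision_probability(2 ** 48, 96))        # ~0.39: collisions appear near 2^48 inputs
print(collision_probability(1.2 * 2 ** 48, 96))  # ~0.51
print(collision_probability(10 ** 6, 96))        # ~6e-18: a million inputs is collision-free w.h.p.

The attacks in the paper show that, on restricted input distributions such as human faces, real perceptual hashes collide at set sizes far below this ideal-hash baseline.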
Improved Cryptanalysis of SNOVA
SNOVA is a multivariate signature scheme submitted to the NIST project for additional signature schemes by Cho, Ding, Kuan, Li, Tseng, Tseng, and Wang. With small key and signature sizes and good performance, SNOVA is one of the more efficient schemes in the competition, which makes it an important target for cryptanalysis.
In this paper, we observe that SNOVA implicitly uses a structured version of the ``whipping'' technique developed for the MAYO signature scheme. We show that the extra structure makes the construction vulnerable to new forgery attacks. Concretely, we formulate new attacks that reduce the security margin of the proposed SNOVA parameter sets by a factor between and . Furthermore, we show that large fractions of public keys are vulnerable to more efficient versions of our attack. For example, for SNOVA-37-17-2, a parameter set targeting NIST's first security level, we show that roughly one out of every public keys is vulnerable to a universal forgery attack with bit complexity , and roughly one out of every public keys is even breakable in practice within a few minutes.
Committing Authenticated Encryption: Generic Transforms with Hash Functions
Recent applications and attacks have highlighted the need for authenticated encryption (AE) schemes to achieve the so-called committing security beyond privacy and authenticity. As a result, several generic solutions have been proposed to transform a non-committing AE scheme to a committing one, for both basic unique-nonce security and advanced misuse-resistant (MR) security. We observe that all existing practical generic transforms are subject to at least one of the following limitations: (i) not committing to the entire encryption context, (ii) involving non-standard primitives, (iii) not being a black-box transform, (iv) providing limited committing security. Furthermore, so far, there has been no generic transform that can directly elevate a basic AE scheme to a committing AE scheme that offers MR security. Our work fills these gaps by developing black-box generic transforms that crucially rely on hash functions, which are well standardized and widely deployed.
First, we construct three basic transforms that combine AE with a single hash function, which we call and . They all guarantee strong security, and can be applied to both AE and basic privacy-only encryption schemes. Next, for MR security, we propose two advanced hash-based transforms that we call and . is an MRAE-preserving transform that adds committing security to an MR-secure AE scheme. is the first generic transform that can directly elevate basic AE to one with both committing and MR security; moreover, also works with arbitrary privacy-only encryption schemes. Both of them feature a simple design and ensure strong security.
For performance evaluation, we compare our transforms to similar existing ones, both in theory and through practical implementations. The results show that our achieves the highest practical efficiency among basic transforms, while excels in MRAE-preserving transforms. Our MRAE-lifting transform demonstrates comparable performance to MRAE-preserving ones and surpasses them for messages larger than approximately bytes; for longer messages, it even outperforms the benchmark, non-committing standardized .
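To illustrate the general flavor of hash-based committing transforms (the paper's scheme names are elided in this listing), here is a minimal Python sketch that appends a hash binding the full encryption context (key, nonce, associated data, tag) to an AES-GCM ciphertext. This is our simplified illustration in the spirit of such transforms, not one of the paper's constructions.

import hashlib, hmac, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def committing_encrypt(key: bytes, nonce: bytes, aad: bytes, msg: bytes) -> bytes:
    ct = AESGCM(key).encrypt(nonce, msg, aad)      # ciphertext || 16-byte GCM tag
    # commitment tag binding the full encryption context via a collision-resistant hash
    commit = hashlib.sha256(key + nonce + aad + ct[-16:]).digest()
    return ct + commit

def committing_decrypt(key: bytes, nonce: bytes, aad: bytes, blob: bytes) -> bytes:
    ct, commit = blob[:-32], blob[-32:]
    expected = hashlib.sha256(key + nonce + aad + ct[-16:]).digest()
    if not hmac.compare_digest(commit, expected):
        raise ValueError('commitment check failed')
    return AESGCM(key).decrypt(nonce, ct, aad)

key, nonce = AESGCM.generate_key(bit_length=128), os.urandom(12)
blob = committing_encrypt(key, nonce, b'header', b'secret message')
assert committing_decrypt(key, nonce, b'header', blob) == b'secret message'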
Efficient Instances of Docked Double Decker With AES, and Application to Authenticated Encryption
A tweakable wide blockcipher is a construction which behaves in the same way as a tweakable blockcipher, with the difference that the actual block size is flexible. Due to this feature, a tweakable wide blockcipher can be directly used as a strong encryption scheme that provides full diffusion when encrypting plaintexts to ciphertexts and vice versa. Furthermore, it can be the basis of authenticated encryption schemes fulfilling the strongest security notions. In this paper, we present three instantiations of the docked double decker tweakable wide blockcipher: , , and . These instances exclusively use similar building blocks as AES-GCM (AES and finite field multiplication), are designed for maximal parallelism, and hence, can make efficient use of existing hardware accelerators. is a birthday bound secure scheme, and is an immediate generalization to allow for variable length tweaks. achieves security beyond the birthday bound provided that the same tweak is not used too often. Moreover, builds upon a novel conditionally beyond birthday bound secure pseudorandom function, a tweakable variant of the XOR of permutations, accommodating the need to include a tweak in the AES evaluations without sacrificing flexibility in docked double decker. We furthermore introduce an authenticated encryption mode specifically tailored to be instantiated with and , where special attention is given to how the nonce and associated data can be processed. We prove that this mode is secure in the nonce-respecting setting, in the nonce-misuse setting, as well as in the setting where random nonces are used.
Partial and Fully Homomorphic Matching of IP Addresses Against Blacklists for Threat Analysis
In many areas of cybersecurity, we require access to Personally Identifiable Information (PII), such as names, postal addresses and email addresses. Unfortunately, this can lead to data breaches, especially in relation to data compliance regulations such as GDPR. An IP address is a typical identifier which is used to map a network address to a person. Thus, in applications which are privacy-aware, we may aim to hide the IP address while still determining whether it appears on a blacklist. One solution is to use homomorphic encryption: the IP address is encrypted and matched against an encrypted version of the blacklisted network list, without ever being decrypted. In this paper, we use the OpenFHE library \cite{OpenFHE} to encrypt network addresses under the BFV homomorphic encryption scheme. In order to assess the performance impact of BFV, we implement a matching method using the OpenFHE library and compare it against the partially homomorphic methods of Paillier, Damgård-Jurik, Okamoto-Uchiyama, Naccache-Stern and Benaloh. The main findings are that the BFV method compares favourably against the partially homomorphic methods in most cases.
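For intuition, the partially homomorphic variant of such matching can be sketched in a few lines of Python with a toy Paillier instance: the server homomorphically subtracts each blacklist entry from the encrypted IP and raises the result to a random blinding exponent, so the decrypted value is 0 exactly on a match and random otherwise. Parameters are toy-sized and the code is our illustration, not the paper's OpenFHE implementation.

import math, secrets

# toy Paillier instance (NOT secure parameters)
p, q = 104729, 99991                      # small primes for illustration only
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)

def enc(m: int) -> int:
    r = secrets.randbelow(n - 1) + 1      # gcd(r, n) = 1 with overwhelming probability
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

def ip_to_int(s: str) -> int:
    a, b, c, d = (int(x) for x in s.split('.'))
    return (a << 24) | (b << 16) | (c << 8) | d

c_ip = enc(ip_to_int('192.168.1.7'))      # client encrypts its IP once
for entry in ['10.0.0.1', '192.168.1.7']:
    # server: Enc(ip - entry), then blind with random exponent so only equality survives
    c_diff = (c_ip * enc((n - ip_to_int(entry)) % n)) % n2
    c_test = pow(c_diff, secrets.randbelow(n - 1) + 1, n2)
    print(entry, 'match' if dec(c_test) == 0 else 'no match')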
Private Multi-Party Neural Network Training over $\mathbb{Z}_{2^k}$ via Galois Rings
Secret-sharing-based multi-party computation provides effective solutions for privacy-preserving machine learning. In this paper, we present novel protocols for privacy-preserving neural network training using the Shamir secret sharing scheme over Galois rings. The specific Galois ring we use is $GR(2^k, d)$, which contains $\mathbb{Z}_{2^k}$ as a subring. Its algebraic structure enables us to benefit from the Shamir scheme while performing reductions only modulo $2^k$ instead of modulo a prime number, making our protocols more compatible with modern computer architectures. We achieve parallel processing of training data by embedding different training samples into the different coefficients of the polynomial representing a single Galois ring element, and we show that this embedding can be performed with no additional communication overhead compared to processing only one sample at a time. To evaluate our methods, we conduct private training of neural networks on the MNIST dataset between different numbers of participants. The experimental results indicate the advantages of our protocols compared to existing $\mathbb{F}_p$-based implementations in this domain.
Honest Majority MPC with Communication in Minicrypt
In this work, we consider the communication complexity of MPC protocols in the honest majority setting achieving malicious security, in both the information-theoretic and computational settings. On the one hand, we study the possibility of efficiently basing honest majority MPC protocols on the oblivious linear evaluation (OLE)-hybrid model with information-theoretic security. More precisely, we instantiate the preprocessing phase of the recent work Sharing Transformation (Goyal, Polychroniadou, and Song, CRYPTO 2022) assuming random OLE correlations. Notably, we are able to prepare packed Beaver triples with malicious security, achieving amortized communication of field elements plus a number of OLE correlations per packed Beaver triple, which is the best known result. To further efficiently prepare random OLE correlations, we resort to IKNP-style OT extension protocols (Ishai et al., CRYPTO 2003) in the random oracle model.
On the other hand, we derive a communication lower bound for preparing OLE correlations in the information-theoretic setting based on negative results due to Damgård, Larsen, and Nielsen (CRYPTO 2019).
Combining our positive result with the work of Goyal, Polychroniadou, and Song (CRYPTO 2022), we derive an MPC protocol with amortized communication of elements per gate in the random oracle model achieving malicious security, where denotes the length of a field element and is the security parameter.
SNARKs for Virtual Machines are Non-Malleable
Cryptographic proof systems have a plethora of applications: from building other cryptographic tools (e.g., malicious security for MPC protocols) to concrete settings such as private transactions or rollups. In several settings it is important for proof systems to be non-malleable: an adversary should not be able to modify a proof they have observed into another one for a statement for which they do not know the witness.
Proof systems that have been deployed in practice should arguably satisfy this notion: it is crucial in settings such as transaction systems and in order to securely compose proofs with other cryptographic protocols. As a consequence, results on non-malleability should keep up with designs of proofs being deployed.
Recently, Arun et al. proposed Jolt (Eurocrypt 2024), arguably the first efficient proof system whose architecture is based on the lookup singularity approach (Barry Whitehat, 2022). This approach consists in representing a general computation as a series of table lookups. The final result is a SNARK for a Virtual Machine execution (or SNARK VM). Both SNARK VMs and lookup-singularity SNARKs are architectures with enormous potential, and they will probably see more and more adoption in the coming years (as they already do).
As of today, however, there is no literature regarding the non-malleability of SNARK VMs. The goal of this work is to fill this gap by providing both concrete non-malleability results and a set of technical tools for a more general study of SNARK VMs security (as well as "modular" SNARKs in general). As a concrete result, we study the non-malleability of (an idealized version of) Jolt and its fundamental building block, the lookup argument Lasso. While connecting our new result on the non-malleability of Lasso to that of Jolt, we develop a set of tools that enable the composition of non-malleable SNARKs. We believe this toolbox to be valuable in its own right.
(Multi-Input) FE for Randomized Functionalities, Revisited
Randomized functional encryption (rFE) generalizes functional encryption (FE) by incorporating randomized functionalities. Randomized multi-input functional encryption (rMIFE) extends rFE to accommodate multi-input randomized functionalities.
In this paper, we reassess the framework of rFE/rMIFE, enhancing our understanding of this primitive and laying the groundwork for more secure and flexible constructions in this field. Specifically, we make three key contributions:
- New definition: We identify a critical gap in the existing indistinguishability-based (IND) security definition for rFE/rMIFE. Notably, the current definition fails to adequately address security against malicious encryptors, a crucial requirement for rFE/rMIFE since their introduction. We propose a novel, robust IND security definition that not only addresses threats from malicious decryptors but also effectively captures security against malicious encryptors.
- Counterexample: To illustrate the importance of this definitional gap, we provide a counterexample of an insecure rFE scheme that meets IND security under the previous definition but explicitly fails in a natural setting (and where this failure would be precluded by our enhanced definition). Our counterexample scheme is non-trivial and meticulously designed using standard cryptographic tools, namely FE for deterministic functions, pseudorandom function (PRF), public key encryption (PKE), and simulation-sound non-interactive zero-knowledge (NIZK) proof systems.
- Adaptive unbounded-message secure construction: The only viable prior construction of rMIFE by Goldwasser et al. [EUROCRYPT 2014] (which uses indistinguishability obfuscation (iO) and other standard assumptions) has significant limitations: it permits only a pre-defined number of messages per encryption slot and operates under selective-security constraints, requiring adversaries to declare challenge ciphertext queries and "corrupted" encryption keys in advance. We address these shortcomings by employing sub-exponentially secure iO. Technically, we build on and adapt methods developed by Goyal et al. [ASIACRYPT 2016] for deterministic MIFE.
Weakly Super-Invertible Matrices and Constant Communication Dishonest Majority MPC
In recent years, there has been tremendous progress in improving the concrete communication complexity of dishonest majority MPC. In the sub-optimal corruption threshold setting, where $t < (1 - \varepsilon) \cdot n$ for some constant $\varepsilon > 0$, Sharing Transformation (Goyal et al., CRYPTO'22) and SuperPack (Escudero et al., EUROCRYPT'23) presented protocols with information-theoretic online phases requiring $O(1)$ field elements of total communication per multiplication gate. However, Sharing Transformation assumes that their offline phase is instantiated by a trusted party, while SuperPack instantiates their offline phase with large communication per multiplication gate assuming oblivious linear evaluation (OLE) correlations. The main bottleneck in instantiating the offline phases of both protocols is generating random packed Beaver triples of the form $([\![\mathbf{a}]\!], [\![\mathbf{b}]\!], [\![\mathbf{c}]\!])$, for random $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{c} = \mathbf{a} * \mathbf{b}$, where $*$ is the component-wise product.
To address this bottleneck, our main technical contribution is introducing and constructing weakly super-invertible matrices, a relaxation of super-invertible matrices in which sub-matrices have high (but not necessarily full) rank. This relaxation allows for matrices with only $\tilde{O}(n)$ non-zero entries, enabling a first step towards generating packed Beaver triples with $O(1)$ total communication per underlying triple, assuming OLE correlations. As the second (and final) step, we use the efficient protocol of Choudhury and Patra (Trans. Inform. Theory '17).
We also implement our packed beaver triple protocol and provide experimental results. Our new protocol obtains up to 38% smaller communication and 9% reduction in runtime compared to SuperPack's triple protocol. Additionally, by instantiating SuperPack's offline phase with our new protocol, we obtain up to 16% communication reductions.
Finally, we use our packed Beaver triple protocol to instantiate the offline phase of Sharing Transformation, yielding a dishonest majority MPC protocol with $O(1)$ total communication per multiplication gate across both the offline and online phases.
Unconditional foundations for supersingular isogeny-based cryptography
In this paper, we prove that the supersingular isogeny problem (Isogeny), endomorphism ring problem (EndRing) and maximal order problem (MaxOrder) are equivalent under probabilistic polynomial time reductions, unconditionally.
Isogeny-based cryptography is founded on the presumed hardness of these problems, and their interconnection is at the heart of the design and analysis of cryptosystems like the SQIsign digital signature scheme. Previously known reductions relied on unproven assumptions such as the generalized Riemann hypothesis. In this work, we present unconditional reductions, and extend this network of equivalences to the problem of computing the lattice of all isogenies between two supersingular elliptic curves (HomModule).
For cryptographic applications, one requires computational problems to be hard on average for random instances. It is well-known that if Isogeny is hard (in the worst case), then it is hard for random instances. We extend this result by proving that if any of the above-mentioned classical problems is hard in the worst case, then all of them are hard on average. In particular, if there exist hard instances of Isogeny, then all of Isogeny, EndRing, MaxOrder and HomModule are hard on average.
Real-world Universal zkSNARKs are non-malleable
Simulation extractability is a strong security notion of zkSNARKs that guarantees that an attacker who produces a valid proof must know the corresponding witness, even if the attacker had prior access to proofs generated by other users. Notably, simulation extractability implies that proofs are non-malleable and is of fundamental importance for applications of zkSNARKs in distributed systems. In this work, we study sufficient and necessary conditions for constructing simulation-extractable universal zkSNARKs via the popular design approach based on compiling polynomial interactive oracle proofs (PIOP). Our main result is the first security proof that popular universal zkSNARKs, such as PLONK and Marlin, as deployed in the real world, are simulation-extractable. Our result fills a gap left from previous work (Faonio et al. TCC’23, and Kohlweiss et al. TCC’23) which could only prove the simulation extractability of the “textbook” versions of these schemes and does not capture their optimized variants, with all the popular optimization tricks in place, that are eventually implemented and deployed in software libraries.
Improved Stock Market Structure Using Cryptography
The stock market serves two primary functions: providing liquidity and price discovery. While historically the market micro-structure was mostly ignored or assumed to function ideally for the purpose of asset pricing, O'Hara (Journal of Finance, 2003) established that both liquidity and price discovery affect asset pricing, and consequently asset returns. In this work, we extend the analysis of Easley and O'Hara (Journal of Finance, 2004) to study common stock market mechanisms, including open-bid and sealed-bid auctions, as well as the fact that the auctioneer may not be trustworthy. Our analysis shows that open-bid auctions disincentivize stock research, leading to higher volatility. On the other hand, we also show that in sealed-bid auctions, the price and volume of an auction reveal only partial private research information, thus retaining the incentive for traders to do research. We also show how secure computation can be used to build auctions with a distributed auctioneer, where the clearing and partial-fulfillment information of the auction does not reveal the unfilled orders of others, thus maintaining private information.
Bulletproofs for R1CS: Bridging the Completeness-Soundness Gap and a ZK Extension
Bulletproofs, introduced by Bünz, Bootle, Boneh, Poelstra, Wuille and Maxwell (IEEE S&P, 2018), is a highly efficient non-interactive argument system that does not require a trusted setup. Recently, Bünz (PhD Thesis, 2023) extended Bulletproofs to support arguments for rank-1 constraint satisfaction (R1CS) systems, a widely-used representation for arithmetic satisfiability problems. Although the argument system constructed by Bünz preserves the attractive properties of Bulletproofs, it presents a gap between its completeness and soundness guarantees: The system is complete for a restricted set of instances, but sound only for a significantly broader set. Although argument systems for such gap relations nevertheless provide clear and concrete guarantees, the gaps they introduce may lead to various inconsistencies or undesirable gaps within proofs of security, especially when used as building blocks within larger systems.
In this work we show that the argument system presented by Bünz can be extended to bridge the gap between its completeness and soundness, and to additionally provide honest-verifier zero-knowledge. For the extended argument system, we introduce a refined R1CS relation that captures the precise set of instances for which both completeness and soundness hold without resorting to a gap formulation. The extended argument system preserves the performance guarantees of the argument system presented by Bünz, and yields a non-interactive argument system using the Fiat-Shamir transform.
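For reference, the standard R1CS relation that such argument systems target (stated here in its common textbook form, independent of the refined variant introduced above) is:
$$\exists\, w \;:\; (Az) \circ (Bz) = Cz, \qquad z = (1, x, w) \in \mathbb{F}^{m},$$
where $A, B, C \in \mathbb{F}^{n \times m}$ are public matrices, $x$ is the public input, $w$ is the witness, and $\circ$ denotes the entry-wise (Hadamard) product.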
PKE and ABE with Collusion-Resistant Secure Key Leasing
Secure key leasing (SKL) is an advanced encryption functionality that allows a secret key holder to generate a quantum decryption key and securely lease it to a user. Once the user returns the quantum decryption key (or provides a classical certificate confirming its deletion), they lose their decryption capability. Previous works on public key encryption with SKL (PKE-SKL) have only considered the single-key security model, where the adversary receives at most one quantum decryption key. However, this model does not accurately reflect real-world applications of PKE-SKL. To address this limitation, we introduce collusion-resistant security for PKE-SKL (denoted as PKE-CR-SKL). In this model, the adversary can adaptively obtain multiple quantum decryption keys and access a verification oracle which validates the correctness of queried quantum decryption keys. Importantly, the size of the public key and ciphertexts must remain independent of the total number of generated quantum decryption keys. We present the following constructions:
- A PKE-CR-SKL scheme based on the learning with errors (LWE) assumption.
- An attribute-based encryption scheme with collusion-resistant SKL (ABE-CR-SKL), also based on the LWE assumption.
- An ABE-CR-SKL scheme with classical certificates, relying on multi-input ABE with polynomial arity.
Stateless Deterministic Multi-Party EdDSA Signatures with Low Communication
EdDSA is a signing algorithm, standardized by both the IRTF and NIST, that is widely used in blockchains, e.g., Hyperledger, Cardano, Zcash, etc. It is a variant of the well-known Schnorr signature scheme that leverages Edwards curves. It features stateless and deterministic nonce generation, meaning it relies neither on a reliable source of randomness nor on state continuity. Recently, NIST issued a call for multi-party threshold EdDSA signatures, with one approach verifying nonce generation through zero-knowledge (ZK) proofs. In this paper, we propose a new stateless and deterministic multi-party EdDSA protocol in the full-threshold setting, capable of tolerating all-but-one malicious corruption. Compared to the state-of-the-art multi-party EdDSA protocol by Garillot et al. (Crypto'21), our protocol reduces communication cost by a factor of 56x while maintaining the same three-round structure, albeit with a roughly 2.25x increase in computational cost. We utilize information-theoretic message authentication codes (IT-MACs) in a multi-verifier setting to authenticate values and transform them from the Boolean domain to the arithmetic domain by refining multi-verifier extended doubly-authenticated bits (mv-edabits). Additionally, we employ pseudorandom correlation functions (PCF) to generate IT-MACs in a stateless and deterministic manner. Combining these elements, we design a multi-verifier zero-knowledge (MVZK) protocol for stateless and deterministic nonce generation. Our protocol can be used to build secure blockchain wallets and custody solutions, enhancing key protection.
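To see what stateless, deterministic nonce generation means in the single-party setting that the protocol above must emulate under MPC, here is a toy Schnorr-style signer in Python with an EdDSA-style derived nonce; RFC 8032 instantiates this pattern with SHA-512 over Edwards curves, while the group and hash below are merely illustrative.

import hashlib, secrets

# toy Schnorr group: g generates the order-q subgroup of Z_p^* (p = 2q + 1)
p, q = 10007, 5003
g = 4

def H(*parts) -> int:
    h = hashlib.sha256('|'.join(str(x) for x in parts).encode())
    return int.from_bytes(h.digest(), 'big')

sk = secrets.randbelow(q - 1) + 1            # long-term secret key
pk = pow(g, sk, p)
prefix = H('nonce-prefix', sk)               # EdDSA-style secret nonce prefix

def sign(msg: str):
    r = H(prefix, msg) % q                   # deterministic nonce: no RNG, no state
    R = pow(g, r, p)
    c = H(R, pk, msg) % q
    return R, (r + c * sk) % q

def verify(msg: str, R: int, s: int) -> bool:
    c = H(R, pk, msg) % q
    return pow(g, s, p) == (R * pow(pk, c, p)) % p

assert verify('hello', *sign('hello'))
assert sign('hello') == sign('hello')        # same message, same signature: no nonce reuse hazard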
On Quantum Money and Evasive Obfuscation
We show a black box barrier against constructing public key quantum money from obfuscation for evasive functions. As current post-quantum obfuscators based on standard assumptions are all evasive, this shows a fundamental barrier to achieving public key quantum money from standard tools. Our impossibility applies to black box schemes where (1) obfuscation queries made by the mint are classical, and (2) the verifier only makes (possibly quantum) evaluation queries, but no obfuscation queries. This class seems to capture any natural method of using obfuscation to build quantum money.
A note on adding zero-knowledge to STARKs
We discuss zero-knowledge in the context of univariate argument systems which use the FRI proximity test for Reed-Solomon codes as the polynomial commitment scheme.
We confine ourselves to small-field STARKs, i.e. arguments with an arithmetization over a small finite field (the basefield), and we dwell on two techniques widely used in practice: randomization by polynomials over the basefield, and decomposition of the overall quotient into polynomials of smaller degree. In particular the latter is a source of mistakes, both in the literature as well as in software implementations.
The current, updated version further includes a separate discussion on perfect zero-knowledge in permutation arguments.
A New Public Key Cryptosystem Based on the Cubic Pell Curve
Since its invention in 1978 by Rivest, Shamir and Adleman, the public key cryptosystem RSA has become a widely popular and useful scheme in cryptography. Its security is related to the difficulty of factoring large integers which are the product of two large prime numbers. For various reasons, several variants of RSA have been proposed, some with different arithmetics such as elliptic and singular cubic curves. In 2018, Murru and Saettone proposed another variant of RSA based on the cubic Pell curve with a modulus of the form . In this paper, we present a new public key cryptosystem based on the arithmetic of the cubic Pell curve with a modulus of the form . Its security is based on the hardness of factoring composite integers and on Rabin's trapdoor one-way function. In the new scheme, the arithmetic operations are performed on a cubic Pell curve which is known only to the sender and the recipient of a plaintext.
Bounded CCA2 Secure Proxy Re-encryption Based on Kyber
Proxy re-encryption (PRE) allows a semi-honest party (called a proxy) to convert ciphertexts under a public key into ciphertexts under another public key. Due to this functionality, there are various applications such as encrypted email forwarding, key escrow, and secure distributed file systems. On the other hand, post-quantum cryptography (PQC) is one of the most important research areas. However, although many PRE schemes have been proposed so far, there is no post-quantum PRE scheme with security against adaptive chosen ciphertext attacks (denoted by CCA2 security).
In this paper, we propose a bounded CCA2 secure PRE scheme based on CRYSTALS-Kyber (Kyber, for short), which is a selected algorithm in the NIST PQC competition. To this end, we present generic constructions of bounded CCA2 secure PRE. Our generic constructions start from PRE with (a variant of) security against chosen plaintext attacks (denoted by CPA security) and a new property of PRE introduced in this paper. In order to instantiate our generic constructions, we present a Kyber-based PRE scheme with the required property. As a result, we can construct a bounded CCA2 secure PRE scheme from Kyber.
A Generic Approach to Adaptively-Secure Broadcast Encryption in the Plain Model
Broadcast encryption allows a user to encrypt a message to $N$ recipients with a ciphertext whose size scales sublinearly with $N$. The natural security notion for broadcast encryption is adaptive security, which allows an adversary to choose the set of recipients after seeing the public parameters. Achieving adaptive security in broadcast encryption is challenging, and in the plain model, the primary technique is the celebrated dual-systems approach, which can be implemented over groups with bilinear maps. Unfortunately, it has been challenging to replicate the dual-systems approach in other settings (e.g., with lattices or witness encryption). Moreover, even if we focus on pairing-based constructions, the dual-systems framework critically relies on decisional (and source-group) assumptions. We do not have constructions of adaptively-secure broadcast encryption from search (or target-group) assumptions in the plain model.
Gentry and Waters (EUROCRYPT 2009) described a compiler that takes any semi-statically-secure broadcast encryption scheme and transforms it into an adaptively-secure scheme in the random oracle model. While semi-static security is easier to achieve and constructions are known from witness encryption as well as search (and target-group) assumptions on pairing groups, the transformed scheme relies on random oracles. In this work, we show that using publicly-sampleable projective PRGs, we can achieve adaptive security in the plain model. We then show how to build publicly-sampleable projective PRGs from many standard number-theoretic assumptions (e.g., CDH, LWE, RSA).
Our compiler yields the first adaptively-secure broadcast encryption scheme from search assumptions as well as the first such scheme from witness encryption in the plain model. We also obtain the first adaptively-secure pairing-based scheme in the plain model with -size public keys and -size ciphertexts (where suppresses polynomial factors in the security parameter ). Previous adaptively-secure pairing-based schemes in the plain model with -size ciphertexts required -size public keys.
Arctic: Lightweight and Stateless Threshold Schnorr Signatures
Threshold Schnorr signatures are seeing increased adoption in practice, and offer practical defenses against single points of failure. However, one challenge with existing randomized threshold Schnorr signature schemes is that signers must carefully maintain secret state across signing rounds, while also ensuring that state is deleted after a signing session is completed. Failure to do so will result in a fatal key-recovery attack by re-use of nonces.
While deterministic threshold Schnorr signatures that mitigate this issue exist in the literature, all prior schemes incur high complexity and performance overhead in comparison to their randomized equivalents. In this work, we seek the best of both worlds: a deterministic and stateless threshold Schnorr signature scheme that is also simple and efficient.
Towards this goal, we present Arctic, a lightweight two-round threshold Schnorr signature that is deterministic, and therefore does not require participants to maintain state between signing rounds. As a building block, we formalize the notion of a Verifiable Pseudorandom Secret Sharing (VPSS) scheme, and define VPSS1, an efficient VPSS construction. VPSS1 is secure when the total number of participants is at least 2t−1 and the adversary is assumed to corrupt at most t−1; i.e., in the honest majority model.
We prove that Arctic is secure under the discrete logarithm assumption in the random oracle model, similarly assuming at minimum 2t−1 number of signers and a corruption threshold of at most t−1. For moderately sized groups (i.e., when n ≤ 20), Arctic is more than an order of magnitude more efficient than prior deterministic threshold Schnorr signatures in the literature. For small groups where n ≤ 10, Arctic is three orders of magnitude more efficient.
The Malice of ELFs: Practical Anamorphic-Resistant Encryption without Random Oracles
The concept of Anamorphic Encryption (Persiano, Phan and Yung, Eurocrypt '22) aims to enable private communication in settings where the usage of encryption is heavily controlled by a central authority (henceforth called the dictator) who can obtain users' secret keys.
Since then, various works have improved our understanding of AE in several aspects, including its limitations. In this regard, two recent works constructed various Anamorphic-Resistant Encryption (ARE) schemes, i.e., schemes admitting at most $O(\log \lambda)$ bits of covert communication.
However, those results are still unsatisfactory, each coming with at least one of the following issues: (1) use of cryptographic heavy hammers such as indistinguishability obfuscation (iO); (2) abuse of the original definition to define overly powerful dictators; (3) reliance on the Random Oracle Model (ROM). In particular, proofs in the ROM are controversial as they fail to account for anamorphic schemes making non-black-box usage of the hash function used to instantiate the Random Oracle.
In this work, we overcome all of these limitations.
First, we describe an anamorphic-resistant encryption (ARE) scheme approaching practicality by relying only on public-key encryption and Extremely Lossy Functions (ELFs), both known from the (exponential) DDH assumption. Moreover, further assuming Unique NIZKs (known from iO), we provide another construction, which we later use to realize the first ARE; that is, a scheme that achieves the strongest level of anamorphic resistance against each of the possible levels of anamorphic security.
Anamorphic Resistant Encryption: the Good, the Bad and the Ugly
Anamorphic encryption (AE), introduced by Persiano, Phan and Yung at Eurocrypt '22, allows users to establish secure communication in scenarios where they might be forced to hand over their decryption keys to some hostile authority. Over the last few years, several works have improved our understanding of the primitive by proposing novel realizations, new security notions, and by studying its inherent limitations.
This work makes progress, mainly, on this last line of research.
We show concrete realizations of public key encryption schemes that, provably, cannot be turned anamorphic. These were called Anamorphic Resistant Encryption (ARE, for short) in a recent work of Dodis and Goldin.
We also show that, under certain conditions, anamorphic encryption is equivalent to algorithm substitution attacks. This allows us to positively reinterpret our AREs as PKE schemes provably resistant to subversion attacks. To the best of our knowledge, these seem to be the first IND-CPA secure schemes achieving subversion resistance without trust assumptions or non-black-box decomposition techniques.
Our two AREs heavily rely, among other things, on a direct usage of extremely lossy functions: here the lossiness property is used in the constructions, rather than just in the proofs. The first construction is in the public parameters model and also requires iO. The second construction eliminates the need for both public parameters and iO, but is in the random oracle model and relies on the novel concept of robust extremely lossy functions with group structure, a primitive that we define and (show how to) realize in this paper.
Single Trace Side-Channel Vulnerabilities Discovery Using Statistical Leakage Simulator
This paper presents a novel single-trace side-channel attack on FALCON, a lattice-based post-quantum digital signature protocol recently approved for standardization by NIST. We target the discrete Gaussian sampling operation within the FALCON key generation scheme and use a single power measurement trace to succeed. Notably, negating the 'shift right 63-bit' operation (for 64-bit values) leaks critical information about the '-1' vs. '0' assignments to intermediate coefficients. These leaks enable full recovery of the generated secret keys. The proposed attack is simulated on the ELMO simulator running both the reference and optimized software implementations from FALCON's NIST Round 3 package. Statistical analysis with 20k tests reveals a full key-recovery success rate of 100% for FALCON-512. This work highlights the vulnerability of current software solutions to single-trace attacks and underscores the urgent need to develop single-trace resilient software for embedded systems in the pre-silicon phase.
Traceable Verifiable Secret Sharing and Applications
A secret sharing scheme allows a trusted dealer to divide a secret among multiple parties so that a sufficient number of them can recover the secret, while a smaller group cannot. In CRYPTO'21, Goyal, Song, and Srinivasan introduced Traceable Secret Sharing (TSS), which enhances traditional secret sharing by enabling the identification of parties involved in secret reconstruction, deterring malicious behavior like selling shares. Recently, Boneh, Partap, and Rotem (CRYPTO'24) presented two more efficient TSS schemes. However, these existing TSS schemes assume that all distributed shares are valid and shareholders act honestly during the secret reconstruction phase. In this paper, we introduce Traceable Verifiable Secret Sharing (TVSS), a concept designed to ensure both traceability and verifiability in the face of malicious actions by either the dealer or shareholders. We propose a general strategy for transforming a Shamir-based, computationally secure Verifiable Secret Sharing (VSS) scheme into an efficient TVSS scheme. Building on this strategy, we construct two practical TVSS schemes in the honest-majority setting, based on well-known VSS schemes proposed by Feldman (SFCS'87) and Pedersen (CRYPTO'91). Our proposed TVSS schemes retain public shareholder indexes, enhancing flexibility in designing accountable threshold protocols (e.g., Distributed Key Generation protocols) using TVSS. Compared to the original VSS schemes, the individual share size in the new TVSS schemes increases by only a single field element and is just two or three times the size of the main secret.
Motivated by a recent study on Accountable Threshold Cryptosystems (ATCs) by Boneh, Partap, and Rotem (CRYPTO'24), and by leveraging our proposed Feldman-based TVSS scheme, we also introduce an efficient ATC based on ElGamal cryptosystem. This new ATC enables a tracer to uniquely identify the parties involved in the decryption process while introducing minimal overhead to existing actively secure (and/or robust) threshold protocols built on the ElGamal cryptosystem.
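For reference, here is a minimal Python sketch of Feldman's VSS, the first of the two base schemes mentioned above (toy parameters, dealing and share verification only; the paper's traceability layer is not sketched here).

import secrets

# toy group: g generates the order-q subgroup of Z_p^* (p = 2q + 1)
p, q = 10007, 5003
g = 4
t, n = 3, 5                                   # any t of n shares reconstruct

# dealer: random degree-(t-1) polynomial f with f(0) = secret
secret = secrets.randbelow(q)
coeffs = [secret] + [secrets.randbelow(q) for _ in range(t - 1)]

def f(x: int) -> int:
    return sum(a * pow(x, j, q) for j, a in enumerate(coeffs)) % q

shares = {i: f(i) for i in range(1, n + 1)}
commitments = [pow(g, a, p) for a in coeffs]  # dealer broadcasts g^(a_j)

def verify(i: int, share: int) -> bool:
    # check g^f(i) == prod_j C_j^(i^j), which holds iff the share lies on f
    rhs = 1
    for j, C in enumerate(commitments):
        rhs = rhs * pow(C, pow(i, j, q), p) % p
    return pow(g, share, p) == rhs

assert all(verify(i, s) for i, s in shares.items())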
Accelerating the Delfs-Galbraith algorithm with fast subfield root detection
We give a new algorithm for finding an isogeny from a given supersingular elliptic curve $E/\mathbb{F}_{p^2}$ to a subfield elliptic curve $E'/\mathbb{F}_p$, which is the bottleneck step of the Delfs-Galbraith algorithm for the general supersingular isogeny problem. Our core ingredient is a novel method of rapidly determining whether a polynomial $f \in L[X]$ has any roots in a subfield $K \subset L$, while crucially avoiding expensive root-finding algorithms. In the special case when $f$ is the $\ell$-th modular polynomial evaluated at a supersingular $j$-invariant, this provides a means of efficiently determining whether there is an $\ell$-isogeny connecting the corresponding elliptic curve to a subfield curve. Together with the traditional Delfs-Galbraith walk, inspecting many $\ell$-isogenous neighbours in this way allows us to search through a larger proportion of the supersingular set per unit of time. Though the asymptotic complexity of our improved algorithm remains unchanged from that of the original Delfs-Galbraith algorithm, our theoretical analysis and practical implementation both show a significant reduction in the runtime of the subfield search. This sheds new light on the concrete hardness of the general supersingular isogeny problem, the foundational problem underlying isogeny-based cryptography.
Minicrypt PIR for Big Batches
We present PIR protocols for the offline/online two-server setting where a client wants to privately retrieve a batch of $k$ entries from a database of size $n$ held by two servers. The client interacts with one server ahead of time in an offline phase, and that server is assumed not to collude with the online server. We present simple protocols based on one-way functions that substantially improve on the query complexity or runtime over existing works. Concrete instantiations of our general paradigm lead to batch PIR protocols with the following parameters:
- A protocol for batches of , where , and each spend a total of work and exchange bits of communication. This yields an amortized complexity of work and communication per query in the batch.
- A more balanced protocol for batches of size in which spends a total of work, and spend work, and the total communication is of size .
Our protocols have immediate applications such as Private Set Intersection (PSI) in the two-server setting with preprocessing and unbalanced set sizes.
Chosen-Ciphertext Security for Functional Encryption with Multiple Users: Definitions and Generic Concrete Constructions
Functional Encryption (FE) is a powerful cryptographic primitive that allows for fine-grained computation over encrypted data. In the age of modern computing in complex environments, where data comes from multiple independent sources to be later jointly analysed in a fine-grained manner, the notion of multi-user functional encryption is becoming increasingly important. In particular, since their introduction (Goldwasser et al. at Eurocrypt'14; Chotard et al. at Asiacrypt'18), Multi-Client and Multi-Input FE have become the subject of a plethora of works, which study concrete function classes, improve security, and more. Among many properties, one of the most important security properties for Multi-Client/Multi-Input FE is the confidentiality of users' encrypted data. Due to the complexity of these primitives, modeling a strong security notion while at the same time providing efficient constructions is a challenging task.
However, all security notions considered so far for Multi-Client/Multi-Input FE are in the chosen-plaintext setting, whereas it is long settled that the chosen-ciphertext setting is the most relevant for practical security in classical public-key encryption. For FE, the only known works are in the single-user context, namely by Benhamouda et al. (PKC'17), Gay (PKC'20), and Castagnos et al. (TCS'22). This leaves open the questions, both conceptually and constructively, of attaining chosen-ciphertext security in the multi-user setting, notably for Multi-Client and Multi-Input FE.
This work tackles the above questions of chosen-ciphertext security in multi-user context for FE for the first time:
- We propose a new security notion for Multi-Client FE and Multi-Input FE in the chosen-ciphertext setting. Our notions extend the single-user notion that is studied in previous works and are robust against strong adversaries.
- For the class computing inner products, we demonstrate the feasibility of our new notions by providing novel generic constructions for Multi-Client FE and Multi-Input FE. Surprisingly, our construction for Multi-Input FE attains the same efficiency as in the public-key single-client setting of previous works, and can be instantiated from the Decisional Diffie-Hellman or Decision Composite Residuosity assumptions. On the other hand, our construction for Multi-Client FE enjoys an original toolkit of techniques that is developed to bootstrap an MCFE with chosen-plaintext security to chosen-ciphertext security, in its secret-key setting, and can be instantiated from the Symmetric eXternal Diffie-Hellman and Decision Linear assumptions.
Revisiting Leakage-Resilient MACs and Succinctly-Committing AEAD: More Applications of Pseudo-Random Injections
Pseudo-Random Injections (PRIs) have been used in several applications in symmetric-key cryptography, such as in the idealization of Authenticated Encryption with Associated Data (AEAD) schemes, building robust AEAD, and, recently, in converting a committing AEAD scheme into a succinctly committing AEAD scheme. In Crypto 2024, Bellare and Hoang showed that if an AEAD scheme is already committing, it can be transformed into a succinctly committing scheme by encrypting part of the plaintext using a PRI. In this paper, we revisit the applications of PRIs in building Message Authentication Codes (MACs) and AEAD schemes. First, we look at some of the properties and definitions of PRIs, such as collision resistance and unforgeability when used as a MAC with small plaintext space, under different leakage models. Next, we show how they can be combined with collision-resistant hash functions to build a MAC for long plaintexts, offering flexible security depending on how the PRI and equality check are implemented. If both the PRI and equality check are leak-free, the MAC provides almost optimal security, but the security degrades only slightly if the equality check is merely leakage-resilient (rather than leak-free). If the equality check has unbounded leakage, the security drops to a baseline level, rather than being completely lost. Next, we show how to use PRIs to build, from scratch, a succinctly committing online AEAD scheme, dubbed scoAE, that achieves succinct CMT-4 security, privacy, and Ciphertext Integrity with Misuse and Leakage (CIML2) security. Last but not least, we show how to build a succinct nonce Misuse-Resistant (MRAE) AEAD scheme, dubbed scMRAE. The construction combines the SIV paradigm with PRI-based encryption (e.g. the Encode-then-Encipher (EtE) framework).
Randomness Generation for Secure Hardware Masking - Unrolled Trivium to the Rescue
Masking is a prominent strategy to protect cryptographic implementations against side-channel analysis. Its popularity arises from the exponential security gains that can be achieved for (approximately) quadratic resource utilization. Many variants of the countermeasure tailored for different optimization goals have been proposed. The common denominator among all of them is the implicit demand for robust and high entropy randomness. Simply assuming that uniformly distributed random bits are available, without taking the cost of their generation into account, leads to a poor understanding of the efficiency vs. security tradeoff of masked implementations. This is especially relevant in case of hardware masking schemes which are known to consume large amounts of random bits per cycle due to parallelism. Currently, there seems to be no consensus on how to most efficiently derive many pseudo-random bits per clock cycle from an initial seed and with properties suitable for masked hardware implementations.
In this work, we evaluate a number of building blocks for this purpose and find that hardware-oriented stream ciphers like Trivium and its reduced-security variant Bivium B outperform most competitors when implemented in an \emph{unrolled} fashion. Unrolled implementations of these primitives enable the flexible generation of many bits per cycle, which is crucial for satisfying the large randomness demands of state-of-the-art masking schemes.
According to our analysis, only Linear Feedback Shift Registers (LFSRs), when also unrolled, are capable of producing long non-repetitive sequences of random-looking bits at a higher rate per cycle and at the same or lower cost than Trivium and Bivium B. Yet, these instances do not provide black-box security as they generate only linear outputs. We experimentally demonstrate that using multiple output bits from an LFSR in the same masked implementation can violate probing security and even lead to harmful randomness cancellations. Circumventing these problems, and enabling an independent analysis of randomness generation and masking, requires the use of cryptographically stronger primitives like stream ciphers.
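The "unrolled" notion, and the linearity caveat, can be seen in a toy Python model of a Fibonacci LFSR that emits many bits per invocation; the 16-bit register and tap set below are a textbook example chosen for illustration, not one of the paper's evaluated instances.

# toy 16-bit Fibonacci LFSR, feedback polynomial x^16 + x^14 + x^13 + x^11 + 1
TAPS = (16, 14, 13, 11)

def lfsr_unrolled(state: int, nbits: int):
    """Return nbits output bits plus the new register state ('unrolled' stepping)."""
    out = []
    for _ in range(nbits):
        fb = 0
        for t in TAPS:
            fb ^= (state >> (16 - t)) & 1
        out.append(state & 1)                 # output the bit being shifted out
        state = (state >> 1) | (fb << 15)
    return out, state

state = 0xACE1                                 # any nonzero seed
bits, state = lfsr_unrolled(state, 64)         # 64 bits per invocation
print(''.join(map(str, bits)))
# caveat from the paper: every output bit is a linear function of the seed, so
# feeding several of these bits into one masked gadget can cancel out and
# violate probing security; stream ciphers such as Trivium avoid this.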
As a result of our studies, we provide an evidence-based estimate for the cost of securely generating fresh random bits per cycle. Depending on the desired level of black-box security and operating frequency, this cost can be as low as to ASIC gate equivalent (GE) or to FPGA look-up tables (LUTs), where is the number of random bits required. Our results demonstrate that the cost per bit is (sometimes significantly) lower than estimated in previous works, incentivizing parallelism whenever exploitable. This provides further motivation to potentially move low randomness usage from a primary to a secondary design goal in hardware masking research.
Cryptanalysis of Full SCARF
SCARF is a tweakable block cipher dedicated to cache address randomization, proposed at the USENIX Security conference. It has a 10-bit block, 48-bit tweak, and 240-bit key. SCARF is aggressively optimized to meet the harsh latency constraints of cache address randomization, and uses a dedicated model for its security claim.
The full version of SCARF has 8 rounds, and its designers claim security up to queries and computations. In this work we present a distinguisher against 6-round SCARF under the collision model with time and query complexity , and a key-recovery attack against the full 8-round SCARF under the encryption-decryption model with queries and time . As part of the attack, we present a novel method to compute the minimal number of right pairs following a differential characteristic when the input pairs are restricted to a subspace of the domain of the primitive.
Towards Optimally Secure Deterministic Authenticated Encryption Schemes
The public comments received during the review process for NIST SP 800-38A pointed out two important issues that most companies face: (1) the limited security that AES can provide due to its 128-bit block size and (2) the problem of nonce misuse in practice. In this paper, we provide an alternative solution to these problems by introducing two optimally secure deterministic authenticated encryption (DAE) schemes, denoted DENC1 and DENC2 respectively. We show that our proposed constructions improve the state of the art in terms of security and efficiency. Specifically, DENC1 achieves a robust security level of , while DENC2 attains a near-optimal security level of , where is the total number of blocks, is the maximum number of blocks in each query, and is a user-defined parameter closely related to the rate of the construction. Our research centers on the development of two IV-based encryption schemes, referred to as IV1 and IV2, which respectively offer security levels of and . Notably, both of our DAE proposals are nearly rate-1/2 constructions. In terms of efficiency, our proposals compare favorably with state-of-the-art AE modes on contemporary microprocessors.
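As background, the classical SIV composition underlying most DAE designs (Rogaway-Shrimpton) can be sketched in Python with HMAC as the PRF and AES-CTR for encryption; DENC1 and DENC2 differ in how they push security beyond this birthday-bound baseline, so this is context rather than the paper's constructions.

import hashlib, hmac, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def siv_encrypt(k_mac: bytes, k_enc: bytes, aad: bytes, msg: bytes) -> bytes:
    # synthetic IV: a PRF over the whole context (simplified; real SIV uses the
    # S2V construction for an injective encoding of vector inputs)
    iv = hmac.new(k_mac, aad + b'\x00' + msg, hashlib.sha256).digest()[:16]
    enc = Cipher(algorithms.AES(k_enc), modes.CTR(iv)).encryptor()
    return iv + enc.update(msg) + enc.finalize()

def siv_decrypt(k_mac: bytes, k_enc: bytes, aad: bytes, ct: bytes) -> bytes:
    iv, body = ct[:16], ct[16:]
    dec = Cipher(algorithms.AES(k_enc), modes.CTR(iv)).decryptor()
    msg = dec.update(body) + dec.finalize()
    tag = hmac.new(k_mac, aad + b'\x00' + msg, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(iv, tag):
        raise ValueError('authentication failed')
    return msg

k_mac, k_enc = os.urandom(16), os.urandom(16)
ct = siv_encrypt(k_mac, k_enc, b'header', b'attack at dawn')
assert siv_decrypt(k_mac, k_enc, b'header', ct) == b'attack at dawn'

Note that the scheme is deterministic by design: equal (associated data, message) pairs yield equal ciphertexts, which is exactly the DAE trade-off that nonce-misuse resistance accepts.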
New Methods for Bounding the Length of Impossible Differentials of SPN Block Ciphers
How to evaluate the security of Substitution-Permutation Network (SPN) block ciphers against impossible differential (ID) cryptanalysis is a valuable problem. In this paper, a series of methods for bounding the length of IDs of SPN block ciphers are proposed. Firstly, we propose the definitions of the minimal representative set and the partition table, which yield an improved partition-first implementation strategy for bounding the length of IDs. Secondly, we introduce a new definition of ladder and propose a ladder-first implementation strategy for bounding the length of IDs. To make the ladder-first strategy applicable in practice, we give methods for determining ladders and for integrating a ladder into searching models. Thirdly, a heuristic algorithm called the dynamic-ladder-partition implementation strategy is proposed. According to our experimental results, the dynamic-ladder-partition strategy is more suitable for SPN ciphers whose partition tables contain few elements. Fourthly, rotation-equivalence ID sets of ciphers are explored to reduce the number of models that need to be considered. As applications, we show that 9-round PRESENT, 5-round AES, 6-round Rijndael-160, 7-round Rijndael-192, 7-round Rijndael-224 and 7-round Rijndael-256 do not have any ID under the sole assumption that the round keys are uniformly random. What's more, we obtain that 8-round GIFT-64, 12-round GIFT-128 and 14-round SKINNY-128 do not have any ID under the assumptions that GIFT and SKINNY are Markov ciphers and the round keys are uniformly random. Our methods fill crucial gaps in bounding the length of IDs when the differential properties of S-boxes are taken into account. They enhance our confidence in the security and are valuable, especially for designers.
Traceable Verifiable Random Functions
A threshold verifiable random function (threshold VRF) is a VRF where the evaluation key is secret-shared among $n$ parties, and a quorum of at least $t$ parties is needed to evaluate the VRF. Threshold VRFs are used widely in practice in applications such as randomness beacons and deterministic wallets. Despite their long history, the question of accountability for leaking key shares in a threshold VRF has not been studied. Specifically, consider a set of $f$ parties who use their key shares to create an evaluation box $E$ that lets anyone evaluate the VRF at any point in the domain of the VRF. When $f$ is less than the threshold $t$, this box must also take as input additional evaluation shares. Our goal is to design a threshold VRF where there is a tracing algorithm that can trace any such box to the coalition of parties that created it, using only black-box access to $E$. The risk of tracing should deter the coalition from selling such a box. Questions in this vein were previously explored in the context of threshold decryption and secret sharing. Here we define and study traceability for a threshold VRF.
Our traceable threshold VRF is built from a VRF based on Paillier encryption. The starting point for our tracing algorithm is the tracing technique of Boneh-Partap-Rotem (Crypto 2024) designed for tracing leaks in the context of secret sharing. However, there are multiple technical challenges in making this approach work, and we develop the necessary tools to overcome all these challenges. The end result is a threshold VRF with a provably secure tracing algorithm.
Minimal Symmetric PAKE and 1-out-of-N OT from Programmable-Once Public Functions
Symmetric password-authenticated key exchange (sPAKE) can be seen as an extension of traditional key exchange where two parties agree on a shared key if and only if they share a common secret (possibly low-entropy) password. We present the first sPAKE protocol to simultaneously achieve the following properties:
- only two exponentiations per party, the same as plain unauthenticated Diffie-Hellman key agreement (and likely optimal);
- optimal round complexity: a single flow (one message from each party that can be sent in parallel) to achieve implicit authentication, or two flows to achieve explicit mutual authentication;
- security in the random oracle model, rather than ideal cipher or generic group model;
- UC security, rather than game-based.
Our protocol is a generalization of the seminal EKE protocol of Bellovin & Merritt (S&P 1992).
We also present a UC-secure 1-out-of-$N$ oblivious transfer (OT) protocol for random payloads. Its communication complexity is independent of $N$, meaning that $N$ can even be exponential in the security parameter. Such a protocol can also be considered a kind of oblivious PRF (OPRF). Our protocol improves over the leading UC-secure 1-out-of-$N$ OT construction of Masny & Rindal (CCS 2019) for all $N > 2$, and has essentially the same cost for $N = 2$.
The new technique underlying these results is a primitive we call programmable-once public function (POPF). Intuitively, a POPF is a function whose output can be programmed by one party on exactly one point. All other outputs of the function are outside of any party's control, in a provable sense.
Update: dos Santos et al. (Eurocrypt 2023) showed that our POPF definition is not strong enough to prove security of EKE against a man-in-the-middle. See Januzelli et al. (Eurocrypt 2025) for a fixed POPF abstraction and EKE security proof.
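To fix intuition, here is a toy programmable-once function in Python (hash-based, with hypothetical names). It illustrates only the interface: program one point, every other output is hash-random. It is not one of the actual POPF constructions, and, per the update above, the original POPF definition itself needed strengthening.

```python
import hashlib, os

def H(salt: bytes, x: bytes) -> int:
    # Models a random oracle mapping (salt, x) to an integer in [0, 2**256).
    return int.from_bytes(hashlib.sha256(salt + x).digest(), "big")

def program(x_star: bytes, y_star: int):
    # Program the function to output y_star on the single point x_star.
    salt = os.urandom(16)
    offset = (y_star - H(salt, x_star)) % 2**256
    return (salt, offset)  # public description of the function

def evaluate(phi, x: bytes) -> int:
    # On any input, output offset + H(salt, x); only x_star was chosen.
    salt, offset = phi
    return (offset + H(salt, x)) % 2**256

phi = program(b"correct horse", 1234567)
assert evaluate(phi, b"correct horse") == 1234567
# Any other input yields an effectively uncontrollable output:
print(evaluate(phi, b"wrong guess"))
```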
Non-Interactive Key Exchange: New Notions, New Constructions, and Forward Security
Non-interactive key exchange (NIKE) is a simple and elegant cryptographic primitive that allows two or more users to agree on a secret shared key without any interaction. NIKE schemes have been formalized in different scenarios (such as the public-key, or the identity-based setting), and have found many applications in cryptography.
In this work, we propose a NIKE variant that generalizes public-key and identity-based NIKE: a multi-authority identity-based NIKE (MA-ID-NIKE) is defined like an identity-based NIKE, only with several identity domains (i.e., several instances of an identity-based NIKE), and such that users from different identity domains can compute shared keys. This makes MA-ID-NIKE schemes more versatile than existing NIKE or identity-based NIKE schemes, for instance, in an application in which users from different (centrally managed) companies need to compute shared keys.
We show several results for MA-ID-NIKE schemes:
- We show that MA-ID-NIKE schemes generically imply public-key NIKEs, identity-based NIKEs, as well as forward-secure NIKE schemes, the latter of which are notoriously hard to construct.
- We propose two simple constructions of MA-ID-NIKE schemes from indistinguishability obfuscation (iO) and multilinear maps, respectively. These constructions achieve only selective security, but can be lifted to adaptive security for small groups of users (that want to be able to agree on a joint shared key) in the random oracle model.
- We give a simple and elegant construction of MA-ID-NIKEs from identity-based encryption (IBE) and universal samplers. This construction achieves adaptive security also for large groups of users based on the adaptive security of the used universal samplers. Universal samplers, in turn, are known to be achievable using iO in the random oracle model. As a nice feature, the same construction yields hierarchical MA-ID-NIKEs or public-key NIKEs when instantiated with hierarchical IBE or public-key encryption instead of IBE schemes.
While these results are clearly only feasibility results, they do demonstrate the achievability of a concept that itself has very practical use cases.
A Tight Analysis of GHOST Consistency
The GHOST protocol was proposed as an improvement to the Nakamoto consensus mechanism that underlies Bitcoin. In contrast to the Nakamoto fork-choice rule, the GHOST rule justifies selection of a chain with weights computed over subtrees rather than individual paths. This fork-choice rule has been adopted by a variety of consensus protocols and is featured in the currently deployed protocol supporting Ethereum.
We establish an exact characterization of the consistency region of the GHOST protocol, identifying the relationship between network delay, the rate of honest block production, and the rate of adversarial block production that guarantees that the protocol reaches consensus. In contrast to the closely related Nakamoto consensus protocol, we find that the region depends on the convention used by the protocol for tiebreaking: we establish tight results for both adversarial tiebreaking, in which ties are broken adversarially in order to frustrate consensus, and deterministic tiebreaking, in which ties between pairs of blocks are broken consistently throughout an execution. We provide explicit attacks for both conventions which stall consensus outside of the consistency region.
Our results conclude that the consistency region of GHOST can be strictly improved by incorporating a tiebreaking mechanism; in either case, however, the final region of consistency is inferior to the region of Nakamoto consensus.
ChiLow and ChiChi: New Constructions for Code Encryption
We study the problem of embedded code encryption, i.e., encryption for binary software code for a secure microcontroller that is stored in an insecure external memory. As every single instruction must be decrypted before it can be executed, this scenario requires an extremely low latency decryption. We present a formal treatment of embedded code encryption security definitions, propose three constructions, namely ACE1, ACE2 and ACE3, and analyze their security. Further, we present ChiLow, a family of tweakable block ciphers and a related PRF specifically designed for embedded code encryption. At the core of ChiLow, there is ChiChi, a new family of non-linear layers of even dimension based on the well-known χ function. Our fully unrolled hardware implementation of ChiLow, using the Nangate 15nm Open Cell Library, achieves a decryption latency of less than 280 picoseconds.
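For context, this is the classical χ map that ChiChi builds on, in its standard form on an odd-length bit vector (ChiChi's even-dimension variant is the paper's contribution and is not reproduced here):

```python
def chi(bits):
    # Classical chi: y[i] = x[i] XOR (NOT x[i+1] AND x[i+2]),
    # indices taken cyclically. Invertible when len(bits) is odd.
    n = len(bits)
    return [bits[i] ^ ((1 ^ bits[(i + 1) % n]) & bits[(i + 2) % n])
            for i in range(n)]

print(chi([1, 0, 1, 1, 0]))  # e.g. a 5-bit state
```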
Quasi-Linear Indistinguishability Obfuscation via Mathematical Proofs of Equivalence and Applications
Indistinguishability obfuscation (iO) is a powerful cryptographic primitive and has been called the ``swiss army-knife of modern cryptography''. Most prior works on iO focused on theoretical feasibility and paid less attention to the efficiency of the constructions. As a result, all prior constructions stopped at achieving polynomial efficiency without worrying about how large the polynomial is.
In fact, it has even been conjectured that a polynomial dependence on the input length is necessary.
In this work, we show that if the two circuits to be obfuscated enjoy a succinct propositional logic proof of equivalence, then we can create obfuscated versions of these programs that are computationally indistinguishable; and importantly, the obfuscated program's efficiency is quasi-linear in the circuit size and proof size. We show that our quasi-linear iO construction also leads to new applications. Specifically, we show how to achieve quasi-linear efficiency for (1) iO for Turing machines with unbounded inputs, and (2) multi-input functional encryption, also assuming succinct proofs of equivalence.
Khatam: Reducing the Communication Complexity of Code-Based SNARKs
Every linear code $C$ satisfies the property of ``correlated agreement'': if $u_1, u_2$ are two vectors in $\mathbb{F}^n$ and if $u_1 + r\,u_2$ is close in Hamming distance to some codeword in $C$, then $u_1$ and $u_2$ each agree with a codeword in $C$ in the positions indexed by some large set $S$. In this work, we prove something stronger -- that if $u_1 + r\,u_2$ is close to $C$, then $u_1$, $u_2$ and $u_1 + r\,u_2$ all agree with codewords at the positions indexed by elements of the same set $S$, except with negligible probability over the choice of $r$. Our result holds for a wide range of proximity parameters, with small failure probability. Furthermore, our results extend to any finite field and any linear code.
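For orientation, correlated agreement can be stated as follows (a standard formulation with illustrative notation $u_1, u_2, r, S$; the paper's exact parameters differ):

```latex
% Correlated agreement (standard form): for a linear code
% $C \subseteq \mathbb{F}^n$ and vectors $u_1, u_2 \in \mathbb{F}^n$,
\[
  \Delta\!\left(u_1 + r\,u_2,\; C\right) \;\le\; \delta
  \quad\Longrightarrow\quad
  \exists\, c_1, c_2 \in C,\;\; \exists\, S \subseteq [n],\; |S| \ge (1-\delta)n :
  \;\; (u_1)_{|S} = (c_1)_{|S},\;\; (u_2)_{|S} = (c_2)_{|S}.
\]
% The strengthening above asserts that, except with negligible probability
% over $r$, the combination $u_1 + r\,u_2$ also agrees with a codeword
% on the same set $S$.
```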
We use this result to prove that BaseFold (Crypto 2024), an efficient multilinear polynomial commitment scheme, is secure in the list decoding regime, which significantly reduces its communication complexity. Our result is agnostic with respect to both the field and the code, and therefore can be used to reduce the communication complexity of a wide class of code-based multilinear polynomial commitment schemes.
Dimensional eROSion: Improving the ROS Attack with Decomposition in Higher Bases
We revisit the polynomial attack on the ROS problem modulo $p$ from [BLLOR22]. Our new algorithm achieves a polynomial-time solution in smaller dimensions, extending the range of dimensions $\ell$ for which a polynomial attack is known beyond the previous bound of roughly $\log_2 p$.
We also combine our new algorithm with Wagner's attack to improve the general attack complexity for some of the dimensions where a polynomial solution is still not known.
We implement our polynomial attack and break the one-more unforgeability of blind Schnorr signatures over 256-bit elliptic curves in a few seconds with 192 concurrent sessions.
Halving differential additions on Kummer lines
We study differential addition formulas on Kummer lines that factorize through a degree-$2$ isogeny $\phi$. We call the resulting formulas half differential additions: from the knowledge of $\phi(P)$, $\phi(Q)$ and $P - Q$, a half differential addition allows one to recover $P + Q$. We explain how Mumford's theta group theory allows, in any model of Kummer lines, to find a basis of the half differential relations. This involves studying an associated isogeny in dimension $2$.
We then use the half differential addition formulas to build a new type of Montgomery ladder, called the half-ladder, using a time-memory trade-off. On a Montgomery curve with full rational $2$-torsion, our half ladder first builds a succession of isogeny images of the base point, which depends only on the base point $P$ and not on the scalar $n$, for a small precomputation cost per bit. Then we use half doublings and half differential additions to compute any scalar multiplication $n \cdot P$, at a lower cost per bit than the usual Montgomery ladder, even when the base point is not normalized; by contrast, the usual Montgomery ladder needs a normalized base point to reach its best per-bit cost.
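For reference, the classical differential addition that the half formulas compete with, in standard $(X:Z)$ coordinates on a Montgomery curve (textbook formulas, not the paper's new ones): given $x(P) = (X_P : Z_P)$, $x(Q) = (X_Q : Z_Q)$ and $x(P-Q) = (X_- : Z_-)$,

```latex
\[
\begin{aligned}
  X_{P+Q} &= Z_{-}\,\bigl[(X_P - Z_P)(X_Q + Z_Q) + (X_P + Z_P)(X_Q - Z_Q)\bigr]^2,\\
  Z_{P+Q} &= X_{-}\,\bigl[(X_P - Z_P)(X_Q + Z_Q) - (X_P + Z_P)(X_Q - Z_Q)\bigr]^2,
\end{aligned}
\]
% costing 4M + 2S in general, or 3M + 2S when x(P-Q) is normalized
% (Z_- = 1). The half-ladder trades part of this per-bit work for cheaper
% isogeny-split steps after a base-point-dependent precomputation.
```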
In the appendix, we extend our approach to higher-dimensional ladders in theta coordinates or twisted theta coordinates, where, after a precomputation step which depends on the base point, our half ladder again costs less per bit than the standard ladder.
Asynchronous Algorand: Reaching Agreement with Near Linear Communication and Constant Expected Time
The celebrated Algorand protocol solves validated Byzantine agreement in a scalable manner in the synchronous setting. In this paper, we study the feasibility of similar solutions in the asynchronous setting. Our main result is an asynchronous validated Byzantine agreement protocol that we call Asynchronous Algorand. As with Algorand, it terminates in an expected constant number of rounds, and honest parties send an expected $\tilde{O}(n)$ bits, where $n$ is the number of parties. The protocol is resilient to a fully-asynchronous weakly-adaptive adversary that can corrupt a near-optimal number of parties (close to the optimal $n/3$ bound) and requires just a VRF setup and secure erasures.
A key innovation in Asynchronous Algorand is a rather simple but surprisingly effective method for committee-based role assignment for asynchronous verifiable secret sharing in the YOSO (You Only Speak Once) model. This method achieves near-optimal resilience and near-linear communication complexity while relying solely on a verifiable random function (VRF) setup and secure erasures.
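As a rough illustration of VRF-based self-selection in this spirit (a generic sortition sketch with hypothetical names; the paper's committee-based role assignment adds verifiability, YOSO roles, and erasures on top):

```python
import hashlib, os

def vrf_eval(sk: bytes, seed: bytes, role: str) -> float:
    # Toy stand-in for a VRF: maps (sk, seed, role) to a pseudorandom
    # value in [0, 1). A real VRF also outputs a publicly checkable proof.
    h = hashlib.sha256(sk + seed + role.encode()).digest()
    return int.from_bytes(h, "big") / 2**256

def assigned(sk: bytes, seed: bytes, role: str, committee: int, n: int) -> bool:
    # Each of the n parties wins the role with probability committee/n,
    # so the expected committee size is `committee`.
    return vrf_eval(sk, seed, role) < committee / n

n, committee = 1000, 30
keys = [os.urandom(32) for _ in range(n)]
winners = [i for i, sk in enumerate(keys)
           if assigned(sk, b"round-7", "dealer", committee, n)]
print(len(winners), "parties self-selected for the dealer role")
```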
FHE-SNARK vs. SNARK-FHE: From Analysis to Practical Verifiable Computation
Verifiable Computation over encrypted data (VC) faces a critical dilemma between two competing paradigms: SNARK-FHE (applying SNARKs to prove FHE operations) and FHE-SNARK (homomorphically evaluating SNARK proofs). Two interesting questions remain open in resolving this dilemma: 1) Are the two paradigms identical in terms of security? 2) How practically efficient can we get? This work answers these questions through the following results:
1) We establish a formal security analysis relating VC and secure two-party computation (2PC). We investigate VC with server inputs and show the following: a) VC with server input has an exact 1-bit security loss compared to 2PC; b) SNARK-FHE aligns with 2PC while FHE-SNARK naturally falls into the VC category; c) existing FHE-SNARK works are vulnerable in the VC-with-server-input setting, for which we formalize an input-dependent attack.
2) We design an FHE-friendly SNARK that a) has roughly 3× lower multiplicative depth than FRI-based SNARKs and b) is compatible with FHE SIMD operations. Based on this novel SNARK, we construct an FHE-SNARK scheme that has: a) stronger security: resistance against the input-dependent attack; b) an 8× speedup: 3.6-hour proof generation for the evaluated circuits on a single-core CPU (vs. 29 hours in the state of the art); c) practical verification: 65.3 MB proofs with 2.7-second verification (single core).
Single-Input Functionality against a Dishonest Majority: Practical and Round-Optimal
In this work, we focus on Single-Input Functionality (SIF), which can be viewed as a special case of MPC. In a SIF, only one distinguished party, called the dealer, holds a private input. SIF allows the dealer to perform a computation task with the other parties without revealing any additional information about the private input. SIF has diverse applications, including multi-verifier zero-knowledge and verifiable relation sharing.
As our main contribution, we propose the first 1-round SIF protocol against a dishonest majority in the preprocessing model, and it is highly efficient. Prior works either require at least two rounds of online communication (Yang and Wang, Asiacrypt 2022; Baum et al., CCS 2022; Zhou et al., EuroS&P 2024) or are only feasibility results (Lepinski et al., TCC 2005; Applebaum et al., Crypto 2022). We also show the necessity of broadcast channels, by formally proving that 1-round SIF is impossible to achieve in the preprocessing model if no broadcast channels are available. We implement our protocol and conduct extensive experiments to illustrate its practical efficiency.
Pseudorandom Functions with Weak Programming Privacy and Applications to Private Information Retrieval
Although privately programmable pseudorandom functions (PPPRFs) are known to have numerous applications, so far the only known constructions rely on Learning with Errors (LWE) or indistinguishability obfuscation. We show how to construct a relaxed PPPRF from only one-way functions (OWF). The resulting PPPRF satisfies a relaxed security notion and works for polynomially sized input domains. Using the resulting PPPRF, we obtain new results for preprocessing Private Information Retrieval (PIR) that improve the state of the art. Specifically, we show that, relying only on OWF, we can get a 2-server preprocessing PIR with polylogarithmic bandwidth while keeping the client space and server space small, for an arbitrarily small constant in the exponents. In the 1-server setting, we get a preprocessing PIR from OWF that achieves polylogarithmic online bandwidth and small offline bandwidth, while preserving the same client and server space as before. Our result, in combination with the lower bound of Ishai, Shi, and Wichs (CRYPTO'24), establishes a tight understanding of the bandwidth and client space tradeoff for 1-server preprocessing PIR from Minicrypt assumptions. Interestingly, we are also the first to show non-trivial ways to combine client-side and server-side preprocessing to get improved results for PIR.
Breaking and Fixing Anonymous Credentials for the Cloud
In an attribute-based credential (ABC) system, users obtain a digital certificate on their personal attributes, and can later prove possession of such a certificate in an unlinkable way, thereby selectively disclosing chosen attributes to the service provider. Recently, the concept of encrypted ABCs (EABCs) was introduced by Krenn et al. at CANS 2017, where virtually all computation is outsourced to a semi-trusted cloud-provider called wallet, thereby overcoming existing efficiency limitations on the user’s side, and for the first time enabling “privacy-preserving identity management as a service”.
While their approach is highly relevant for bringing ABCs into the real world, we present a simple attack allowing the wallet to learn a user's attributes when colluding with another user -- a scenario which is not covered by their modeling but which needs to be considered in practice. We then revise the model and construction of Krenn et al. in various ways, such that the above attack is no longer possible. Furthermore, we also remove existing non-collusion assumptions between wallet and service provider or issuer from their construction. Our protocols are still highly efficient in the sense that the computational effort on the end user side consists of a single exponentiation only, and otherwise efficiency is comparable to the original work of Krenn et al.
Bypassing the characteristic bound in logUp
In this informal note, we describe how to bypass the characteristic bound in logUp [eprint 2022/1530] by abstracting the notion of (pole) multiplicity. The method applies as well to the GKR-variant from Papini and Haböck [eprint 2023/1284], and it moreover unlocks fractional decomposition lookups over binary fields.
Basefold in the List Decoding Regime
In this writeup we discuss the soundness of the Basefold multilinear polynomial commitment scheme [Zeilberger, Chen, Fisch 23] applied to Reed-Solomon codes, and run with proximity parameters up to the Johnson list decoding bound.
Our security analysis relies on a generalization of the celebrated correlated agreement theorem from [Ben-Sasson et al., 20] to linear subcodes of Reed-Solomon codes, which turns out to be a by-product of the Guruswami-Sudan list decoder analysis.
We further highlight a non-linear variant of the subcode correlated agreement theorem, which is flexible enough to apply to Basefold-like protocols such as recent optimizations of FRI-Binius [Diamond, Posen 24], and which we believe is sufficient for proving the security of a recent multilinear version of STIR [Arnon, Chiesa, Fenzi, Yogev 24] in the list-decoding regime.
A note on the G-FFT
For primes $p$ with $p + 1$ smooth, the G-FFT from Li and Xing [LX23] is an algebraic FFT which at first glance seems equivalent to the circle FFT from [IACR eprint 2024/278]: it also uses the circle curve over $\mathbb{F}_p$ (in other words, the projective line) as the underlying domain, and it interpolates by low-degree functions with poles over the same set of points. However, their approach to controlling the degree of the FFT basis is fundamentally different.
The G-FFT makes use of punctured Riemann-Roch spaces, and the construction works with the group doubling map only, with no projection onto the $x$-axis involved.
In this note we give an elementary description of the G-FFT without using abstract algebra. We describe a variant which uses a simpler, and in our opinion more natural, function space, and which treats the exceptional point of the domain (the group identity) differently. In comparison to the circle FFT, the G-FFT (both the original and our variant) has the following downsides: interpolation and domain evaluation cost double the number of multiplications (the twiddles are not ``odd'' functions), and the function space is not invariant under the group action, causing additional overhead when applied in STARKs.
Circle STARKs
Traditional STARKs require a cyclic group of smooth order in the field. This allows efficient interpolation of points using the FFT algorithm, and writing constraints that involve neighboring rows. The Elliptic Curve FFT (ECFFT, Part I and II) introduced a way to make efficient STARKs for any finite field, by using a cyclic group of an elliptic curve. We show a simpler construction along the lines of ECFFT, over the circle curve $x^2 + y^2 = 1$. When $p + 1$ is divisible by a large power of $2$, this construction is as efficient as traditional STARKs and ECFFT. Applied to the Mersenne prime $p = 2^{31} - 1$, which has recently been advertised in IACR eprint 2023/824, our preliminary benchmarks indicate a noticeable speed-up compared to a traditional STARK using the BabyBear prime $2^{31} - 2^{27} + 1$.
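The circle group structure underlying the construction (standard facts about the same domain that appears in the G-FFT note above):

```latex
% The circle curve $C(\mathbb{F}_p): x^2 + y^2 = 1$ forms a cyclic group
% of order $p + 1$ (for $p \equiv 3 \bmod 4$) under
\[
  (x_0, y_0) \oplus (x_1, y_1) = (x_0 x_1 - y_0 y_1,\; x_0 y_1 + x_1 y_0),
  \qquad \text{identity } (1, 0),
\]
% with doubling $[2](x, y) = (2x^2 - 1,\; 2xy)$. For the Mersenne prime
% $p = 2^{31} - 1$ the group order is $p + 1 = 2^{31}$, a power of two,
% which is exactly the smoothness an FFT over this domain needs.
```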
Improving logarithmic derivative lookups using GKR
In this informal note, we instantiate the Goldwasser-Kalai-Rothblum (GKR) protocol to prove fractional sumchecks as present in lookup arguments based on logarithmic derivatives, with the following impact on the prover cost of logUp (IACR eprint 2022/1530):
When looking up columns in a (for the sake of simplicity) single-column table, the prover has to commit to only a single extra column, i.e. the multiplicities of the table entries.
In order to carry over the GKR fractional sumcheck to the univariate setting, we furthermore introduce a simple, yet (as far as we know) novel transformation for turning a univariate polynomial commitment scheme into a multilinear one. The transformation complements existing approaches and might be of independent interest for its elegant way to prove arbitrary powers of the lexicographic shift over the Boolean hypercube.
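The fractional sumcheck in question aggregates terms pairwise, layer by layer, using ordinary fraction addition (notation illustrative):

```latex
\[
  \frac{p_1}{q_1} + \frac{p_2}{q_2}
  \;=\;
  \frac{p_1 q_2 + p_2 q_1}{q_1 q_2},
  \qquad
  (p_1, q_1) \oplus (p_2, q_2) := (p_1 q_2 + p_2 q_1,\; q_1 q_2),
\]
% so each GKR layer halves the number of (numerator, denominator) pairs,
% and only the top-level pair is checked against the claimed sum;
% no intermediate layer is ever committed.
```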
Multivariate lookups based on logarithmic derivatives
Logarithmic derivatives translate products of linear factors into sums of their reciprocals, turning zeroes into simple poles of the same multiplicity. Based on this simple fact, we construct an interactive oracle proof for multi-column lookups over the Boolean hypercube, which makes use of a single multiplicity function instead of working with a rearranged union of the table and the witnesses. For single-column lookups the performance is comparable to the well-known Plookup strategy used by Hyperplonk+. However, the real power of our argument unfolds in the case of batch lookups, when multiple columns are subject to a single-table lookup: while the number of field operations is comparable to the Hyperplonk+ lookup (extended to multiple columns), the oracles provided by our prover are much less expensive. For example, for columns of length 2^12, paper-and-pencil operation counts indicate that the logarithmic derivative lookup is between 1.5 and 4 times faster, depending on the number of columns.
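Concretely, the underlying identity in its standard logUp form (notation illustrative): witness values $\{w_i\}$ all lie in a table $\{t_j\}$ exactly when multiplicities $m_j$ balance the poles,

```latex
% Logarithmic derivative: zeros of a product become simple poles of a sum,
\[
  \frac{d}{dX} \log \prod_i (X + w_i) \;=\; \sum_i \frac{1}{X + w_i},
\]
% and the multiset inclusion $\{w_i\} \subseteq \{t_j\}$ holds iff
\[
  \sum_i \frac{1}{X + w_i} \;=\; \sum_j \frac{m_j}{X + t_j}
\]
% as rational functions, where $m_j$ counts how often $t_j$ occurs among
% the $w_i$; the prover only commits to the single extra column $m$.
```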
A summary on the FRI low degree test
This document is an informal summary of the FRI low degree test [BSBHR18a], [BSCI+20], and of DEEP algebraic linking from [BSGKS20]. Based on its most recent soundness analysis [BSCI+20], we discuss parameter settings for practical security levels, how FRI is turned into a polynomial commitment scheme, and the soundness of DEEP sampling in the list decoding regime. In particular, we illustrate the DEEP method applied to proving satisfiability of algebraic intermediate representations, and prove a soundness error bound which slightly improves the one in [Sta21].
Darlin: Recursive Proofs using Marlin
This document describes Darlin, a succinct zero-knowledge argument of knowledge based on the Marlin SNARK (Chiesa et al., Eurocrypt 2020) and the `dlog' polynomial commitment scheme of Bootle et al. (Eurocrypt 2016).
Darlin addresses recursive proofs by integrating the amortization technique from Halo (IACR eprint 2019/099) for the non-succinct parts of the dlog verifier, and we adapt their strategy for bivariate circuit encoding polynomials to aggregate Marlin's inner sumchecks across the nodes of the recursive scheme.
We estimate that inner-sumcheck aggregation improves performance by about 30% in a tree-like scheme of in-degree 2, and by even more when applied to linear recursion.
(Un)breakable curses - re-encryption in the Fujisaki-Okamoto transform
The Fujisaki-Okamoto transform (FO) is the go-to method for achieving chosen-ciphertext (CCA) security for post-quantum key encapsulation mechanisms (KEMs). An important step in FO is augmenting the decryption/decapsulation algorithm with a re-encryption step: the decrypted message is re-encrypted to check whether the correct encryption randomness was used. While solving a security problem (ciphertext malleability), re-encryption has turned out to introduce side-channel vulnerabilities and is computationally expensive, which has led designers to search for alternatives. In this work, we perform a comprehensive study of such alternatives. We formalize a central security property, computational rigidity, and show that it is sufficient for obtaining CCA security. We present a framework for analyzing algorithms that can replace re-encryption and still achieve rigidity, and analyze existing proposals in this framework.
Along the way, we obtain a novel QROM security statement for explicitly rejecting KEMs based on deterministic PKE schemes, something that was previously possible only when requiring a hard-to-ensure quantum property of the base PKE scheme.
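For orientation, a schematic of the re-encryption check under discussion (a minimal Python sketch with hypothetical hash names G and K and caller-supplied PKE routines; real FO variants differ in hashing details and in implicit vs. explicit rejection):

```python
import hashlib

def G(m: bytes) -> bytes:
    # Derives the encryption randomness from the message (random oracle).
    return hashlib.sha256(b"G" + m).digest()

def K(m: bytes, ct: bytes) -> bytes:
    # Derives the session key from message and ciphertext (random oracle).
    return hashlib.sha256(b"K" + m + ct).digest()

def decaps(sk, pk, ct, pke_dec, pke_enc):
    """FO-style decapsulation with the re-encryption check.

    pke_enc(pk, m, r) must be deterministic given the randomness r, so that
    re-encrypting the decrypted message reproduces an honest ciphertext.
    """
    m = pke_dec(sk, ct)
    if m is None or pke_enc(pk, m, G(m)) != ct:
        return None  # explicit rejection: malformed or mauled ciphertext
    return K(m, ct)
```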
Stateless Hash-Based Signatures for Post-Quantum Security Keys
The U.S. National Institute of Standards and Technology recently standardized the first set of post-quantum cryptography algorithms. These algorithms address the quantum threat, but also present new challenges due to their larger memory and computational footprint. Three of the four standardized algorithms are lattice based, offering good performance but posing challenges due to complex implementation and intricate security assumptions. A more conservative choice for quantum-safe authentication is hash-based signature systems. However, due to large signature sizes and low signing speeds, hash-based systems have only found use in niche applications. The first NIST-standardized stateless hash-based signature system is the SPHINCS+-based SLH-DSA.
In this work we combine different approaches to show that SPHINCS+ can be optimized in its parameters and implementation, to be high performing, even when signing in an embedded setting. We demonstrate this in the context of user authentication using hardware security keys within FIDO. Our SPHINCS+-based implementation can even outperform lattice-based solutions while remaining highly portable. Due to conservative security assumptions, our solution does not require a hybrid construction and can perform authentication on current security keys. For reproducibility and to encourage further research we publish our Cortex-M4-based implementation.
Rondo: Scalable and Reconfiguration-Friendly Randomness Beacon
We present Rondo, a scalable and reconfiguration-friendly distributed randomness beacon (DRB) protocol in the partially synchronous model. Rondo is the first DRB protocol that is built from batched asynchronous verifiable secret sharing (bAVSS) while avoiding the $O(n^3)$ message cost, where $n$ is the number of nodes. Our key contribution lies in the introduction of a new variant of bAVSS called batched asynchronous verifiable secret sharing with partial output (bAVSS-PO). bAVSS-PO is a weaker primitive than bAVSS but allows us to build a secure and more scalable DRB protocol. We propose a bAVSS-PO protocol, Breeze. Breeze achieves the optimal $O(n^2)$ messages for the sharing stage and allows Rondo to offer better scalability than prior DRB protocols.
Additionally, to support reconfiguration, we introduce Rondo-BFT, a dynamic and partially synchronous Byzantine fault-tolerant protocol inspired by Dyno (S&P 2022). Unlike Dyno, Rondo-BFT provides a communication pattern that generates randomness beacon output periodically, making it well-suited for DRB applications.
We implement our protocols and evaluate the performance on Amazon EC2 using up to 91 instances. Our evaluation results show that Rondo achieves higher throughput than existing works while offering better scalability, with performance that does not degrade as significantly as $n$ grows.
Lattice-based Zero-knowledge Proofs for Blockchain Confidential Transactions
We propose new zero-knowledge proofs for efficient and post-quantum ring confidential transaction (RingCT) protocols based on lattice assumptions in blockchain systems. First, we introduce an inner-product-based linear equation satisfiability approach for balance proofs with a wide range (e.g., 64-bit precision). Unlike existing balance proofs (MatRiCT and MatRiCT+) that require additional proofs for some "corrector values", our approach avoids the corrector values for better efficiency. Furthermore, we design a ring signature scheme to efficiently hide a user's identity in large anonymity sets. Different from existing approaches that adopt a one-out-of-many proof (MatRiCT and MatRiCT+), we show that a linear sum proof suffices in ring signatures, which avoids the costly binary proof part. We further use the idea of "unbalanced" relations to build a logarithmic-size ring signature scheme. Finally, we show how to adopt these techniques in RingCT protocols and implement a prototype to compare the performance with existing approaches. The results show that, compared with MatRiCT and MatRiCT+ respectively, our solutions reduce proof size by up to 50% and 20%, proving time by up to 30% and 20%, and verification time by up to 20% and 20%. We also believe our techniques are of independent interest for other applications and are applicable in a generic setting.
Efficient Error-tolerant Side-channel Attacks on GPV Signatures Based on Ordinary Least Squares Regression
The Gentry-Peikert-Vaikuntanathan (GPV) framework is used to construct practical digital signatures and is proven secure in both the classical and quantum random-oracle models. Falcon is one such signature scheme, recognized as compact and efficient among the NIST-standardized signature schemes. Recently, Guerreau et al. (CHES 2022) and Zhang et al. (Eurocrypt 2023) proposed secret-key recovery attacks on Falcon that utilize signatures filtered by simple power analysis (SPA). However, these attacks, which exploit conditional signature distributions, require a large number of SPA measurements to obtain the filtered signatures. Furthermore, no existing attack considers general GPV signatures, despite the importance of the GPV framework in modern digital signatures. We address these problems as follows.
First, we introduce, for the first time, the concept of vulnerable partial information of GPV signatures, and we propose a non-filtering secret-key recovery attack, called the OLS attack, which effectively utilizes partial information without filtering. The proposed OLS attack is a linear estimator whose computational complexity scales linearly with the number of samples, making it highly practical. Furthermore, we prove that the secret key recovered by the OLS attack converges in probability to the real secret key as the number of samples increases.
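In this setting the estimator takes the textbook ordinary-least-squares form (standard notation, not the paper's): stacking the known per-signature values as rows of $\mathbf{A}$ and the leaked partial information as $\mathbf{b}$,

```latex
\[
  \hat{\mathbf{s}} \;=\; \operatorname*{arg\,min}_{\mathbf{s}}
  \lVert \mathbf{A}\mathbf{s} - \mathbf{b} \rVert_2^2
  \;=\; \left(\mathbf{A}^{\top}\mathbf{A}\right)^{-1}\mathbf{A}^{\top}\mathbf{b},
\]
% computable in time linear in the number of samples (for a fixed key
% dimension), and consistent: under zero-mean noise, $\hat{\mathbf{s}}$
% converges in probability to the true secret as the sample count grows.
```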
Second, we leverage SPA to extract Gaussian leakage, which is used as the partial information for the OLS attack on Falcon. As a result, the OLS attack achieves a significantly higher success rate with fewer samples than state-of-the-art attacks. Furthermore, by incorporating the DDGR attack, the OLS attack can recover the secret key from much fewer samples with a success rate close to 100%. Moreover, we propose an OLS attack specialized for Falcon, which further reduces the number of required side-channel measurements.
Third, we propose an error-tolerant power analysis attack using MAP decoding, which effectively corrects errors in the samples to properly estimate the Gaussian leakage. For concrete experimental validation, we generate noisy power traces with the ELMO simulator and measure power traces from an STM32F415 target using ChipWhisperer. The proposed MAP decoding is highly effective for estimating the Gaussian leakage, particularly when applied to power traces collected with the low-resolution ChipWhisperer. Countermeasures against the OLS attack are an important direction for future work.
Stationary Syndrome Decoding for Improved PCGs
Syndrome decoding (SD), and equivalently Learning Parity with Noise (LPN), is a fundamental problem in cryptography. It states that for a field $\mathbb{F}$, a compressing public matrix $A$, and a secret sparse vector $e$ sampled from some noise distribution, the pair $(A, Ae)$ is indistinguishable from uniform. Recently, SD has gained significant interest due to its use in pseudorandom correlation generators (PCGs).
In pursuit of better efficiency, we propose a new assumption called Stationary Syndrome Decoding (SSD). In SSD, we consider correlated noise vectors and associated instances where the noise vectors are restricted to having their non-zero entries in the same small subset of positions $S$. That is, for each noise vector $e$, the entry $e_i$ is uniformly random for $i \in S$, while $e_i = 0$ for all other $i$.
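A toy sampler for such correlated instances (illustrative parameters only, far from cryptographic size; all names hypothetical):

```python
import random

q, n, m = 97, 64, 128      # field size, syndrome length, noise length (toy)
t, instances = 8, 4        # sparsity and number of correlated SSD instances

S = random.sample(range(m), t)   # one fixed support, shared across instances

def ssd_instance():
    # Fresh compressing matrix and fresh noise values each time,
    # but the noise support is pinned to the same set S.
    A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]
    e = [random.randrange(q) if j in S else 0 for j in range(m)]
    b = [sum(A[i][j] * e[j] for j in S) % q for i in range(n)]  # b = A*e
    return A, b

samples = [ssd_instance() for _ in range(instances)]
print(len(samples), "correlated instances with common support", sorted(S))
```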
Although naively reusing a single noise vector renders SD and LPN insecure via simple Gaussian elimination, we observe that known attacks do not extend to our correlated noise. We show SSD is unconditionally secure against so-called linear attacks, e.g., advanced information set decoding and representation techniques (Esser and Santini, Crypto 2024). We further adapt the state-of-the-art nonlinear attack (Briaud and Oygarden, Eurocrypt 2023) to SSD and demonstrate, both theoretically and experimentally, resistance to the attack.
We apply SSD to PCGs to amortize the cost of the noise generation protocol. For OT and VOLE generation, each instance requires less communication than in standard SD-based constructions. For suggested parameters, we observe an improvement in the running time, or a reduction in communication of 6× or more. For Beaver triple generation using Ring LPN, our techniques have the potential for substantial amortization due to the high concrete overhead of Ring LPN noise generation.
Improved Secure Two-party Computation from a Geometric Perspective
Multiplication and other non-linear operations are widely recognized as the most costly components of secure two-party computation (2PC) based on linear secret sharing. Moreover, the comparison protocol is essential for various operations such as truncation, signed extension, and signed non-uniform multiplication. This paper aims to optimize these protocols by avoiding invocations of the costly comparison protocol, thereby improving their efficiency.
We propose a novel approach to studying 2PC from a geometric perspective. Specifically, we interpret the two shares of a secret as the horizontal and vertical coordinates of a point in a Cartesian coordinate system, with the secret itself represented as the corresponding point. This reformulation allows us to address the comparison problem by determining the region in which the point lies. Furthermore, we identify scenarios where the costly comparison protocol can be replaced by more efficient protocols that evaluate AND gates within a constrained range. Using this method, we improve protocols for truncation, signed extension and signed non-uniform multiplication, all of which are fundamental to 2PC. In particular, for the one-bit-error truncation protocol and the signed extension protocols, we reduce the state-of-the-art communication complexities of Cheetah (USENIX'22) and SirNN (S&P'21) in two rounds, where the costs depend on the input length $l$ and the security parameter $\lambda$. For signed multiplication with non-uniform bit-width, we reduce SirNN's communication cost by 40% to 60%.
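To make the geometric picture concrete (a standard formulation of the wrap problem; the notation is ours, not the paper's): with additive shares $x_0, x_1 \in \mathbb{Z}_{2^k}$ of a secret $x = x_0 + x_1 \bmod 2^k$, the pair $(x_0, x_1)$ is a point in the square $[0, 2^k)^2$, and

```latex
\[
  \mathsf{wrap}(x_0, x_1) = 1
  \quad\Longleftrightarrow\quad
  x_0 + x_1 \ge 2^k
  \quad\Longleftrightarrow\quad
  (x_0, x_1) \text{ lies above the line } x_0 + x_1 = 2^k .
\]
% Truncation and sign extension both reduce to learning this wrap bit;
% when the secret is promised to lie in a restricted range, the feasible
% region shrinks and membership can be decided by cheaper AND-gate
% evaluations instead of a full comparison.
```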
Neo: Lattice-based folding scheme for CCS over small fields and pay-per-bit commitments
This paper introduces Neo, a new lattice-based folding scheme for CCS, an NP-complete relation that generalizes R1CS, Plonkish, and AIR. Neo's folding scheme can be viewed as adapting the folding scheme in HyperNova (CRYPTO'24), which assumes elliptic-curve based linearly homomorphic commitments, to the lattice setting. Unlike HyperNova, Neo can use “small” prime fields (e.g., over the Goldilocks prime). Additionally, Neo provides plausible post-quantum security.
Prior to Neo, folding schemes in the lattice setting, notably LatticeFold (ePrint 2024/257), worked with constraint systems defined over a cyclotomic polynomial ring. This structure allows packing a fixed batch of constraint systems over a small prime field into a single constraint system over a polynomial ring. However, it introduces significant overheads, both for committing to witnesses (e.g., the commitment scheme cannot take advantage of the bit-width of values) and within the folding protocol itself (e.g., the sum-check protocol is run over cyclotomic polynomial rings). Additionally, the required ring structure places restrictions on the choice of primes (e.g., LatticeFold is not compatible with the Goldilocks field).
Neo addresses these problems by drawing inspiration from both HyperNova and LatticeFold. A key contribution is a folding-friendly instantiation of Ajtai's commitments with "pay-per-bit" commitment costs, i.e., the commitment costs scale with the bit-width of the scalars (e.g., committing to a vector of bits is cheaper than committing to a vector of wider values). This scheme commits to vectors over a small prime field by transforming the provided vector into a matrix and committing to that matrix. We prove that this commitment scheme provides the linear homomorphism needed for building a folding scheme. Additionally, like HyperNova, Neo runs a single invocation of the sum-check protocol, where in HyperNova it is over the scalar field of an elliptic curve and in Neo it is over an extension of a small prime field.
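A rough sketch of the vector-to-matrix, commit-to-the-matrix shape described above (an illustrative Ajtai-style toy in Python with made-up parameters; the actual scheme's parameterization and homomorphism argument are more involved):

```python
import random

q, kappa = 2**31 - 1, 8   # toy modulus and number of commitment rows

def to_bit_matrix(vec, width):
    # Decompose each entry into `width` bits: row i holds the bits of vec[i].
    return [[(v >> c) & 1 for c in range(width)] for v in vec]

def commit(A, vec, width):
    # Ajtai-style commitment to the bit matrix M: C = A * M (mod q).
    # The cost grows with `width`, so a vector of bits (width=1) is
    # cheaper to commit to than a vector of 16-bit values (width=16).
    M = to_bit_matrix(vec, width)
    d = len(vec)
    return [[sum(A[r][i] * M[i][c] for i in range(d)) % q
             for c in range(width)] for r in range(kappa)]

vec = [random.randrange(2**16) for _ in range(32)]
A = [[random.randrange(q) for _ in range(len(vec))] for _ in range(kappa)]
C = commit(A, vec, width=16)
print(len(C), "x", len(C[0]), "commitment")  # kappa x width
```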