Higher Residuosity Attacks on Small RSA Subgroup Decision Problems
Secure two-party comparison, known as Yao's millionaires' problem, has been a fundamental challenge in privacy-preserving computation. It enables two parties to compare their inputs without revealing the exact values of those inputs or relying on any trusted third party. One elegant approach to secure computation is based on homomorphic encryption. Recently, building on this approach, Carlton et al. (CEK, CT-RSA 2018) and Bourse et al. (BST, CT-RSA 2020) presented novel solutions for the problem of secure integer comparison. These protocols have demonstrated significantly improved performance compared to the well-known and frequently used DGK protocol (ACISP 2007 and Int. J. Appl. Cryptogr. 1(4), 323–324, 2009). In this paper, we introduce a class of higher residuosity attacks, which can be regarded as an extension of the classical quadratic residuosity attack on the decisional Diffie-Hellman problem. We demonstrate that the small RSA subgroup decision problems, upon which both the CEK and BST protocols are based, are not difficult to solve when the prime base is small, which is precisely the setting in which the protocols achieve optimal overall performance. Furthermore, we offer recommendations for precluding such attacks, including one approach that does not adversely affect performance. We hope that these attacks can be applied to analyze other number-theoretic hardness assumptions.
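To make the attack pattern concrete, the sketch below illustrates the classical quadratic residuosity attack on DDH that the paper generalizes to higher residues: when the generator is a quadratic non-residue, the residue symbols of the public group elements leak the parity of the exponents, which is enough to distinguish real tuples from random ones. All parameters here are toy values of ours, not taken from the paper.

```python
# Illustrative sketch (not from the paper): the classical quadratic
# residuosity distinguisher for DDH. With g a non-residue mod p,
# g^x is a quadratic residue iff x is even, so QR(g^ab) is fully
# determined by QR(g^a) and QR(g^b).
import random

p = 1019  # small prime, for illustration only
g = 2     # 2 is a non-residue generator of Z_1019^*

def is_qr(x: int) -> bool:
    """Euler's criterion: x is a quadratic residue mod p iff x^((p-1)/2) = 1."""
    return pow(x, (p - 1) // 2, p) == 1

def ddh_guess(A: int, B: int, C: int) -> bool:
    """Guess 'real DDH tuple' iff QR(C) matches what A and B force:
    g^(ab) is a residue unless both a and b are odd."""
    expected_qr = is_qr(A) or is_qr(B)
    return is_qr(C) == expected_qr

a, b = random.randrange(1, p - 1), random.randrange(1, p - 1)
real = (pow(g, a, p), pow(g, b, p), pow(g, a * b, p))
rand = (pow(g, a, p), pow(g, b, p), pow(g, random.randrange(1, p - 1), p))
print(ddh_guess(*real))  # always True
print(ddh_guess(*rand))  # True only ~1/2 of the time: advantage 1/2
```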
Polynomial Secret Sharing Schemes and Algebraic Matroids
In a secret sharing scheme with polynomial sharing, the secret is an element of a finite field, and the shares are obtained by evaluating polynomials on the secret and some random field elements; i.e., for every party there is a set of polynomials that computes the party's share. These schemes generalize linear ones, adding expressivity and giving room for more efficient schemes. To identify the access structures for which this efficiency gain is relevant, we need a systematic method for determining the access structure of a polynomial scheme, i.e., which sets of parties can reconstruct the secret. As a first step, we study ideal polynomial secret sharing schemes, where there is a single polynomial for each party. Ideal schemes have optimal share size because the size of each share equals the size of the secret.
Our goal is to generalize results on linear secret sharing schemes, i.e., schemes in which the shares are computed by applying linear mappings and the linear dependency of these mappings determines their access structures. To achieve this goal, we study the connection between the algebraic dependency of the sharing polynomials and the access structure of the polynomial scheme. Our first result shows that if the degree of the sharing polynomials is not too large compared to the size of the field, then the algebraic dependence of the sharing polynomials determines the access structure of the scheme. This contributes to the characterization of ideal polynomial schemes and establishes a new connection between families of ideal schemes and algebraic matroids.
Conversely, we ask the question: If we associate a polynomial with each party and the dealer, can we use these polynomials to realize the access structure determined by the algebraic dependency of the polynomials? Our second result shows that these access structures admit statistical schemes with small shares. Finally, we extend this result to the general case where each party may have more than one polynomial.
Partial Lattice Trapdoors: How to Split Lattice Trapdoors, Literally
Lattice trapdoor algorithms allow us to sample hard random lattices together with their trapdoors, given which short lattice vectors can be sampled efficiently. This enables a wide range of advanced cryptographic primitives. In this work, we ask: can we distribute lattice trapdoor algorithms non-interactively?
We study a natural approach to sharing lattice trapdoors: splitting them into partial trapdoors for different lower-rank sublattices which allow the local sampling of short sublattice vectors. Given sufficiently many short sublattice vectors, these can then be combined to yield short vectors in the original lattice. Moreover, this process can be repeated an unbounded polynomial number of times without needing a party holding a full trapdoor to intervene. We further define one-wayness and indistinguishability properties for partial trapdoors.
We establish that such objects with non-trivial performance exist under standard assumptions. Specifically, we prove these properties for a simple construction from the κ-SIS and κ-LWE assumptions, which were previously shown to be implied by the plain SIS and LWE assumptions, respectively. The security proofs extend naturally to the ring or module settings under the respective analogues of these assumptions, which have been conjectured to admit similar reductions.
Our partial trapdoors achieve non-trivial efficiency, with relevant parameters sublinear in the number of shareholders. Our construction is algebraic, without resorting to generic tools such as multiparty computation or fully homomorphic encryption. Consequently, a wide range of lattice-trapdoor-based primitives can be thresholdised non-interactively by simply substituting the trapdoor preimage sampling procedure with our partial analogue.
Enabling Microarchitectural Agility: Taking ML-KEM & ML-DSA from Cortex-M4 to M7 with SLOTHY
Highly-optimized assembly is commonly used to achieve the best performance for popular cryptographic schemes such as the newly standardized ML-KEM and ML-DSA.
The majority of implementations today rely on hand-optimized assembly for the core building blocks to achieve both security and performance.
However, recent work by Abdulrahman et al. takes a new approach, writing a readable base assembly implementation first and leaving the bulk of the optimization work to a tool named SLOTHY based on constraint programming.
SLOTHY performs instruction scheduling, register allocation, and software pipelining simultaneously using constraints modeling the architectural and microarchitectural details of the target platform.
In this work, we extend SLOTHY and investigate how it can be used to migrate already highly hand-optimized assembly to a different microarchitecture, while maximizing performance.
As a case study, we optimize state-of-the-art Arm Cortex-M4 implementations of ML-KEM and ML-DSA for the Arm Cortex-M7.
Our results suggest that this approach is promising:
For the number-theoretic transform (NTT) – the core building block of both ML-DSA and ML-KEM – we achieve significant speed-ups for both schemes.
For Keccak – the permutation used by SHA-3 and SHAKE and also used extensively in ML-DSA and ML-KEM – we achieve speed-ups of 30% compared to the M4 code and 5% compared to hand-optimized M7 code.
For many other building blocks, we achieve similarly significant speed-ups.
Overall, this results in 11 to 33% faster code for the cryptosystems as a whole.
Lattice-Based Updatable Public-Key Encryption for Group Messaging
Updatable Public-Key Encryption (UPKE) augments the security of PKE with Forward Secrecy properties. While requiring more coordination between parties, UPKE enables much more efficient constructions than full-fledged Forward-Secret PKE. Alwen, Fuchsbauer and Mularczyk (AFM, Eurocrypt’24) presented the strongest security notion to date. It is the first to meet the needs of UPKE’s most important applications: Secure Group Messaging and Continuous Group Key Agreement. The authors provide a very efficient construction meeting their notion with classic security based on the Computational Diffie-Hellman (CDH) assumption in the Random Oracle Model (ROM).
In this work we present the first post-quantum secure UPKE construction meeting (a slight relaxation of) the AFM security notion. Based on the Module LWE assumption, our construction is practically efficient. Moreover, public key and ciphertext sizes are only a fraction of those of the state-of-the-art lattice-based UPKE scheme in the ROM by Abou Haidar, Passelègue and Stehlé – despite that scheme only being shown to satisfy a significantly weaker security notion. As the AFM proof relies on the random self-reducibility of CDH, which has no analogue for lattices, we develop a new proof technique for strong UPKE, identifying the core properties required from the underlying (lattice-based) encryption scheme.
Traitor Tracing in the Multi-sender Setting: Traceable Multi-client Functional Encryption
Traitor tracing is a traditional cryptographic primitive designed for scenarios with multiple legitimate receivers. When the plaintext - that is, the output of decryption - is leaked and more than one legitimate receiver exists, it becomes imperative to identify the source of the leakage, a need that has motivated the development of traitor tracing techniques. Recent advances in standard encryption have enabled decryption outcomes to be defined in a fine-grained manner through the introduction of Functional Encryption (FE). Constructing FE schemes is intriguing, and achieving the tracing property adds an additional layer of complexity. Traitor tracing techniques have been actively developed for more than three decades, yet they have always remained within the same framework - a single sender responsible for encrypting all the data.
However, fine-grained decryption is particularly useful when data originates from multiple sources, allowing for joint computation on personal data. This leads to the concept of multi-client functional encryption (MCFE), where multiple concurrent senders independently encrypt their data while agreeing on the decryption of a specific function (e.g., a statistical measure) computed on the aggregated data, without revealing any additional information. In the era of cloud computing and big data, privacy-preserving joint computation is crucial, and tracing the source of any breach by dishonest participants becomes essential. Thus, in this paper we take the first step toward addressing the tracing problem in the general context of joint computation with multiple senders. Our contributions are twofold:
- We propose the first tracing model in the context of multi-sender encryption, namely traceable multi-client functional encryption, which allows a pirate to extract secret information from both receivers and senders. Our model supports strong and naturally admissible decoders, removing artificial restrictions on the pirate decoder and thus addressing the shortcomings of existing traceable functional encryption schemes designed for the single-sender setting.
- To achieve our conceptual objective, we build upon the recently introduced notion of strong admissibility for MCFE. Our main technical contribution is a generic compiler that transforms a large class of MCFE schemes with weak admissibility into schemes with strong admissibility. This compiler not only helps overcome existing challenges but may also be of general interest within the functional encryption domain. Finally, we present a concrete lattice-based scheme for inner-product functionalities that achieves post-quantum security under standard assumptions.
The Security of Hash-and-Sign with Retry against Superposition Attacks
When considering security against quantum adversaries, the traditional notion of existential unforgeability (EUF-CMA security) remains important, but it is also desirable to consider security against adversaries making quantum queries to the signing oracle: plus-one security (PO security) and blind unforgeability (BU security), proposed by Boneh and Zhandry (Crypto 2013) and Alagic et al. (EUROCRYPT 2020), respectively. Hash-and-sign is one of the most common paradigms for constructing EUF-CMA-secure signature schemes in the quantum random oracle model, employing a trapdoor function and a hash function. It is known that its derandomized version is PO- and BU-secure. A variant of hash-and-sign, known as hash-and-sign with retry (HSwR), formulated by Kosuge and Xagawa (PKC 2024), is widespread since it allows for weakening the security assumptions on the trapdoor function. Unfortunately, it has not been known whether HSwR can achieve PO and BU security even with derandomization.
In this paper, we apply a derandomization with bounded loops to HSwR. We demonstrate that HSwR can achieve PO and BU security through this approach. Since derandomization with bounded loops offers advantages in some implementations, our results support its wider adoption, including in NIST PQC candidates.
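For intuition, here is a minimal sketch of how derandomization with bounded loops can look for an HSwR-style signer: each retry derives its randomness from a PRF applied to the message and a loop counter, so signing is deterministic per message. The trapdoor inversion below is a toy stub of ours (real schemes use, e.g., lattice or multivariate trapdoors); only the loop structure reflects the paradigm.

```python
# Hedged sketch of derandomized hash-and-sign with retry (HSwR):
# per-retry randomness comes from PRF(key, msg, counter), and the
# retry loop is bounded by B. toy_invert is a stand-in that fails
# about half the time, modelling real trapdoor inversion failures.
import hashlib, hmac

B = 64  # loop bound for the derandomized retries

def prf(key: bytes, msg: bytes, ctr: int) -> bytes:
    return hmac.new(key, msg + ctr.to_bytes(4, "big"), hashlib.sha256).digest()

def toy_invert(trapdoor: bytes, target: bytes, salt: bytes):
    """Stub: 'inversion' succeeds only for ~half the targets."""
    digest = hashlib.sha256(trapdoor + target + salt).digest()
    return digest if digest[0] % 2 == 0 else None

def sign(sk_prf: bytes, trapdoor: bytes, msg: bytes):
    for ctr in range(B):
        salt = prf(sk_prf, msg, ctr)             # derandomized retry salt
        target = hashlib.sha256(msg + salt).digest()
        preimage = toy_invert(trapdoor, target, salt)
        if preimage is not None:
            return (salt, preimage)              # signature = (salt, preimage)
    raise RuntimeError("loop bound exceeded")    # happens w.p. ~2^-B here
```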
Adaptively Secure Fully Homomorphic Message Authentication Code with Pre-processable Verification
There has been remarkable progress in fully homomorphic encryption ever since Gentry's first scheme. In contrast, fully homomorphic authentication primitives have received relatively little attention, despite the existence of some previous constructions. While there exist various schemes with different functionalities for fully homomorphic encryption, there are only a few options for fully homomorphic authentication. Moreover, there are even fewer options when considering two of the most important properties: adaptive security and pre-processable verification. To our knowledge, except for some concurrent works, achieving both properties requires the use of a nested construction, which involves homomorphically authenticating a homomorphic authentication tag of a message, making the scheme costly and complicated.
In this work, we propose a dedicated scheme for (leveled) fully homomorphic message authentication code that is adaptively secure and has pre-processable verification. Leveraging the secrecy of the primitive, we demonstrate that a slight modification of a selectively secure (leveled) fully homomorphic signature scheme yields an adaptively secure (leveled) fully homomorphic message authentication code with pre-processable verification. Additionally, we introduce a novel notion and generic transform to enhance the security of a homomorphic message authentication code, which also exploits the secrecy of the primitive.
Predicate Encryption from Lattices: Enhanced Compactness and Refined Functionality
In this work, we explore the field of lattice-based Predicate Encryption (PE), with a focus on enhancing compactness and refining functionality.
First, we present a more compact bounded-collusion predicate encryption scheme than previous constructions, significantly reducing both the per-unit expansion and the fixed overhead, while maintaining an optimal blow-up that is linear in the collusion bound.
Next, we propose a Predicate Inner Product Functional Encryption (P-IPFE) scheme based on our constructed predicate encryption scheme. P-IPFE preserves the attribute-hiding property while enabling decryption to reveal only the inner product between the key and message vectors, rather than the entire message as in traditional PE. Our P-IPFE scheme also achieves bounded collusion resistance while inheriting the linear compactness optimized in the underlying PE scheme. Additionally, it supports any polynomial-size, bounded-depth circuit, thereby extending beyond the inner-product predicate class of prior works.
Furthermore, all the proposed schemes achieve selective fully attribute-hiding security in the simulation-based model, and can therefore attain semi-adaptive security by adopting existing upgrading techniques.
Vanishing Short Integer Solution, Revisited: Reductions, Trapdoors, Homomorphic Signatures for Low-Degree Polynomials
The vanishing short integer solution (vSIS) assumption [Cini-Lai-Malavolta, Crypto'23], in its simplest form, asserts the hardness of finding a polynomial with short coefficients which vanishes at a given random point. While vSIS has proven to be useful in applications such as succinct arguments, not much is known about its theoretical hardness. Furthermore, without the ability to generate a hard instance together with a trapdoor, the applicability of vSIS is significantly limited.
We revisit the vSIS assumption, focusing on the univariate single-point constant-degree setting, which can be seen as a generalisation of the (search) NTRU problem. In such a setting, we show that the vSIS problem is as hard as finding the shortest vector in certain ideal lattices. We also show how to generate a random vSIS instance together with a trapdoor, under the (decision) NTRU assumption. Interestingly, a vSIS trapdoor allows one to sample polynomials with short coefficients which evaluate to any given value at the public point. By exploiting the multiplicativity of the polynomial ring, we use vSIS trapdoors to build a new homomorphic signature scheme for low-degree polynomials.
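As a reference point, the univariate single-point constant-degree vSIS relation discussed above can be stated as follows (notation ours; R_q is the ring modulo q and β the shortness bound):

```latex
% Univariate, single-point, constant-degree vSIS (restated; notation ours)
\text{Given uniformly random } v \in R_q,\ \text{find } (p_0,\dots,p_d) \neq 0
\text{ with } \|p_i\| \le \beta \text{ for all } i, \text{ such that }
\sum_{i=0}^{d} p_i\, v^i \equiv 0 \pmod{q}.
```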
A Note on Zero-Knowledge Simulator of the CROSS Identification Protocol
We point out a flaw in the zero-knowledge simulator of the CROSS identification protocol, which allows a distinguisher to distinguish real and simulated transcripts given access to the witness. Moreover, we show that the real and simulated transcripts are not statistically indistinguishable, and therefore the protocol can only satisfy weak computational (rather than strong, statistical, or perfect) honest-verifier zero-knowledge. This issue is still present in version 2.0, updated on January 31, 2025, which resolves the security losses attained via the attacks of [BLP+25].
The Complexity of Memory Checking with Covert Security
A memory checker is an algorithmic tool used to certify the integrity of a database maintained on a remote, unreliable, computationally bounded server. Concretely, it allows a user to issue instructions to the server and, after every instruction, obtain either the correct value or a failure (but not an incorrect answer) with high probability. A recent result due to Boyle, Komargodski, and Vafa (BKV, STOC '24) showed a tradeoff between the size of the local storage and the number of queries the memory checker makes to the server upon every logical instruction. Specifically, they show that every non-trivial memory checker construction with inverse-polynomial soundness and small local storage must incur a super-constant query overhead, and this is tight up to constant factors given known constructions. However, an intriguing question is whether natural relaxations of the security guarantee could allow for more efficient constructions.
We consider and adapt the notion of covert security to the memory checking context, wherein the adversary can effectively cheat while taking the risk of being caught with constant probability. Notably, BKV's lower bound does not apply in this setting.
We close this gap and prove that the same query overhead is unavoidable even in the covert security setting. Our lower bound applies to any memory checker construction, including ones that use randomness and adaptivity and ones that rely on cryptographic assumptions and/or the random oracle model, as long as they satisfy a natural "read-only reads" property. This property requires that a memory checker not modify the contents of the database or local storage during the execution of a logical read instruction.
Random Number Generation from Pulsars
Pulsars exhibit signals with precise inter-arrival times on the order of milliseconds to seconds, depending on the individual pulsar. There is subtle variation in the timing of pulsar signals, primarily due to the presence of gravitational waves, intrinsic variance in the period of the pulsar, and errors in the realization of Terrestrial Time (TT). Traditionally, these variations are dismissed as noise in high-precision timing experiments. In this paper, we show that these variations serve as a natural entropy source for the creation of random number generators (RNGs). We also explore the effects of using randomness extractors to increase the entropy of random bits extracted from pulsar timing data. To evaluate the quality of the pulsar RNG, we model its entropy as a k-source and use well-known cryptographic results to show its closeness to a theoretically ideal uniformly random source. To remain consistent with prior work, we also show that the pulsar RNG passes well-known statistical tests such as the NIST test suite.
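To illustrate the post-processing step, the toy below feeds biased "timing jitter" bits through a von Neumann extractor, the simplest extractor that turns independent-but-biased bits into unbiased ones. The jitter model is an invention of ours for illustration, not the paper's pipeline, which uses seeded extractors over real timing residuals.

```python
# Illustrative sketch (assumptions ours): treat low-order bits of pulse
# inter-arrival jitter as a weak source and post-process with a
# von Neumann extractor: pairs 10 -> 1, 01 -> 0; discard 00 and 11.
import random

def jitter_bits(n: int) -> list[int]:
    """Toy stand-in for timing residuals: biased bits (p=0.6 of a 1)."""
    return [1 if random.random() < 0.6 else 0 for _ in range(n)]

def von_neumann(bits: list[int]) -> list[int]:
    out = []
    for b0, b1 in zip(bits[0::2], bits[1::2]):
        if b0 != b1:
            out.append(b0)  # unbiased: P(10) = P(01) for independent bits
    return out

raw = jitter_bits(10_000)
extracted = von_neumann(raw)
print(len(extracted), sum(extracted) / len(extracted))  # ~2400 bits, mean ~0.5
```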
Lattice-based Proof-Friendly Signatures from Vanishing Short Integer Solutions
Efficient anonymous credentials are typically constructed by combining proof-friendly signature schemes with compatible zero-knowledge proof systems. Inspired by pairing-based proof-friendly signatures such as Boneh-Boyen (BB) and Boneh-Boyen-Shacham (BBS), we propose a wide family of lattice-based proof-friendly signatures based on variants of the vanishing short integer solution (vSIS) assumption [Cini-Lai-Malavolta, Crypto'23]. In particular, we obtain natural lattice-based adaptations of BB and BBS which, similar to their pairing-based counterparts, admit nice algebraic properties.
[Bootle-Lyubashevsky-Nguyen-Sorniotti, Crypto'23] (BLNS) recently proposed a framework for constructing lattice-based proof-friendly signatures and anonymous credentials, based on another new lattice assumption parametrised by a fixed function f, with a focus on f being the binary decomposition. We introduce a generalised framework with a keyed and probabilistic function f. For example, picking f keyed by a short ring element leads to algebraic and thus proof-friendly signatures. To better gauge the robustness and proof-friendliness of the framework, we consider what happens when the inputs to f are chosen selectively (or even adaptively) by the adversary, and the behaviour under relaxed norm checks. While bit decomposition quickly becomes insecure, our proposed function families seem robust.
Commit-and-Prove System for Vectors and Applications to Threshold Signing
Multi-signatures allow combining several individual signatures into a compact one and verifying it against a short aggregated key. Compared to threshold signatures, multi-signatures enjoy non-interactive key generation but give up the threshold setting. Recent works by Das et al. (CCS'23) and Garg et al. (S&P'24) show how multi-signatures can be turned into schemes that enable efficient verification when an ad hoc threshold -- determined only at verification -- is satisfied. This makes it possible to keep the simple key generation of multi-signatures and support flexible threshold settings in the signing process later on. Both works use the same idea of combining BLS multi-signatures with inner-product proofs over committed keys. Das et al. give a somewhat generic proof from both building blocks, which we show to be flawed, whereas Garg et al. give a direct proof for the combined construction in the algebraic group model.
In this work, we identify the common blueprint used in both works and abstract the proof-based approach through the building block of a commit-and-prove system for vectors (CP). We formally define a flexible set of security properties for the CP system and show how it can be securely combined with a multi-signature to yield a signature with ad hoc thresholds. Our scheme also lifts the threshold signatures into the multiverse setting recently introduced by Baird et al. (S&P'23), which allows signers to re-use their long-term keys across several groups. The challenge in the generic construction is to express -- and realize -- the combination of homomorphic proofs and commitments (needed for flexible thresholds over fixed group keys) and their simulation extractability (needed in the threshold signature security proof). We finally show that a CP instantiation closely following the ideas of Das et al. can be proven secure, but requires a new flexible-base DL assumption to do so.
Delayed-Input Multi-Party Computation
In this work, we consider the setting where the process of securely evaluating a multi-party functionality is divided into two phases: offline (or preprocessing) and online. The offline phase is independent of the parties’ inputs, whereas the online phase does require knowledge of the inputs. We consider the problem of minimizing the rounds of communication required in the online phase and propose a round-preserving compiler that can turn a large class of multi-party computation (MPC) protocols into protocols in which only the last two rounds are input-dependent. Our compiler can be applied to a large class of MPC protocols, and in particular to all existing round-optimal MPC protocols. All our results assume no setup and are proven in the dishonest majority setting with black-box simulation.

As part of our contribution, we propose a new definition, which we call Multi-Party Computation with Adaptive-Input Selection, that allows the distinguisher to craft the inputs the honest parties should use during the online phase, adaptively based on the offline phase. This new definition is needed to argue that not only are the messages of the offline phase input-independent, but also that security holds even in the stronger (and realistic) adversarial setting where the inputs may depend on some of the offline-phase protocol messages. We argue that this is the definition that any protocol should satisfy to be securely used while preprocessing part of the rounds. We are the first to study this definition in a setting where there is no setup and the majority of the parties can be corrupted. Prior definitions have been presented in the Universal Composability framework, which is unfortunately not well suited for our setting (i.e., no setup and dishonest majority).

As a corollary, we obtain the first four-round (which is optimal) MPC protocol where the first two rounds can be preprocessed, and whose security holds against adaptive-input selection.
Stronger Security for Threshold Blind Signatures
Blind signatures allow a user to obtain a signature from an issuer in a privacy-preserving way: the issuer neither learns the signed message, nor can it link the signature to its issuance. The threshold version of blind signatures further splits the secret key among n issuers, and requires the user to obtain at least t ≤ n signature shares in order to derive the final signature. Security should then hold as long as at most t − 1 issuers are corrupt. Security for blind signatures is expressed through the notion of one-more unforgeability, which demands that an adversary must not be able to produce more signatures than what is considered trivial after its interactions with the honest issuer(s). While one-more unforgeability is well understood for the single-issuer setting, the situation is much less clear in the threshold case: due to the blind issuance, counting which interactions can yield a trivial signature is a challenging task. Existing works bypass that challenge by using simplified models that do not fully capture the expectations of the threshold setting. In this work, we study the security of threshold blind signatures and propose a framework of one-more unforgeability notions where the adversary can corrupt c < t issuers. Our model is generic enough to capture both interactive and non-interactive protocols, and it provides a set of natural properties with increasingly stronger guarantees, giving the issuers gradually more control over how their shares can be combined. As a point of comparison, we reconsider the existing threshold blind signature models and show that their security guarantees are weaker and less clear than they seem. We then re-assess the security of existing threshold blind signature schemes – BLS-based and Snowblind – in our framework, and show how to lift them to provide stronger security.
Efficient NIZK Arguments with Straight-Line Simulation and Extraction
Non-interactive zero-knowledge (NIZK) arguments allow a prover to convince a verifier of the truthfulness of an NP statement by sending just one message, without disclosing any additional information. In several practical scenarios, the Fiat-Shamir transform is used to convert an efficient constant-round public-coin honest-verifier zero-knowledge proof system into an efficient NIZK argument system. This approach is provably secure in the random oracle model, but it crucially requires the programmability of the random oracle, and extraction works through rewinding. The works of Lindell [TCC 2015] and Ciampi et al. [TCC 2016] proposed efficient NIZK arguments with non-programmable random oracles along with a programmable common reference string.

In this work we show an efficient NIZK argument with straight-line simulation and extraction that relies on features that alone are insufficient to construct NIZK arguments (regardless of efficiency). More specifically, we consider the notion of quasi-polynomial time simulation proposed by Pass [EUROCRYPT 2003] and combine it with simulation and extraction with non-programmable random oracles, thus obtaining a NIZK argument of knowledge in which neither the zero-knowledge simulator nor the argument-of-knowledge extractor needs to program the random oracle. Still, both the simulator and the extractor are straight-line. Our construction uses as a building block a modification of Fischlin’s transform [CRYPTO 2005] and combines it with the concept of dense puzzles introduced by Baldimtsi et al. [ASIACRYPT 2016]. We also argue that our NIZK argument system inherits the efficiency features of Fischlin’s transform, which represents the main advantage of Fischlin’s protocol over existing schemes.
Thorough Power Analysis on Falcon Gaussian Samplers and Practical Countermeasure
Falcon is one of the post-quantum signature schemes selected by NIST for standardization. With the deployment underway, its implementation security is of great importance. In this work, we focus on the side-channel security of Falcon, and our contributions are threefold.
First, by exploiting the symplecticity of NTRU and a recent decoding technique, we dramatically improve the key recovery using power leakages within Falcon Gaussian samplers. Compared to the state of the art (Zhang, Lin, Yu and Wang, EUROCRYPT 2023), the amount of traces required by our attack for a full key recovery is reduced by at least 85%.
Secondly, we present a complete power analysis of two exposed power leakages within Falcon’s integer Gaussian sampler. We identify new sources of these leakages, which had not been identified by previous works, and conduct detailed security evaluations on the reference implementation of Falcon using the ChipWhisperer platform.
Thirdly, we propose effective and easy-to-implement countermeasures against both leakages to protect the whole of Falcon’s integer Gaussian sampler. With our countermeasures in place, we provide security evaluations on ChipWhisperer and report the performance of the protected implementation. Experimental results highlight that our countermeasures admit a practical trade-off between efficiency and side-channel security.
Bootstrapping with RMFE for Fully Homomorphic Encryption
There is a heavy preference towards instantiating BGV and BFV homomorphic encryption schemes where the cyclotomic order is a power of two, as this admits highly efficient fast Fourier transformations. Field Instruction Multiple Data (FIMD) was introduced to increase packing capacity in the case of small primes and improve amortised performance, using reverse multiplication-friendly embeddings (RMFEs) to encode more data into each SIMD slot. However, FIMD currently does not admit bootstrapping.
In this work, we achieve bootstrapping for RMFE-packed ciphertexts with low capacity loss. We first adapt the digit extraction algorithm to work over RMFE-packed ciphertexts, by applying the recode map after every evaluation of the lifting polynomial. This allows us to follow the blueprint of thin bootstrapping, performing digit extraction on a single ciphertext. To achieve the low capacity loss, we introduce correction maps to the Halevi-Shoup digit extraction algorithm, to remove all but the final recode of RMFE digit extraction.
We implement several workflows for bootstrapping RMFE-packed ciphertexts in HElib and benchmark them against thin bootstrapping. Our experiments show that the basic strategy of recoding multiple times during digit extraction yields better data packing, but results in very low remaining capacity and latencies of up to hundreds of seconds. On the other hand, using correction maps preserves additional multiplicative depth and substantially reduces latency, at the cost of lower packing capacity.
Efficient Distributed Randomness Generation from Minimal Assumptions where PArties Speak Sequentially Once
We study efficient public randomness generation protocols in the PASSO (PArties Speak Sequentially Once) model for multi-party computation (MPC). PASSO is a variation of traditional MPC where parties are executed in sequence and each party ``speaks'' only once, broadcasting and sending secret messages only to parties further down the line. Prior results in this setting include information-theoretic protocols in which the computational complexity scales exponentially with the number of corruptions (CRYPTO 2022), as well as more efficient computationally-secure protocols either assuming a trusted setup phase or DDH (FC 2024). Moreover, these works only consider security against static adversaries.
In this work, we focus on computational security against adaptive adversaries under minimal assumptions, and improve on the works mentioned above in several ways:
- Assuming the existence of non-interactive perfectly binding commitments, we design protocols that are efficient and secure whenever the number of corruptions is small compared to the security parameter (e.g., constant). This improves the resiliency of all previous protocols, even those requiring a trusted setup, and shows that fewer parties suffice for a given number of corruptions in the computational setting than are required for information-theoretic security.
- Under the same assumption, we design protocols (with a number of parties depending on the adversarial network model) that remain efficient for a larger number of corruptions. This improves on the existing DDH-based protocol both in terms of resiliency and the underlying assumptions.
- We design efficient protocols (again with a number of parties depending on the adversarial network model) assuming only the existence of one-way functions.
We complement these results by studying lower bounds for randomness generation protocols in the computational setting.
Juicebox Protocol: Distributed Storage and Recovery of Secrets Using Simple PIN Authentication
Existing secret management techniques demand users memorize complex passwords, store convoluted recovery phrases, or place their trust in a specific service or hardware provider. We have designed a novel protocol that combines existing cryptographic techniques to eliminate these complications and reduce user complexity to recalling a short PIN. Our protocol specifically focuses on a distributed approach to secret storage that leverages Oblivious Pseudorandom Functions (OPRFs) and a Secret-Sharing Scheme (SSS) combined with self-destructing secrets to minimize the trust placed in any singular server. Additionally, our approach allows for servers distributed across organizations, eliminating the need to trust a singular service operator. We have built an open-source implementation of the client and server sides of this new protocol, the latter of which has variants for running on commodity hardware and secure hardware.
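As a concrete reference for the secret-sharing component, here is a minimal Shamir sketch over a prime field; the OPRF, PIN-hardening and self-destruction layers of the protocol are deliberately out of scope, and all parameters are illustrative choices of ours.

```python
# Minimal Shamir secret-sharing sketch over GF(p): any t of n shares
# reconstruct the secret; fewer reveal nothing. Illustrates only the
# SSS building block of the protocol described above.
import random

P = 2**127 - 1  # a Mersenne prime, used here as the field modulus

def share(secret: int, t: int, n: int):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):  # evaluate the degree-(t-1) polynomial at x
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):   # Lagrange interpolation at 0
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 suffice
```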
Helix: Scalable Multi-Party Machine Learning Inference against Malicious Adversaries
With the growing emphasis on data privacy, secure multi-party computation has garnered significant attention for its strong security guarantees in developing privacy-preserving machine learning (PPML) schemes. However, only a few works address scenarios with a large number of participants. The state of the art by Liu et al. (LXY24, USENIX Security'24) first achieves a practical PPML protocol for up to 63 parties but is constrained to semi-honest security. Although naive extensions to the malicious setting are possible, they would introduce significant overhead.
In this paper, we propose Helix, a scalable framework for maliciously secure PPML in the honest majority setting, aiming to enhance both the scalability and practicality of maliciously secure protocols. In particular, we report a privacy leakage issue in LXY24 during prefix OR operations and introduce a round-optimized alternative based on a single-round vectorized three-layer multiplication protocol. Additionally, by exploiting reusability properties within the computation process, we propose lightweight compression protocols that substantially improve the efficiency of multiplication verification. We also develop a batch check protocol to reduce the computational complexity of revealing operations in the malicious setting. For 63-party neural network inference, compared to the semi-honest LXY24, Helix is only 1.9× (1.1×) slower in the online phase and 1.2× (1.1×) slower in preprocessing under LAN (WAN) in the best case.
Homomorphic Encryption for Large Integers from Nested Residue Number Systems
Existing fully homomorphic encryption (FHE) schemes primarily support a plaintext space defined over a relatively small prime. However, in some important applications of FHE one needs arithmetic over a large prescribed prime. In this paper we construct a new FHE system that is specifically designed for this purpose.
Our system composes three layers of residue systems to enable much better performance than was previously possible. Our experiments show that for arithmetic modulo a 256-bit integer, when compared to the TFHE-rs implementation of 256-bit arithmetic, our new system achieves a factor of a thousand better multiplication throughput and a factor of ten better latency. Moreover, for a 2048-bit prime modulus we achieve far better performance than was previously possible.
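To illustrate the basic mechanism being composed, the toy below shows a single residue-number-system layer: one large-integer multiplication becomes independent small multiplications, recombined by the CRT. The moduli are made-up illustrative choices; the paper's three-layer nesting and FHE integration are not reproduced here.

```python
# One-layer residue-number-system (RNS) sketch: arithmetic mod a large
# M is replaced by independent arithmetic on small residues and
# recovered via the Chinese remainder theorem.
from math import prod

MODULI = [2**13 - 1, 2**17 - 1, 2**19 - 1, 2**31 - 1]  # pairwise coprime
M = prod(MODULI)

def to_rns(x: int) -> list[int]:
    return [x % m for m in MODULI]

def from_rns(residues: list[int]) -> int:
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x = (x + r * Mi * pow(Mi, -1, m)) % M  # CRT recombination
    return x

a, b = 123456789012345, 987654321098765
c = [(ra * rb) % m for ra, rb, m in zip(to_rns(a), to_rns(b), MODULI)]
assert from_rns(c) == (a * b) % M  # componentwise mul = big-int mul mod M
```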
Publicly Verifiable Threshold Proxy Re-encryption and Its Application in Data Rights Confirmation
Proxy re-encryption (PRE) has been regarded as an effective cryptographic primitive in data sharing systems with distributed proxies. However, no prior work considers the honesty of data owners, which is critical in the age of big data. In this paper, we fill the gap by introducing a new proxy re-encryption scheme, called publicly verifiable threshold PRE (PVTPRE). Briefly speaking, we apply a slightly modified publicly verifiable secret sharing (PVSS) scheme to distribute the re-encryption keys to multiple proxies. Consequently, we achieve public verifiability of data owners non-interactively. The correctness of data users in decryption and the public verifiability of proxies in re-encryption are then guaranteed seamlessly through the PVSS reconstruction algorithms. We further prove that PVTPRE satisfies IND-CPA security. Besides, we put forward a privacy-preserving data rights confirmation framework, based on the PVTPRE scheme and blockchain, that provides clear principles for data ownership and usage. The blockchain plays the role of data bank and smart contract engine, providing reliable storage and verification for the whole framework. To our knowledge, we are the first to systematically investigate data rights confirmation with both privacy and public verifiability in mind, addressing the growing need for robust mechanisms to protect data rights and ensure transparency. Finally, we conduct comprehensive experiments to illustrate the correctness, feasibility and effectiveness of our approach. The experimental results show that PVTPRE outperforms other PRE schemes in many aspects.
Publicly Verifiable Generalized Secret Sharing and Its Application in Building Decentralized Exchange
Generalized secret sharing (GSS), which offers more flexibility by accommodating diverse access structures and conditions, has been under-explored in distributed computing over the past decades. To address this gap, we propose a publicly verifiable generalized secret sharing (PVGSS) scheme, enhancing the applicability of GSS in transparent systems. Public verifiability is a crucial property for gaining trustworthiness in decentralized systems like blockchains. We begin by introducing two GSS constructions, one based on Shamir's secret sharing and the other on a linear secret sharing scheme (LSSS). Next, we present PVGSS schemes that combine GSS with non-interactive zero-knowledge (NIZK) proofs. Further, we construct a decentralized exchange (DEX) based on the PVGSS scheme, in which any user can participate in exchanges and engage in arbitrage. Specifically, users can fairly swap ERC-20 tokens with passive watchers, who earn profits by providing arbitration services. The critical property of "fairness" required by the DEX is ensured through a sophisticated access structure, supported by the PVGSS scheme. We provide a comprehensive evaluation of the performance of the PVGSS schemes and the monetary costs for users in the DEX. The results demonstrate the feasibility and practicality of this approach in real-world applications.
Tight Multi-challenge Security Reductions for Key Encapsulation Mechanisms
A key encapsulation mechanism (KEM) allows two parties to establish a shared secret key using only public communication. For post-quantum KEMs, the most widespread approach is to design a passively secure public-key encryption (PKE) scheme and then apply the Fujisaki–Okamoto (FO) transform that turns any such PKE scheme into an IND-CCA secure KEM. While the base security requirement for KEMs is typically IND-CCA security, adversaries in practice can sometimes observe and attack many public keys and/or ciphertexts, which is referred to as multi-challenge security. FO does not necessarily guarantee multi-challenge security: for example, FrodoKEM, a Round 3 alternate in NIST’s post-quantum project, used FO to achieve IND-CCA security, but was subsequently shown to be vulnerable to attackers that can target multiple ciphertexts. To avert this multi-ciphertext attack, the FrodoKEM team added a salt to the encapsulation procedure and proved that this does not degrade (single-ciphertext) IND-CCA security. The formal analysis of whether this indeed averts multi-ciphertext attacks, however, was left open, which we address in this work.
Firstly, we formalize FrodoKEM's approach as a new variant of the FO transform, called the salted FO transform. Secondly, we give tight reductions from multi-challenge security of the resulting KEM to multi-challenge security of the underlying public key encryption scheme, in both the random oracle model (ROM) and the quantum-accessible ROM (QROM). Together these results justify the multi-ciphertext security of the salted FrodoKEM scheme, and can also be used generically by other schemes requiring multi-ciphertext security.
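The following sketch shows the shape of a salted FO-style encapsulation as the transform is described above: a fresh salt enters the derandomizing hash and travels with the ciphertext. The PKE is a placeholder stub, and exactly how FrodoKEM binds the salt into its derivations is a simplifying assumption of this illustration.

```python
# Hedged sketch of a salted FO-style encapsulation: coins for the
# derandomized PKE come from G(m, salt), and the salt is sent with
# the ciphertext. Not FrodoKEM's actual algorithm; toy stub PKE.
import hashlib, os

def G(m: bytes, salt: bytes) -> bytes:   # derandomizing hash
    return hashlib.sha3_256(b"G" + m + salt).digest()

def H(m: bytes, ct: bytes) -> bytes:     # key derivation
    return hashlib.sha3_256(b"H" + m + ct).digest()

def toy_pke_enc(pk: bytes, m: bytes, coins: bytes) -> bytes:
    """Stub for a derandomized PKE encryption (illustration only)."""
    return hashlib.sha3_256(pk + m + coins).digest()

def encaps(pk: bytes):
    m = os.urandom(32)
    salt = os.urandom(32)                  # the added salt
    ct = toy_pke_enc(pk, m, G(m, salt))    # coins = G(m, salt)
    return (ct, salt), H(m, ct)            # salt travels with ct
```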
Traceable Threshold Encryption without Trusted Dealer
The fundamental assumption in t-out-of-n threshold encryption is that the adversary can only corrupt fewer than t parties. Unfortunately, this may be unfounded in practical scenarios where shareholders could be incentivized to collude. Boneh, Partap, and Rotem (Crypto'24) recently addressed the setting where t or more shareholders work together to decrypt illegally. Inspired by the well-established notion of traitor tracing in broadcast encryption, they added a traceability mechanism that guarantees identifying at least one of the colluders. They provide several constructions that enable traceability, all of which require a trusted dealer to distribute the secret shares. While the trusted dealer can be replaced with a DKG for conventional threshold encryption, it is unclear how to do so without compromising traceability. As thresholdizing is meant to mitigate a single point of failure, a natural question remains: Can we construct an efficient traceable threshold encryption scheme that does not rely on a trusted party to distribute the secret shares?
In this paper, we give two dealerless traceable threshold encryption constructions with different merits, by extending the PLBE primitive of Boneh et al. (Eurocrypt'06) and combining it with the silent-setup threshold encryption construction of Garg et al. (Crypto'24). Our first construction achieves small amortized ciphertext size when sufficiently many ciphertexts are produced. Our second construction achieves constant ciphertext size even in the worst case, but requires a less efficient preprocessing phase as a tradeoff. Both our constructions enjoy a constant secret key size and do not require any interaction between the parties.
An additional restriction in the constructions of Boneh et al. is that they can only guarantee to find at least one colluder, leaving techniques to identify more traitors as an open problem. In this paper, we take a first step towards solving this question by formalizing a technique and applying it to our first construction. Namely, our first construction enables tracing more than one traitor.
CCA-Secure Traceable Threshold (ID-based) Encryption and Application
A recent work by Boneh, Partap, and Rotem [Crypto'24] introduced the concept of traceable threshold encryption: if t or more parties collude to construct a decryption box that performs decryptions, then at least one party's identity can be traced by making a few black-box queries to the box. This has important applications, e.g., in blockchain mempool privacy, where collusion yields high financial gain through MEV without any consequence - the possibility of tracing discourages collusion.
Nevertheless, their definitions leave room for exploitation as they only achieve CPA security and do not consider inconsistency in decryption via different participating sets.
This paper proposes stronger definitions of traceable threshold encryption, which support CCA security and consistency. Our main approach considers identity-based variants of traceable encryption (which we also define) and converts them to CCA-secure constructions by adapting two generic transformations: first using a one-time signature and then a fingerprinting code.
We put forward two efficient instantiations of our identity-based scheme with different merits: our first construction is based on Boneh-Franklin IBE [Crypto'01] and has constant-size ciphertexts but quadratic-size public keys - it is proven secure under XDH and BDDH. Our second construction is based on Boneh-Boyen IBE [Eurocrypt'04] and supports both constant-size ciphertexts and constant-size public keys - it is proven secure under a variant of the uber assumption over bilinear pairings. Our concrete analysis shows that the first construction's ciphertexts are much (~6x) smaller than the second construction's. Finally, we extend the definitions to support consistency and achieve it by adjoining an efficient, non-interactive proof of correct encryption.
Hollow LWE: A New Spin, Unbounded Updatable Encryption from LWE and PCE
Updatable public-key encryption (UPKE) allows anyone to update a public key while simultaneously producing an update token, given which the secret-key holder can consistently update the secret key. Furthermore, ciphertexts encrypted under the old public key remain secure even if the updated secret key is leaked -- a property much desired in secure messaging. All existing lattice-based constructions of UPKE update keys by a noisy linear shift. As the noise accumulates, these schemes either require super-polynomial-size moduli or an a priori bounded number of updates to maintain decryption correctness.
Inspired by recent works on cryptography based on the lattice isomorphism problem, we propose an alternative way to update keys in lattice-based UPKE. Instead of shifting, we rotate them. As rotations do not induce norm growth, our construction supports an unbounded number of updates with a polynomial-size modulus. The security of our scheme is based on the LWE assumption over hollow matrices -- matrices which generate linear codes with non-trivial hull -- and the hardness of permutation code equivalence. Along the way, we also show that LWE over hollow matrices is as hard as LWE over uniform matrices, and that a leftover hash lemma holds for hollow matrices.
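The norm argument can be seen numerically in the toy below (ours, not the paper's construction): repeated noisy shifts make a short vector grow roughly with the square root of the number of updates, while orthogonal maps such as permutations leave its norm untouched, which is what enables unboundedly many updates with a polynomial modulus.

```python
# Toy illustration of the norm argument: noisy shifts accumulate,
# rotations (here, permutations, which are orthogonal) do not.
import numpy as np

rng = np.random.default_rng(0)
s = rng.integers(-2, 3, size=64).astype(float)  # short secret vector

shifted, rotated = s.copy(), s.copy()
for _ in range(1000):
    shifted += rng.integers(-2, 3, size=64)     # noisy linear shift
    perm = rng.permutation(64)
    rotated = rotated[perm]                     # norm-preserving update

print(np.linalg.norm(s), np.linalg.norm(rotated))  # equal norms
print(np.linalg.norm(shifted))                     # grows ~sqrt(#updates)
```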
Key-Homomorphic Computations for RAM: Fully Succinct Randomised Encodings and More
We propose a new method to construct a public-key encryption scheme where one can homomorphically transform a ciphertext encrypted under a key k into a ciphertext under f(k), for any polynomial-time RAM program f with bounded runtime and memory. Combined with other lattice techniques, this allows us to construct:
1) Succinct randomised encodings for RAM programs with small encoder complexity and rate-1 encodings.
2) Laconic function evaluation for RAM programs, with small encoder runtime and rate-1 encodings.
3) Key-policy attribute-based encryption for RAM programs with compact ciphertexts. The same scheme can be converted to the registered setting, obtaining a CRS of size linear in the number of parties.
All of our schemes rely on the hardness of the decomposed learning with errors (LWE) problem, along with other standard computational assumptions on lattices. The decomposed LWE problem can be interpreted as postulating the circular-security of a natural lattice-based public-key encryption scheme. To gain confidence in the assumption, we show that it is implied by the hardness of the succinct LWE problem of Wee (CRYPTO'24).
CT-LLVM: Automatic Large-Scale Constant-Time Analysis
Constant-time (CT) is a popular programming discipline to protect cryptographic libraries against micro-architectural timing attacks. One appeal of the CT discipline lies in its conceptual simplicity: a program is CT iff it has no secret-dependent data-flow, control-flow or variable-timing operation. Thanks to its simplicity, the CT discipline is supported by dozens of analysis tools. However, a recent user study demonstrates that these tools are seldom used due to poor usability and maintainability (Jancar et al. IEEE SP 2022).
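As a minimal illustration of the property such tools check, compare an early-exit byte comparison, whose control flow depends on the secret, with Python's constant-time hmac.compare_digest. This example is ours and is unrelated to CT-LLVM's implementation.

```python
# Toy example of the CT property: the early-exit comparison leaks,
# through timing, the position of the first mismatching byte, while
# hmac.compare_digest runs in time independent of the inputs' values.
import hmac

def leaky_compare(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:          # secret-dependent branch: not CT
            return False
    return True

def ct_compare(secret: bytes, guess: bytes) -> bool:
    return hmac.compare_digest(secret, guess)  # constant-time in CPython
```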
In this paper, we introduce CT-LLVM, a CT analysis tool designed for usability, maintainability and automatic large-scale analysis. Concretely, CT-LLVM is packaged as an LLVM plugin and is built as a thin layer on top of two standard LLVM analyses: def-use and alias analysis. Besides confirming known CT violations, we demonstrate the usability and scalability of CT-LLVM by automatically analyzing nine cryptographic libraries. On average, CT-LLVM can automatically and soundly analyze 36% of the functions in these libraries, proving that 61% of them are CT. In addition, the large-scale automatic analysis also reveals new vulnerabilities in these libraries. Finally, we demonstrate that CT-LLVM helps systematically mitigate compiler-introduced CT violations, which has been a long-standing issue in CT analysis.
Efficient IP Masking with Generic Security Guarantees under Minimum Assumptions
Leakage-resilient secret sharing schemes are a fundamental building block for secure computation in the presence of leakage. As a result, there is a strong interest in building secret sharing schemes that combine resilience in practical leakage scenarios with potential for efficient computation. In this work, we revisit the inner-product framework, where a secret y is encoded by two vectors L and R such that their inner product ⟨L, R⟩ is equal to y. So far, the most efficient inner-product masking schemes (in which L is public but random) are provably secure with the same security notions (e.g., in the abstract probing model) as additive, Boolean masking, yet at the cost of a slightly more expensive implementation. Hence, their advantage in terms of theoretical security guarantees remains unclear, also raising doubts about their practical relevance. We address this question by showing the leakage resilience of inner-product masking schemes in the bounded leakage threat model, which captures well implementation contexts where the physical noise is negligible. In this threat model, we show that if a bounded number of bits is leaked from the shares R of the encoding, then with overwhelming probability over the choice of L, the scheme remains leakage resilient. Furthermore, this result holds without assuming independent leakage from the shares, which may be challenging to enforce in practice. We additionally show that in large Mersenne-prime fields, a wise choice of the public coefficients can yield leakage resilience even in the case where one physical bit from each share is revealed to the adversary. The exponential rate of the leakage resilience we put forward significantly improves upon previous bounds for additive masking, where the past literature exhibited a constant exponential rate only.
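A minimal sketch of the inner-product encoding described above, over an illustrative Mersenne-prime field: a public random vector L and a secret share vector R encode y as their inner product. The field size and number of shares are our choices, not the paper's parameters.

```python
# Inner-product masking sketch over GF(p): public random L, secret
# shares R, with <L, R> = y (mod p). Illustration only.
import random

P = 2**31 - 1  # a Mersenne prime, echoing the abstract's setting
N = 4          # number of shares

def encode(y: int):
    L = [random.randrange(1, P) for _ in range(N)]   # public, random
    R = [random.randrange(P) for _ in range(N - 1)]  # secret shares
    # pick the last share so the inner product hits y (L[-1] != 0)
    partial = sum(l * r for l, r in zip(L, R)) % P
    R.append(((y - partial) * pow(L[-1], -1, P)) % P)
    return L, R

def decode(L, R):
    return sum(l * r for l, r in zip(L, R)) % P

L, R = encode(42)
assert decode(L, R) == 42
```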
Succinct Oblivious Tensor Evaluation and Applications: Adaptively-Secure Laconic Function Evaluation and Trapdoor Hashing for All Circuits
We propose the notion of succinct oblivious tensor evaluation (OTE), where two parties compute an additive secret sharing of the tensor product of two vectors, exchanging two simultaneous messages. Crucially, the size of both messages and of the CRS is independent of the dimension of the vectors.
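To pin down the functionality only (not the security or succinctness, which are the whole point of the construction), here is a toy of ours in which a trusted sampler hands the two parties additive shares of the tensor product; the actual two-message protocol eliminates exactly this trusted party.

```python
# Functionality-only toy: parties holding u and v obtain additive
# shares S0, S1 with S0 + S1 = u (tensor) v over Z_Q. No security.
import numpy as np

rng = np.random.default_rng(1)
Q = 2**16  # toy modulus

def share_tensor(u: np.ndarray, v: np.ndarray):
    T = np.outer(u, v) % Q                 # the tensor (outer) product
    S0 = rng.integers(0, Q, size=T.shape)  # uniformly random share
    S1 = (T - S0) % Q                      # complementary share
    return S0, S1

u = rng.integers(0, Q, size=3)
v = rng.integers(0, Q, size=5)
S0, S1 = share_tensor(u, v)
assert np.array_equal((S0 + S1) % Q, np.outer(u, v) % Q)
```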
We present a construction of OTE with optimal complexity from the standard learning with errors (LWE) problem. Then we show how this new technical tool enables a host of cryptographic primitives, all with security reducible to LWE, such as:
1) Adaptively secure laconic function evaluation for bounded-depth functions with compact communication.
2) A trapdoor hash function for all functions.
3) An (optimally) succinct homomorphic secret sharing for all functions.
4) A rate-1 laconic oblivious transfer for batch messages, which is best possible.
In particular, we obtain the first laconic function evaluation scheme that is adaptively secure from the standard LWE assumption, improving upon Quach, Wee, and Wichs (FOCS 2018). As a key technical ingredient, we introduce a new notion of adaptive lattice encodings, which may be of independent interest.
Privacy-Preserving Multi-Signatures: Generic Techniques and Constructions Without Pairings
Multi-signatures allow a set of parties to produce a single signature for a common message by combining their individual signatures. The result can be verified using the aggregated public key that represents the group of signers. Very recent work by Lehmann and Özbay (PKC '24) studied the use of multi-signatures for ad-hoc privacy-preserving group signing, formalizing the notion of multi-signatures with probabilistic yet verifiable key aggregation. Moreover, they proposed new BLS-type multi-signatures, allowing users holding a long-term key pair to engage with different groups, without the aggregated key leaking anything about the corresponding group. This enables key-reuse across different groups in a privacy-preserving way. Unfortunately, their technique cannot be applied to Schnorr-type multi-signatures, preventing state-of-the-art multi-signatures to benefit from those privacy features.
In this work, we revisit the privacy framework of Lehmann and Özbay. Our first contribution is a generic lift that adds privacy to any multi-signature with deterministic key aggregation. As our second contribution, we study two concrete multi-signatures and give dedicated transforms that take advantage of the underlying structures for improved efficiency. The first is a slight modification of the popular MuSig2 scheme, achieving the strongest privacy property for free compared to the original scheme. The second is a variant of the lattice-based multi-signature scheme DualMS, making our construction the first post-quantum secure multi-signature for ad-hoc privacy-preserving group signing. The light overhead incurred by the modifications in our DualMS variant still allows us to benefit from the competitiveness of the original scheme.
How to Share an NP Statement or Combiners for Zero-Knowledge Proofs
In Crypto'19, Goyal, Jain, and Sahai (GJS) introduced the elegant notion of *secret-sharing of an NP statement* (NPSS). Roughly speaking, a t-out-of-n secret sharing of an NP statement is a reduction that maps an instance-witness pair to n instance-witness pairs such that any subset of t-1 of them reveals no information about the original witness, while any subset of t allows full recovery of the original witness. Although the notion was formulated for general thresholds, the only existing construction (due to GJS) applies solely to one special case of the parameters and provides only computational privacy. In this paper, we further explore NPSS and present the following contributions.
1. **Definition.** We introduce a refined definition of information-theoretically secure NPSS. This notion can be seen as a cryptographic variant of standard NP-reductions and can be compiled into the GJS definition using any one-way function.
2. **Construction.** We construct information-theoretic t-out-of-n NPSS for any values of t and n, with complexity polynomial in n. Along the way, we present a new notion of secure multiparty computation that may be of independent interest.
3. **Applications.** Our NPSS framework enables the *non-interactive combination* of n instances of zero-knowledge proofs, where only t_s of them are sound and only t_zk are zero-knowledge, provided that t_s + t_zk > n. Our combiner preserves various desirable properties, such as the succinctness of the proof. Building on this, we establish the following results under the minimal assumption of one-way functions:
(i) *Standard NIZK implies NIZK in the Multi-String Model* (Groth and Ostrovsky, J. Cryptology, 2014), where security holds as long as a majority of the common reference strings were honestly generated. Previously, such a transformation was only known in the common random string model, where the reference string is uniformly distributed.
(ii) A *Designated-Prover NIZK in the Multi-String Model*, achieving a strong form of two-round Multi-Verifier Zero-Knowledge in the honest-majority setting.
(iii) A *three-round secure multiparty computation protocol* for general functions in the honest-majority setting. The round complexity of this protocol is optimal, resolving a line of research that previously relied on stronger assumptions (Aharonov et al., Eurocrypt'12; Gordon et al., Crypto'15; Ananth et al., Crypto'18; Badrinarayanan et al., Asiacrypt'20; Applebaum et al., TCC'22).
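For concreteness, the t-out-of-n notion above admits the following hedged formalization (notation ours, not GJS's): an NPSS is an efficient randomized reduction

\[ R(x, w) \;=\; \big( (x_1, w_1), \ldots, (x_n, w_n) \big) \]

such that (privacy) for every subset T of size at most t-1, the pairs (x_i, w_i) with i in T reveal nothing about w, and (recovery) any t of the pairs allow efficient computation of a valid witness for x.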
Leap: A Fast, Lattice-based OPRF With Application to Private Set Intersection
Oblivious pseudorandom functions (OPRFs) are an important primitive in privacy-preserving cryptographic protocols. The growing interest in OPRFs, both in theory and practice, has led to the development of numerous constructions and variations. However, most of these constructions rely on classical assumptions. Potential future quantum attacks may limit the practicality of those OPRFs for real-world applications.
To close this gap, we introduce Leap, a novel OPRF based on heuristic lattice assumptions. Fundamentally, Leap builds upon the Spring [BBL+15] pseudorandom function (PRF), which relies on the learning with rounding assumption, and integrates techniques from multi-party computation, specifically Oblivious Transfer (OT) and Oblivious Linear Evaluation (OLE). With this combination of oblivious protocols, we construct an OPRF that evaluates in less than a millisecond on a modern computer.
Efficiency-wise, our prototype implementation achieves computation times of just 11 microseconds for the client and 750 microseconds for the server, excluding some base OT preprocessing overhead. Moreover, Leap requires an online communication cost of 23 kB per evaluation, where the client only has to send around 380 bytes online. To demonstrate the practical applicability of Leap, we present an efficient private set intersection (PSI) protocol built on top of Leap. This application highlights the potential for the integration of Leap into various privacy-preserving applications: We can compute an unbalanced set intersection with set sizes of 2^24 and 2^15 in under a minute of online time and just over two minutes overall.
Towards Leakage-Resilient Ratcheted Key Exchange
Ratcheted key exchange (RKE) is at the heart of modern secure messaging, enabling protocol participants to continuously update their secret material to protect against full state exposure through forward security (protecting past secrets and messages) and post-compromise security (recovering from compromise). However, many practical attacks only provide the adversary with partial access to a party's secret state, an attack vector studied under the umbrella of leakage resilience. Existing models of RKE provide suboptimal guarantees under partial leakage due to inherent limitations in security under full state exposure.
In this work, we initiate the study of leakage-resilient ratcheted key exchange that provides typical guarantees under full state exposure and additional guarantees under partial state exposure between ratchets of the protocol. We consider unidirectional ratcheted key exchange (URKE) where one party acts as the sender and the other as receiver. Building on the notions introduced by Balli, Rösler and Vaudenay (ASIACRYPT 2020), we formalise a key indistinguishability game under randomness manipulation and bounded leakage (KIND), which in particular enables the adversary to continually leak a bounded amount of the sender's state between honest send calls. We construct a corresponding protocol from a key-updatable key encapsulation mechanism (kuKEM) and a leakage-resilient one-time MAC. By instantiating this MAC in the random oracle model (ROM), results from Balli, Rösler and Vaudenay imply that in the ROM, kuKEM and KIND-secure URKE are equivalent, i.e., can be built from each other. To address the strong limitations that key indistinguishability imposes on the adversary, we formalise a one-wayness game that also permits leakage on the receiver. We then propose a corresponding construction from leakage-resilient kuKEM, which we introduce, and a leakage-resilient one-time MAC. We further show that leakage-resilient kuKEM and one-way-secure URKE are equivalent in the ROM, highlighting the cost that strong one-way security entails. Our work opens exciting directions for developing leakage-resilient messaging protocols.
Private Multi-Party Neural Network Training over Z_{2^k} via Galois Rings
Secret-sharing-based multi-party computation provides effective solutions for privacy-preserving machine learning. In this paper, we present novel protocols for privacy-preserving neural network training using the Shamir secret sharing scheme over Galois rings. The specific Galois ring we use is GR(2^k, d), which contains Z_{2^k} as a subring. The algebraic structure of GR(2^k, d) enables us to benefit from the Shamir scheme while performing modular reductions only modulo 2^k instead of modulo a prime, making our protocols more compatible with modern computer architectures. We achieve parallel processing of training data by embedding different training samples into the different coefficients of the polynomial representing a single Galois ring element, and we show that this embedding can be performed with no additional communication overhead compared to processing only one sample at a time. To evaluate our methods, we conduct private training of neural networks on the MNIST dataset between different numbers of participants. The experimental results indicate the advantages of our protocols compared to existing prime-field-based implementations in this domain.
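To make the embedding concrete, here is a minimal Python sketch of the coefficient-packing idea; the parameters (k, d) and the modulus polynomial are illustrative choices, not the paper's, and full Shamir sharing over the ring is omitted:

```python
# Packing several Z_{2^k} samples into one element of GR(2^k, d),
# represented as a coefficient vector of Z_{2^k}[x]/(f(x)).
k, d = 16, 4
MOD = 1 << k
# f(x) = x^4 + x + 1 is irreducible mod 2; ring multiplication (not shown)
# would reduce products modulo f. Addition never needs f.
f = [1, 1, 0, 0, 1]  # coefficients of x^0, x^1, ..., x^4

def pack(samples):
    """Embed up to d samples as the coefficients of one ring element."""
    assert len(samples) <= d
    return [s % MOD for s in samples] + [0] * (d - len(samples))

def ring_add(a, b):
    # Addition in GR(2^k, d) is coefficient-wise mod 2^k, so all packed
    # samples are processed in parallel at the cost of one ring operation.
    return [(x + y) % MOD for x, y in zip(a, b)]

batch1 = pack([3, 5, 7, 9])
batch2 = pack([10, 20, 30, 40])
print(ring_add(batch1, batch2))  # [13, 25, 37, 49]: four additions at once
```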
(Multi-Input) FE for Randomized Functionalities, Revisited
Randomized functional encryption (rFE) generalizes functional encryption (FE) by incorporating randomized functionalities. Randomized multi-input functional encryption (rMIFE) extends rFE to accommodate multi-input randomized functionalities.
In this paper, we reassess the framework of rFE/rMIFE, enhancing our understanding of this primitive and laying the groundwork for more secure and flexible constructions in this field. Specifically, we make three key contributions:
- New definition: We identify a critical gap in the existing indistinguishability-based (IND) security definition for rFE/rMIFE. Notably, the current definition fails to adequately address security against malicious encryptors, a crucial requirement for rFE/rMIFE since their introduction. We propose a novel, robust IND security definition that not only addresses threats from malicious decryptors but also effectively captures security against malicious encryptors.
- Counterexample: To illustrate the importance of this definitional gap, we provide a counterexample: an rFE scheme that satisfies IND security under the previous definition yet is explicitly insecure in a natural setting, a failure that our enhanced definition precludes. Our counterexample scheme is non-trivial and meticulously designed using standard cryptographic tools, namely FE for deterministic functions, pseudorandom functions (PRFs), public-key encryption (PKE), and simulation-sound non-interactive zero-knowledge (NIZK) proof systems.
- Adaptive unbounded-message secure construction: The only viable prior construction of rMIFE by Goldwasser et al. [EUROCRYPT 2014] (which uses indistinguishability obfuscation (iO) and other standard assumptions) has significant limitations: it permits only a pre-defined number of messages per encryption slot and operates under selective-security constraints, requiring adversaries to declare challenge ciphertext queries and "corrupted" encryption keys in advance. We address these shortcomings by employing sub-exponentially secure iO. Technically, we build on and adapt methods developed by Goyal et al. [ASIACRYPT 2016] for deterministic MIFE.
Towards a White-Box Secure Fiat-Shamir Transformation
The Fiat–Shamir transformation is a fundamental cryptographic technique widely used to convert public-coin interactive protocols into non-interactive ones. This transformation is crucial in both theoretical and practical applications, particularly in the construction of succinct non-interactive arguments (SNARKs). While its security is well-established in the random oracle model, practical implementations replace the random oracle with a concrete hash function, where security is merely assumed to carry over.
A growing body of work has given theoretical examples of protocols that remain secure under the Fiat–Shamir transformation in the random oracle model but become insecure when instantiated with any white-box implementation of the hash function. Recent research has shown how these attacks can be applied to natural cryptographic schemes, including real-world systems. These attacks rely on a general diagonalization technique, where the protocol exploits its access to the white-box implementation of the hash function. These attacks cast serious doubt on the security of cryptographic systems deployed in practice today, leaving their soundness uncertain.
We propose a new Fiat–Shamir transformation (XFS) that aims to defend against a broad family of attacks. Our approach is designed to be practical, with minimal impact on the efficiency of the prover and verifier and on the proof length. At a high level, our transformation combines the standard Fiat–Shamir technique with a new type of proof-of-work that we construct.
We provide strong evidence for the security of our transformation by proving its security in a relativized random oracle model. Specifically, we show diagonalization attacks on the standard Fiat–Shamir transformation that can be mapped to analogous attacks within this model, meaning they do not rely on a concrete instantiation of the random oracle. In contrast, we prove unconditionally that our XFS variant of the Fiat–Shamir transformation remains secure within this model. Consequently, any successful attack on XFS must deviate from known techniques and exploit aspects not captured by our model.
We hope that our transformation will help preserve the security of systems relying on the Fiat–Shamir transformation.
Fully Asymmetric Anamorphic Homomorphic Encryption from LWE
As introduced by Persiano et al. (Eurocrypt'22), anamorphic encryption (AE) is a primitive enabling private communications against a dictator that forces users to surrender their decryption keys. In its fully asymmetric flavor (defined by Catalano et al., Eurocrypt'24), anamorphic channels can work as hidden public-key mechanisms in the sense that anamorphic encryptors are not necessarily able to decrypt anamorphic ciphertexts. Unfortunately, fully asymmetric AE is hard to come by and even impossible to obtain from ordinary public-key encryption via black-box constructions. So far, only three schemes are known to rely on well-established assumptions. In this paper, we exhibit constructions from the standard LWE assumption based on Regev's cryptosystem and its dual version. In both cases, we retain the additive homomorphism of the schemes. We additionally show that dual Regev is public-key anamorphic in the sense of Persiano et al. (Crypto'24). In the FHE setting, we show that the dual GSW system provides fully asymmetric AE (while preserving its leveled homomorphism) when instantiated with binary/ternary secret keys. Along the way, we discuss the extent to which our schemes satisfy a generalization of Banfi et al.'s notion of robustness (Eurocrypt'24) to the case of homomorphically evaluated ciphertexts.
Bulletproofs for R1CS: Bridging the Completeness-Soundness Gap and a ZK Extension
Bulletproofs, introduced by Bünz, Bootle, Boneh, Poelstra, Wuille and Maxwell (IEEE S&P, 2018), is a highly efficient non-interactive argument system that does not require a trusted setup. Recently, Bünz (PhD Thesis, 2023) extended Bulletproofs to support arguments for rank-1 constraint satisfaction (R1CS) systems, a widely-used representation for arithmetic satisfiability problems. Although the argument system constructed by Bünz preserves the attractive properties of Bulletproofs, it presents a gap between its completeness and soundness guarantees: the system is complete for a restricted set of instances, but sound only for a significantly broader set. Although argument systems for such gap relations still provide clear and concrete guarantees, the gaps they introduce may lead to inconsistencies or weaknesses within proofs of security, especially when used as building blocks within larger systems.
In this work we show that the argument system presented by Bünz can be extended to bridge the gap between its completeness and soundness, and to additionally provide honest-verifier zero-knowledge. For the extended argument system, we introduce a refined R1CS relation that captures the precise set of instances for which both completeness and soundness hold without resorting to a gap formulation. The extended argument system preserves the performance guarantees of the argument system presented by Bünz, and yields a non-interactive argument system using the Fiat-Shamir transform.
On the Adaptive Security of Free-XOR-based Garbling Schemes in the Plain Model
A Garbling Scheme is a fundamental cryptographic primitive, with numerous theoretical and practical applications. Since its inception by Yao (FOCS'82, '86), optimizing the communication and computation complexities of securely garbling circuits has been an area of active research. One such optimization, and perhaps the most fundamental, is the `Free-XOR' technique (Kolesnikov and Schneider, ICALP'08), which allows XOR gates of the garbled function to require no representation in the garbling, and therefore no communication.
Since then, several works have designed and analyzed the security of schemes that adopt the Free-XOR optimization. In particular: (1) Applebaum (JoC'16) proved that this can be securely instantiated assuming symmetric-key encryption satisfying a notion called RK-KDM security; and (2) Zahur, Rosulek and Evans (Eurocrypt'15) proposed the so-called `Half Gates' scheme, and proved that it can be instantiated assuming hash functions satisfying a notion called CCR security. Although both schemes have been proven selectively secure, prior work leaves open whether they satisfy a stronger security notion -- adaptive security -- in the plain model.
In this work, we formally show that the selective security of these two schemes cannot be lifted to adaptive security under the same assumptions. To establish these barriers, we adopt techniques from the work of Kamath et al. (Crypto'21), who proved similar negative results for Yao's garbling. We use that as a starting point and introduce new techniques tailored towards addressing Free-XOR-based schemes.
On Quantum Money and Evasive Obfuscation
We show a black box barrier against constructing public key quantum money from obfuscation for evasive functions. As current post-quantum obfuscators based on standard assumptions are all evasive, this shows a fundamental barrier to achieving public key quantum money from standard tools. Our impossibility applies to black box schemes where (1) obfuscation queries made by the mint are classical, and (2) the verifier only makes (possibly quantum) evaluation queries, but no obfuscation queries. This class seems to capture any natural method of using obfuscation to build quantum money.
Fine-Grained Complexity in a World without Cryptography
The study of fine-grained cryptography has proliferated in recent years due to its allure of potentially relying on weaker assumptions compared to standard cryptography. As fine-grained cryptography only requires polynomial gaps between the adversary and honest parties, it seems plausible to build primitives relying upon popular hardness assumptions about problems in P such as k-SUM or Zero-k-Clique. The ultimate hope is that fine-grained cryptography could still be viable even if all current cryptographic assumptions are false, such as if P = NP or if we live in Pessiland where one-way functions do not exist.
In our work, we consider whether this approach is viable by studying fine-grained complexity when all standard cryptographic assumptions are false. As our main result, we show that many popular fine-grained complexity problems are easy to solve in the average-case when one-way functions do not exist. In other words, many candidate hardness assumptions for building fine-grained cryptography are no longer options in Pessiland. As an example, we prove that the average-case k-SUM and Zero-k-Clique conjectures are false for sufficiently large constant k when no one-way functions exist. The average-case Zero-k-Clique assumption was used to build fine-grained key-exchange by Lavigne et al. [CRYPTO'19]. One can also view the contrapositive of our result as providing an explicit construction of one-way functions assuming average-case hardness of k-SUM or Zero-k-Clique for all constant k.
We also show that barriers for reductions in fine-grained complexity may be explained by problems in cryptography. First, we show that finding faster algorithms for computing discrete logarithms is equivalent to designing average-case equivalence between - and - (an extension of - to cyclic groups). In particular, finding such a reduction from - to - could potentially lead to breakthrough algorithms for the discrete logarithm, factoring, RSA and quadratic residuosity problems. Finally, we show that discrete logarithms with preprocessing may be reduced to the - - problem, and we present faster algorithms for average-case - - and - - .
A Generic Approach to Adaptively-Secure Broadcast Encryption in the Plain Model
Broadcast encryption allows a user to encrypt a message to N recipients with a ciphertext whose size scales sublinearly with N. The natural security notion for broadcast encryption is adaptive security, which allows an adversary to choose the set of recipients after seeing the public parameters. Achieving adaptive security in broadcast encryption is challenging, and in the plain model, the primary technique is the celebrated dual-systems approach, which can be implemented over groups with bilinear maps. Unfortunately, it has been challenging to replicate the dual-systems approach in other settings (e.g., with lattices or witness encryption). Moreover, even if we focus on pairing-based constructions, the dual-systems framework critically relies on decisional (and source-group) assumptions. We do not have constructions of adaptively-secure broadcast encryption from search (or target-group) assumptions in the plain model.
Gentry and Waters (EUROCRYPT 2009) described a compiler that takes any semi-statically-secure broadcast encryption scheme and transforms it into an adaptively-secure scheme in the random oracle model. While semi-static security is easier to achieve and constructions are known from witness encryption as well as search (and target-group) assumptions on pairing groups, the transformed scheme relies on random oracles. In this work, we show that using publicly-sampleable projective PRGs, we can achieve adaptive security in the plain model. We then show how to build publicly-sampleable projective PRGs from many standard number-theoretic assumptions (e.g., CDH, LWE, RSA).
Our compiler yields the first adaptively-secure broadcast encryption scheme from search assumptions as well as the first such scheme from witness encryption in the plain model. We also obtain the first adaptively-secure pairing-based scheme in the plain model with linear-size public keys and constant-size ciphertexts, up to polynomial factors in the security parameter. Previous adaptively-secure pairing-based schemes in the plain model with constant-size ciphertexts required substantially larger public keys.
Partial and Fully Homomorphic Matching of IP Addresses Against Blacklists for Threat Analysis
In many areas of cybersecurity, we require access to Personally Identifiable Information (PII), such as names, postal addresses and email addresses. Unfortunately, this can lead to data breaches, especially in relation to data compliance regulations such as GDPR. An IP address is a typical identifier which is used to map a network address to a person. Thus, in applications which are privacy-aware, we may aim to hide the IP address while determining whether the address appears on a blacklist. One solution is to use homomorphic encryption: we encrypt the IP address and match it against an encrypted version of a blacklisted network list. In this paper, we use the OpenFHE library \cite{OpenFHE} to encrypt network addresses under the BFV homomorphic encryption scheme. To assess the performance impact of BFV, we implement a matching method using the OpenFHE library and compare it against the partial homomorphic methods of Paillier, Damgard-Jurik, Okamoto-Uchiyama, Naccache-Stern and Benaloh. The main findings are that the BFV method compares favourably against the partial homomorphic methods in most cases.
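As a rough illustration of the partially homomorphic approach, the following minimal Python sketch matches an encrypted IPv4 address against a blacklist with textbook Paillier; the toy key size, helper names, and the equality test via randomized differences are our illustrative choices, not the paper's implementation:

```python
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# Toy Paillier keypair (real deployments need >= 2048-bit primes).
p, q = 1000003, 1000033
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m):
    r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def ip_to_int(ip):
    a, b, c, d = map(int, ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

blacklist = [ip_to_int(x) for x in ("10.0.0.1", "192.168.1.7")]
c_ip = enc(ip_to_int("192.168.1.7"))  # client encrypts its address

# Server: Enc(r * (ip - b)) decrypts to 0 iff ip == b; the random factor r
# statistically hides ip - b whenever there is no match.
results = []
for b in blacklist:
    diff = (c_ip * enc(n - b)) % n2        # Enc(ip - b mod n)
    r = random.randrange(1, n)
    results.append(pow(diff, r, n2))       # Enc(r * (ip - b))

print([dec(c) == 0 for c in results])      # [False, True]
```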
Differential Cryptanalysis of the Reduced Pointer Authentication Code Function used in Arm’s FEAT_PACQARMA3 Feature
The Pointer Authentication Code (PAC) feature in the Arm architecture is used to enforce the Code Flow Integrity (CFI) of running programs. It does so by generating a short tag, called the PAC, of the return address and some additional context information upon function entry, and checking it upon exit. An attacker who wants to overwrite the stack with manipulated addresses now faces an additional hurdle, as they have to guess, forge, or reuse PAC values. PAC is deployed on billions of devices as a first line of defense to harden system software and complex programs against software exploitation. The original version of the feature uses a 12-round version of the QARMA block cipher. The output is then truncated to between 3 and 32 bits, in order to be inserted into unused bits of 64-bit pointers. A later revision of the specification allows the use of an 8-round version of QARMA. This reduction may introduce vulnerabilities such as high-probability distinguishers, potentially enabling key recovery attacks. The present paper explores this avenue. A cryptanalysis of the PAC computation function entails restricting the inputs to valid virtual addresses, meaning that certain most significant bits are fixed to zero, and considering only the truncated output. Within these constraints, we present practical attacks on various configurations. These attacks, while not presenting an immediate threat to the PAC mechanism, show that some versions of the feature miss the security targets set for the original function. This offers new insights into the practical security of constructing PRFs from truncated block ciphers, expanding on the mostly theoretical understanding of creating PRFs from truncated PRPs. We note that the results do not affect the security of PAC when used with the recommended number of rounds for general purpose applications.
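To fix ideas, here is a hedged Python sketch of the overall mechanism: a short keyed tag over (pointer, context), truncated and packed into the unused upper bits of a 64-bit pointer. SHA-256 stands in for QARMA, and the bit widths are illustrative, not Arm's exact layout:

```python
import hashlib

PAC_BITS = 16   # pretend 16 pointer bits are unused
VA_BITS = 48    # valid virtual addresses live in the low 48 bits

def pac(key: bytes, ptr: int, context: int) -> int:
    msg = ptr.to_bytes(8, "little") + context.to_bytes(8, "little")
    tag = hashlib.sha256(key + msg).digest()      # stand-in for QARMA
    return int.from_bytes(tag[:8], "little") & ((1 << PAC_BITS) - 1)

def sign_ptr(key: bytes, ptr: int, ctx: int) -> int:
    assert ptr < (1 << VA_BITS)                   # top bits must be free
    return ptr | (pac(key, ptr, ctx) << VA_BITS)  # tag goes in high bits

def auth_ptr(key: bytes, signed: int, ctx: int) -> int:
    ptr = signed & ((1 << VA_BITS) - 1)
    if (signed >> VA_BITS) != pac(key, ptr, ctx):
        raise ValueError("PAC check failed")      # real hardware faults here
    return ptr

key = b"\x01" * 16
s = sign_ptr(key, 0x7FFF_DEAD_BEEF, 42)
assert auth_ptr(key, s, 42) == 0x7FFF_DEAD_BEEF
```

The truncation to PAC_BITS is what makes this a truncated-PRP construction, and the fixed-to-zero address bits are the input restriction mentioned above.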
Committing Authenticated Encryption: Generic Transforms with Hash Functions
Recent applications and attacks have highlighted the need for authenticated encryption (AE) schemes to achieve the so-called committing security beyond privacy and authenticity. As a result, several generic solutions have been proposed to transform a non-committing AE scheme to a committing one, for both basic unique-nonce security and advanced misuse-resistant (MR) security. We observe that all existing practical generic transforms are subject to at least one of the following limitations: (i) not committing to the entire encryption context, (ii) involving non-standard primitives, (iii) not being a black-box transform, (iv) providing limited committing security. Furthermore, so far, there has been no generic transform that can directly elevate a basic AE scheme to a committing AE scheme that offers MR security. Our work fills these gaps by developing black-box generic transforms that crucially rely on hash functions, which are well standardized and widely deployed.
First, we construct three basic transforms that combine AE with a single hash function. They all guarantee strong security, and can be applied to both AE and basic privacy-only encryption schemes. Next, for MR security, we propose two advanced hash-based transforms. The first is an MRAE-preserving transform that adds committing security to an MR-secure AE scheme. The second is the first generic transform that can directly elevate basic AE to one with both committing and MR security; moreover, it also works with arbitrary privacy-only encryption schemes. Both of them feature a simple design and ensure strong security.
For performance evaluation, we compare our transforms to similar existing ones, both in theory and through practical implementations. The results show that one of our basic transforms achieves the highest practical efficiency among basic transforms, while one of our advanced transforms excels among MRAE-preserving transforms. Our MRAE-lifting transform demonstrates performance comparable to MRAE-preserving ones and surpasses them beyond a certain message length; for longer messages, it even outperforms the benchmark non-committing standardized scheme.
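To illustrate the general hash-based approach, here is a textbook UtC-style transform in the spirit of Bellare-Hoang (not one of the transforms above, whose names were lost in extraction): derive a per-nonce key and a binding commitment from (K, N) with a hash, then run the base AE under the derived key.

```python
import hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive(key: bytes, nonce: bytes):
    com = hashlib.sha256(b"commit" + key + nonce).digest()    # binds K and N
    sub = hashlib.sha256(b"enckey" + key + nonce).digest()[:16]
    return com, sub

def enc(key, nonce, aad, msg):
    com, sub = derive(key, nonce)
    return com + AESGCM(sub).encrypt(nonce, msg, aad)

def dec(key, nonce, aad, ct):
    com, sub = derive(key, nonce)
    if ct[:32] != com:                    # wrong key: commitment mismatch
        raise ValueError("ciphertext does not commit to this key")
    return AESGCM(sub).decrypt(nonce, ct[32:], aad)

key, nonce = os.urandom(16), os.urandom(12)
ct = enc(key, nonce, b"hdr", b"hello")
assert dec(key, nonce, b"hdr", ct) == b"hello"
```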
Single Trace Side-Channel Vulnerabilities Discovery Using Statistical Leakage Simulator
This paper presents a novel single-trace side-channel attack on FALCON, a lattice-based post-quantum digital signature scheme recently approved for standardization by NIST. We target the discrete Gaussian sampling operation within the FALCON key generation scheme and succeed using a single power measurement trace. Notably, the 'shift right by 63 bits' operation (on 64-bit values) leaks critical information about the '-1' vs. '0' assignments to intermediate coefficients. These leaks enable full recovery of the generated secret keys. The proposed attack is simulated on the ELMO simulator running both the reference and an optimized software implementation from FALCON's NIST Round 3 package. Statistical analysis with 20k tests reveals a full key-recovery success rate of 100% for FALCON-512. This work highlights the vulnerability of current software solutions to single-trace attacks and underscores the urgent need to develop single-trace resilient software for embedded systems in the pre-silicon phase.
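The leak itself is easy to see in miniature: an arithmetic right shift by 63 maps a 64-bit value to all-ones or all-zeros depending on its sign, so the Hamming weight of the result (64 vs. 0) gives the '-1'/'0' assignment away in a power trace. A hedged Python illustration (constants ours):

```python
def arith_shift_right_63(x: int) -> int:
    """Emulate a signed 64-bit arithmetic shift right by 63."""
    x &= (1 << 64) - 1
    return (1 << 64) - 1 if (x >> 63) else 0  # all-ones iff sign bit set

for coeff in (0, -1):
    r = arith_shift_right_63(coeff)
    # Hamming weight 0 vs. 64: maximally distinguishable in a single trace.
    print(coeff, hex(r), bin(r).count("1"))
```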
Traceable Verifiable Secret Sharing and Applications
A secret sharing scheme allows a trusted dealer to divide a secret among multiple parties so that a sufficient number of them can recover the secret, while a smaller group cannot. In CRYPTO'21, Goyal, Song, and Srinivasan introduced Traceable Secret Sharing (TSS), which enhances traditional secret sharing by enabling the identification of parties involved in secret reconstruction, deterring malicious behavior like selling shares. Recently, Boneh, Partap, and Rotem (CRYPTO'24) presented two more efficient TSS schemes. However, these existing TSS schemes assume that all distributed shares are valid and shareholders act honestly during the secret reconstruction phase. In this paper, we introduce Traceable Verifiable Secret Sharing (TVSS), a concept designed to ensure both traceability and verifiability in the face of malicious actions by either the dealer or shareholders. We propose a general strategy for transforming a Shamir-based, computationally secure Verifiable Secret Sharing (VSS) scheme into an efficient TVSS scheme. Building on this strategy, we construct two practical TVSS schemes in the honest-majority setting, based on well-known VSS schemes proposed by Feldman (FOCS'87) and Pedersen (CRYPTO'91). Our proposed TVSS schemes retain public shareholder indexes, enhancing flexibility in designing accountable threshold protocols (e.g., Distributed Key Generation protocols) using TVSS. Compared to the original VSS schemes, the individual share size in the new TVSS schemes increases by only a single field element and is just two or three times the size of the main secret.
Motivated by a recent study on Accountable Threshold Cryptosystems (ATCs) by Boneh, Partap, and Rotem (CRYPTO'24), and by leveraging our proposed Feldman-based TVSS scheme, we also introduce an efficient ATC based on the ElGamal cryptosystem. This new ATC enables a tracer to uniquely identify the parties involved in the decryption process while introducing minimal overhead to existing actively secure (and/or robust) threshold protocols built on the ElGamal cryptosystem.
Minicrypt PIR for Big Batches
We present PIR protocols for the offline/online two-server setting, where a client wants to privately retrieve a batch of entries from a database by interacting with an online server. The client has interacted with an offline server ahead of time, which is assumed not to collude with the online server. We present simple protocols based on one-way functions that substantially improve on the query complexity or runtime of existing works. Concrete instantiations of our general paradigm lead to batch PIR protocols with the following parameters:
- A protocol for batches of , where , and each spend a total of work and exchange bits of communication. This yields an amortized complexity of work and communication per query in the batch.
- A more balanced protocol for batches of size in which spends a total of work, and spend work, and the total communication is of size .
Our protocols have immediate applications such as Private Set Intersection (PSI) in the two-server setting with preprocessing and unbalanced set sizes.
Uncategorized
We introduce a hash-based succinct argument for integer arithmetic. Its goal is to provide a practically efficient scheme that bypasses the arithmetization overheads that many succinct arguments present, overheads that can amount to orders of magnitude in many applications. By enabling proving statements over the integers, we are able to arithmetize many operations of interest with almost no overhead. This includes modular operations involving any moduli, not necessarily prime, and possibly involving multiple moduli in the same statement. In particular, the scheme allows one to prove statements over the ring Z_N for arbitrary N. Importantly, and departing from prior work, our schemes are purely code- and hash-based, and do not require hidden order groups. In its final form, the scheme operates similarly to other hash-based schemes that use Brakedown as their PCS, and at the same time it benefits from the arithmetization perks brought by working over Z (and Q) natively.
At its core, our scheme is a succinct argument for proving relations over the rational numbers Q, even though, when applied to integer statements, an honest prover and verifier will only operate with small integers. It consists of two main components: 1) a framework for proving algebraic statements over the rationals by reducing modulo a randomly chosen prime q, followed by running a suitable PIOP over F_q (this is similar to the approach taken in prior works, with the difference that we use localizations of Z to enable prime modular projection); and 2) a Brakedown-type polynomial commitment scheme built from an IOP of proximity to the integers, a novel primitive that we introduce. The latter primitive guarantees that a prover is using a polynomial with coefficients close to being integral. With these two primitives in place, one can use a lookup argument over the rationals to ensure that the witness contains only integer elements.
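A toy version of the "project modulo a random prime" step: to check a bounded integer identity such as a*b + c = d, it suffices to check it modulo a random large prime, since a nonzero bounded difference has only a few prime divisors. The sketch below (parameters illustrative; sympy's randprime is used as a convenient prime sampler) shows the principle, not the paper's PIOP:

```python
from sympy import randprime

def check_once(a, b, c, d, bits=64):
    q = randprime(2 ** (bits - 1), 2 ** bits)  # random 64-bit prime
    return (a * b + c - d) % q == 0

# A true integer identity passes for every prime modulus.
assert check_once(1234, 5678, 42, 1234 * 5678 + 42)

# A false one is caught almost surely: the nonzero integer a*b + c - d
# has at most a handful of 64-bit prime divisors.
print(check_once(1234, 5678, 42, 1234 * 5678 + 41))  # almost surely False
```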
Cryptanalysis of Full SCARF
SCARF is a tweakable block cipher dedicated to cache address randomization, proposed at the USENIX Security conference. It has a 10-bit block, 48-bit tweak, and 240-bit key. SCARF is aggressively optimized to meet the harsh latency constraints of cache address randomization, and uses a dedicated model for its security claim.
The full version of SCARF has 8 rounds, and its designers claim security up to queries and computations. In this work we present a distinguisher against 6-round SCARF under the collision model with time and query complexity , and a key-recovery attack against the full 8-round SCARF under the encryption-decryption model with queries and time . As part of the attack, we present a novel method to compute the minimal number of right pairs following a differential characteristic when the input pairs are restricted to a subspace of the domain of the primitive.
Towards Optimally Secure Deterministic Authenticated Encryption Schemes
The public comments received during the review process for NIST SP 800-38A pointed out two important issues that most companies face: (1) the limited security that AES can provide due to its 128-bit block size and (2) the problem of nonce misuse in practice. In this paper, we provide an alternative solution to these problems by introducing two optimally secure deterministic authenticated encryption (DAE) schemes, denoted DENC1 and DENC2 respectively. We show that our proposed constructions improve the state-of-the-art in terms of security and efficiency. Specifically, DENC1 achieves a robust security level of , while DENC2 attains a near-optimal security level of , where is the total number of blocks, is the maximum number of blocks in each query, and is a user-defined parameter closely related to the rate of the construction. Our research centers on the development of two IV-based encryption schemes, referred to as IV1 and IV2, which respectively offer security levels of and . Notably, both of our DAE proposals are nearly rate-1/2 constructions. In terms of efficiency, our proposals compare favorably with state-of-the-art AE modes on contemporary microprocessors.
Lattice-based Σ-Protocols for Polynomial Relations with Standard Soundness
We propose new techniques for enhancing the efficiency of Σ-protocols in lattice settings. One major challenge in lattice-based Σ-protocols is restricting the norm of the extracted witness in soundness proofs. Most existing solutions either repeat the protocol several times or opt for a relaxed version of the original relation. Recently, Boneh and Chen proposed an innovative solution called LatticeFold, which utilizes a sum-check protocol to enforce the norm bound on the witness. In this paper, we elevate this idea to efficiently prove multiple polynomial relations without relaxation. Simply incorporating the techniques from LatticeFold into Σ-protocols leads to inefficient results; therefore, we introduce several new techniques to ensure efficiency. First, to enable the amortization in [AC20] for multiple polynomial relations, we propose a general linearization technique to reduce polynomial relations to homomorphic ones. Furthermore, we generalize the folding protocol in LatticeFold, enabling us to efficiently perform folding and other complex operations multiple times without the need to repeatedly execute sum-checks. Moreover, we achieve zero-knowledge by designing hiding claims and elevating the zero-knowledge sum-check protocol [XZZ+19] to rings. Our protocol achieves standard soundness, thereby enabling the efficient integration of the compressed Σ-protocol theory [AC20, ACF21] in lattice settings.
Traceable Verifiable Random Functions
A threshold verifiable random function (threshold VRF) is a VRF where the evaluation key is secret shared among n parties, and a quorum of t parties is needed to evaluate the VRF. Threshold VRFs are used widely in practice in applications such as randomness beacons and deterministic wallets. Despite their long history, the question of accountability for leaking key shares in a threshold VRF has not been studied. Specifically, consider a set of f parties who use their key shares to create an evaluation box that lets anyone evaluate the VRF at any point in its domain. When f is less than the threshold t, this box must also take as input additional evaluation shares. Our goal is to design a threshold VRF with a tracing algorithm that can trace any such box to the coalition of parties that created it, using only black-box access to the box. The risk of tracing should deter the coalition from selling such a box. Questions in this vein were previously explored in the context of threshold decryption and secret sharing. Here we define and study traceability for a threshold VRF.
Our traceable threshold VRF is built from a VRF based on Paillier encryption. The starting point for our tracing algorithm is the tracing technique of Boneh-Partap-Rotem (Crypto 2024) designed for tracing leaks in the context of secret sharing. However, there are multiple technical challenges in making this approach work, and we develop the necessary tools to overcome all these challenges. The end result is a threshold VRF with a provably secure tracing algorithm.
Malleable SNARKs and Their Applications
Succinct non-interactive arguments of knowledge (SNARKs) are variants of non-interactive zero-knowledge proofs (NIZKs) in which complex statements can be proven in a compact way. SNARKs have had tremendous impact in several areas of cryptography, including verifiable computing, blockchains, and anonymous communication. A recurring concept in many applications is the concept of recursive SNARKs, in which a proof references a previous proof to show an evolved statement.
In this work, we investigate malleable SNARKs, a generalization of this concept of recursion. An adaptation of the existing concept of malleable NIZKs, malleable SNARKs allow one to modify SNARK proofs to show related statements, such that mauled proofs are indistinguishable from “properly generated” fresh proofs of the related statement. We show how to instantiate malleable SNARKs for universal languages and relations, and give a number of applications: the first post-quantum RCCA-secure rerandomizable and updatable encryption schemes, a generic construction of reverse firewalls, and an unlinkable (i.e., computation-hiding) targeted malleable homomorphic encryption scheme.
Technically, our malleable SNARK construction relies on recursive proofs, but with a twist: in order to support the strong indistinguishability properties of mauled and fresh SNARK proofs, we need to allow an unbounded recursion depth. To still allow for a reasonable notion of extractability in this setting (and in particular to guarantee that extraction eventually finishes with a “proper” witness that does not refer to a previous SNARK proof), we rely on a new and generic computational primitive called adversarial one-way function (AOWF) that may be of independent interest. We give an AOWF candidate and prove it secure in the random oracle model.
Non-Interactive Key Exchange: New Notions, New Constructions, and Forward Security
Non-interactive key exchange (NIKE) is a simple and elegant cryptographic primitive that allows two or more users to agree on a secret shared key without any interaction. NIKE schemes have been formalized in different scenarios (such as the public-key, or the identity-based setting), and have found many applications in cryptography.
In this work, we propose a NIKE variant that generalizes public-key and identity-based NIKE: a multi-authority identity-based NIKE (MA-ID-NIKE) is defined like an identity-based NIKE, only with several identity domains (i.e., several instances of an identity-based NIKE), and such that users from different identity domains can compute shared keys. This makes MA-ID-NIKE schemes more versatile than existing NIKE or identity-based NIKE schemes, for instance, in an application in which users from different (centrally managed) companies need to compute shared keys.
We show several results for MA-ID-NIKE schemes:
- We show that MA-ID-NIKE schemes generically imply public-key NIKEs, identity-based NIKEs, as well as forward-secure NIKE schemes, the latter of which are notoriously hard to construct.
- We propose two simple constructions of MA-ID-NIKE schemes from indistinguishability obfuscation (iO) and multilinear maps, respectively. These constructions achieve only selective security, but can be lifted to adaptive security for small groups of users (that want to be able to agree on a joint shared key) in the random oracle model.
- We give a simple and elegant construction of MA-ID-NIKEs from identity-based encryption (IBE) and universal samplers. This construction achieves adaptive security also for large groups of users based on the adaptive security of the used universal samplers. Universal samplers, in turn, are known to be achievable using iO in the random oracle model. As a nice feature, the same construction yields hierarchical MA-ID-NIKEs or public-key NIKEs when instantiated with hierarchical IBE or public-key encryption instead of IBE schemes.
While these results are clearly only feasibility results, they do demonstrate the achievability of a concept that itself has very practical use cases.
A Unified Treatment of Anamorphic Encryption
Receiver anamorphic encryption (hereafter anamorphic encryption), introduced by Persiano et al. at Eurocrypt 2022, allows for a double message to be symmetrically hidden in a public-key encryption ciphertext via a pre-shared "double key". In anamorphic encryption, confidentiality must be preserved even if the adversary (or the "dictator") has access to all regular keys. It has been the subject of several works since its introduction that explore tweaks and extensions to the core primitive. However, this study has not been systematic, so disparate security notions have been proposed whose relationships are not clear. Moreover, there are clear gaps in the literature, including in the treatment of chosen-ciphertext attacks.
In this work, we conduct a systematic study of receiver anamorphic encryption. We unify existing security notions and propose several new ones, and prove implications and separations between them. Our main findings are as follows. First, we identify gaps in previous security notions against an anamorphic "sender", namely an adversary who is given the double key, and propose three new security notions to bridge these gaps. We also identify several gaps in the treatment of chosen-ciphertext attacks, a setting only very recently considered in anamorphic cryptography (Jaeger and Stracovsky, Asiacrypt 2024). Moreover, observing that no previous construction achieves all desirable security properties in this setting, we propose a suitable construction that does. Finally, we propose several security notions for "asymmetric" anamorphic encryption, and explore the case where the dictator and the anamorphic sender collude.
ChiLow and ChiChi: New Constructions for Code Encryption
We study the problem of embedded code encryption, i.e., encryption for binary software code for a secure microcontroller that is stored in an insecure external memory. As every single instruction must be decrypted before it can be executed, this scenario requires an extremely low latency decryption. We present a formal treatment of embedded code encryption security definitions, propose three constructions, namely ACE1, ACE2 and ACE3, and analyze their security. Further, we present ChiLow, a family of tweakable block ciphers and a related PRF specifically designed for embedded code encryption. At the core of ChiLow, there is ChiChi, a new family of non-linear layers of even dimension based on the well-known χ function. Our fully unrolled hardware implementation of ChiLow, using the Nangate 15nm Open Cell Library, achieves a decryption latency of less than 280 picoseconds.
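For reference, the χ function mentioned above acts on an n-bit state as

\[ y_i \;=\; x_i \oplus \big(\overline{x_{i+1}} \wedge x_{i+2}\big), \qquad \text{indices taken mod } n, \]

and is well known to be invertible only for odd n; this is precisely the obstruction that a dedicated even-dimension nonlinear layer such as ChiChi must work around.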
Quasi-Linear Indistinguishability Obfuscation via Mathematical Proofs of Equivalence and Applications
Indistinguishability obfuscation (iO) is a powerful cryptographic primitive and has been called the "Swiss army knife of modern cryptography". Most prior works on iO focused on theoretical feasibility and paid less attention to the efficiency of the constructions. As a result, all prior constructions stopped at achieving polynomial efficiency without worrying about how large the polynomial is. In fact, it has even been conjectured that a polynomial dependence on the input length is necessary. In this work, we show that if the two circuits to be obfuscated enjoy a succinct propositional logic proof of equivalence, then we can create obfuscated versions of these programs that are computationally indistinguishable; and importantly, the obfuscated program's efficiency is quasi-linear in the circuit size and proof size. We show that our quasi-linear iO construction also leads to new applications. Specifically, we show how to achieve quasi-linear efficiency for 1) iO for Turing Machines with unbounded inputs, and 2) multi-input functional encryption, also assuming succinct proofs of equivalence.
Dimensional eROSion: Improving the ROS Attack with Decomposition in Higher Bases
We revisit the polynomial attack on the ROS problem modulo p from [BLLOR22]. Our new algorithm achieves a polynomial-time solution in lower dimensions than previously known, extending the range of dimensions for which a polynomial attack exists.
We also combine our new algorithm with Wagner's attack to improve the general attack complexity for some of the dimensions where a polynomial solution is still not known.
We implement our polynomial attack and break the one-more unforgeability of blind Schnorr signatures over 256-bit elliptic curves in a few seconds with 192 concurrent sessions.
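For reference, the ROS problem of dimension ℓ modulo p (as studied in [BLLOR22]; notation ours) asks to find

\[ \vec{\rho}_1, \ldots, \vec{\rho}_{\ell+1} \in \mathbb{Z}_p^{\ell} \quad \text{and} \quad \vec{c} \in \mathbb{Z}_p^{\ell} \quad \text{such that} \quad \langle \vec{\rho}_i, \vec{c} \rangle = H_{\mathrm{ros}}(\vec{\rho}_i) \ \text{for all } i \in [\ell+1], \]

where H_ros is a random oracle; solving it across ℓ concurrent signing sessions is what breaks one-more unforgeability of blind Schnorr signatures.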
The Malice of ELFs: Practical Anamorphic-Resistant Encryption without Random Oracles
The concept of Anamorphic Encryption (Persiano, Phan and Yung, Eurocrypt '22) aims to enable private communication in settings where the usage of encryption is heavily controlled by a central authority (henceforth called the dictator) who can obtain users' secret keys.
Since then, various works have improved our understanding of AE in several aspects, including its limitations. In this regard, two recent works constructed various Anamorphic-Resistant Encryption (ARE) schemes, i.e., schemes admitting at most a bounded number of bits of covert communication.
However, those results are still unsatisfactory, each coming with at least one of the following issues: (1) use of cryptographic heavy hammers such as indistinguishability obfuscation (iO); (2) abuse of the original definition to define overly powerful dictators; (3) reliance on the Random Oracle Model (ROM). In particular, proofs in the ROM are controversial as they fail to account for anamorphic schemes making non-black-box usage of the hash function used to instantiate the Random Oracle.
In this work, we overcome all of these limitations.
First, we describe an anamorphic-resistant encryption (ARE) scheme approaching practicality by relying only on public-key encryption and Extremely Lossy Functions (ELFs), both known from the (exponential) DDH assumption. Moreover, further assuming unique NIZKs (known from iO), we provide another construction, which we later use to realize the first scheme that achieves the strongest level of anamorphic resistance against each of the possible levels of anamorphic security.
Lattice-based Cryptography: A survey on the security of the lattice-based NIST finalists
This survey, mostly written in the years 2022-2023, is meant as a description, as short as possible, of the current state-of-the-art lattice attacks on lattice-based cryptosystems, without losing the essence of the matter. The main focus is the security of the NIST finalists and alternatives that are based on lattices, namely CRYSTALS-Kyber, CRYSTALS-Dilithium and Falcon. Instead of going through these cryptosystems case by case, this survey considers attacks on the underlying hardness assumptions: in the case of the mentioned lattice-based schemes, these are (variants of) LWE (Learning With Errors) and NTRU.
Asynchronous Algorand: Reaching Agreement with Near Linear Communication and Constant Expected Time
The celebrated Algorand protocol solves validated Byzantine agreement in a scalable manner in the synchronous setting. In this paper, we study the feasibility of similar solutions in the asynchronous setting. Our main result is an asynchronous validated Byzantine agreement protocol that we call Asynchronous Algorand. As with Algorand, it terminates in an expected constant number of rounds, and honest parties send an expected near-linear (in the number of parties) number of bits. The protocol is resilient to a fully-asynchronous weakly-adaptive adversary that can corrupt a near-optimal number of parties, and requires just a VRF setup and secure erasures.
A key innovation in Asynchronous Algorand is a rather simple but surprisingly effective method to do *committee-based role assignment* for asynchronous verifiable secret sharing in the YOSO (You Only Speak Once) model. This method achieves near-optimal resilience and near-linear communication complexity while relying solely on a verifiable random function (VRF) setup and secure erasures.
FHE-SNARK vs. SNARK-FHE: From Analysis to Practical Verifiable Computation
Verifiable Computation over encrypted data (VC) faces a critical dilemma between two competing paradigms: SNARK-FHE (applying SNARKs to prove FHE operations) and FHE-SNARK (homomorphically evaluating SNARK proofs). Two interesting questions remain open in resolving this dilemma: 1) Are the two paradigms identical in terms of security? 2) How practically efficient can we get? This work answers these questions through the following results:
1) We establish a formal security analysis between VC and secure two-party computation (2PC). We investigate VC with server inputs and show the following: a) VC with server input has an exact 1-bit security loss compared to 2PC; b) SNARK-FHE aligns with 2PC while FHE-SNARK naturally falls in the VC category; c) existing FHE-SNARK works are vulnerable in the VC-with-server-input setting, for which we formalize an input-dependent attack.
2) We design an FHE-friendly SNARK that: a) has 3× lower multiplicative depth than FRI-based SNARKs; b) is compatible with FHE SIMD operations. Based on this novel SNARK, we construct an FHE-SNARK scheme that has: a) stronger security: resistance against input-dependent attacks; b) 8× speedup: 3.6-hour proof generation for -gate circuits on a single-core CPU (vs. 29 hours in the state-of-the-art); c) practical verification: 65.3 MB proofs with 2.7 seconds verification (single core).
Making Protocol FSU Revocable
This paper examines whether a revocation function can be added to protocol FSU, which has been adopted as an international standard, ISO/IEC 11770-3. Protocol FSU is an IB-AKE protocol whose security is based on a mathematical problem, the asymmetric gap bilinear Diffie-Hellman (GBDH) problem.
To make protocol FSU revocable, we apply a generic technique that converts an identity-based encryption scheme into a revocable identity-based encryption scheme by introducing a symmetric-key encryption scheme. In addition, to make the converted RIB-AKE protocol efficient, we reduce the ephemeral information exchanged in the protocol, and introduce an additional parameter into the master public key, where the secret information underlying the additional parameter need not be included in the master secret key.
We discuss the security of the resultant protocol, and prove that it is rid-eCK secure under the asymmetric GBDH assumption.
Pseudorandom Functions with Weak Programming Privacy and Applications to Private Information Retrieval
Although privately programmable pseudorandom functions (PPPRFs) are known to have numerous applications, so far, the only known constructions rely on Learning with Errors (LWE) or indistinguishability obfuscation. We show how to construct a relaxed PPPRF with only one-way functions (OWF). The resulting PPPRF satisfies a relaxed notion of security and works for polynomially sized input domains. Using the resulting PPPRF, we can get new results for preprocessing Private Information Retrieval (PIR) that improve the state of the art. Specifically, we show that relying only on OWF, we can get a 2-server preprocessing PIR with polylogarithmic bandwidth while consuming client space and server space for an arbitrarily small constant . In the 1-server setting, we get a preprocessing PIR from OWF that achieves polylogarithmic online bandwidth and offline bandwidth, while preserving the same client and server space as before. Our result, in combination with the lower bound of Ishai, Shi, and Wichs (CRYPTO'24), establishes a tight understanding of the bandwidth and client space tradeoff for 1-server preprocessing PIR from Minicrypt assumptions. Interestingly, we are also the first to show non-trivial ways to combine client-side and server-side preprocessing to get improved results for PIR.
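While the paper's relaxed PPPRF is more involved, the Minicrypt flavor of such objects can be seen in the classic GGM tree, which already gives a *puncturable* PRF from any length-doubling PRG. Here SHA-256 stands in for the PRG; this is an illustration, not the paper's construction:

```python
import hashlib

def prg(seed: bytes):
    """Length-doubling PRG stand-in: seed -> (left child, right child)."""
    return (hashlib.sha256(seed + b"0").digest(),
            hashlib.sha256(seed + b"1").digest())

def eval_ggm(key: bytes, x: int, n: int) -> bytes:
    for i in reversed(range(n)):          # descend along the bits of x
        key = prg(key)[(x >> i) & 1]
    return key

def puncture(key: bytes, x: int, n: int):
    """Return the co-path of x: enough to evaluate everywhere except x."""
    copath, node = [], key
    for i in reversed(range(n)):
        b = (x >> i) & 1
        left, right = prg(node)
        copath.append((i, 1 - b, (right, left)[b]))   # sibling of x's path
        node = (left, right)[b]
    return copath

def eval_punctured(copath, y: int, n: int) -> bytes:
    for i, b, sibling in copath:
        if (y >> i) & 1 == b:        # first level where y leaves x's path
            return eval_ggm(sibling, y & ((1 << i) - 1), i)
    raise ValueError("cannot evaluate at the punctured point")

k, n = b"\x00" * 32, 8
pk = puncture(k, 0b10110011, n)
assert eval_punctured(pk, 0b10110010, n) == eval_ggm(k, 0b10110010, n)
```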
(Un)breakable curses - re-encryption in the Fujisaki-Okamoto transform
The Fujisaki-Okamoto transform (FO) is the go-to method for achieving chosen-ciphertext (CCA) security for post-quantum key encapsulation mechanisms (KEMs). An important step in FO is augmenting the decryption/decapsulation algorithm with a re-encryption step: the decrypted message is re-encrypted to check whether the correct encryption randomness was used. While solving a security problem (ciphertext malleability), re-encryption has turned out to introduce side-channel vulnerabilities and is computationally expensive, which has led designers to search for alternatives. In this work, we perform a comprehensive study of such alternatives. We formalize a central security property, computational rigidity, and show that it is sufficient for obtaining CCA security. We present a framework for analyzing algorithms that can replace re-encryption and still achieve rigidity, and analyze existing proposals in this framework.
Along the way, we pick up a novel QROM security statement for explicitly-rejecting KEMs based on deterministic PKE schemes, something that was previously possible only when requiring a hard-to-ensure quantum property of the base PKE scheme.
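For orientation, the re-encryption step under discussion looks as follows in a minimal FO-style decapsulation sketch (hash choices and PKE interfaces are placeholders, not a concrete standard):

```python
import hashlib

def G(m: bytes) -> bytes:               # derives the encryption randomness
    return hashlib.sha256(b"G" + m).digest()

def H(m: bytes, c: bytes) -> bytes:     # derives the session key
    return hashlib.sha256(b"H" + m + c).digest()

def decaps(sk, pk, c, pke_dec, pke_enc):
    """pke_enc(pk, m, r) / pke_dec(sk, c) are assumed PKE routines."""
    m = pke_dec(sk, c)
    # Re-encryption check: was c created with the honest randomness G(m)?
    # This is the step that the alternatives surveyed above try to replace.
    if m is None or pke_enc(pk, m, G(m)) != c:
        return None                     # explicit rejection
    return H(m, c)
```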
Stateless Hash-Based Signatures for Post-Quantum Security Keys
The U.S. National Institute of Standards and Technology recently standardized the first set of post-quantum cryptography algorithms. These algorithms address the quantum threat, but also present new challenges due to their larger memory and computational footprint. Three of the four standardized algorithms are lattice-based, offering good performance but posing challenges due to complex implementation and intricate security assumptions. A more conservative choice for quantum-safe authentication is hash-based signature systems. However, due to large signature sizes and low signing speeds, hash-based systems have only found use in niche applications. The first NIST-standardized stateless hash-based signature system is the SPHINCS+-based SLH-DSA. In this work we combine different approaches to show that SPHINCS+ can be optimized in its parameters and implementation to be high-performing, even when signing in an embedded setting. We demonstrate this in the context of user authentication using hardware security keys within FIDO. Our SPHINCS+-based implementation can even outperform lattice-based solutions while remaining highly portable. Due to conservative security assumptions, our solution does not require a hybrid construction and can perform authentication on current security keys. For reproducibility and to encourage further research we publish our Cortex M4-based implementation.
Practical Zero-Trust Threshold Signatures in Large-Scale Dynamic Asynchronous Networks
Threshold signatures have become a critical tool in cryptocurrency systems, offering enhanced security by distributing the signing process among multiple signers. In this work, we distribute this process between a client and a permissionless decentralized blockchain, and present novel protocols for ECDSA and EdDSA/Schnorr signatures in this setting. Typical threshold access architectures used by trusted custodians suffer from the honeypot problem, wherein the more assets the custodian holds, the greater the incentive to compromise it.
Implementing threshold signatures over permissionless blockchains poses a few challenges.
First, existing networks typically work over an asynchronous reliable broadcast communication channel. Accordingly, our protocol is implemented over such a channel. As a result, it also benefits from identifiable abort, public verifiability, and guaranteed output delivery, and the client benefits from censorship resistance of blockchain systems.
Second, upon signing each block, the participating quorum may dynamically change and is post-determined.
Therefore, we design a fluid protocol that supports a post-determined dynamic quorum in each communication round, thereby complying with existing broadcast channel implementations. Third, in permissionless networks, parties may join, leave, and change their stake. Therefore, we offer protocols for network reconfiguration, with complexity independent of the number of clients in the system, and our protocol efficiently supports a weighted threshold access structure for the network. Specifically, the complexity of distributed key generation and presign depends only on the number of parties and not on the overall weight, and the amortized cost of sign depends only on the individual weight.
Furthermore, our protocol introduces key improvements, including the removal of zero-knowledge proofs towards the client, and presigns with a non-interactive client. For Schnorr, the presigns are client-independent, and can be collected by the blockchain in a common pool, available for all clients in the system. These optimizations reduce communication overhead and improve the system's ability to handle traffic spikes during high-demand periods.
Our protocol is UC-secure, and is therefore natively designed for multiple clients to use the system in parallel. Notably, we propose a novel assumption, Slightly-Enhanced ECDSA Unforgeability, offering concrete security for 256-bit elliptic curves for threshold ECDSA with support for parallel execution of presigns.
In addition to securing cryptocurrency wallets, we demonstrate how our protocol enables various cross-chain applications, such as decentralized bridges, future transactions, and wallet transfer. Our system is designed for interoperability across multiple blockchains, enhancing security, scalability, and flexibility for decentralized finance (DeFi) ecosystems.
DFS: Delegation-friendly zkSNARK and Private Delegation of Provers
Zero-Knowledge Succinct Non-interactive Arguments of Knowledge (zkSNARKs) lead to proofs that can be succinctly verified but require huge computational resources to generate. Prior systems outsource proof generation either through public delegation, which reveals the witness to the third party, or, preferably, through private delegation, which keeps the witness hidden using multiparty computation (MPC). However, current private delegation schemes struggle with scalability and efficiency due to MPC inefficiencies, poor resource utilization, and suboptimal design of zkSNARK protocols.
In this paper, we introduce DFS, a new zkSNARK that is delegation-friendly for both public and private scenarios. Prior work focused on optimizing the MPC protocols for existing zkSNARKs, while DFS uses co-design between MPC and zkSNARK so that the protocol is efficient for both distributed computing and MPC. In particular, DFS achieves linear prover time and logarithmic verification cost in the non-delegated setting. For private delegation, DFS introduces a scheme with zero communication overhead in MPC and achieves malicious security for free, which results in logarithmic overall communication, while prior work required linear communication. Our evaluation shows that DFS is as efficient as state-of-the-art zkSNARKs in public delegation; when used for private delegation, it scales better than previous work. In particular, for constraints, the total communication of DFS is less than KB, while prior work incurs GB, which is linear in the circuit size. Additionally, we identify and address a security flaw in prior work, EOS (USENIX'23).
Stationary Syndrome Decoding for Improved PCGs
Syndrome decoding (SD), and equivalently Learning Parity with Noise (LPN), is a fundamental problem in cryptography, which states that for a field , some compressing public matrix , and a secret sparse vector sampled from some noise distribution, is indistinguishable from uniform. Recently, SD has gained significant interest due to its use in pseudorandom correlation generators (PCGs).
In pursuit of better efficiency, we propose a new assumption called Stationary Syndrome Decoding (SSD). In SSD, we consider correlated noise vectors and associated instances where the noise vectors are restricted to having non-zeros in the same small subset of positions . That is, for all , is uniformly random, while for all other , .
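To illustrate the stationary noise pattern just described, here is a minimal Python sketch that samples several noise vectors sharing one fixed support; the dimension n, weight t, and field size p are illustrative placeholders, and the sketch is ours rather than the paper's notation.

```python
import random

def stationary_noise(n, t, num_instances, p):
    S = random.sample(range(n), t)        # one fixed support, shared by all instances
    noise = []
    for _ in range(num_instances):
        e = [0] * n                       # zero outside the common support
        for i in S:
            e[i] = random.randrange(p)    # fresh uniform field elements on S
        noise.append(e)
    return S, noise

# toy parameters: 3 correlated noise vectors over GF(2^61 - 1)
S, noise = stationary_noise(n=32, t=4, num_instances=3, p=2**61 - 1)
```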
Although naively reusing the noise vector renders SD and LPN insecure via simple Gaussian elimination, we observe that known attacks do not extend to our correlated noise. We show SSD is unconditionally secure against so-called linear attacks, e.g., advanced information set decoding and representation techniques (Esser and Santini, Crypto 2024). We further adapt the state-of-the-art nonlinear attack (Briaud and Oygarden, Eurocrypt 2023) to SSD and demonstrate, both theoretically and experimentally, resistance to the attack.
We apply SSD to PCGs to amortize the cost of the noise generation protocol. For OT and VOLE generation, each instance requires communication instead of . For suggested parameters, we observe a improvement in the running time or between 6 and reduction in communication. For Beaver triple generation using Ring LPN, our techniques have the potential for substantial amortization due to the high concrete overhead of the Ring LPN noise generation.
Neo: Lattice-based folding scheme for CCS over small fields and pay-per-bit commitments
This paper introduces Neo, a new lattice-based folding scheme for CCS, an NP-complete relation that generalizes R1CS, Plonkish, and AIR. Neo's folding scheme can be viewed as adapting the folding scheme in HyperNova (CRYPTO'24), which assumes elliptic-curve based linearly homomorphic commitments, to the lattice setting. Unlike HyperNova, Neo can use “small” prime fields (e.g., over the Goldilocks prime). Additionally, Neo provides plausible post-quantum security.
Prior to Neo, folding schemes in the lattice setting, notably LatticeFold (ePrint 2024/257), worked with constraint systems defined over a cyclotomic polynomial ring. This structure allows packing a fixed batch of constraint systems over a small prime field into a single constraint system over a polynomial ring. However, it introduces significant overheads, both for committing to witnesses (e.g., the commitment scheme cannot take advantage of bit-width of values), and within the folding protocol itself (e.g., the sum-check protocol is run over cyclotomic polynomial rings). Additionally, the required ring structure places restrictions on the choice of primes (e.g., LatticeFold is not compatible with the Goldilocks field).
Neo addresses these problems by drawing inspiration from both HyperNova and LatticeFold. A key contribution is a folding-friendly instantiation of Ajtai's commitments with "pay-per-bit" commitment costs, i.e., the commitment cost scales with the bit-width of the scalars (e.g., committing to a vector of bits is cheaper than committing to a vector of -bit values). This scheme commits to vectors over a small prime field. It does so by transforming the provided vector into a matrix and committing to that matrix. We prove that this commitment scheme provides the desired linear homomorphism for building a folding scheme. Additionally, like HyperNova, Neo runs a single invocation of the sum-check protocol, but where in HyperNova it is over the scalar field of an elliptic curve, in Neo it is over an extension of a small prime field.
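As a rough illustration of the pay-per-bit idea, the following Python sketch commits to a vector by bit-decomposing it into a 0/1 preimage and applying a random Ajtai-style linear map; all parameters (n, q, the matrix A) are toy values, and this is not Neo's actual instantiation.

```python
import random

def bit_decompose(v, width):
    # each scalar becomes `width` bits (least-significant first)
    return [(x >> j) & 1 for x in v for j in range(width)]

def ajtai_commit(A, v, width, q):
    bits = bit_decompose(v, width)        # short 0/1 preimage vector
    assert len(A[0]) == len(bits)
    return [sum(a * b for a, b in zip(row, bits)) % q for row in A]

# committing to 4-bit scalars touches 4 columns each; true bits (width=1)
# would touch only one column per scalar -- the "pay-per-bit" effect
n, q, width = 4, (1 << 61) - 1, 4
v = [5, 9, 2]
A = [[random.randrange(q) for _ in range(len(v) * width)] for _ in range(n)]
c = ajtai_commit(A, v, width, q)
```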
Anamorphic-Resistant Encryption; Or Why the Encryption Debate is Still Alive
Ever since the introduction of encryption, society has been divided over whether the government (or law enforcement agencies) should have the capability to decrypt private messages (with or without a warrant) of its citizens. From a technical viewpoint, the folklore belief is that semantic security always enables some form of steganography. Thus, adding backdoors to semantically secure schemes is pointless: it only weakens the security of the “good guys”, while “bad guys” can easily circumvent censorship, even if forced to hand over their decryption keys.
In this paper we put a dent in this folklore. We formalize three worlds: Dictatoria (“dictator wins”: no convenient steganography, no user cooperation needed), Warrantland (“checks-and-balances”: no convenient steganography, but need user’s cooperation) and Privatopia (“privacy wins”: built-in, high-rate steganography, even if giving away the decryption key). We give strong evidence that all these worlds are possible, thus reopening the encryption debate on a technical level.
Our main novelty is the definition and design of special encryption schemes we call anamorphic-resistant (AR). In contrast to the so-called “anamorphic schemes” studied in the literature, which form the basis of Privatopia, any attempt to steganographically communicate over an AR-encryption scheme will be either impossible or hugely slow (depending on the definitional details).
Tight Lower Bounds and New Upper Bounds For Evolving CDS
Komargodski et al. defined evolving secret-sharing schemes with an unbounded number of parties. In this model, parties arrive one after the other and the number of parties that will arrive is not known.
Another cryptographic primitive related to secret sharing is conditional disclosure of secrets (CDS) protocols, defined by Gertner et al.
A CDS protocol for a Boolean function involves servers and a referee. Each server holds a common secret , a common random string , and a private input ; using these , each server locally computes one message and sends it to the referee. The referee, knowing the inputs and the messages, should be able to compute if . Otherwise, the referee should not learn information about the secret. In a sense, this is a non-monotone version of secret sharing schemes.
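For concreteness, here is a toy two-server CDS in Python for the (non-monotone) equality predicate f(x1, x2) = [x1 = x2] over a prime field; this folklore-style example is ours for illustration and is not a construction from the paper.

```python
import random

p = 2_147_483_647                          # a prime field

def cds_messages(x1, x2, s):
    # common randomness shared by both servers (unknown to the referee)
    r, rp = random.randrange(p), random.randrange(p)
    m1 = (r * x1 + rp) % p                 # server 1's message
    m2 = (r * x2 + rp + s) % p             # server 2's message
    return m1, m2

def referee(x1, x2, m1, m2):
    if x1 == x2:                           # predicate holds: recover the secret
        return (m2 - m1) % p
    return None                            # else (m1, m2) is jointly uniform: s hidden

s = 42
m1, m2 = cds_messages(7, 7, s)
assert referee(7, 7, m1, m2) == s
```

When x1 != x2, the map (r, rp) -> (m1, m2) is a bijection (its determinant x1 - x2 is nonzero mod p), so the referee's view is uniform and independent of s.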
Peter (ISC '23) defines evolving CDS, implementing in particular evolving predicates (he handles somewhat more general predicates over larger input domains, but generalizing to other input domains is not hard, and we focus on Boolean predicates). In this setting, the number of parties is unbounded, and the parties arrive in sequential order. Each party, when it arrives, sends one random message to a referee; this message depends on its input, the secret, and a common randomness. He also devises evolving CDS protocols for a general evolving predicate via a black-box reduction to evolving secret-sharing schemes for a related access structure.
He analyzes this construction for general access structures, as well as other classes of functions, which yields message complexity for the worst predicates.
In this work we provide new upper and lower bounds for evolving CDS.
Observing that CDS has the potential for improved efficiency, as it is not restricted to monotone operations, we devise a new evolving general CDS construction.
In particular, our construction relies on representing the evolving predicate via an infinite branching program (LINBP), generalizing the monotone infinite branching program based construction of evolving secret-sharing schemes by Alon et al.
We obtain nontrivial ( for ) message complexity for branching programs of larger width than Alon et al.'s construction (even when restricting attention to monotone predicates), as well as Peter's construction for certain (but not all) 's.
Indeed, we prove that our construction, as well as Peter's (ISC '23), is tight for a certain evolving predicate -- as for evolving secret sharing, (so-called strong) evolving CDS also requires share complexity of . This is unlike the state of affairs for the finite setting, where the best known CDS constructions are much more efficient than the best known secret-sharing schemes (for the hardest monotone functions).
The latter bound is proved based on an adaptation of Mazor's lower bound (in turn based on Csirmaz's lower-bounding technique) to the CDS setting. It relies on a reduction from secret sharing for a certain class of infinite access structures -- the so-called partite access structures -- to evolving CDS for a related (not necessarily monotone) function. Then, a partite evolving access structure is crafted using the Csirmaz-type argument.
A Note on Adaptive Security in Hierarchical Identity-Based Encryption
We present the first construction of adaptively secure HIBE that does not rely on bilinear pairings or random oracle heuristics. Notably, we design an adaptively secure HIBE from any selectively secure IBE system in the standard model. Combining this with known results gives the first adaptively secure HIBE system from a wide variety of standard assumptions such as CDH/Factoring/LWE/LPN. We also extend our adaptively secure HIBE system to satisfy full anonymity, giving the first adaptively secure anonymous HIBE under the CDH/LWE assumption. All our HIBE systems support unbounded-length identities as well as an unbounded number of recursive delegation operations.
Dynamic Decentralized Functional Encryption: Generic Constructions with Strong Security
Dynamic Decentralized Functional Encryption (DDFE) is a generalization of Functional Encryption which allows multiple users to join the system dynamically without interaction and without relying on a trusted third party. Users can independently encrypt their inputs for a joint evaluation under functions embedded in functional decryption keys, and they keep control over these functions, as they all have to contribute to the generation of the functional keys.
In this work, we present new generic compilers which, when instantiated with existing schemes from the literature, improve over the state-of-the-art in terms of security, computational assumptions and functionality. Specifically, we obtain the first adaptively secure DDFE schemes for inner products in both the standard and the stronger function-hiding setting which guarantees privacy not only for messages but also for the evaluated functions. Furthermore, we present the first DDFE for inner products whose security can be proven under the LWE assumption in the standard model. Finally, we give the first construction of a DDFE for the attribute-weighted sums functionality with attribute-based access control (with some limitations). All prior constructions guarantee only selective security, rely on group-based assumptions on pairings, and cannot provide access control.
Significantly Improved Cryptanalysis of Salsa20 With Two-Round Criteria
Over the past decade and a half, cryptanalytic techniques for Salsa20 have been increasingly refined, largely following the overarching concept of Probabilistically Neutral Bits (PNBs) by Aumasson et al. (FSE 2008). In this paper, we present a novel criterion for choosing key- pairs using certain 2-round criteria and connect that with clever tweaks of existing techniques related to Probabilistically Independent bits (earlier used for ARX ciphers, but not for Salsa20) and well-studied PNBs. Through a detailed examination of the matrix after initial rounds of Salsa20, we introduce the first-ever cryptanalysis of Salsa20 exceeding rounds. Specifically, Salsa20/ , consisting of secret key bits, can be cryptanalyzed with a time complexity of and data amounting to . Further, the sharpness of our attack can be highlighted by showing that Salsa20/ can be broken with time and data , which is a significant improvement over the best-known result of Coutinho et al. (Journal of Cryptology, 2023, time and data ). Here, the refinements related to backward biases for PNBs are also instrumental in achieving the improvements. We also provide instances of how these ideas improve the cryptanalysis of -bit versions. In the process, a few critical points are raised about some existing state-of-the-art works in this direction, and their estimates of time and data are revisited to give the correct complexities.
How to Securely Implement Cryptography in Deep Neural Networks
The wide adoption of deep neural networks (DNNs) raises the question of how we can equip them with a desired cryptographic functionality (e.g., to decrypt an encrypted input, to verify that this input is authorized, or to hide a secure watermark in the output). The problem is that cryptographic primitives are typically designed to run on digital computers that use Boolean gates to map sequences of bits to sequences of bits, whereas DNNs are a special type of analog computer that uses linear mappings and ReLUs to map vectors of real numbers to vectors of real numbers. This discrepancy between the discrete and continuous computational models raises the question of what is the best way to implement standard cryptographic primitives as DNNs, and whether DNN implementations of secure cryptosystems remain secure in the new setting, in which an attacker can ask the DNN to process a message whose "bits" are arbitrary real numbers.
In this paper we lay the foundations of this new theory, defining the meaning of correctness and security for implementations of cryptographic primitives as ReLU-based DNNs. We then show that the natural implementations of block ciphers as DNNs can be broken in linear time by using such nonstandard inputs. We tested our attack in the case of full-round AES-128, and had success rate in finding randomly chosen keys. Finally, we develop a new method for implementing any desired cryptographic functionality as a standard ReLU-based DNN in a provably secure and correct way. Our protective technique has very low overhead (a constant number of additional layers and a linear number of additional neurons), and is completely practical.
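To see how the discrete/continuous mismatch arises, here is a minimal Python sketch of the folklore encoding of Boolean gates with linear maps and ReLUs on bits encoded as 0.0/1.0, together with a nonstandard real input that already breaks the Boolean abstraction; the encoding is generic and not the paper's specific construction.

```python
def relu(x):
    return max(0.0, x)

def AND(x, y):
    return relu(x + y - 1.0)            # 1 only when both honest bits are 1

def OR(x, y):
    return 1.0 - relu(1.0 - x - y)      # clamps x + y at 1

def NOT(x):
    return 1.0 - x                      # affine, needs no ReLU

# on honest 0/1 inputs this matches Boolean logic...
assert AND(1.0, 1.0) == 1.0 and AND(1.0, 0.0) == 0.0
assert OR(0.0, 0.0) == 0.0 and OR(1.0, 0.0) == 1.0
# ...but a nonstandard real input breaks the abstraction:
assert AND(2.0, 0.0) == 1.0             # "AND" fires even though y = 0
```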
A reduction from Hawk to the principal ideal problem in a quaternion algebra
In this article we present a non-uniform reduction from rank-2 module-LIP over Complex Multiplication fields to a variant of the Principal Ideal Problem in some fitting quaternion algebra. This reduction is classical, deterministic, and runs in polynomial time in the size of the inputs. The quaternion algebra in which we need to solve the variant of the principal ideal problem depends on the parameters of the module-LIP problem, but not on the problem's instance. Our reduction requires the knowledge of some special elements of this quaternion algebra, which is why it is non-uniform.
In some particular cases, these elements can be computed in polynomial time, making the reduction uniform. This is the case for the Hawk signature scheme: we show that breaking Hawk is no harder than solving a variant of the principal ideal problem in a fixed quaternion algebra (and this reduction is uniform).
Verifiable Computation for Approximate Homomorphic Encryption Schemes
We address the problem of proving the validity of computation on ciphertexts of homomorphic encryption (HE) schemes, a feature that enables outsourcing of data and computation while ensuring both data privacy and integrity.
We propose a new solution that handles computations in RingLWE-based schemes, particularly the CKKS scheme for approximate arithmetic. Our approach efficiently handles ciphertext arithmetic in the polynomial ring without emulation overhead and manages ciphertext maintenance operations, such as modulus switching, key switching, and rescaling, at small cost.
Our main result is a succinct argument that efficiently handles arithmetic computations and range checks over the ring . To build this argument system, we construct new polynomial interactive oracle proofs (PIOPs) and multilinear polynomial commitments supporting polynomials over , unlike prior work, which focused on finite fields. We validate the concrete complexity of our approach through implementation and experimentation. Compared to the current state of the art on verifiable HE for RNS schemes, we achieve similar performance for small circuits while being able to scale efficiently to larger ones, which was a major challenge for previous constructions as it requires verifying procedures such as relinearization.
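For intuition about the kind of ring arithmetic such an argument must support natively, the following Python sketch multiplies two elements of a ring of the form Z_q[X]/(X^N + 1), a common RingLWE ciphertext ring; the abstract's ring symbol did not survive extraction, so q and N here are purely illustrative.

```python
def ring_mul(a, b, q):
    # schoolbook negacyclic multiplication: reduce modulo X^N + 1
    N = len(a)
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < N:
                c[k] = (c[k] + ai * bj) % q
            else:                        # X^N = -1: wrap around with a sign flip
                c[k - N] = (c[k - N] - ai * bj) % q
    return c

q, N = 97, 4
a = [1, 2, 0, 3]                         # 1 + 2X + 3X^3
b = [5, 0, 1, 0]                         # 5 + X^2
print(ring_mul(a, b, q))                 # [5, 7, 1, 17]
```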
MicroCrypt Assumptions with Quantum Input Sampling and Pseudodeterminism: Constructions and Separations
We investigate two natural relaxations of quantum cryptographic assumptions. First, we examine primitives such as pseudorandom generators ( s) and pseudorandom states ( s), extended with quantum input sampling, which we term and . In these primitives, the input is sampled via a quantum algorithm rather than uniformly at random. The second relaxation, -pseudodeterminism, allows the generator to output on an inverse-polynomial fraction of inputs.
We demonstrate an equivalence between (bounded-query) logarithmic-sized , logarithmic-sized , and . Notably, such an equivalence remains unknown for the uniform key sampling versions of these primitives. Furthermore, we establish that can be constructed from -pseudodeterministic s ( s).
To further justify our exploration, we present two separation results. First, we examine the relationship between -pseudodeterministic notions and their deterministic counterparts. We show that there does not exist a black-box construction of a one-way state generator from a , indicating that -pseudodeterministic primitives may be inherently weaker than their deterministic counterparts. Second, we explore the distinction between quantum and uniform input sampling. We prove that there does not exist a black-box construction of a -pseudodeterministic from a , suggesting that primitives relying on quantum input sampling may be weaker than those using traditional uniform sampling. Given the broad cryptographic applicability of s and s, these separation results yield numerous new insights into the hierarchy of primitives within MicroCrypt.
S2DV: Scalable and Secure DAO Voting
A Decentralized Autonomous Organization (DAO) operates without a central entity, being owned and governed collectively by its members. In such an organization, decisions on routine tasks are carried out automatically through smart contracts, while members vote on unforeseen issues. Scalability in decision-making through voting on proposals is essential to accommodate a growing number of members without sacrificing security. This paper addresses this challenge by introducing a scalable and secure DAO voting system that ensures security through Groth16 zk-SNARKs and the exponential ElGamal encryption algorithm, while achieving scalability by verifiably delegating heavy computations to untrusted entities. Offline computation over the exponential ElGamal homomorphic encryption algorithm reduces the computational cost on the blockchain, while Groth16 maintains robust off-chain calculation without revealing any further details. Specifically, the Groth16 proof guarantees that (i) the encrypted votes accurately reflect the voter's voting power, ensuring no unauthorized weight manipulation; (ii) only valid non-negative vote values are encrypted, preventing unintended or malicious vote tampering; and (iii) the homomorphic summation is performed correctly. The implementation shows that the proofs are verified remarkably fast, making the S2DV protocol highly suitable for scalable DAO voting, while preserving the security of the election.
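As background for the homomorphic tallying step, here is a minimal Python sketch of additively homomorphic (exponential) ElGamal, where ciphertexts multiply componentwise to add the underlying votes and decryption finishes with a small discrete-log search; the group parameters are toy values and this is not S2DV's concrete instantiation.

```python
import random

p = 2_038_074_743                     # toy prime modulus (far too small for real use)
g = 5                                 # illustrative base element
sk = random.randrange(1, p - 1)
pk = pow(g, sk, p)

def enc(vote):                        # encrypt g^vote so that ciphertexts add votes
    r = random.randrange(1, p - 1)
    return pow(g, r, p), (pow(g, vote, p) * pow(pk, r, p)) % p

def add(c1, c2):                      # componentwise product = homomorphic sum
    return (c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p

def dec_small(c, bound=1000):         # recover g^total, then solve a small DLOG
    gm = (c[1] * pow(c[0], p - 1 - sk, p)) % p
    for m in range(bound):
        if pow(g, m, p) == gm:
            return m

total = add(enc(3), enc(4))
assert dec_small(total) == 7
```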
Honest Majority MPC with Communication in Minicrypt
In this work, we consider the communication complexity of MPC protocols in the honest majority setting achieving malicious security, in both the information-theoretic and computational settings. On the one hand, we study the possibility of efficiently basing honest majority MPC protocols on the oblivious linear evaluation (OLE)-hybrid model with information-theoretic security. More precisely, we instantiate the preprocessing phase of the recent work Sharing Transformation (Goyal, Polychroniadou, and Song, CRYPTO 2022) assuming random OLE correlations. Notably, we are able to prepare packed Beaver triples with malicious security achieving amortized communication of field elements plus a number of OLE correlations per packed Beaver triple, which is the best known result. To further efficiently prepare random OLE correlations, we resort to IKNP-style OT extension protocols (Ishai et al., CRYPTO 2003) in the random oracle model.
On the other hand, we derive a communication lower bound for preparing OLE correlations in the information-theoretic setting based on negative results due to Damgård, Larsen, and Nielsen (CRYPTO 2019).
Combining our positive result with the work of Goyal, Polychroniadou, and Song (CRYPTO 2022), we derive an MPC protocol with amortized communication of elements per gate in the random oracle model achieving malicious security, where denotes the length of a field element and is the security parameter.
Transistor: a TFHE-friendly Stream Cipher
Fully Homomorphic Encryption (FHE) allows computations on encrypted data without requiring decryption, ensuring data privacy during processing. However, FHE introduces a significant expansion of ciphertext sizes compared to plaintexts, which results in higher communication costs. A practical solution to mitigate this issue is transciphering, where only the master key is homomorphically encrypted, while the actual data is encrypted using a symmetric cipher, usually a stream cipher. The server then homomorphically evaluates the stream cipher to convert the encrypted data into a homomorphically encrypted form.
We introduce Transistor, a stream cipher specifically designed for efficient homomorphic evaluation within the TFHE scheme, a widely-used FHE framework known for its fast bootstrapping and ability to handle low-precision data. Transistor operates on , which is chosen to optimize TFHE performance. Its components are carefully engineered both to control noise growth and to provide strong security guarantees. First, a simple TFHE-friendly implementation technique for LFSRs allows us to use such components to cheaply increase the state size. At the same time, a small Finite State Machine is the only part of the state updated non-linearly, each non-linear operation corresponding in TFHE to a rather expensive Programmable Bootstrapping. This update is done using an AES-round-like transformation. In contrast to other stream ciphers like SNOW or LEX, our construction comes with information-theoretic security arguments proving that an attacker cannot obtain any information about the secret key from three or fewer consecutive keystream outputs. These information-theoretic arguments are then combined with a thorough analysis of potential correlations to bound the minimal keystream length required for recovering the secret key.
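To make the linear part concrete, here is a plain Python sketch of an LFSR over a small prime field, the kind of component whose update is cheap under TFHE because it is purely linear (no bootstrapping needed); the field, register length, and taps are illustrative placeholders rather than Transistor's parameters.

```python
p = 17                                 # a small prime field (placeholder)
taps = [3, 1, 0, 5]                    # feedback coefficients (placeholder)

def lfsr_step(state):
    # the new symbol is a linear combination of the current state mod p;
    # homomorphically, this costs only additions and plaintext multiplications
    fb = sum(c * s for c, s in zip(taps, state)) % p
    return state[1:] + [fb]

state = [1, 2, 3, 4]
for _ in range(5):
    state = lfsr_step(state)
    print(state)
```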
Our implementation of Transistor significantly outperforms the state of the art of TFHE transciphering, achieving a throughput of over 60 bits/s on a standard CPU, all while avoiding the need for an expensive initialization process.
Securely Instantiating 'Half Gates' Garbling in the Standard Model
Garbling is a fundamental cryptographic primitive, with numerous theoretical and practical applications. Since the first construction by Yao (FOCS’82, ’86), a line of work has concerned itself with reducing the communication and computational complexity of that construction. One of the most efficient garbling schemes presently is the ‘Half Gates’ scheme by Zahur, Rosulek, and Evans (Eurocrypt’15). Despite its widespread adoption, the provable security of this scheme has been based on assumptions whose only instantiations are in idealized models. For example, in their original paper, Zahur, Rosulek, and Evans showed that hash functions satisfying a notion called circular correlation robustness (CCR) suffice for this task, and then proved that CCR-secure hash functions can be instantiated in the random permutation model.
In this work, we show how to securely instantiate the Half Gates scheme in the standard model. To this end, we first show how this scheme can be securely instantiated given a (family of) weak CCR hash functions, a notion that we introduce. Furthermore, we show how a weak CCR hash function can be used to securely instantiate other efficient garbling schemes, namely the ones by Rosulek and Roy (Crypto’21) and Heath (Eurocrypt’24). Thus we believe this notion to be of independent interest.
Finally, we construct such weak CCR hash functions using indistinguishability obfuscation and one-way functions. The security proof of this construction constitutes our main technical contribution. While our construction is not practical, it serves as a proof of concept supporting the soundness of these garbling schemes, which we regard as particularly important given the recent initiative by NIST to standardize garbling, and the potential adoption of the optimizations in Half Gates.
Cryptanalysis of rank-2 module-LIP: a single real embedding is all it takes
The rank- module-LIP problem was introduced in cryptography by (Ducas, Postlethwaite, Pulles, van Woerden, Asiacrypt 2022), to construct the highly performant HAWK scheme. A first cryptanalytic work by (Mureau, Pellet-Mary, Pliatsok, Wallet, Eurocrypt 2024) showed a heuristic polynomial-time attack against the rank- module-LIP problem over totally real number fields. While mathematically interesting, this attack focuses on number fields that are not relevant for cryptography. The main families of fields used in cryptography are the highly predominant cyclotomic fields (used for instance in the HAWK scheme), as well as the NTRU Prime fields, used for instance in the eponymous NTRU Prime scheme (Bernstein, Chuengsatiansup, Lange, van Vredendaal, SAC 2017).
In this work, we generalize the attack of Mureau et al. against rank- module-LIP to the family of all number fields with at least one real embedding, which contains the NTRU Prime fields. We present three variants of our attack: first, a heuristic one that runs in quantum polynomial time. Second, under the extra assumption that the defining polynomial of has a -transitive Galois group (which is the case for the NTRU Prime fields), we give a provable attack that runs in quantum polynomial time. And third, with the same -transitivity assumption, we give a heuristic attack that runs in classical polynomial time. For the latter we use a generalization of the Gentry-Szydlo algorithm to any number field, which might be of independent interest.
Context-Dependent Threshold Decryption and its Applications
We initiate the study of high-threshold public-key decryption, along with an enhanced security feature called context-dependent decryption.
Our study includes definitions, constructions, security proofs, and applications.
The notion of high-threshold decryption has received almost no attention in the literature. The enhanced security feature of context-dependent decryption is entirely new, and plays an important role in many natural applications of threshold decryption.
New Techniques for Random Probing Security and Application to Raccoon Signature Scheme
The random probing model formalizes a leakage scenario where each wire in a circuit leaks with probability . This model holds practical relevance due to its reduction to the noisy leakage model, which is widely regarded as the appropriate formalization for power and electromagnetic side-channel attacks.
In this paper, we present new techniques for designing efficient masking schemes that achieve tighter random probing security with lower complexity. First, we introduce the notion of cardinal random probing composability (Cardinal-RPC), offering a new trade-off between complexity and security for composing masking gadgets. Next, we propose a novel refresh technique based on a simple iterative process: randomly selecting and updating two shares with fresh randomness. While not perfectly secure in the standard probing model, this method achieves arbitrary cardinal-RPC security, making it a versatile tool for constructing random-probing secure circuits. Using this refresh, we develop additional basic gadgets (e.g., linear multiplication, addition, and copy) that satisfy the cardinal-RPC notion. Despite the increased complexity, the gains in security significantly outweigh the overhead, with the number of iterations offering useful flexibility.
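The iterative refresh described above can be pictured with a minimal Python sketch over an additive sharing: each iteration re-randomizes two randomly chosen shares with fresh randomness while preserving the masked value; the field and iteration count are illustrative, and this toy omits the gadget-level details of the paper.

```python
import random

p = 2**16 + 1                          # toy prime field

def refresh(shares, iterations):
    shares = list(shares)
    for _ in range(iterations):
        i, j = random.sample(range(len(shares)), 2)  # pick two distinct shares
        r = random.randrange(p)                      # fresh randomness
        shares[i] = (shares[i] + r) % p
        shares[j] = (shares[j] - r) % p              # sum of shares is unchanged
    return shares

shares = [3, 14, 15, 9]
secret = sum(shares) % p
assert sum(refresh(shares, 20)) % p == secret
```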
To showcase our techniques, we apply them to lattice-based signatures. Specifically, we introduce a new random-probing composable gadget for sampling small noise, a key component in various post-quantum algorithms. To assess security in this context, we generalize the random probing security model to address auxiliary inputs and public outputs. We apply our findings to Raccoon, a masking-friendly signature scheme originally designed for standard probing security. We prove the secure composition of our new gadgets for key generation and signature computation, and show that our masking scheme achieves a superior security-performance tradeoff compared to previous approaches based on random probing expansion. To our knowledge, this is the first fully secure instantiation of a post-quantum algorithm in the random probing model.
Tighter Control for Distributed Key Generation: Share Refreshing and Expressive Reconstruction Policies
The secure management of private keys is a fundamental challenge, particularly for the general public, as losing these keys can result in irreversible asset loss. Traditional custodial approaches pose security risks, while decentralized secret sharing schemes offer a more resilient alternative by distributing trust among multiple parties. In this work, we extend an existing decentralized, verifiable, and extensible cryptographic key recovery scheme based on Shamir's secret sharing. We introduce a refresh phase that ensures proactive security, preventing long-term exposure of secret shares. Our approach explores three distinct methods for refreshing shares, analyzing and comparing their security guarantees and computational complexity. Additionally, we extend the protocol to support more complex access structures, with a particular focus on threshold access trees, enabling fine-grained control over key reconstruction.
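For intuition, here is a minimal Python sketch of a proactive refresh for Shamir's scheme, where adding evaluations of a fresh random polynomial with zero constant term changes every share without changing the secret; the parameters are toy values, and the sketch omits the verifiability and decentralization aspects of the scheme.

```python
import random

p = 2_147_483_647                      # toy prime field

def eval_poly(coeffs, x):
    return sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p

def share(secret, t, xs):
    poly = [secret] + [random.randrange(p) for _ in range(t)]
    return {x: eval_poly(poly, x) for x in xs}

def refresh(shares, t):
    # fresh random degree-t polynomial with zero constant term:
    # adding its evaluations changes the shares but not the secret
    zero_poly = [0] + [random.randrange(p) for _ in range(t)]
    return {x: (y + eval_poly(zero_poly, x)) % p for x, y in shares.items()}

def reconstruct(shares):
    total = 0
    for xi, yi in shares.items():
        li = 1                         # Lagrange coefficient L_i(0)
        for xj in shares:
            if xj != xi:
                li = li * xj % p * pow((xj - xi) % p, p - 2, p) % p
        total = (total + yi * li) % p
    return total

old = share(123, t=1, xs=[1, 2, 3])
new = refresh(old, t=1)
assert reconstruct(new) == 123
```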
Finding and Protecting the Weakest Link: On Side-Channel Attacks on Masked ML-DSA
NIST has standardized ML-KEM and ML-DSA as replacements for pre-quantum key exchanges and digital signatures. Both schemes have already seen analysis with respect to side-channels, and first fully masked implementations of ML-DSA have been published. Previous attacks have focused on unprotected implementations or assumed only hiding countermeasures to be in-place. Thus, in contrast to ML-KEM, the threat of side-channel attacks for protected implementations of ML-DSA is mostly unclear.
In this work, we analyze the side-channel vulnerability of masked ML-DSA implementations. We first systematically assess the vulnerability of several potential points of attacks in different leakage models using information theory. Then, we explain how an adversary could launch first, second, and higher-order attacks using a recently presented framework for side-channel information in lattice-based schemes. In this context, we propose a filtering technique that allows the framework to solve for the secret key from a large number of hints; this had previously been prevented by numerical instabilities. We simulate the presented attacks and discuss the relation to the information-theoretic analysis.
Finally, we carry out relevant attacks on physical devices, discuss recent masked implementations, and instantiate a countermeasure against the most threatening attacks. The countermeasure mitigates the attacks with the highest noise-tolerance while having very little overhead. The results on the physical devices validate our simulations.
Error-Simulatable Sanitization for TFHE and Applications
We show that the randomized TFHE bootstrapping technique of Bourse and Izabachène provides a form of sanitization which is error-simulatable. This means that the randomized bootstrap can be used not only for sanitization of ciphertexts (i.e., to hide the function that has been computed), but also in server-assisted threshold decryption. Thus we extend the server-assisted threshold decryption method of Passelègue and Stehlé (ASIACRYPT '24) to FHE schemes which have a small ciphertext modulus (such as TFHE). In addition, the error-simulatable sanitization enables us to obtain FuncCPA security for TFHE essentially for free.
Post-Quantum Blind Signatures from Matrix Code Equivalence
We construct a novel code-based blind signature scheme, using the Matrix Equivalence Digital Signature (MEDS) group action. The scheme is built using similar ideas to the Schnorr blind signature scheme and CSI-Otter, but uses additional public key and commitment information to overcome the difficulties that the MEDS group action faces: lack of module structure (present in Schnorr), lack of a quadratic twist (present in CSI-Otter), and non-commutativity of the acting group. We address security concerns related to public key validation, and prove the security of our protocol in the random oracle model, using the security framework of Kastner, Loss, and Xu, under a variant of the Inverse Matrix Code Equivalence problem and a mild heuristic assumption.
Clustering Approach for Higher-Order Deterministic Masking
We present a novel scheme for securely computing the AND operation, without requiring additional online randomness. Building on the work of Nikova et al., our construction extends security beyond the first order while ensuring a uniform output distribution and resilience against glitches up to a specified threshold. This result addresses a longstanding open problem in side-channel-resistant masking schemes.
Our approach is based on a new method of share clustering, inspired by finite affine geometry, enabling simultaneous consideration of both security and uniformity. Furthermore, we demonstrate how this clustering-based framework can be applied to higher-order protection of ciphers like Ascon under a fully deterministic masking regime.
By eliminating the need for online randomness within the protected circuit, our work expands the practical scope of efficient higher-order masking schemes for resource-constrained applications.
X-Transfer: Enabling and Optimizing Cross-PCN Transactions
Blockchain interoperability solutions allow users to hold and transfer assets among different chains, and in so doing reap the benefits of each chain. To fully realize the benefits of multi-chain financial operations, it is paramount to support interoperability and cross-chain transactions also on Layer-2 networks, in particular payment channel networks (PCNs). Nevertheless, existing works on Layer-2 interoperability solutions still involve on-chain events, which limits their scalability and throughput. In this work, we present X-Transfer, the first secure, scalable, and fully off-chain protocol that allows payments across different PCNs. We formalize and prove the security of X-Transfer against rational adversaries with a game-theoretic analysis. In order to boost efficiency and scalability, X-Transfer also performs transaction aggregation to increase channel liquidity and transaction throughput while simultaneously minimizing payment routing fees. Our empirical evaluation shows that X-Transfer achieves at least twice the throughput of the baseline with no transaction aggregation, confirming X-Transfer's efficiency.
Unconditional foundations for supersingular isogeny-based cryptography
In this paper, we prove that the supersingular isogeny problem (Isogeny), endomorphism ring problem (EndRing) and maximal order problem (MaxOrder) are equivalent under probabilistic polynomial time reductions, unconditionally.
Isogeny-based cryptography is founded on the presumed hardness of these problems, and their interconnection is at the heart of the design and analysis of cryptosystems like the SQIsign digital signature scheme. Previously known reductions relied on unproven assumptions such as the generalized Riemann hypothesis. In this work, we present unconditional reductions, and extend this network of equivalences to the problem of computing the lattice of all isogenies between two supersingular elliptic curves (HomModule).
For cryptographic applications, one requires computational problems to be hard on average for random instances. It is well known that if Isogeny is hard (in the worst case), then it is hard for random instances. We extend this result by proving that if any of the above-mentioned classical problems is hard in the worst case, then all of them are hard on average. In particular, if there exist hard instances of Isogeny, then all of Isogeny, EndRing, MaxOrder and HomModule are hard on average.
A Decomposition Approach for Evaluating Security of Masking
Masking is a common countermeasure against side-channel attacks that encodes secrets into multiple shares, each of which may be subject to leakage. A key question is under what leakage conditions, and to what extent, increasing the number of shares actually improves the security of these secrets. Although this question has been studied extensively in low-SNR regimes, scenarios where the adversary obtains substantial information, such as on low-noise processors or through static power analysis, have remained underexplored.
In this paper, we address this gap by deriving necessary and sufficient noise requirements for masking security in both standalone encodings and linear gadgets. We introduce a decomposition technique that reduces the relationship between an extended-field variable and its leakage into subproblems involving linear combinations of the variable’s bits. By working within binary subfields, we derive optimal bounds and then lift these results back to the extended field.
Beyond binary fields, we also present a broader framework for analyzing masking security in other structures, including prime fields. As an application, we prove a conjecture by Dziembowski et al. (TCC 2016), which states that for an additive group with its largest subgroup , a -noisy leakage satisfying ensures that masking enhances the security of the secret.