Papers updated in last 7 days (69 results)
Asymptotically Faster Multi-Key Homomorphic Encryption from Homomorphic Gadget Decomposition
Homomorphic Encryption (HE) is a cryptosystem that allows us to perform arbitrary computations on encrypted data.
Standard HE, however, has the disadvantage that authority is concentrated in the secret-key owner, since computations can only be performed on ciphertexts encrypted under the same secret key.
To resolve this issue, research is underway on Multi-Key Homomorphic Encryption (MKHE), which is a variant of HE supporting computations on ciphertexts possibly encrypted under different keys.
Despite its ability to provide privacy for multiple parties, existing MKHE schemes suffer from poor performance due to the cost of multiplication which grows at least quadratically with the number of keys involved.
In this paper, we revisit the work of Chen et al. (ACM CCS 2019) on MKHE schemes from CKKS and BFV and significantly improve their performance.
Specifically, we redesign the multi-key multiplication algorithm and achieve an asymptotically optimal complexity that grows linearly with the number of keys.
Our construction relies on a new notion of gadget decomposition, which we call homomorphic gadget decomposition, in which arithmetic operations can be performed over the decomposed vectors while guaranteeing correct functionality.
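As background, the following minimal Python sketch shows an ordinary base-$B$ gadget decomposition and the recomposition identity it must satisfy; the homomorphic variant introduced here additionally allows arithmetic directly on the decomposed vectors, which this toy sketch does not model.
\begin{verbatim}
# Ordinary base-B gadget decomposition over Z_q (toy integer version; in
# CKKS/BFV the same idea is applied coefficient-wise to ring elements).
Q, B, K = 2**16, 2**4, 4          # modulus, decomposition base, digit count

def gadget_decompose(a):
    """Return base-B digits d such that sum(d[i] * B**i) == a (mod Q)."""
    a %= Q
    digits = []
    for _ in range(K):
        digits.append(a % B)
        a //= B
    return digits

def gadget_recompose(digits):
    return sum(d * B**i for i, d in enumerate(digits)) % Q

a = 40321
assert gadget_recompose(gadget_decompose(a)) == a % Q
\end{verbatim}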
Finally, we implement our MKHE schemes and demonstrate their benchmarks. For example, our multi-key CKKS multiplication takes only 0.5, 1.0, and 1.9 seconds compared to 1.6, 5.9, and 23.0 seconds of the previous work when 8, 16, and 32 keys are involved, respectively.
Optimizing HE operations via Level-aware Key-switching Framework
In lattice-based Homomorphic Encryption (HE) schemes, the key-switching procedure is a core building block of non-linear operations but also a major performance bottleneck.
The computational complexity of the operation is primarily determined by the so-called gadget decomposition, which transforms a ciphertext entry into a tuple of small polynomials before being multiplied with the corresponding evaluation key.
However, previous studies such as Halevi et al. (CT-RSA 2019) and Han and Ki (CT-RSA 2020) fix a decomposition function in the setup phase that is applied uniformly across all ciphertext levels, resulting in suboptimal performance.
In this paper, we introduce a novel key-switching framework for leveled HE schemes. We aim to allow the use of different decomposition functions during the evaluation phase so that the optimal decomposition method can be utilized at each level to achieve the best performance.
A naive solution might generate multiple key-switching keys corresponding to all possible decomposition functions and send them to an evaluator.
However, our solution can achieve the goal without such communication overhead since it allows an evaluator to dynamically derive other key-switching keys from a single key-switching key depending on the choice of gadget decomposition.
We implement our framework at a proof-of-concept level to provide concrete benchmark results. Our experiments show that we achieve the optimal performance at every level while maintaining the same computational capability and communication costs.
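As a schematic reminder of the standard identity behind this procedure (notation simplified, not specific to this framework): for a gadget vector $\mathbf{g} = (B^0, B^1, \dots, B^{k-1})$ and a key-switching key whose $i$-th component $\mathsf{ksk}_i$ encrypts $B^i \cdot s$ under the target key $s'$, one computes
\[ \mathsf{KS}(c) \;=\; \sum_{i=0}^{k-1} g^{-1}(c)_i \cdot \mathsf{ksk}_i \;\approx\; \mathrm{Enc}_{s'}(c \cdot s), \qquad \text{where } \langle g^{-1}(c), \mathbf{g} \rangle = c \pmod{q}, \]
so the decomposition (the base $B$, equivalently the digit count $k$) trades noise growth against the number of evaluation-key multiplications; this is exactly the trade-off the proposed framework re-optimizes at every ciphertext level.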
Compute, but Verify: Efficient Multiparty Computation over Authenticated Inputs
Traditional notions of secure multiparty computation (MPC) allow mutually distrusting parties to jointly compute a function over their private inputs, but typically do not specify how these inputs are chosen. Motivated by real-world applications where corrupt inputs could adversely impact privacy and operational legitimacy, we consider a notion of authenticated MPC where the inputs are authenticated, e.g., signed using a digital signature by some certification authority. We propose a generic and efficient compiler that transforms any linear secret sharing based honest-majority MPC protocol into one with input authentication.
Our compiler incurs significantly lower computational costs and competitive communication overheads when compared to the best existing solutions, while entirely avoiding the (potentially expensive) protocol-specific techniques and pre-processing requirements that are inherent to these solutions. For $n$-party honest majority MPC protocols with abort security where each party has $\ell$ inputs, our compiler incurs $O(n\log \ell)$ communication overall and a computational overhead of $O(\ell)$ group exponentiations per party (the corresponding overheads for the most efficient existing solution are $O(n^2)$ and $O(\ell n)$). Finally, for a corruption threshold $t<n/3$, our compiler preserves the stronger identifiable abort security of the underlying MPC protocol. No existing solution for authenticated MPC achieves this regardless of the corruption threshold.
Along the way, we make several technical contributions that are of independent interest. This includes the notion of distributed proofs of knowledge and concrete realizations of the same for several relations of interest, such as proving knowledge of many popularly used digital signature schemes, and proving knowledge of opening of a Pedersen commitment.
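For reference (standard material, not specific to this work), the Pedersen commitment mentioned above commits to a message $m$ with randomness $r$ in a group of prime order $q$ with independent generators $g, h$ as
\[ \mathsf{Com}(m; r) \;=\; g^{m} h^{r}, \]
which is perfectly hiding and computationally binding under the discrete-logarithm assumption; the distributed proofs of knowledge mentioned above target, among other relations, knowledge of such an opening $(m, r)$.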
Anonymous Permutation Routing
The Non-Interactive Anonymous Router (NIAR) model was introduced by Shi and Wu [SW21] as an alternative to conventional solutions to the anonymous routing problem, in which a set of senders wish to send messages to a set of receivers. In contrast to most known approaches to support anonymous routing (e.g. mix-nets, DC-nets, etc.) which rely on a network of routers communicating with users via interactive protocols, the NIAR model assumes a $single$ router and is inherently $non$-$interactive$ (after an initial setup phase). In addition to being non-interactive, the NIAR model is compelling due to the security it provides: instead of relying on the honesty of some subset of the routers, the NIAR model requires anonymity even if the router (as well as an arbitrary subset of senders/receivers) is corrupted.
In this paper, we present a protocol for the NIAR model that improves upon the results from [SW21] in two ways:
- Improved computational efficiency (quadratic to near linear): Our protocol matches the communication complexity of [SW21] for each sender/receiver, while reducing the computational overhead for the router from linear to polylogarithmic.
- Relaxation of assumptions: Security of the protocol in [SW21] relies on the Decisional Linear assumption in bilinear groups; while security for our protocol follows from the existence of any rate-1 oblivious transfer (OT) protocol (instantiations of this primitive are known to exist under DDH, QR and LWE [DGI19,GHO20]).
Advanced Composition Theorems for Differential Obliviousness
Differential obliviousness (DO) is a privacy notion which mandates that the access patterns of a program satisfy differential privacy. Earlier works have shown that in numerous applications, differential obliviousness allows us to circumvent fundamental barriers pertaining to fully oblivious algorithms, resulting in asymptotic (and sometimes even polynomial) performance improvements. Although DO has been applied to various contexts, including the design of algorithms, data structures, and protocols, its compositional properties were not explored until the recent work of Zhou et al. (Eurocrypt'23). Specifically, Zhou et al. showed that the original DO notion is not composable. They then proposed a refinement of DO called neighbor-preserving differential obliviousness (NPDO), and proved a basic composition theorem for NPDO.
In Zhou et al.'s basic composition theorem for NPDO, the privacy loss is linear in $k$ for $k$-fold composition. In comparison, for standard differential privacy, we can enjoy roughly $\sqrt{k}$ loss for $k$-fold composition by applying the well-known advanced composition theorem. Therefore, a natural question left open by their work is whether we can also prove an analogous advanced composition for NPDO.
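For context, the advanced composition theorem for standard differential privacy referenced here states (in one common form) that the $k$-fold composition of $(\epsilon, \delta)$-DP mechanisms is $(\epsilon', k\delta + \delta')$-DP for any $\delta' > 0$, where
\[ \epsilon' \;=\; \sqrt{2k \ln(1/\delta')}\,\epsilon \;+\; k\epsilon\,(e^{\epsilon} - 1), \]
i.e., roughly a $\sqrt{k}\,\epsilon$ loss for small $\epsilon$, compared to the $k\epsilon$ loss of basic composition.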
In this paper, we answer this question affirmatively. As a key step in proving an advanced composition theorem for NPDO, we define a more operational notion called symmetric NPDO which we prove to be equivalent to NPDO. Using symmetric NPDO as a stepping stone, we also show how to generalize NPDO to more general notions of divergence, resulting in Rényi-NPDO, zero-concentrated NPDO, Gaussian-NPDO, and $g$-NPDO notions. We also prove composition theorems for these generalized notions of NPDO.
A Theory of Composition for Differential Obliviousness
Differential obliviousness (DO) is a privacy notion which guarantees that the access patterns of a program satisfy differential privacy. Differential obliviousness was studied in a sequence of recent works as a relaxation of full obliviousness. Earlier works showed that DO not only allows us to circumvent the logarithmic-overhead barrier of fully oblivious algorithms; in many cases, it also allows us to achieve polynomial speedup over full obliviousness, since it avoids the "padding to the worst-case" behavior of fully oblivious algorithms.
Despite the promises of differential obliviousness (DO), a significant barrier that hinders its broad application is the lack of composability. In particular, when we apply one DO algorithm to the output of another DO algorithm, the composed algorithm may no longer be DO (with reasonable parameters). More specifically, the outputs of the first DO algorithm on two neighboring inputs may no longer be neighboring, and thus we cannot directly benefit from the DO guarantee of the second algorithm.
In this work, we are the first to explore a theory of composition for differentially oblivious algorithms. We propose a refinement of the DO notion called $(\epsilon, \delta)$-neighbor-preserving-DO, or $(\epsilon, \delta)$-NPDO for short, and we prove that our new notion indeed provides nice compositional guarantees. In this way, the algorithm designer can easily track the privacy loss when composing multiple DO algorithms.
We give several example applications to showcase the power and expressiveness of our new NPDO notion. One of these examples is a result of independent interest: we use the compositional framework to prove an optimal privacy amplification theorem for the differentially oblivious shuffle model. In other words, we show that for a class of distributed differentially private mechanisms in the shuffle-model, one can replace the perfectly secure shuffler with a DO shuffler, and nonetheless enjoy almost the same privacy amplification enabled by a shuffler.
Near-Optimal Private Information Retrieval with Preprocessing
In Private Information Retrieval (PIR), a client wishes to access an index $i$ from a public $n$-bit database without revealing any information about $i$. Recently, a series of works starting with the seminal paper of Corrigan-Gibbs and Kogan (EUROCRYPT 2020) considered PIR with \emph{client preprocessing} and \emph{no additional server storage}. In this setting, we now have protocols that achieve $\widetilde{O}(\sqrt{n})$ (amortized) server time and $\widetilde{O}(1)$ (amortized) bandwidth in the two-server model (Shi et al., CRYPTO 2021) as well as $\widetilde{O}(\sqrt{n})$ server time and $\widetilde{O}(\sqrt{n})$ bandwidth in the single-server model (Corrigan-Gibbs et al., EUROCRYPT 2022). Given existing lower bounds, a single-server PIR scheme with $\widetilde{O}(\sqrt{n})$ (amortized) server time and $\widetilde{O}(1)$ (amortized) bandwidth is still feasible; however, to date, no known protocol achieves such complexities. In this paper we fill this gap by constructing the first single-server PIR scheme with $\widetilde{O}(\sqrt{n})$ (amortized) server time and $\widetilde{O}(1)$ (amortized) bandwidth. Our scheme achieves near-optimal (optimal up to polylogarithmic factors) asymptotics in every relevant dimension.
Central to our approach is a new cryptographic primitive that we call an adaptable pseudorandom set: With an adaptable pseudorandom set, one can represent a large pseudorandom set with a succinct fixed-size key $k$, and can both add to and remove from the set a constant number of elements by manipulating the key $k$, while maintaining its concise description as well as its pseudorandomness (under a certain security definition).
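To make the interface concrete, here is a toy Python sketch of the adaptable-pseudorandom-set functionality (membership decided by a PRF key, with a constant number of additions and removals folded into the still-short key); this only illustrates the interface described above, not the paper's construction or its pseudorandomness guarantee.
\begin{verbatim}
import hmac, hashlib

DENSITY = 1 << 10   # membership rate ~ 1/DENSITY over a universe like [0, 2**20)

def prf_member(prf_key, x):
    tag = hmac.new(prf_key, x.to_bytes(4, "big"), hashlib.sha256).digest()
    return int.from_bytes(tag[:4], "big") % DENSITY == 0

def new_set(prf_key):
    # The whole dictionary plays the role of the succinct key k.
    return {"prf": prf_key, "added": set(), "removed": set()}

def member(k, x):
    if x in k["added"]:
        return True
    if x in k["removed"]:
        return False
    return prf_member(k["prf"], x)

def add(k, x):
    k["removed"].discard(x)
    k["added"].add(x)

def remove(k, x):
    k["added"].discard(x)
    k["removed"].add(x)

k = new_set(b"example prf key")
add(k, 12345)
remove(k, 777)
assert member(k, 12345) and not member(k, 777)
\end{verbatim}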
Batch point compression in the context of advanced pairing-based protocols
This paper continues previous ones about compression of points on elliptic curves $E_b\!: y^2 = x^3 + b$ (with $j$-invariant $0$) over a finite field $\mathbb{F}_{\!q}$ of characteristic $p > 3$. It is shown in detail how any two (resp., three) points from $E_b(\mathbb{F}_{\!q})$ can be quickly compressed to two (resp., three) elements of $\mathbb{F}_{\!q}$ (apart from a few auxiliary bits) in such a way that the corresponding decompression stage requires the extraction of only one cubic (resp., sextic) root in $\mathbb{F}_{\!q}$. As a result, for many fields $\mathbb{F}_{\!q}$ occurring in practice, the new compression-decompression methods are more efficient than the classical one with the two (resp., three) $x$ or $y$ coordinates of the points, which extracts two (resp., three) roots in $\mathbb{F}_{\!q}$. As a by-product, it is also explained how to sample uniformly at random two (resp., three) ``independent'' $\mathbb{F}_{\!q}$-points on $E_b$ essentially at the cost of only one cubic (resp., sextic) root in $\mathbb{F}_{\!q}$. Finally, the cases of four and more points from $E_b(\mathbb{F}_{\!q})$ are commented on as well.
A New Formulation of the Linear Equivalence Problem and Shorter LESS Signatures
The Linear Equivalence Problem (LEP) asks to find a linear isometry between a given pair of linear codes; in the Hamming metric this is known as a monomial map. LEP has been used in cryptography to design the family of LESS signatures, which also includes some advanced schemes, such as ring and identity-based signatures. All of these schemes are obtained by applying the Fiat-Shamir transformation to a Sigma protocol, in which the prover's responses contain a description of how the monomial map acts on all code coordinates; such a description constitutes the vast majority of the signature size. In this paper, we propose a new formulation of LEP, which we refer to as Information-Set (IS)-LEP. Exploiting IS-LEP, it is enough for the prover to provide the description of the monomial action only on an information set, instead of all the coordinates. Thanks to this new formulation, we are able to drastically reduce signature sizes for all LESS signature schemes, without any relevant computational overhead. We prove that IS-LEP and LEP are completely equivalent (indeed, the same problem), which means that the improvement comes with no additional security assumption.
Trivial Transciphering With Trivium and TFHE
We examine the use of Trivium and Kreyvium as transciphering mechanisms for use with the TFHE FHE scheme. Originally these two ciphers were investigated for FHE transciphering only in the context of the BGV/BFV FHE schemes, despite Trivium and Kreyvium being particularly suited to TFHE. Recent work by Dobraunig et al. gave some initial experimental results using TFHE. We show that these two symmetric ciphers have excellent performance when homomorphically evaluated using TFHE. Indeed we improve upon the results of Dobraunig et al. by at least two orders of magnitude in terms of latency. This shows that, for TFHE at least, one can transcipher using a standardized symmetric cipher (Trivium), without the need for special FHE-friendly ciphers being employed. For applications wanting extra security, but without the benefit of relying on a standardized cipher, our work shows that Kreyvium is a good candidate.
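Since the result hinges on Trivium being well suited to homomorphic evaluation, the following minimal Python sketch of the Trivium keystream generator (following the eSTREAM specification; key/IV bit-ordering conventions are simplified) shows why: each clock uses only XORs and three ANDs, exactly the gate profile that TFHE handles cheaply.
\begin{verbatim}
def trivium_keystream(key_bits, iv_bits, n_bits):
    """Trivium keystream (eSTREAM spec, unoptimized); inputs are 80-bit lists."""
    assert len(key_bits) == 80 and len(iv_bits) == 80
    s = [0] * 289                      # 288-bit state, 1-indexed; s[0] unused
    s[1:81] = key_bits
    s[94:174] = iv_bits
    s[286] = s[287] = s[288] = 1

    def clock():
        t1 = s[66] ^ s[93]
        t2 = s[162] ^ s[177]
        t3 = s[243] ^ s[288]
        z = t1 ^ t2 ^ t3               # output bit
        t1 ^= (s[91] & s[92]) ^ s[171]
        t2 ^= (s[175] & s[176]) ^ s[264]
        t3 ^= (s[286] & s[287]) ^ s[69]
        s[1:94] = [t3] + s[1:93]       # shift register A (93 bits)
        s[94:178] = [t1] + s[94:177]   # shift register B (84 bits)
        s[178:289] = [t2] + s[178:288] # shift register C (111 bits)
        return z

    for _ in range(4 * 288):           # warm-up clocks, output discarded
        clock()
    return [clock() for _ in range(n_bits)]
\end{verbatim}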
Toward Practical Lattice-based Proof of Knowledge from Hint-MLWE
In the last decade, zero-knowledge proof of knowledge protocols have been extensively studied to achieve active security of various cryptographic protocols. However, the existing solutions simply seek zero-knowledge for both the message and the randomness, which is overkill in many applications since protocols may remain secure even if some information about the randomness is leaked to the adversary.
We develop this idea to improve the state-of-the-art proof of knowledge protocols for RLWE-based public-key encryption and BDLOP commitment schemes. In a nutshell, we present new proof of knowledge protocols without using noise flooding or rejection sampling which are provably secure under a computational hardness assumption, called Hint-MLWE. We also show an efficient reduction from Hint-MLWE to the standard MLWE assumption.
Our approach enjoys the best of both worlds because it has no computational overhead from repetition (abort) and achieves a polynomial overhead between the honest and proven languages. We prove this claim by demonstrating concrete parameters and comparing them with previous results. Finally, we explain how our idea can be further applied to other proofs of knowledge providing advanced functionality.
Quantum Analysis of AES
Quantum computing is considered among the next big leaps in computer science. While a fully functional quantum computer is still in the future, there is an ever-growing need to evaluate the security of secret-key ciphers against a potent quantum adversary. Keeping this in mind, our work explores the key recovery attack using Grover's search on the three variants of AES (-128, -192, -256). In total, we develop a pool of 14 implementations per AES variant, taking the state-of-the-art advancements in the relevant fields into account.
In a nutshell, we present the least Toffoli depth and full depth implementations of AES, thereby improving on Zou et al.'s Asiacrypt'20 paper by more than 98 percent for all variants of AES. We show that the qubit count - Toffoli depth product is reduced from theirs by more than 75 percent. Furthermore, we analyze Jaques et al.'s Eurocrypt'20 implementations in detail, fix the bugs (arising from a problem with the quantum computing tool used, not from their coding) and report corrected benchmarks. To the best of our knowledge, our work improves on all previous works (including the Asiacrypt'22 paper by Huang and Sun) in terms of various quantum circuit complexity metrics (such as Toffoli depth, full depth, Toffoli depth - qubit count product, and so on).
Equipped with the basic AES implementations, we further investigate the prospect of Grover's search. In that direction, under the MAXDEPTH constraint (specified by NIST), the circuit depth metrics (Toffoli depth, T-depth and full depth) become crucial factors and parallelization often becomes necessary. We provide the least depth implementation in this respect, which offers the best performance in terms of circuit complexity metrics (such as the depth-squared - gate count product and the depth-squared - qubit count product).
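As background for the MAXDEPTH discussion (this is the standard Grover cost analysis, not a result of this paper): exhaustive search over a $k$-bit key needs about $\lfloor (\pi/4)\, 2^{k/2} \rfloor$ Grover iterations, and if a single iteration has depth $D$, fitting under a depth budget $\mathrm{MAXDEPTH}$ forces splitting the key space over $S$ parallel instances with
\[ \frac{\pi}{4}\sqrt{\frac{2^{k}}{S}}\cdot D \;\le\; \mathrm{MAXDEPTH} \quad\Longrightarrow\quad S \;\gtrsim\; \Bigl(\frac{\pi}{4}\cdot\frac{2^{k/2}\,D}{\mathrm{MAXDEPTH}}\Bigr)^{2}, \]
and the total gate count then grows by a factor of roughly $\sqrt{S}$, which is why reducing the depth metrics above directly lowers the overall attack cost.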
Simple Threshold (Fully Homomorphic) Encryption From LWE With Polynomial Modulus
The learning with errors (LWE) assumption is a powerful tool for building encryption schemes with useful properties, such as plausible resistance to quantum computers, or support for homomorphic computations. Despite this, essentially the only method of achieving threshold decryption in schemes based on LWE requires a modulus that is superpolynomial in the security parameter, leading to a large overhead in ciphertext sizes and computation time.
In this work, we propose a (fully homomorphic) encryption scheme that supports a simple $t$-out-of-$n$ threshold decryption protocol while allowing for a polynomial modulus. The main idea is to use the Rényi divergence (as opposed to the statistical distance as in previous works) as a measure of distribution closeness. This comes with some technical obstacles, due to the difficulty of using the Rényi divergence in decisional security notions such as standard semantic security. We overcome this by constructing a threshold scheme with a weaker notion of one-way security and then showing how to transform any one-way threshold scheme into one guaranteeing indistinguishability-based security.
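For reference, the (multiplicative) Rényi divergence of order $\alpha > 1$ typically used in such arguments, together with the probability-preservation property that replaces the statistical-distance union bound, is
\[ R_\alpha(P \,\|\, Q) \;=\; \Bigl(\sum_{x \in \mathrm{Supp}(P)} \frac{P(x)^{\alpha}}{Q(x)^{\alpha-1}}\Bigr)^{\frac{1}{\alpha-1}}, \qquad\qquad Q(E) \;\ge\; \frac{P(E)^{\frac{\alpha}{\alpha-1}}}{R_\alpha(P \,\|\, Q)} \ \text{ for any event } E, \]
so if an event has negligible probability under $Q$, it also has negligible probability under $P$ (for constant $\alpha$ and polynomially bounded $R_\alpha$), even if the two distributions are statistically far apart.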
Concurrent Security of Anonymous Credentials Light, Revisited
We revisit the concurrent security guarantees of the well-known Anonymous Credentials Light (ACL) scheme (Baldimtsi and Lysyanskaya, CCS'13). This scheme was originally proven secure when executed sequentially, and its concurrent security was left as an open problem.
A later work of Benhamouda et al. (EUROCRYPT'21) gave an efficient attack on ACL when executed concurrently, seemingly resolving this question once and for all.
In this work, we point out a subtle flaw in the attack of Benhamouda et al. on ACL and show that, contrary to popular opinion, it can be proven concurrently secure.
Our modular proof in the algebraic group model uses an ID scheme as an intermediate step and leads to a major simplification of the complex security argument for Abe's Blind Signature scheme by Kastner et al. (PKC'22).
Sigma Protocols from Verifiable Secret Sharing and Their Applications
Sigma protocols are one of the most common and efficient zero-knowledge proofs (ZKPs). Over the decades, a large number of Sigma protocols have been proposed, yet few works pay attention to the common design principle. In this work, we propose a generic framework of Sigma protocols for algebraic statements from verifiable secret sharing (VSS) schemes. Our framework provides a general and unified approach to understanding Sigma protocols.
It not only neatly explains classic protocols such as the Schnorr, Guillou–Quisquater and Okamoto protocols, but also leads to new Sigma protocols that were not previously known.
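For illustration, here is a minimal Python sketch of the Schnorr protocol, one of the classic Sigma protocols the framework is said to recover (toy group parameters, chosen only so the example runs; real instantiations use groups of cryptographic size):
\begin{verbatim}
import secrets

# Toy Schnorr Sigma protocol for knowledge of a discrete logarithm x with
# y = g^x in a subgroup of prime order q of Z_p^*.  Parameters are tiny.
p, q, g = 23, 11, 4

x = secrets.randbelow(q)     # prover's witness
y = pow(g, x, p)             # public statement

r = secrets.randbelow(q)     # prover's randomness
a = pow(g, r, p)             # commitment (first message)
c = secrets.randbelow(q)     # verifier's random challenge
z = (r + c * x) % q          # prover's response

assert pow(g, z, p) == (a * pow(y, c, p)) % p   # verifier's check: g^z = a*y^c
\end{verbatim}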
Furthermore, we show an application of our framework in designing ZKPs for composite statements, which contain both algebraic and non-algebraic statements. We give a generic construction of non-interactive ZKPs for composite statements by combining Sigma protocols from VSS and ZKPs following the MPC-in-the-head paradigm in a seamless way via a technique of \textit{witness sharing reusing}. Our construction has the advantage of requiring no “glue” proofs for combining algebraic and non-algebraic statements. By instantiating our construction using Ligero++ (Bhadauria et al., CCS 2020) and designing an associated Sigma protocol from VSS, we obtain a concrete ZKP for composite statements which achieves a tradeoff between running time and proof size, thus resolving the open problem left by Backes et al. (PKC 2019).
Threshold Structure-Preserving Signatures
Structure-preserving signatures (SPS) are an important building block for privacy-preserving cryptographic primitives, such as electronic cash, anonymous credentials, and delegatable anonymous credentials. In this work, we introduce the first threshold structure-preserving signature scheme (TSPS). This enables multiple parties to jointly sign a message, resulting in a standard, single-party SPS signature, and can thus be used as a replacement for applications based on SPS.
We begin by defining and constructing SPS for indexed messages, which are messages defined relative to a unique index. We prove its security in the random oracle model under a variant of the generalized Pointcheval-Sanders assumption (PS). Moreover, we generalize this scheme to an indexed multi-message SPS for signing vectors of indexed messages, which we prove secure under the same assumption. We then formally define the notion of a TSPS and propose a construction based on our indexed multi-message SPS. Our TSPS construction is fully non-interactive, meaning that signers simply output partial signatures without communicating with the other signers. Additionally, signatures are short: they consist of 2 group elements and require 2 pairing product equations to verify. We prove the security of our TSPS under the security of our indexed multi-message SPS scheme. Finally, we show that our TSPS may be used as a drop-in replacement for UC-secure Threshold-Issuance Anonymous Credential (TIAC) schemes, such as Coconut, without the overhead of the Fischlin transform.
More Balanced Polynomials: Cube Attacks on 810- and 825-Round Trivium with Practical Complexities
The key step of the cube attack is to recover the special polynomial, the superpoly, of the target cipher. In particular, the balanced superpoly, in which there exists at least one secret variable as a single monomial and none of the other monomials contain this variable, can be exploited to reveal one-bit information about the key bits. However, as the number of rounds grows, it becomes increasingly difficult to find such balanced superpolies. Consequently, traditional methods of searching for balanced superpolies soon hit a bottleneck. Aiming at performing a cube attack on more rounds of Trivium with a practical complexity, in this paper, we present three techniques to obtain sufficient balanced polynomials.
1. Based on the structure of Trivium, we propose a variable substitution technique to simplify the superpoly.
2. We obtain additional balanced polynomials by combining two superpolies to cancel the degree-2 terms.
3. We propose an experimental approach to construct high-quality large cubes which may contain more subcubes with balanced superpolies and a heuristic search strategy for their subcubes whose superpolies are balanced.
To illustrate the power of our techniques, we search for balanced polynomials for 810- and 825-round Trivium. As a result, we can mount cube attacks against 810- and 825-round Trivium with the time complexity of $2^{44.17}$ and $2^{53.17}$ round-reduced Trivium initializations, respectively, which can be verified in 48 minutes and 18 days on a PC with one A100 GPU. For the same level of time complexity, this improves the previous best results by $2$ and $5$ rounds, respectively.
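As a toy illustration of the superpoly notion used above (not taken from Trivium): for $f(v_1, v_2, x_1, x_2) = v_1 v_2 x_1 \oplus v_1 x_2 \oplus x_1 x_2 \oplus v_2$, summing over the cube $(v_1, v_2) \in \{0,1\}^2$ gives
\[ \bigoplus_{(v_1, v_2) \in \{0,1\}^2} f(v_1, v_2, x_1, x_2) \;=\; x_1, \]
since every monomial not divisible by $v_1 v_2$ is summed an even number of times; the resulting superpoly $x_1$ is balanced and reveals one bit of information about the secret variables.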
Public-Key Encryption with Quantum Keys
In the framework of Impagliazzo's five worlds, a distinction is often made between two worlds, one where public-key encryption exists (Cryptomania), and one in which only one-way functions exist (MiniCrypt). However, the boundaries between these worlds can change when quantum information is taken into account. Recent work has shown that quantum variants of oblivious transfer and multi-party computation, both primitives that are classically in Cryptomania, can be constructed from one-way functions, placing them in the realm of quantum MiniCrypt (the so-called MiniQCrypt). This naturally raises the following question: Is it possible to construct a quantum variant of public-key encryption, which is at the heart of Cryptomania, from one-way functions or potentially weaker assumptions?
In this work, we initiate the formal study of the notion of quantum public-key encryption (qPKE), i.e., public-key encryption where keys are allowed to be quantum states. We propose new definitions of security and several constructions of qPKE based on the existence of one-way functions (OWF), or even weaker assumptions, such as pseudorandom function-like states (PRFS) and pseudorandom function-like states with proof of destruction (PRFSPD). Finally, to give a tight characterization of this primitive, we show that computational assumptions are necessary to build quantum public-key encryption. That is, we give a self-contained proof that no quantum public-key encryption scheme can provide information-theoretic security.
FESTA: Fast Encryption from Supersingular Torsion Attacks
We introduce FESTA, an efficient isogeny-based public-key encryption (PKE) protocol based on a constructive application of the SIDH attacks.
At its core, FESTA is based on a novel trapdoor function, which uses an improved version of the techniques proposed in the SIDH attacks to develop a trapdoor mechanism. Using standard transformations, we construct an efficient PKE that is IND-CCA secure in the QROM. Additionally, using a different transformation, we obtain the first isogeny-based PKE that is IND-CCA secure in the standard model.
Lastly, we propose a method to efficiently find parameters for FESTA, and we develop a proof-of-concept implementation of the protocol. We expect FESTA to offer practical performance that is competitive with existing isogeny-based constructions.
One-out-of-Many Unclonable Cryptography: Definitions, Constructions, and More
The no-cloning principle of quantum mechanics enables us to achieve amazing unclonable cryptographic primitives, which is impossible in classical cryptography. However, the security definitions for unclonable cryptography are tricky. Achieving desirable security notions for unclonability is a challenging task. In particular, there is no indistinguishable-secure unclonable encryption and quantum copy-protection for single-bit output point functions in the standard model. To tackle this problem, we introduce and study relaxed but meaningful security notions for unclonable cryptography in this work. We call the new security notion one-out-of-many unclonable security.
We obtain the following results.
- We show that one-time strong anti-piracy secure secret key single-decryptor encryption (SDE) implies one-out-of-many indistinguishable-secure unclonable encryption.
- We construct a one-time strong anti-piracy secure secret key SDE scheme in the standard model from the LWE assumption.
- We construct one-out-of-many copy-protection for single-bit output point functions from one-out-of-many indistinguishable-secure unclonable encryption and the LWE assumption.
- We construct one-out-of-many unclonable predicate encryption (PE) from one-out-of-many indistinguishable-secure unclonable encryption and the LWE assumption.
Thus, we obtain one-out-of-many indistinguishable-secure unclonable encryption, one-out-of-many copy-protection for single-bit output point functions, and one-out-of-many unclonable PE in the standard model from the LWE assumption. In addition, our one-time SDE scheme is the first SDE scheme that does not rely on any oracle heuristics and strong assumptions such as indistinguishability obfuscation and witness encryption.
Publicly Verifiable Deletion from Minimal Assumptions
We present a general compiler to add the publicly verifiable deletion property for various cryptographic primitives including public key encryption, attribute-based encryption, and quantum fully homomorphic encryption. Our compiler only uses one-way functions, or more generally hard quantum planted problems for NP, which are implied by one-way functions.
It relies on minimal assumptions and enables us to add the publicly verifiable deletion property with no additional assumption for the above primitives. Previously, such a compiler needed additional assumptions such as injective trapdoor one-way functions or pseudorandom group actions [Bartusek-Khurana-Poremba, CRYPTO 2023]. Technically, we upgrade an existing compiler for privately verifiable deletion [Bartusek-Khurana, CRYPTO 2023] to achieve publicly verifiable deletion by using digital signatures.
Areion: Highly-Efficient Permutations and Its Applications (Extended Version)
In real-world applications, the overwhelming majority of cases require (authenticated) encryption or hashing with relatively short input, say up to 2K bytes. Almost all TCP/IP packets are 40 to 1.5K bytes, and the maximum packet lengths of major protocols, e.g., Zigbee, Bluetooth Low Energy, and Controller Area Network (CAN), are less than 128 bytes. However, existing schemes are not well optimized for short input. To bridge the gap between (future) real-world needs and the limited performance of state-of-the-art hash functions and authenticated encryptions with associated data (AEADs) for short input, we design a family of wide-block permutations Areion that fully leverages the power of AES instructions, which are widely deployed in many devices. As for its applications, we propose several hash functions and AEADs. Areion significantly outperforms existing schemes for short input and is even competitive for relatively long messages. Indeed, our hash function is surprisingly fast, and its performance is less than three cycles/byte on the latest Intel architecture for any message size. It is significantly faster than existing state-of-the-art schemes for short messages up to around 100 bytes, which are the most widely used input sizes in real-world applications, on both the latest CPU architectures (Ice Lake, Tiger Lake, and Alder Lake) and mobile platforms (Pixel 7, iPhone 14, and iPad Pro with Apple M2).
Noah's Ark: Efficient Threshold-FHE Using Noise Flooding
We outline a secure and efficient methodology to do threshold distributed decryption for LWE based Fully Homomorphic Encryption schemes. Due to the smaller parameters used in some FHE schemes, such as Torus-FHE (TFHE), the standard technique of ``noise flooding'' seems not to apply. We show that noise flooding can also be used with schemes with such small parameters, by utilizing a switch to a scheme with slightly higher parameters and then utilizing the efficient bootstrapping operations which TFHE offers. Our protocol is proved secure via a simulation argument, making its integration in bigger protocols easier to manage.
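Schematically (this is the textbook form of noise flooding, omitting the scheme switch that the protocol performs first): for an LWE ciphertext $(\mathbf{a}, b)$ with $b = \langle \mathbf{a}, \mathbf{s} \rangle + e + \Delta m$ and additive key shares $\mathbf{s} = \sum_i \mathbf{s}_i$, each party publishes a partial decryption
\[ p_i \;=\; \langle \mathbf{a}, \mathbf{s}_i \rangle + E_i, \qquad\text{so that}\qquad b - \sum_i p_i \;=\; \Delta m + e - \sum_i E_i, \]
where the flooding noise $E_i$ must be large enough to hide $e$; this is what requires a large enough ciphertext modulus, and hence the switch to a scheme with slightly larger parameters before decryption.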
SDitH in the QROM
The MPC in the Head (MPCitH) paradigm has recently led to significant improvements for signatures in the code-based setting. In this paper we consider some modifications to a recent twist of MPCitH, called Hypercube-MPCitH, that in the code-based setting provides the currently best known signature sizes. By compressing the Hypercube-MPCitH five-round code-based identification scheme into three rounds we obtain two main benefits. On the one hand, it allows us to further develop recent techniques to provide a tight security proof in the quantum-accessible random oracle model (QROM), avoiding the catastrophic reduction losses incurred when using generic QROM results for Fiat-Shamir. On the other hand, we can reduce the already low-cost online part of the signature even further. In addition, we propose the use of proof-of-work techniques that allow us to reduce the signature size. On the technical side, we develop generalizations of several QROM proof techniques and introduce a variant of the recently proposed extractable QROM.
GLEVIAN and VIGORNIAN: Robust beyond-birthday AEAD modes
The National Cyber Security Centre (NCSC) is the government organisation responsible for mitigating cyber security risks to the UK. Our work securing UK public- and private-sector networks involves (amongst many other security measures) research into cryptographic design, primarily to protect data requiring long-term security or data for which we have a particularly low tolerance of risk to its transmission and storage. Our algorithms prioritise robustness over other important considerations, such as performance, more highly than other designs do.
We present GLEVIAN and VIGORNIAN: two AEAD modes with proofs of beyond-birthday security, security against nonce misuse, and against the release of unverified plaintext – both of the latter in strong notions of these security properties. We discuss our hierarchy of requirements for AEAD modes, and the rationale for the design choices made.
GLEVIAN and VIGORNIAN demonstrate we can achieve significantly improved robustness over GCM for use cases where some performance degradation is acceptable. We are not aware of other designs offering exactly the security properties of GLEVIAN and VIGORNIAN, and are publishing our designs to support the research that will inform the recently announced effort by NIST to standardise new modes of operation. We believe our work could be of interest to those with use cases similar to ours, and we offer suggestions for future research that might build on the work in this paper.
Too Many Hints - When LLL Breaks LWE
All modern lattice-based schemes build on variants of the LWE problem. Information leakage of the LWE secret $\mathbf{s} \in \mathbb{Z}_q^n$ is usually modeled via so-called hints, i.e., inner products of $\mathbf{s}$ with some known vector.
At Crypto'20, Dachman-Soled, Ducas, Gong and Rossi (DDGR) defined, among others, so-called perfect hints and modular hints. The trailblazing DDGR framework allows one to successively integrate and combine hints into lattices, and estimates the resulting LWE security loss.
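In DDGR's terminology, these hints on the LWE secret $\mathbf{s}$ take the form
\[ \text{perfect hint: } \langle \mathbf{s}, \mathbf{v} \rangle = \ell \ \text{ (over } \mathbb{Z}\text{)}, \qquad\qquad \text{modular hint: } \langle \mathbf{s}, \mathbf{v} \rangle \equiv \ell \pmod{k}, \]
for known $\mathbf{v}$, $\ell$ (and modulus $k$); each such constraint can be folded into the lattice used for the attack and shrinks the space in which the secret has to be found.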
We introduce a new methodology to integrate and combine an arbitrary number of perfect and modular hints in a single stroke. In contrast to DDGR's approach, our methodology is significantly more efficient in constructing lattice bases, and thus easily allows for a large number of hints up to cryptographic dimensions -- a regime that is currently impractical in DDGR's implementation.
The efficiency of our method defines a large LWE parameter regime, in which we can fully carry out attacks faster than DDGR can solely estimate them.
The benefits of our approach allow us to practically determine how many hints are sufficient to efficiently break LWE-based lattice schemes.
E.g., for mod-$q$ hints, i.e., modular hints defined over $\mathbb{Z}_q$, we reconstruct Kyber-512 secret keys via LLL reduction (only!) using $449$ hints.
Our results for perfect hints significantly improve over these numbers, requiring for LWE dimension $n$ roughly $n/2$ perfect hints. E.g., we reconstruct via LLL reduction Kyber-512 keys with merely $234$ perfect hints.
If we resort to stronger lattice reduction techniques like BKZ, we need even fewer hints.
For mod-$q$ hints our method is extremely efficient: e.g., constructing our lattice bases and recovering the secret key via LLL takes around 20 minutes in total for dimension 512.
For perfect hints in dimension 512, we require around 3 hours.
Our results demonstrate that especially perfect hints are powerful in practice, and stress the necessity to properly protect lattice schemes against leakage.
Revisiting Higher-Order Differential-Linear Attacks from an Algebraic Perspective
The Higher-order Differential-Linear (HDL) attack was introduced by Biham \textit{et al.} at FSE 2005, where a linear approximation was appended to a Higher-order Differential (HD) transition.
It is a natural generalization of the Differential-Linear (DL) attack.
Due to some practical restrictions, however, HDL cryptanalysis has unfortunately attracted much less attention compared to its DL counterpart since its proposal.
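Recall (standard background, following Lai's notion of higher-order derivatives) that an $\ell^{th}$-order differential fixes input differences $a_1, \dots, a_\ell$ and studies the derivative
\[ D_{a_1, \dots, a_\ell} f(x) \;=\; \bigoplus_{S \subseteq \{1, \dots, \ell\}} f\Bigl(x \oplus \bigoplus_{i \in S} a_i\Bigr), \]
and an HDL approximation additionally applies a linear output mask to this sum and measures its bias; the DL case is recovered for $\ell = 1$.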
In this paper, we revisit HD/HDL cryptanalysis from an algebraic perspective and provide two novel tools for detecting possible HD/HDL distinguishers, including:
(a) Higher-order Algebraic Transitional Form (HATF) for probabilistic HD/HDL attacks;
(b) Differential Supporting Function (DSF) for deterministic HD attacks.
In general, the HATF can estimate the biases of $\ell^{th}$-order HDL approximations with complexity $\mathcal{O}(2^{\ell+d2^\ell})$ where $d$ is the algebraic degree of the function studied.
If the function is quadratic, the complexity can be further reduced to $\mathcal{O}(2^{3.8\ell})$.
HATF is therefore very useful in HDL cryptanalysis for ciphers with quadratic round functions, such as Ascon and Xoodyak.
DSF provides a convenient way to find good linearizations on the input of a permutation, which facilitates the search for HD distinguishers.
Unsurprisingly, HD/HDL attacks have the potential to be more effective than their simpler differential/DL counterparts.
Using HATF, we found many HDL approximations for round-reduced Ascon and Xoodyak initializations, with significantly larger biases than DL ones.
For instance, there are deterministic 2$^{nd}$-order/4$^{th}$-order HDL approximations for Ascon/Xoodyak initializations, respectively (which is believed to be impossible in the simple DL case).
We derived highly biased HDL approximations for 5-round Ascon up to 8$^{th}$ order, which improves the complexity of the distinguishing attack on 5-round Ascon from $2^{16}$ to $2^{12}$ calls.
We also proposed HDL approximations for 6-round Ascon and 5-round Xoodyak (under the single-key model), which could not be reached with simple DL so far.
For key recovery, HDL attacks are also more efficient than DL attacks, thanks to the larger biases of HDL approximations.
Additionally, HATF works well for DL (1$^{st}$-order HDL) attacks, and some well-known DL biases of Ascon and Xoodyak that could only be obtained experimentally before can now be predicted theoretically.
With the DSF, we propose a new distinguishing attack on the 8-round Ascon permutation, with a complexity of $2^{48}$.
Also, we provide a new zero-sum distinguisher for the full 12-round Ascon permutation with $2^{55}$ time/data complexity. We highlight that our cryptanalyses do not threaten the security of Ascon or Xoodyak.
Efficient Registration-Based Encryption
Registration-based encryption (RBE) was recently introduced as an alternative to identity-based encryption (IBE), to resolve the key-escrow problem: In RBE, the trusted authority is substituted with a weaker entity, called the key curator, who has no knowledge of any secret key. Users generate keys on their own and then publicly register their identities and their corresponding public keys to the key curator. RBE is a promising alternative to IBE, retaining many of its advantages while removing the key-escrow problem, the major drawback of IBE. Unfortunately, all existing constructions of RBE use cryptographic schemes in a non-black-box way, which makes them prohibitively expensive. It has been estimated that the size of an RBE ciphertext would be on the order of terabytes (though no RBE has ever been implemented).
In this work, we propose a new approach to construct RBE, from standard assumptions in bilinear groups. Our scheme is black-box and it is concretely highly efficient—a ciphertext is 914 bytes. To substantiate this claim, we implemented a prototype of our scheme and we show that it scales to millions of users. The public parameters of the scheme are on the order of kilobytes. The most expensive operation (registration) takes at most a handful of seconds, whereas the encryption and decryption runtimes are on the order of milliseconds. This is the first-ever implementation of an RBE scheme and demonstrates that the practical deployment of RBE is already possible with today’s hardware.
Keyed-Fully Homomorphic Encryption without Indistinguishability Obfuscation
(Fully) homomorphic encryption ((F)HE) allows users to publicly evaluate circuits on encrypted data. Although the public homomorphic evaluation property has various applications, (F)HE cannot achieve security against chosen ciphertext attacks (CCA2) due to its very nature. To achieve both CCA2 security and the homomorphic evaluation property, Emura et al. (PKC 2013) introduced keyed-homomorphic public key encryption (KH-PKE) and formalized its security, denoted $\mathsf{KH\textup{-}CCA}$ security. KH-PKE has a homomorphic evaluation key that enables users to perform homomorphic operations. Intuitively, KH-PKE achieves CCA2 security unless adversaries have a homomorphic evaluation key. Although Lai et al. (PKC 2016) proposed the first keyed-fully homomorphic encryption (keyed-FHE) scheme, its security relies on indistinguishability obfuscation ($\mathsf{iO}$), and this scheme satisfies only a weak variant of $\mathsf{KH\textup{-}CCA}$ security. Here, we propose a generic construction of a $\mathsf{KH\textup{-}CCA}$ secure keyed-FHE scheme from an FHE scheme secure against non-adaptive chosen ciphertext attacks (CCA1) and a strong dual-system simulation-sound non-interactive zero-knowledge (strong DSS-NIZK) argument system, using the Naor-Yung paradigm. We show that there are existing strong DSS-NIZK systems and IND-CCA1 secure FHE schemes that are suitable for our generic construction. This shows that there exists a keyed-FHE scheme built from simpler primitives than iO.
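Schematically, the classical Naor-Yung template referenced here encrypts the message under two independent public keys and attaches a simulation-sound NIZK that both ciphertexts carry the same plaintext,
\[ \mathsf{ct} \;=\; \bigl(\mathsf{Enc}_{pk_1}(m; r_1),\ \mathsf{Enc}_{pk_2}(m; r_2),\ \pi\bigr), \]
with decryption rejecting whenever $\pi$ does not verify; the construction above applies this template with the CCA1-secure FHE scheme and the strong DSS-NIZK as its ingredients.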
Pseudorandomness with Proof of Destruction and Applications
Two fundamental properties of quantum states that quantum information theory explores are pseudorandomness and provability of destruction. We introduce the notion of quantum pseudorandom states with proofs of destruction (PRSPD) that combines both these properties. Like standard pseudorandom states (PRS), these are efficiently generated quantum states that are indistinguishable from random, but they can also be measured to create a classical string. This string is verifiable (given the secret key) and certifies that the state has been destructed.
We show that, similarly to PRS, PRSPD can be constructed from any post-quantum one-way function. As far as the authors are aware, this is the first construction of a family of states that satisfies both pseudorandomness and provability of destruction.
We show that many cryptographic applications that were shown based on PRS variants using quantum communication can be based on (variants of) PRSPD using only classical communication. This includes symmetric encryption, message authentication, one-time signatures, commitments, and classically verifiable private quantum coins.
Generalized Special-Sound Interactive Proofs and their Knowledge Soundness
A classic result in the theory of interactive proofs shows that a special-sound $\Sigma$-protocol is automatically a proof of knowledge. This result is very useful to have, since the latter property is typically tricky to prove from scratch, while the former is often easy to argue -- if it is satisfied. While classic $\Sigma$-protocols often are special-sound, this is unfortunately not the case for many recently proposed, highly efficient interactive proofs, at least not in this strict sense. Motivated by this, the original result was recently generalized to $k$-special-sound $\Sigma$-protocols (for arbitrary, polynomially bounded $k$), and to multi-round versions thereof. This generalization is sufficient to analyze (e.g.) Bulletproofs-like protocols, but is still insufficient for many other examples.
In this work, we push the relaxation of the special soundness property to the extreme, by allowing an arbitrary access structure $\Gamma$ to specify for which subsets of challenges it is possible to compute a witness, when given correct answers to these challenges (for a fixed first message). Concretely, for any access structure $\Gamma$, we identify parameters $t_\Gamma$ and $\kappa_\Gamma$, and we show that any $\Gamma$-special-sound $\Sigma$-protocol is a proof of knowledge with knowledge error $\kappa_\Gamma$ if $t_\Gamma$ is polynomially bounded. Similarly for multi-round protocols.
We apply our general result to a couple of simple but important example protocols, where we obtain a tight knowledge error as an immediate corollary. Beyond these simple examples, we analyze the FRI protocol. Here, showing the general special soundness notion is non-trivial, but can be done (for a certain range of parameters) by recycling some of the techniques used to argue ordinary soundness of the protocol (as an IOP). Again as a corollary, we then derive that the FRI protocol, as an interactive proof by using a Merkle-tree commitment, has a knowledge extractor with almost optimal knowledge error, with the caveat that the extractor requires (expected) quasi-polynomial time.
Finally, building on the technique for the parallel repetition of $k$-special-sound $\Sigma$-protocols, we show the same strong parallel repetition result for $\Gamma$-special-sound $\Sigma$-protocols and their multi-round variants.
Efficient Aggregatable BLS Signatures with Chaum-Pedersen Proofs
BLS signatures have fast aggregated signature verification but slow individual signature verification. We propose a three-part optimisation that dramatically reduces CPU time in large distributed systems using BLS signatures: First, public keys should be given in both source groups $\mathbb{G}_1$ and $\mathbb{G}_2$, with a proof-of-possession check for correctness. Second, aggregated BLS signatures should carry their particular aggregate public key in $\mathbb{G}_2$, so that verifiers can do both hash-to-curve and aggregate public key checks in $\mathbb{G}_1$. Third, individual non-aggregated BLS signatures should carry short Chaum-Pedersen DLEQ proofs of correctness, so that verifying individual signatures no longer requires pairings, which makes their verification much faster. We prove security for these optimisations. The proposed scheme is implemented and benchmarked to compare it with the classic BLS scheme.
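For reference, a Chaum-Pedersen DLEQ proof convinces a verifier that two group elements share a discrete logarithm, $\log_g A = \log_h B = x$, without any pairing: the prover sends $(g^r, h^r)$, receives a random challenge $c$, and replies with $z = r + c x$, and the verifier accepts iff
\[ g^{z} = g^{r} \cdot A^{c} \qquad\text{and}\qquad h^{z} = h^{r} \cdot B^{c}. \]
Made non-interactive via Fiat-Shamir, such a proof adds only a few scalars/group elements to an individual signature while letting it be verified with group exponentiations instead of pairings.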
Distributed Broadcast Encryption from Bilinear Groups
Distributed broadcast encryption (DBE) improves on the traditional notion of broadcast encryption by eliminating the key-escrow problem: In a DBE system, users generate their own secret keys non-interactively without the help of a trusted party. Then anyone can broadcast a message for a subset S of the users, in such a way that the resulting ciphertext size is sublinear in (and, ideally, independent of) |S|. Unfortunately, the only known constructions of DBE require heavy cryptographic machinery, such as general-purpose indistinguishability obfuscation, or come without a security proof.
In this work, we formally show that obfuscation is not necessary for DBE, and we present two practical DBE schemes from standard assumptions in prime-order bilinear groups. Our constructions are conceptually simple, satisfy the strong notion of adaptive security, and are concretely efficient. In fact, their performance, in terms of the number of group elements and the efficiency of the algorithms, is comparable with that of traditional (non-distributed) broadcast encryption schemes from bilinear groups.
Covercrypt: an Efficient Early-Abort KEM for Hidden Access Policies with Traceability from the DDH and LWE
Attribute-Based Encryption (ABE) is a very attractive primitive to limit access according to specific rights. While very powerful instantiations have been offered, under various computational assumptions, they rely on either classical or post-quantum problems, and are quite intricate to implement, generally resulting in poor efficiency; the construction we offer achieves a significant efficiency gain with respect to existing solutions.
With the threat of quantum computers, post-quantum solutions are important, but not yet tested enough to be relied on exclusively. We thus first study a hybrid approach that relies on the best of both worlds: the scheme is secure as long as at least one of the two underlying assumptions (i.e., DDH and LWE) is still valid.
Then, we address the ABE problem, with a practical solution delivering encrypted contents such that only authorized users can decrypt, without revealing the target sets, while also granting tracing capabilities. Our scheme is inspired by the Subset Cover framework where the users' rights are organized as subsets and a content is encrypted with respect to a subset covering of the target set.
Quite conveniently, we offer black-box modularity: one can easily use any public-key encryption scheme of their choice, such as Kyber, with their favorite library, and combine it with a simple ElGamal-style key encapsulation mechanism, providing strong security guarantees.
Post-quantum hash functions using $\mathrm{SL}_n(\mathbb{F}_p)$
We define new families of Tillich-Zémor hash functions, using higher dimensional special linear groups over finite fields as platforms. The Cayley graphs of these groups combine fast mixing properties and high girth, which together give rise to good preimage and collision resistance of the corresponding hash functions. We justify the claim that the resulting hash functions are post-quantum secure.
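As a toy illustration of the Cayley/Tillich-Zémor hashing principle (shown with the classical $\mathrm{SL}_2$ generators over a small prime, not with the higher-dimensional generators proposed in the paper), a message is hashed by walking the Cayley graph, i.e., multiplying one fixed generator per message bit:
\begin{verbatim}
from functools import reduce

P = 1_000_003  # toy prime modulus; the paper works in higher-dimensional SL_n(F_p)

# Classical Cayley-hash generators in SL_2(F_p); the paper instead uses
# carefully chosen generators of SL_n(F_p) for n > 2.
A = ((1, 1), (0, 1))
B = ((1, 0), (1, 1))

def matmul2(x, y, p=P):
    return (((x[0][0]*y[0][0] + x[0][1]*y[1][0]) % p,
             (x[0][0]*y[0][1] + x[0][1]*y[1][1]) % p),
            ((x[1][0]*y[0][0] + x[1][1]*y[1][0]) % p,
             (x[1][0]*y[0][1] + x[1][1]*y[1][1]) % p))

def cayley_hash(bits):
    """Hash a bit string as a product of generators along a Cayley-graph walk."""
    return reduce(matmul2, (A if b == 0 else B for b in bits), ((1, 0), (0, 1)))

digest = cayley_hash([1, 0, 1, 1, 0, 0, 1])   # digest is a 2x2 matrix over F_p
\end{verbatim}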
Semi-Quantum Copy-Protection and More
Properties of quantum mechanics have enabled the emergence of quantum cryptographic protocols achieving important goals which are proven to be impossible classically. Unfortunately, this usually comes at the cost of needing quantum power from every party in the protocol, while arguably a more realistic scenario would be a network of classical clients, classically interacting with a quantum server.
In this paper, we focus on copy-protection, which is a quantum primitive that allows a program to be evaluated, but not copied, and which has attracted interest especially due to its links to other unclonable cryptographic primitives. Our main contribution is to show how to dequantize existing quantum copy-protection from hidden coset states, by giving a construction for classically-instructed remote state preparation for coset states. We then apply this dequantizer to obtain semi-quantum cryptographic protocols for copy-protection and tokenized signatures with strong unforgeability. In the process, we present the first secure copy-protection scheme for point functions in the plain model and a new direct product hardness property of coset states which immediately implies a strongly unforgeable tokenized signature scheme.
Towards Topology-Hiding Computation from Oblivious Transfer
Topology-Hiding Computation (THC) enables parties to securely compute a function on an incomplete network without revealing the network topology. It is known that secure computation on a complete network can be based on oblivious transfer (OT), even if a majority of the participating parties are corrupt. In contrast, THC in the dishonest majority setting is only known from assumptions that imply (additively) homomorphic encryption, such as Quadratic Residuosity, Decisional Diffie-Hellman, or Learning With Errors.
In this work we move towards closing the gap between MPC and THC by presenting a protocol for THC on general graphs secure against all-but-one semi-honest corruptions from constant-round constant-overhead secure two-party computation.
Our protocol is therefore the first to achieve THC on arbitrary networks without relying on assumptions with rich algebraic structure. As a technical tool, we introduce the notion of locally simulatable MPC, which we believe to be of independent interest.
Unconditionally Secure Multiparty Computation for Symmetric Functions with Low Bottleneck Complexity
Bottleneck complexity is an efficiency measure of secure multiparty computation (MPC) introduced by Boyle et al. (ICALP 2018) to achieve load balancing. Roughly speaking, it is defined as the maximum communication complexity required by any player within the protocol execution. Since it was shown to be impossible to achieve sublinear bottleneck complexity in the number of players $n$ for all functions, prior work constructed MPC protocols with low bottleneck complexity for specific functions. However, the previous protocol for symmetric functions needs to assume garbled circuits as a computational primitive, and its unconditionally secure variant has bottleneck complexity exponential in the depth of an arithmetic formula computing the function, which limits the class of symmetric functions the protocol can compute with sublinear bottleneck complexity in $n$. In this work, we make the following contributions to unconditionally secure MPC protocols for symmetric functions with sublinear bottleneck complexity in $n$.
\begin{itemize}
\item We propose for the first time unconditionally secure MPC protocols computing \textit{any} symmetric function with sublinear bottleneck complexity in $n$. Technically, our first protocol is inspired by the one-time truth-table protocol by Ishai et al. (TCC 2013) but our second and third protocols use a novel technique to express the one-time truth-table as an array of two or higher dimensions and achieve better trade-offs.
\item We propose an unconditionally secure protocol tailored to the AND function with lower bottleneck complexity. It avoids pseudorandom functions used by the previous protocol for the AND function, preserving bottleneck complexity up to a logarithmic factor in $n$.
\item By combining our protocol for the AND function with Bloom filters, we construct an unconditionally secure protocol for private set intersection (PSI), which computes the intersection of players' private sets (a toy illustration of this combination is sketched after this list). This is the first PSI protocol with sublinear bottleneck complexity in $n$ and, to the best of our knowledge, there has been no such protocol even under cryptographic assumptions.
\end{itemize}
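The following toy Python sketch shows, entirely in the clear (no secret sharing or MPC), the combinatorial idea behind the last item: each party's set is encoded as a Bloom filter, the filters are combined by a bitwise AND (the step that the unconditionally secure AND protocol performs on secret-shared bits), and candidate intersection elements are those accepted by the combined filter, up to Bloom-filter false positives.
\begin{verbatim}
import hashlib

M, K = 256, 3  # Bloom filter length and number of hash functions (toy values)

def positions(item):
    return [int.from_bytes(hashlib.sha256(f"{i}|{item}".encode()).digest()[:4],
                           "big") % M for i in range(K)]

def bloom(items):
    bits = [0] * M
    for it in items:
        for pos in positions(it):
            bits[pos] = 1
    return bits

def contains(bits, item):
    return all(bits[pos] for pos in positions(item))

sets = (["a", "b", "c"], ["b", "c", "d"], ["c", "b"])
filters = [bloom(s) for s in sets]
combined = [min(col) for col in zip(*filters)]   # bitwise AND of all filters
candidates = [x for x in ["a", "b", "c", "d"] if contains(combined, x)]
# candidates == ["b", "c"] except for (unlikely) Bloom-filter false positives
\end{verbatim}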
Hidden Stabilizers, the Isogeny To Endomorphism Ring Problem and the Cryptanalysis of pSIDH
The Isogeny to Endomorphism Ring Problem (IsERP) asks to compute the endomorphism ring of the codomain of an isogeny between supersingular curves in characteristic $p$, given only a representation for this isogeny, i.e. some data and an algorithm to evaluate this isogeny on any torsion point. This problem plays a central role in isogeny-based cryptography; it underlies the security of the pSIDH protocol (ASIACRYPT 2022) and it is at the heart of the recent attacks that broke the SIDH key exchange. Prior to this work, no efficient algorithm was known to solve IsERP for a generic isogeny degree, the hardest case seemingly being when the degree is prime.
In this paper, we introduce a new quantum polynomial-time algorithm to solve IsERP for isogenies whose degrees are odd and have $O(\log\log p)$ many prime factors. As main technical tools, our algorithm uses a quantum algorithm for computing hidden Borel subgroups, a group action on supersingular isogenies from EUROCRYPT 2021, various algorithms for the Deuring correspondence and a new algorithm to lift arbitrary quaternion order elements modulo an odd integer $N$ with $O(\log\log p)$ many prime factors to powersmooth elements.
As a main consequence for cryptography, we obtain a quantum polynomial-time key recovery attack on pSIDH. The technical tools we use may also be of independent interest.
From MLWE to RLWE: A Differential Fault Attack on Randomized & Deterministic Dilithium
The post-quantum digital signature scheme CRYSTALS-Dilithium has recently been selected by NIST for standardization. Implementing CRYSTALS-Dilithium, and other post-quantum cryptography schemes, on embedded devices raises a new set of challenges, including ones related to performance in terms of speed and memory requirements, but also related to security against side-channel and fault injection attacks. In this work, we investigate the latter and describe a differential fault attack on the randomized and deterministic versions of CRYSTALS-Dilithium. Notably, the attack requires a few instruction skips and is able to reduce the MLWE problem that Dilithium is based on to a smaller RLWE problem, which can be practically solved with lattice reduction techniques. Accordingly, we demonstrate key recoveries using hints extracted about the secret key from the same faulted signatures, using the LWE-with-side-information framework introduced by Dachman-Soled et al. at CRYPTO’20. As a final contribution, we propose algorithmic countermeasures against this attack and, in particular, show that the second one can be parameterized to induce only a negligible overhead over the signature generation.
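For background, a standard recap in our notation (not the paper's specific fault analysis): with $R_q=\mathbb{Z}_q[x]/(x^{256}+1)$, a Dilithium key pair satisfies the Module-LWE relation
$$\mathbf{t} \;=\; \mathbf{A}\,\mathbf{s}_1 + \mathbf{s}_2, \qquad \mathbf{A}\in R_q^{k\times \ell},\quad \mathbf{s}_1\in R_q^{\ell},\quad \mathbf{s}_2\in R_q^{k},$$
with small $\mathbf{s}_1,\mathbf{s}_2$, whereas RLWE is the special case $k=\ell=1$; this is why the reduction to a smaller RLWE instance described above yields a much easier lattice problem.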
Cuckoo Commitments: Registration-Based Encryption and Key-Value Map Commitments for Large Spaces
Registration-Based Encryption (RBE) [Garg et al. TCC'18] is a public-key encryption mechanism in which users generate their own public and secret keys, and register their public keys with a central authority called the key curator.
Similarly to Identity-Based Encryption (IBE), in RBE users can encrypt by only knowing the public parameters and the public identity of the recipient. Unlike IBE, though, RBE does not suffer from the key escrow problem (one of the main obstacles to IBE's adoption in practice), since the key curator holds no secret.
In this work, we put forward a new methodology to construct RBE schemes that support large user identities (i.e., arbitrary strings). Our main result is the first efficient pairing-based RBE for large identities.
Prior to our work, the most efficient RBE is that of [Glaeser et al. ePrint'22] which only supports small identities. The only known RBE schemes with large identities are realized either through expensive non-black-box techniques (ciphertexts of 3.6 TB for 1000 users), or via a specialized lattice-based construction [Döttling et al. Eurocrypt'23] (ciphertexts of 2.4 GB), or through the more complex notion of Registered Attribute-Based Encryption [Hohenberger et al. Eurocrypt’23]. By unlocking the use of pairings for RBE with large identity space, we enable a further improvement of three orders of magnitude, as our ciphertexts for a system with 1000 users are 1.7 MB.
The core technique of our approach is a novel use of cuckoo hashing in cryptography that can be of independent interest. We give two main applications. The first one is the aforementioned RBE methodology, where we use cuckoo hashing to compile an RBE with small identities into one for large identities. The second one is a way to convert any vector commitment scheme into a key-value map commitment. For instance, this leads to the first algebraic pairing-based key-value map commitments.
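A minimal plaintext sketch of the cuckoo-hashing ingredient itself, i.e., the standard insert-with-eviction procedure (the cryptographic compilation is not shown, and all names and parameters here are ours):
\begin{verbatim}
import hashlib

TABLE_SIZE = 16   # illustrative
MAX_KICKS = 32    # eviction budget before a rebuild would be triggered

def h(seed, key):
    return int(hashlib.sha256(f"{seed}:{key}".encode()).hexdigest(), 16) % TABLE_SIZE

tables = [dict(), dict()]   # two tables, one per hash function

def insert(key):
    # Standard cuckoo insertion: place the key in one of its two candidate
    # slots, evicting and re-inserting the displaced key if necessary.
    cur, side = key, 0
    for _ in range(MAX_KICKS):
        slot = h(side, cur)
        cur, tables[side][slot] = tables[side].get(slot), cur
        if cur is None:
            return
        side = 1 - side
    raise RuntimeError("rebuild with fresh hash functions")   # rare

def lookup(key):
    # Every stored key sits in exactly one of its two candidate slots.
    return tables[0].get(h(0, key)) == key or tables[1].get(h(1, key)) == key

for k in ["alice@example.com", "bob@example.com", "carol@example.com"]:
    insert(k)
print(lookup("bob@example.com"), lookup("dave@example.com"))   # True False
\end{verbatim}
As we read the abstract, the compiler maps each large identity to a constant number of candidate slots of a small-identity RBE in this cuckoo fashion, which is what keeps public parameters and ciphertexts compact.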
Improving logarithmic derivative lookups using GKR
In this informal note, we instantiate the Goldwasser-Kalai-Rothblum (GKR) protocol to prove the fractional sumchecks that appear in lookup arguments based on logarithmic derivatives, with the following impact on the prover cost of logUp (IACR eprint 2022/1530):
When looking up $M\geq 1$ columns in a (for simplicity) single-column table, the prover has to commit to only a single extra column, namely the multiplicities of the table entries.
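For reference, the fractional sumcheck in question comes from the logarithmic-derivative identity underlying logUp (notation ours, with the usual caveats on the field characteristic): the looked-up values $\{f_i\}$ all lie in the table $\{t_j\}$ if and only if there are multiplicities $m_j$ with
$$\sum_{i} \frac{1}{X - f_i} \;=\; \sum_{j} \frac{m_j}{X - t_j}$$
as rational functions in $X$; GKR then proves the two fractional sums layer by layer, so no auxiliary inverse columns need to be committed.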
In order to carry over the GKR fractional sumcheck to the univariate setting, we furthermore introduce a simple, yet (as far as we know) novel transformation for turning a univariate polynomial commitment scheme into a multilinear one. The transformation complements existing approaches and might be of independent interest for its elegant way to prove arbitrary powers of the lexicographic shift over the Boolean hypercube.
The supersingular Endomorphism Ring and One Endomorphism problems are equivalent
The supersingular Endomorphism Ring problem is the following: given a supersingular elliptic curve, compute all of its endomorphisms. The presumed hardness of this problem is foundational for isogeny-based cryptography. The One Endomorphism problem only asks to find a single non-scalar endomorphism. We prove that these two problems are equivalent, under probabilistic polynomial time reductions.
We prove a number of consequences. First, assuming the hardness of the endomorphism ring problem, the Charles–Goren–Lauter hash function is collision resistant, and the SQIsign identification protocol is sound. Second, the endomorphism ring problem is equivalent to the problem of computing arbitrary isogenies between supersingular elliptic curves, a result previously known only for isogenies of smooth degree. Third, there exists an unconditional probabilistic algorithm to solve the endomorphism ring problem in time $\tilde O(p^{1/2})$, a result that previously required assuming the generalized Riemann hypothesis.
To prove our main result, we introduce a flexible framework for the study of isogeny graphs with additional information. We prove a general and easy-to-use rapid mixing theorem.
To attest or not to attest, this is the question – Provable attestation in FIDO2
FIDO2 is currently the main initiative for passwordless authentication in web servers. It mandates the use of secure hardware authenticators to protect the authentication protocol’s secrets from compromise. However, to ensure that only secure authenticators are being used, web servers need a method to attest their properties. The FIDO2 specifications allow authenticators and web servers to choose between different attestation modes to prove the characteristics of an authenticator; however, the properties of most of these modes have not been analysed in the context of FIDO2. In this work, we analyse the security and privacy properties of FIDO2 when the different attestation modes included in the standard are used, and show that they lack a good balance between security, privacy, and revocation of corrupted devices. For example, the basic attestation mode prevents remote servers from tracing users’ actions across different services while requiring reduced trust assumptions. However, if one device is compromised, all the devices from the same batch (e.g., of the same brand or model) need to be recalled, which can be quite complex (and arguably impractical) in consumer scenarios. As a consequence, we suggest a new attestation mode based on the recently proposed TokenWeaver, which provides more convenient mechanisms for revoking a single token while maintaining user privacy.
Algebraic Attacks on Round-Reduced RAIN and Full AIM-III
Picnic is a NIST PQC Round 3 Alternate signature candidate that builds upon symmetric primitives following the MPC-in-the-head paradigm. Recently, researchers have been exploring more secure/efficient signature schemes from conservative one-way functions based on AES, or from new low-complexity one-way functions like Rain (CCS 2022) and AIM (CCS 2023). The signature schemes based on Rain and AIM are currently the most efficient among MPC-in-the-head-based schemes, making them promising post-quantum digital signature candidates.
However, the exact hardness of these new one-way functions deserves further study and scrutiny. This work presents algebraic attacks on RAIN and AIM for certain instances, where one-round Rain can be compromised with complexity $2^{n/2}$ for security parameter $n\in \{128,192,256\}$, and two-round Rain can be broken in $2^{120.3}$, $2^{180.4}$, and $2^{243.1}$ encryptions, respectively. Additionally, we demonstrate an attack on AIM-III (which aims at 192-bit security) with a complexity of $2^{186.5}$ encryptions. These attacks exploit the algebraic structure of the power function over fields with characteristic 2, which provides potential insights into the algebraic structures of some symmetric primitives and thus might be of independent interest.
ZK-for-Z2K: MPC-in-the-Head Zero-Knowledge Proofs for $\mathbb{Z}_{2^k}$
In this work, we extend the MPC-in-the-head framework, used in recent efficient zero-knowledge protocols, to work over the ring $\mathbb{Z}_{2^k}$, which is the primary operating domain for modern CPUs. The proposed schemes are compatible with any threshold linear secret sharing scheme and draw inspiration from MPC protocols adapted for ring operations. Additionally, we explore various batching methodologies, leveraging Shamir's secret sharing schemes and Galois ring extensions, and show the applicability of our approach in RAM program verification. Finally, we analyse different options for instantiating the resulting ZK scheme over rings and compare their communication costs.
Parallel Hardware for Isogeny-based VDF: Attacker's Perspective
The long running time of isogeny-based cryptographic constructions has proved to be a blessing in disguise for one particular type of primitive, namely Verifiable Delay Functions (VDFs). VDFs are characterised by sequential function evaluation but immediate output verification. In order to ensure the secure use of VDFs in real-world applications, it is important to determine the fastest possible implementation. Taking the point of view of an attacker (say, with unbounded resources), this paper aims to achieve the fastest possible hardware implementation of isogeny-based VDFs. It is the first work that implements the $2^T$-isogeny walk involved in the evaluation step of an isogeny VDF. To meet our goal, we use redundant representations of integers and introduce a new lookup-table-based algorithm for modular reduction. We also provide a survey of elliptic curve arithmetic to arrive at the most cost-effective curve computations, and propose an improvement of the point doubling algorithm for better parallelism. The evaluation step of a VDF is defined to be sequential, which means that there is limited scope for parallelism. Nevertheless, taking this constraint into account, our proposed design targets the highest level of parallelism possible at the architectural level of an isogeny VDF implementation. We provide a detailed analysis of all our arithmetic modules as well as estimates for their critical path delays and area consumption. Our 28nm ASIC design computes a $4^{100} = 2^{200}$-isogeny in 7.1$\mu s$. It is the first high-performance ASIC implementation for the evaluation of isogeny VDFs.
Generic SCARE: reverse engineering without knowing the algorithm nor the machine
We introduce a novel side-channel-based reverse engineering technique capable of reconstructing a procedure solely from inputs, outputs, and traces of execution.
Beyond generic restrictions, we do not assume any prior knowledge of the procedure or the chip it operates on.
These restrictions confine our analysis to 8-bit RISC constant-time software implementations.
Specifically, we demonstrate the feasibility of reconstructing a symmetric cryptographic cipher, even in scenarios where traces are sampled with information loss and noise, such as when measuring the power consumption of the chip.
Incrementally Verifiable Computation via Rate-1 Batch Arguments
Non-interactive delegation schemes enable producing succinct proofs (that can be efficiently verified) that a machine $M$ transitions from $c_1$ to $c_2$ in a certain number of deterministic steps. We here consider the problem of efficiently \emph{merging} such proofs: given a proof $\Pi_1$ that $M$ transitions from $c_1$ to $c_2$, and a proof $\Pi_2$ that $M$ transitions from $c_2$ to $c_3$, can these proofs be efficiently merged into a single short proof (of roughly the same size as the original proofs) that $M$ transitions from $c_1$ to $c_3$? To date, the only known constructions of such a mergeable delegation scheme rely on strong non-falsifiable ``knowledge extraction" assumptions.
In this work, we present a provably secure construction based on the standard LWE assumption.
As an application of mergeable delegation, we obtain a construction of incrementally verifiable computation (IVC) (with polylogarithmic length proofs) for any (unbounded) polynomial number of steps based on LWE; as far as we know, this is the first such construction based on any falsifiable (as opposed to knowledge-extraction) assumption. The central building block that we rely on, and construct based on LWE, is a rate-1 batch argument (BARG): this is a non-interactive argument for NP that enables proving $k$ NP statements $x_1,..., x_k$ with communication/verifier complexity $m+o(m)$, where $m$ is the length of one witness. Rate-1 BARGs are particularly useful as they can be recursively composed a super-constant number of times.
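As a toy illustration of why mergeable proofs yield IVC, here is a sketch of the folding pattern only; prove_step and merge are hypothetical stand-ins for the delegation scheme, and a real IVC prover would maintain the partial proofs incrementally (a binary counter of pending proofs) rather than holding all step proofs at once:
\begin{verbatim}
# Toy stand-ins: a "proof" merely records the claimed transition.
def prove_step(c_from, c_to):
    return {"from": c_from, "to": c_to}

def merge(p1, p2):
    # merge(pi_1: c1 -> c2, pi_2: c2 -> c3) yields a proof for c1 -> c3.
    assert p1["to"] == p2["from"]
    return {"from": p1["from"], "to": p2["to"]}

def fold(proofs):
    # Balanced merging: every final proof is the result of only O(log T)
    # nested merges, which keeps the recursion depth small.
    while len(proofs) > 1:
        proofs = [merge(proofs[i], proofs[i + 1]) if i + 1 < len(proofs)
                  else proofs[i] for i in range(0, len(proofs), 2)]
    return proofs[0]

steps = [prove_step(i, i + 1) for i in range(16)]   # c_0 -> c_1 -> ... -> c_16
print(fold(steps))                                  # {'from': 0, 'to': 16}
\end{verbatim}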
Degree-$D$ Reverse Multiplication-Friendly Embeddings: Constructions and Applications
In the recent work of (Cheon & Lee, Eurocrypt'22), the concept of a degree-$D$ packing method was formally introduced, which captures the idea of embedding multiple elements of a smaller ring into a larger ring, so that element-wise multiplication in the former is somewhat "compatible" with the product in the latter.
That work presents several optimal bounds and results and, furthermore, generalizes the concept from a single multiplication to degrees larger than two.
These packing methods encompass several constructions seen in the literature in contexts like secure multiparty computation and fully homomorphic encryption.
One such construction is the concept of reverse multiplication-friendly embeddings (RMFEs), which are essentially degree-2 packing methods.
In this work we generalize the notion of RMFEs to \emph{degree-$D$ RMFEs} which, in spite of being "more algebraic" than packing methods, turn out to be essentially equivalent.
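Concretely, and in our notation, a degree-$D$ RMFE over a Galois ring $R$ is a pair of $R$-linear maps $(\phi,\psi)$ with
$$\phi:\, R^{k} \to S, \qquad \psi:\, S \to R^{k}, \qquad \psi\big(\phi(\mathbf{x}_1)\cdots\phi(\mathbf{x}_D)\big) = \mathbf{x}_1 * \cdots * \mathbf{x}_D \quad \text{for all } \mathbf{x}_1,\dots,\mathbf{x}_D\in R^{k},$$
where $S$ is a Galois ring extension of $R$ and $*$ denotes the coordinate-wise product; the case $D=2$ recovers the usual RMFE.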
Then, we present a general construction of degree-$D$ RMFEs by generalizing the ideas on algebraic geometry used to construct traditional degree-$2$ RMFEs which, by the aforementioned equivalence, leads to explicit constructions of packing methods.
Furthermore, our theory is given in a unified manner for general Galois rings, which include both rings of the form $\mathbb{Z}_{p^k}$ and fields like $\mathbb{F}_{p^k}$, which have been treated separately in prior works.
We present multiple concrete sets of parameters for degree-$D$ RMFEs (including $D=2$), which can be useful for future works.
Finally, we apply our RMFEs to the task of non-interactively generating high degree correlations for secure multiparty computation protocols.
This requires the use of Shamir secret sharing for a large number of parties, which is known to require large-degree Galois ring extensions.
Our RMFE enables the generation of such preprocessing data over small rings, without paying for the multiplicative overhead incurred by using Galois ring extensions of large degree.
For our application we also construct along the way, as a side contribution of potential independent interest, a pseudo-random secret-sharing solution for the non-interactive generation of packed Shamir sharings over Galois rings with structured secrets, inspired by the PRSS solutions from (Benhamouda et al., TCC 2021).
OpenVoting: Recoverability from Failures in Dual Voting
In this paper we address the problem of recovery from failures without re-running entire elections when elections fail to verify. We consider the setting of $\textit{dual voting}$ protocols, where the cryptographic guarantees of end-to-end verifiable voting (E2E-V) are combined with the simplicity of audit using voter-verified paper records (VVPR). We first consider the design requirements of such a system and then suggest a protocol called $\textit{OpenVoting}$, which identifies a verifiable subset of error-free votes consistent with the VVPRs, and the polling booths corresponding to the votes that fail to verify with possible reasons for the failures.
Robust Publicly Verifiable Covert Security: Limited Information Leakage and Guaranteed Correctness with Low Overhead
Protocols with \emph{publicly verifiable covert (PVC) security} offer high efficiency and an appealing feature: a covert party may deviate from the protocol, but with a probability (\eg $90\%$, referred to as the \emph{deterrence factor}), the honest party can identify this deviation and expose it using a publicly verifiable certificate. These protocols are particularly suitable for practical applications involving reputation-conscious parties.
However, in the cases where misbehavior goes undetected (\eg with a probability of $10\%$), \emph{no security guarantee is provided for the honest party}, potentially resulting in a complete loss of input privacy and output correctness.
In this paper, we tackle this critical problem by presenting a highly effective solution. We introduce and formally define an enhanced notion called \emph{robust PVC security}, such that even if the misbehavior remains undetected, the malicious party can only gain an additional $1$ bit of information about the honest party's input while the correctness of the output is maintained. We propose a novel approach leveraging \emph{dual execution} and \emph{time-lock puzzles} to design a robust PVC-secure two-party protocol with \emph{low overhead} (depending on the deterrence factor). For instance, with a deterrence factor of $90\%$, our robust PVC-secure protocol incurs \emph{only ${\sim}10\%$ additional overhead} compared to the state-of-the-art PVC-secure protocol.
Given the stronger security guarantees with low overhead, our protocol is highly suitable for practical applications of secure two-party computation.
More Insight on Deep Learning-aided Cryptanalysis
At CRYPTO 2019, Gohr showed that well-trained neural networks could perform cryptanalytic distinguishing tasks superior to differential distribution table (DDT)-based distinguishers. This suggests that the differential-neural distinguisher (ND) may use additional information besides pure ciphertext differences. However, what explicit knowledge it exploits beyond the differential distribution has remained unclear. In this work, we provide explicit rules that can be used alongside DDTs to enhance the effectiveness of distinguishers compared to pure DDT-based distinguishers. These rules are based on strong correlations between bit values in right pairs of XOR-differential propagation through addition modulo $2^n$. Interestingly, they can be closely linked to the earlier study of multi-bit constraints and the recent study of fixed-key differential probability. In contrast, combining these rules with NDs does not improve their performance. This suggests that these rules, or an equivalent form of them, have already been exploited by NDs, highlighting the power of neural networks in cryptanalysis.
In addition, we find that to enhance the differential-neural distinguisher's accuracy and extend the number of rounds it covers, regulating the differential propagation is imperative. Introducing differences into the keys is typically believed to help eliminate differences in encryption states, resulting in stronger differential propagation. However, differential-neural attacks differ from traditional ones as they do not specify output differences or follow a single differential trail. This calls into question the usefulness of introducing differences into the key in differential-neural attacks and the resistance of Speck against such attacks in the related-key setting. This work shows that the power of differential-neural cryptanalysis in the related-key setting can exceed that in the single-key setting, by successfully conducting a 14-round key recovery attack on Speck32/64.
Comparse: Provably Secure Formats for Cryptographic Protocols
Data formats used for cryptographic inputs have historically been the source of many attacks on cryptographic protocols, but their security guarantees remain poorly studied. One reason is that, due to their low-level nature, formats often fall outside of the security model. Another reason is that studying all of the uses of all of the formats within one protocol is too difficult to do by hand, and requires a comprehensive, automated framework.
We propose a new framework, “Comparse”, that specifically tackles the security analysis of data formats in cryptographic protocols. Comparse forces the protocol analyst to systematically think about data formats, formalize them precisely, and show that they enjoy strong enough properties to guarantee the security of the protocol.
Our methodology is developed in three steps. First, we introduce a high-level cryptographic API that lifts the traditional game-based cryptographic assumptions over bitstrings to work over high-level messages, using formats. This allows us to derive the conditions that secure formats must obey in order for their usage to be secure. Second, equipped with these security criteria, we implement a framework for specifying and verifying secure formats in the F* proof assistant. Our approach is based on format combinators, which enable compositional and modular proofs. In many cases, we relieve the user of having to write those combinators by hand, using compile-time term synthesis via Meta-F*. Finally, we show that our F* implementation can replace the symbolic notion of message formats previously implemented in the DY* protocol analysis framework. Our newer, bit-level precise accounting of formats closes the modeling gap, and allows DY* to reason about concrete messages and identify protocol flaws that it was previously oblivious to.
We evaluate Comparse over several classic and real-world protocols. Our largest case studies use Comparse to formalize and provide security proofs for the formats used in TLS 1.3, as well as upcoming protocols like MLS and Compact TLS 1.3 (cTLS), providing confidence and feedback in the design of these protocols.
Threshold Signatures from Inner Product Argument: Succinct, Weighted, and Multi-threshold
Threshold signatures protect the signing key by sharing it among a group of signers so that an adversary must corrupt a threshold number of signers to be able to forge signatures. Existing threshold signatures with succinct signatures and constant verification times do not work if signers have different weights. Such weighted settings are seeing increasing importance in decentralized systems, especially in the Proof-of-Stake blockchains. This paper presents a new paradigm for threshold signatures for pairing- and discrete logarithm-based cryptosystems. Our scheme has a compact verification key consisting of only 7 group elements, and a signature consisting of 8 group elements. Verifying the signature requires 1 exponentiation and 13 bilinear pairings. Our scheme supports arbitrary weight distributions among signers and arbitrary thresholds. It requires non-interactive preprocessing after a universal powers-of-tau setup. We prove the security of our scheme in the Algebraic Group Model and implement it using golang. Our evaluation shows that our scheme achieves a comparable signature size and verification time to a standard (unweighted) threshold signature. Compared to existing multisignature schemes, our scheme has a much smaller public verification key.
A Modular Treatment of Blind Signatures from Identification Schemes
We propose a modular security treatment of blind signatures derived from linear identification schemes in the random oracle model. To this end, we present a general framework that captures several well-known schemes from the literature and allows us to prove their security.
Our modular security reduction introduces a new security notion for identification schemes, called One-More-Man-In-the-Middle security, which we show to be equivalent to the classical One-More-Unforgeability notion for blind signatures.
We also propose a generalized version of the Forking Lemma due to Bellare and Neven (CCS 2006) and show how it can be used to greatly improve the understandability of the classical security proofs for blind signature schemes by Pointcheval and Stern (Journal of Cryptology 2000).
Blockwise Rank Decoding Problem and LRPC Codes: Cryptosystems with Smaller Sizes
In this paper, we initiate the study of the Rank Decoding (RD) problem and LRPC codes with blockwise structures in rank-based cryptosystems. First, we introduce the blockwise errors ($\ell$-errors) where each error consists of $\ell$ blocks of coordinates with disjoint supports, and define the blockwise RD ($\ell$-RD) problem as a natural generalization of the RD problem whose solutions are $\ell$-errors (note that the standard RD problem is actually a special $\ell$-RD problem with $\ell=1$). We adapt the typical attacks on the RD problem to the $\ell$-RD problem, and find that the blockwise structures do not ease the problem too much: the $\ell$-RD problem is still exponentially hard for appropriate choices of $\ell>1$. Second, we introduce blockwise LRPC ($\ell$-LRPC) codes as generalizations of the standard LRPC codes whose parity-check matrices can be divided into $\ell$ sub-matrices with disjoint supports, i.e., the intersection of the two subspaces generated by the entries of any two sub-matrices is the zero subspace, and investigate the decoding algorithms for $\ell$-errors. We find that the gain of using $\ell$-errors in decoding capacity outweighs the complexity loss in solving the $\ell$-RD problem, which makes it possible to design more efficient rank-based cryptosystems with flexible choices of parameters.
As an application, we show that the two rank-based cryptosystems submitted to the NIST PQC competition, namely, RQC and ROLLO, can be greatly improved by using the ideal variants of the $\ell$-RD problem and $\ell$-LRPC codes. Concretely, for 128-bit security, our RQC has total public key and ciphertext sizes of 2.5 KB, which is not only about 50% more compact than the original RQC, but also smaller than the NIST Round 4 code-based submissions HQC, BIKE, and Classic McEliece.
Improving Privacy of Anonymous Proof-of-Stake Protocols
The proof of stake (PoS) mechanism, which allows stakeholders to issue a block with a probability proportional to their wealth instead of computational power, is believed to be an energy-efficient alternative to the proof of work (PoW). The privacy concern of PoS, however, is more subtle than that of PoW. Recent research has shown that current anonymous PoS (APoS) protocols do not suffice to protect the stakeholder's identity and stake, and the loss of privacy is theoretically inherent for any (deterministic) PoS protocol that provides liveness guarantees.
In this paper, we consider the concrete stake privacy of PoS, taking into account the limitations of attacks in practice.
To quantify the concrete stake privacy of PoS, we introduce the notion of $(T, \delta, \epsilon)$-privacy. Our analysis of $(T, \delta, \epsilon)$-privacy on Cardano shows to what extent stake privacy can be broken in practice, which also suggests possible parameter settings for rational $(T, \delta, \epsilon)$-privacy for PoS in the real world.
The data analysis of Cardano demonstrates that the $(T, \delta, \epsilon)$-privacy of current APoS is not satisfactory, mainly due to the deterministic leader election predicate in current PoS constructions. Inspired by the differential privacy technique, we propose an efficient non-deterministic leader election predicate, which can be used as a plugin to APoS protocols to protect stakes against frequency analysis. Based on our leader election predicate, we construct anonymous PoS with noise (APoS-N), which can offer better $(T, \delta, \epsilon)$-privacy than state-of-the-art works. Furthermore, we propose a method of proving the basic security properties of PoS in the noise setting, which can minimize the impact of the noise on the security threshold. This method can also be applied to the setting of PoS with variable stakes, which is of independent interest.
Ramp hyper-invertible matrices and their applications to MPC protocols
Beerliová-Trubíniová and Hirt introduced the hyper-invertible matrix technique to construct the first perfectly secure MPC protocol tolerating a maximal number of malicious corruptions, $\lfloor \frac{n-1}{3} \rfloor$, with linear communication complexity per multiplication gate [5]. Such a matrix allows an MPC protocol to generate correct shares of uniformly random secrets in the presence of a malicious adversary. Moreover, the amortized communication complexity of generating each sharing is linear. Due to this prominent feature, hyper-invertible matrices play an important role in the construction of MPC protocols and zero-knowledge proof protocols where randomness needs to be jointly generated. However, the downside of this technique is that the size of the base field must be linear in the size of the matrix. This means that if we construct an $n$-party MPC protocol over $\mathbb{F}_q$ via a hyper-invertible matrix, $q$ is at least $2n$.
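For reference, and in our phrasing: an $r\times c$ matrix $M$ over $\mathbb{F}_q$ is hyper-invertible if every square submatrix of it is invertible, i.e.,
$$\forall\, R\subseteq\{1,\dots,r\},\; C\subseteq\{1,\dots,c\},\; |R|=|C|\geq 1:\quad M_{R,C}\ \text{is invertible},$$
where $M_{R,C}$ is the submatrix indexed by the rows in $R$ and the columns in $C$. The standard construction interpolates polynomials through $r+c$ distinct evaluation points, which is where the requirement $q\geq 2n$ above comes from and which the ramp variant below relaxes.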
In this paper, we propose ramp hyper-invertible matrices, which can be seen as a generalization of hyper-invertible matrices. Our ramp hyper-invertible matrix can be defined over a constant-size field regardless of the size of the matrix. Similarly to arithmetic secret sharing schemes, to apply our ramp hyper-invertible matrix to a perfectly secure MPC protocol, the maximum number of corruptions has to be relaxed to $\frac{(1-\epsilon)n}{3}$. As a consequence, we present the first perfectly secure MPC protocol in the presence of $\frac{(1-\epsilon)n}{3}$ malicious corruptions with constant communication complexity. Besides presenting this variant of the hyper-invertible matrix, we overcome several obstacles in the construction of this MPC protocol. Our arithmetic secret sharing scheme over a constant-size field is compatible with the player elimination technique, i.e., it supports dynamic changes of the number of parties and of corrupted parties. Moreover, we rewrite the public reconstruction protocol to support sharings over a constant-size field. Putting these together leads to the constant-size-field variant of the celebrated MPC protocol in [5].
We note that although it was widely acknowledged that an MPC protocol with constant communication complexity exists by replacing the Shamir secret sharing scheme with an arithmetic secret sharing scheme, there is no reference seriously describing such a protocol in detail. Our work fills in the missing details by providing an MPC primitive for any application relying on an MPC protocol with constant communication complexity. As an application of our perfectly secure MPC protocol, which implies perfect robustness in the MPC-in-the-Head framework, we present a constant-rate zero-knowledge proof with $3$ communication rounds. The previous work achieves constant rate with $5$ communication rounds [32] due to the statistical robustness of their MPC protocol. Another application of our ramp hyper-invertible matrix is information-theoretic multi-verifier zero-knowledge for circuit satisfiability [43]. We manage to remove the dependence of the share size on the circuit size and the security parameter.
Key-Agreement with Perfect Completeness from Random Oracles
In the Random Oracle Model (ROM) all parties have oracle access to a common random function, and the parties are limited in the number of queries they can make to the oracle. The Merkle’s Puzzles protocol, introduced by Merkle [CACM ’78], is a key-agreement protocol in the ROM with a quadratic gap between the query complexity of the honest parties and the eavesdropper. This quadratic gap is known to be optimal, by the works of Impagliazzo and Rudich [STOC ’89] and Barak and Mahmoody [Crypto ’09].
When the oracle function is injective or a permutation, Merkle’s Puzzles has perfect completeness. That is, it is certain that the protocol results in agreement between the parties. However, without such an assumption on the random function, there is a small error probability, and the parties may end up holding different keys. This fact raises the question: Is there a key-agreement protocol with perfect completeness and super-linear security in the ROM?
In this paper we give a positive answer to the above question, showing that changing the query distribution of the parties in Merkle’s Puzzles yields a protocol with perfect completeness and roughly the same security.
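To make the completeness issue concrete, here is a toy simulation of one common way to present Merkle’s Puzzles in the ROM (formulations vary; the parameters and this particular presentation are ours):
\begin{verbatim}
import random

# Toy parameters: domain and range of the "oracle" are both [N], which
# exaggerates spurious output collisions compared to a realistic range.
N = 10_000
oracle = {x: random.randrange(N) for x in range(N)}   # a random *function*

# Alice queries ~sqrt(N) random points and publishes only the oracle outputs.
alice_points = random.sample(range(N), 100)
published = {oracle[x]: x for x in alice_points}      # output -> Alice's point

# Bob queries random points until one of his outputs matches a published one
# (in expectation this takes on the order of sqrt(N) queries).
while True:
    bob_point = random.randrange(N)
    if oracle[bob_point] in published:
        break

alice_key, bob_key = published[oracle[bob_point]], bob_point
# With an injective oracle, a match forces alice_key == bob_key (perfect
# completeness); with a plain random function, the match may be a spurious
# output collision, and then the parties end up holding different keys.
print("agreement:", alice_key == bob_key)
\end{verbatim}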
Boosting the Performance of High-Assurance Cryptography: Parallel Execution and Optimizing Memory Access in Formally-Verified Line-Point Zero-Knowledge
Despite the notable advances in the development of high-assurance, verified implementations of cryptographic protocols, such implementations typically face significant performance overheads, particularly due to the penalties induced by formal verification and automated extraction of executable code. In this paper, we address some core performance challenges facing computer-aided cryptography by presenting a formal treatment for accelerating such verified implementations based on multiple generic optimizations covering parallelism and memory access. We illustrate our techniques for addressing such performance bottlenecks using the Line-Point Zero-Knowledge (LPZK) protocol as a case study. Our starting point is a new verified implementation of LPZK that we formalize and synthesize using EasyCrypt; our first implementation is developed to reduce the proof effort and without considering the performance of the extracted executable code. We then show how such (automatically) extracted code can be optimized in three different ways to obtain a 3000x speedup, thus matching the performance of the manual implementation of LPZK. We obtain these performance gains by first modifying the algorithmic specifications, then by adopting a provably secure parallel execution model, and finally by optimizing the memory access structures. All optimizations are first formally verified inside EasyCrypt, and then executable code is automatically synthesized from each step of the formalization. For each optimization, we analyze the resulting performance gains, and we also address the challenges it poses for the computer-aided security proofs and for the automated synthesis of executable code.
WhatsUpp with Sender Keys? Analysis, Improvements and Security Proofs
Developing end-to-end encrypted instant messaging solutions for group conversations is an ongoing challenge that has garnered significant attention from practitioners and the cryptographic community alike. Notably, industry-leading messaging apps such as WhatsApp and Signal Messenger have adopted the Sender Keys protocol, where each group member shares their own symmetric encryption key with others. Despite its widespread adoption, Sender Keys has never been formally modelled in the cryptographic literature, raising the following natural question:
What can be proven about the security of the Sender Keys protocol, and how can we practically mitigate its shortcomings?
In addressing this question, we first introduce a novel security model to suit protocols like Sender Keys, deviating from conventional group key agreement-based abstractions. Our framework allows for a natural integration of two-party messaging within group messaging sessions that may be of independent interest. Leveraging this framework, we conduct the first formal analysis of the Sender Keys protocol, and prove it satisfies a weak notion of security. Towards improving security, we propose a series of efficient modifications to Sender Keys without imposing significant performance overhead. We combine these refinements into a new protocol that we call Sender Keys+, which may be of interest both in theory and practice.
Cryptographic Key Exchange: An Innovation Outlook
This article evaluates the innovation landscape facing the challenge of generating fresh shared randomness for cryptographic key exchange and various cyber security protocols. It discusses the main innovation thrust today, which focuses on quantum entanglement and on efficient engineering solutions to BB84 and its related alternatives. This innovation outlook highlights non-quantum solutions and describes NEPSAR, a mechanical-complexity-based solution that is applicable to any number of key-sharing parties. Short-lived secret keys are also mentioned, as well as emerging innovation routes based on Richard Feynman’s observation that “there is plenty of room at the bottom”: extracting plenty of digital randomness from tiny amounts of matter, which offers very many measurable attributes (nanotechnology).
Application of Mordell-Weil lattices with large kissing numbers to acceleration of multi-scalar multiplication on elliptic curves
This article aims to speed up (the precomputation stage of) multi-scalar multiplication (MSM) on ordinary elliptic curves of $j$-invariant $0$ with respect to specific ``independent'' (a.k.a. ``basis'') points. For this purpose, so-called Mordell--Weil lattices (up to rank $8$) with large kissing numbers (up to $240$) are employed. In a nutshell, the new approach consists in obtaining, more efficiently, a considerable number (up to $240$) of certain elementary linear combinations of the ``independent'' points. By scaling the point (re)generation process, it is thus possible to get a significant performance gain. As usual, the resulting curve points can then be used regularly in the main stage of an MSM algorithm to avoid repeating computations. Seemingly, this is the first use of lattices with large kissing numbers in cryptography, although such lattices have already found numerous applications in other mathematical domains. Without exaggeration, the article's results can strongly affect the performance of today's real-world elliptic-curve cryptography, since MSM is a widespread primitive (and often the unique bottleneck) in modern protocols. Moreover, the new (re)generation technique lends itself to further improvements by considering Mordell--Weil lattices with even greater kissing numbers.
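As a generic plaintext illustration of why precomputed combinations of fixed basis points pay off in MSM (this is the classic Straus/Shamir trick with integer addition standing in for point addition, not the article's Mordell--Weil lattice technique):
\begin{verbatim}
# Two "basis points" and their precomputed elementary combinations.
P1, P2 = 17, 101
table = {(0, 0): 0, (1, 0): P1, (0, 1): P2, (1, 1): P1 + P2}

def msm(a, b, bits=8):
    # Computes a*P1 + b*P2 with one doubling and at most one table addition
    # per bit, instead of two independent double-and-add loops.
    acc = 0
    for i in reversed(range(bits)):
        acc = acc + acc                                   # "point doubling"
        acc = acc + table[((a >> i) & 1, (b >> i) & 1)]   # reuse precomputation
    return acc

assert msm(45, 201) == 45 * P1 + 201 * P2
\end{verbatim}
The article's point, as we read it, is that lattices with large kissing numbers let many such combinations be (re)generated cheaply for much larger bases.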
Quantum Speed-Up for Multidimensional (Zero Correlation) Linear Distinguishers
This paper shows how to achieve a quantum speed-up for multidimensional (zero correlation) linear distinguishers.
A previous work by Kaplan et al. has already shown a quantum quadratic speed-up for one-dimensional linear distinguishers.
However, classical linear cryptanalysis often exploits multidimensional approximations to achieve more efficient attacks, and it is in fact highly non-trivial whether Kaplan et al.'s technique can be extended to the multidimensional case.
To remedy this, we investigate a new quantum technique to speed up multidimensional linear distinguishers.
Firstly, we observe that there is a close relationship between the subroutine of Simon's algorithm and linear correlations via Fourier transform.
Specifically, a slightly modified version of Simon's subroutine, which we call the Correlation Extraction Algorithm (CEA), can be used to speed up multidimensional linear distinguishers.
CEA also leads to a speed-up for multidimensional zero correlation distinguishers, as well as some integral distinguishers through the correspondence of zero correlation and integral properties shown by Bogdanov et al.~and Sun et al.
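For orientation, in standard notation (ours): the linear correlation of an $n$-bit keyed function $E_k$ for input mask $\alpha$ and output mask $\beta$ is
$$\mathrm{cor}(\alpha,\beta) \;=\; \frac{1}{2^{n}}\sum_{x\in\{0,1\}^{n}} (-1)^{\alpha\cdot x \,\oplus\, \beta\cdot E_k(x)},$$
i.e., a normalized Walsh--Hadamard (Fourier) coefficient, which is the connection CEA exploits; a multidimensional approximation tracks a whole subspace of such mask pairs at once.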
Furthermore, we observe the possibility of more-than-quadratic speed-ups for some special types of integral distinguishers when multiple integral properties exist.
In particular, we show a single-query distinguisher on a 4-bit-cell SPN cipher with the same integral property as 2.5-round AES.
Our attacks are the first to observe such a speed-up for classical cryptanalytic techniques without relying on hidden periods or shifts.
By replacing the Hadamard transform in CEA with the general quantum Fourier transform, our technique also speeds up generalized linear distinguishers on an arbitrary finite abelian group.
Registered ABE via Predicate Encodings
This paper presents the first generic black-box construction of registered attribute-based encryption (Reg-ABE) via predicate encoding [TCC'14]. The generic scheme is based on the $k$-Lin assumption in a prime-order bilinear group and implies the following concrete schemes, which improve upon existing results:
- the first Reg-ABE scheme for span programs in a prime-order group; prior work uses a composite-order group;
- the first Reg-ABE scheme for zero inner-product predicate from $k$-Lin assumption; prior work relies on generic group model (GGM);
- the first Reg-ABE scheme for arithmetic branching program (ABP) which has not been achieved previously.
Technically, we follow the blueprint of Hohenberger et al. [EUROCRYPT'23] but start from the prime-order dual-system ABE by Chen et al. [EUROCRYPT'15], which transforms a predicate encoding into an ABE. The proof follows the dual-system method in the context of Reg-ABE: we conceptually consider helper keys as secret keys; furthermore, malicious public keys are handled via pairing-based quasi-adaptive non-interactive zero-knowledge argument by Kiltz and Wee [EUROCRYPT'15].
Stronger Lower Bounds for Leakage-Resilient Secret Sharing
Threshold secret sharing allows a dealer to split a secret $s$ into $n$ shares, such that any $t$ shares allow for reconstructing $s$, but no $t-1$ shares reveal any information about $s$. Leakage-resilient secret sharing requires that the secret remains hidden, even when an adversary additionally obtains a limited amount of leakage from every share.
Benhamouda et al. (CRYPTO'18) proved that Shamir's secret sharing scheme is one-bit leakage-resilient for reconstruction threshold $t\geq0.85n$ and conjectured that the same holds for $t=c\cdot n$ for any constant $0\leq c\leq1$. Nielsen and Simkin (EUROCRYPT'20) showed that this is the best one can hope for by proving that Shamir's scheme is not secure against one-bit leakage when $t=c\cdot n/\log(n)$.
In this work, we strengthen the lower bound of Nielsen and Simkin. We consider noisy leakage-resilience, where a random subset of leakages is replaced by uniformly random noise. We prove a lower bound for Shamir's secret sharing, similar to that of Nielsen and Simkin, which holds even when a constant fraction of leakages is replaced by random noise.
To this end, we first prove a lower bound on the share size of any noisy-leakage-resilient sharing scheme. We then use this lower bound to show that there exist universal constants $c_1,c_2$, such that for infinitely many $n$, it holds that Shamir's secret sharing scheme is not noisy-leakage-resilient for $t\leq c_1\cdot n/\log(n)$, even when a $c_2$ fraction of leakages are replaced by random noise.
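For concreteness, a minimal sketch of Shamir's scheme, which all of these results concern (toy parameters; the leakage itself is not modelled here):
\begin{verbatim}
import random

P = 2**61 - 1   # prime field (toy choice)

def share(secret, n, t):
    # Random polynomial of degree t-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0.
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, n=10, t=4)
assert reconstruct(shares[:4]) == 123456789   # any t shares reconstruct
# Any t-1 shares reveal nothing about the secret; leakage resilience asks
# what happens when the adversary also sees a few bits of *every* share.
\end{verbatim}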
HELM: Navigating Homomorphic Encryption through Gates and Lookup Tables
As cloud computing continues to gain widespread adoption, safeguarding the confidentiality of data entrusted to third-party cloud service providers becomes a critical concern. While traditional encryption methods offer protection for data at rest and in transit, they fall short where it matters most, i.e., during data processing.
To address this limitation, we present HELM, a framework for privacy-preserving data processing using homomorphic encryption. HELM automatically transforms arbitrary programs expressed in a Hardware Description Language (HDL), such as Verilog, into equivalent homomorphic circuits, which can then be efficiently evaluated using encrypted inputs. HELM features two modes of encrypted evaluation: a) a gate mode that consists of standard Boolean gates, and b) a lookup table mode which significantly reduces the size of the circuit by combining multiple gates into lookup tables. Finally, HELM introduces a scheduler that enables embarrassingly parallel processing in the encrypted domain. We evaluate HELM with the ISCAS'85 and ISCAS'89 benchmark suites as well as real-world applications such as AES and image filtering. Our results outperform prior works by up to $65\times$.
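A plaintext illustration of the gate-mode versus lookup-table-mode trade-off (encryption is elided and the example function is ours; under FHE, each Boolean gate or each table lookup would correspond to one homomorphic operation, so folding gates into tables shrinks the evaluated circuit):
\begin{verbatim}
from itertools import product

# Target function: f(a, b, c) = (a AND b) XOR c

def f_gates(a, b, c):
    g1 = a & b        # one (homomorphic) AND gate
    return g1 ^ c     # one (homomorphic) XOR gate  -> 2 gate evaluations

# Lookup-table mode: precompute a single 3-input table once, then one
# (programmable-bootstrapping-style) lookup per evaluation.
LUT = {(a, b, c): (a & b) ^ c for a, b, c in product((0, 1), repeat=3)}

def f_lut(a, b, c):
    return LUT[(a, b, c)]

assert all(f_gates(*x) == f_lut(*x) for x in product((0, 1), repeat=3))
\end{verbatim}
Which groups of gates can be folded into one table is exactly the circuit-restructuring question that the lookup-table mode addresses; the sketch only shows the functional equivalence.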
Quantum-safe HIBE: does it cost a Latte?
The United Kingdom (UK) government is considering advanced primitives such as identity-based encryption (IBE) for adoption as it transitions its public-safety communications network from TETRA to an LTE-based service. However, the current LTE standard relies on elliptic-curve-based IBE, which will be vulnerable to quantum computing attacks, expected within the next 20-30 years. Lattices can provide quantum-safe alternatives for IBE. These schemes have shown promising results in terms of practicality. To date, several IBE schemes over lattices have been proposed, but there has been little in the way of practical evaluation. This paper provides the first complete optimised practical implementation and benchmarking of Latte, a promising Hierarchical IBE (HIBE) scheme proposed by the UK National Cyber Security Centre (NCSC) in 2017 and endorsed by the European Telecommunications Standards Institute (ETSI). We propose optimisations for the KeyGen, Delegate, Extract and Gaussian sampling components of Latte, to increase attack costs, reduce decryption key lengths by 2x-3x and ciphertext sizes by up to 33%, and improve speed. In addition, we conduct a precision analysis, bounding the Rényi divergence of the distribution of the real Gaussian sampling procedures from the ideal distribution, in corroboration of our claimed security levels. Our resulting implementation of the Delegate function takes 0.4 seconds at the 80-bit security level on a desktop machine at 4.2GHz, significantly faster than the order of minutes estimated in the ETSI technical report. Furthermore, our optimised Latte Encrypt/Decrypt implementation reaches speeds of up to 9.7x faster than the ETSI implementation.