All papers in 2022 (Page 5 of 1781 results)

Last updated:  2022-10-12
How to backdoor LWE-like cryptosystems
Tobias Hemmert
We present a rather generic backdoor mechanism that can be applied to many LWE-like public-key cryptosystems. Our construction manipulates the key generation algorithm of such schemes in a way that allows a malicious adversary in possession of secret backdoor information to recover generated secret keys from corresponding public keys. To any user of the cryptosystem however, the output of our backdoored key generation is indistinguishable from output of the legitimate key generation algorithm. Our construction relies on elliptic-curve cryptography and draws on existing work on encoding of elliptic curve points as bit strings. Our backdoor mechanism can be applied to public-key cryptosystems where the secret key is generated from a secret seed and the public key includes a public seed of the same length. This holds - though not exclusively - for many cryptosystems based on LWE. In particular, we point out that our construction can be applied to backdoor HQC, FrodoKEM, Kyber and Dilithium. We also suggest a countermeasure that makes our backdoor detectable by users of the cryptosystem. To this end, we modify the key generation such that the public and secret key are pseudorandomly generated from a single seed which is included in the generated secret key. This allows any user of the key generation algorithm to regenerate keys using an independent implementation, making our backdooring attempt detectable.
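The suggested countermeasure admits a compact illustration. The following Python sketch (the function names, seed lengths, and domain-separation labels are illustrative assumptions, not the paper's code) derives both seeds from a single master seed kept inside the secret key, so any user can re-run key generation on an independent implementation and compare public keys:

```python
import hashlib, os

def derive_public_key(public_seed: bytes, secret_seed: bytes) -> bytes:
    # Placeholder for the scheme's actual key derivation (e.g., expanding the
    # public seed into a matrix A and computing A*s + e in an LWE scheme).
    return hashlib.shake_256(b"pk" + public_seed + secret_seed).digest(32)

def keygen(master_seed=None):
    # Countermeasure: public and secret seeds are both pseudorandomly derived
    # from one master seed, which is stored inside the secret key.
    if master_seed is None:
        master_seed = os.urandom(32)
    public_seed = hashlib.shake_256(master_seed + b"public").digest(32)
    secret_seed = hashlib.shake_256(master_seed + b"secret").digest(32)
    return derive_public_key(public_seed, secret_seed), (master_seed, secret_seed)

def audit(pk, sk, independent_keygen) -> bool:
    # Regenerate the key pair from the stored master seed with an independent
    # implementation: a backdoored keygen that embeds trapdoor information in
    # the public seed cannot pass this deterministic consistency check.
    pk2, _ = independent_keygen(sk[0])
    return pk == pk2

pk, sk = keygen()
assert audit(pk, sk, keygen)
```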
Last updated:  2022-10-12
Post-Quantum Zero-Knowledge with Space-Bounded Simulation
Prabhanjan Ananth, Alex B. Grilo
The traditional definition of quantum zero-knowledge stipulates that the knowledge gained by any quantum polynomial-time verifier in an interactive protocol can be simulated by a quantum polynomial-time algorithm. One drawback of this definition is that it allows the simulator to consume significantly more computational resources than the verifier. We argue that this drawback renders the existing notion of quantum zero-knowledge not viable for certain settings, especially when dealing with near-term quantum devices. In this work, we initiate a fine-grained notion of post-quantum zero-knowledge that is more compatible with near-term quantum devices. We introduce the notion of $(s,f)$ space-bounded quantum zero-knowledge. In this new notion, we require that an $s$-qubit malicious verifier can be simulated by a quantum polynomial-time algorithm that uses at most $f(s)$ qubits, for some function $f(\cdot)$, with no restriction on the amount of classical memory consumed by either the verifier or the simulator. We explore this notion and establish both positive and negative results: - For verifiers with logarithmic quantum space $s$ and (arbitrary) polynomial classical space, we show that $(s,f)$-space-bounded QZK, for $f(s)=2s$, can be achieved based on the existence of post-quantum one-way functions. Moreover, our protocol runs in constant rounds. - For verifiers with super-logarithmic quantum space $s$, assuming the existence of post-quantum secure one-way functions, we show that $(s,f)$-space-bounded QZK protocols with fully black-box simulation (the classical analogue of black-box simulation) can only be achieved for languages in BQP.
Last updated:  2022-10-12
Zero-Knowledge Optimal Monetary Policy under Stochastic Dominance
David Cerezo Sánchez
Optimal simple rules for the monetary policy of the first stochastically dominant crypto-currency are derived in a Dynamic Stochastic General Equilibrium (DSGE) model, in order to provide optimal responses to changes in inflation, output, and other sources of uncertainty. The optimal monetary policy stochastically dominates all the previous crypto-currencies, thus the efficient portfolio is to go long on the stochastically dominant crypto-currency: a strategy-proof arbitrage featuring a higher Omega ratio with higher expected returns, inducing an investment-efficient Nash equilibrium over the crypto-market. Zero-knowledge proofs of the monetary policy are committed on the blockchain: an implementation is provided.
Last updated:  2022-10-12
A Fast Hash Family for Memory Integrity
Qiming Li, Sampo Sovio
We give a first construction of an $\epsilon$-balanced hash family based on linear transformations of vectors in $\mathbb{F}_2$, where $\epsilon = 1/(2^n-1)$ for $n$-bit hash values, regardless of the message size. The parameter $n$ is also the bit length of the input blocks and the internal state, and can be chosen arbitrarily without design changes. This hash family is fast, easily parallelized, and requires no initial setup. A secure message authentication code can be obtained by combining the hash family with a pseudorandom function. These features make the hash family attractive for memory integrity protection, while allowing generic use cases.
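As a toy illustration of the balancedness property (not the paper's construction, which achieves $\epsilon = 1/(2^n-1)$ independently of message size), the sketch below hashes message blocks with random linear transformations over $\mathbb{F}_2$ and empirically checks how often a fixed nonzero message hashes to zero:

```python
import numpy as np

rng = np.random.default_rng(1)
n, blocks, trials = 8, 4, 20000        # toy parameters, illustration only

def linear_hash(key_mats, message):
    # h(m) = sum_i A_i * m_i over F_2: each key is a tuple of random linear
    # transformations, one per n-bit input block.
    acc = np.zeros(n, dtype=np.uint8)
    for A, block in zip(key_mats, message):
        acc ^= (A @ block) & 1          # & 1 reduces the matrix product mod 2
    return acc

msg = [rng.integers(0, 2, n, dtype=np.uint8) for _ in range(blocks)]
msg[0][0] = 1                           # ensure the message is nonzero

# An eps-balanced family bounds Pr_k[h_k(m) = 0] by eps for every nonzero m;
# for fully random matrices that probability is 2^-n.
hits = sum(
    1 for _ in range(trials)
    if not linear_hash([rng.integers(0, 2, (n, n), dtype=np.uint8)
                        for _ in range(blocks)], msg).any()
)
print(f"empirical Pr[h(m)=0] = {hits/trials:.5f}   2^-n = {2**-n:.5f}")
```

Combining such a hash with a PRF applied to the digest yields a MAC, as in the paper's memory-integrity application.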
Last updated:  2022-10-20
Improved Differential and Linear Trail Bounds for ASCON
Solane El Hirch, Silvia Mella, Alireza Mehrdad, Joan Daemen
ASCON is a family of cryptographic primitives for authenticated encryption and hashing introduced in 2015. It is selected as one of the ten finalists in the NIST Lightweight Cryptography competition. Since its introduction, ASCON has been extensively cryptanalyzed, and the results of these analyses indicate the good resistance of this family of cryptographic primitives against known attacks, like differential and linear cryptanalysis. Proving upper bounds for the differential probability of differential trails and for the squared correlation of linear trails is a standard requirement to evaluate the security of cryptographic primitives. It can be done analytically for some primitives like AES. For other primitives, computer assistance is required to prove strong upper bounds for differential and linear trails. Computer-aided tools can be classified into two categories: tools based on general-purpose solvers and dedicated tools. General-purpose solvers such as SAT and MILP are widely used to prove these bounds; however, they seem to have lower capabilities and thus yield less powerful bounds compared to dedicated tools. In this work, we present a dedicated tool for trail search in ASCON. We arrange 2-round trails in a tree and traverse this tree efficiently using a number of new techniques we introduce. Then we extend these trails to more rounds, where we also use the tree traversal technique to do so efficiently. This allows us to scan much larger spaces of trails faster than previous methods using general-purpose solvers. As a result, we prove tight bounds for 3-round linear trails, and for both differential and linear trails, we improve the existing upper bounds for other numbers of rounds. In particular, for the first time, we prove bounds beyond $2^{-128}$ for 6 rounds and beyond $2^{-256}$ for 12 rounds of both differential and linear trails.
Last updated:  2022-10-12
Modeling Effective Lifespan of Payment Channels
Soheil Zibakhsh Shabgahi, Seyed Mahdi Hosseini, Seyed Pooya Shariatpanahi, Behnam Bahrak
While being decentralized, secure, and reliable, Bitcoin and many other blockchain-based cryptocurrencies suffer from scalability issues. One of the promising proposals to address this problem is off-chain payment channels. Since not all nodes are connected directly to each other, they can use a payment network to route their payments. Each node allocates a balance that is frozen during the channel's lifespan. Spending and receiving transactions shift the balance to one side of the channel. A channel becomes unbalanced when there is not sufficient balance in one direction; in this case, we say the effective lifespan of the channel has ended. In this paper, we develop a mathematical model to predict the expected effective lifespan of a channel based on the network's topology. We investigate the impact of channel unbalancing on the payment network and individual channels, and we discuss the effect of certain characteristics of payment channels on their lifespan. Our case study on a snapshot of the Lightning Network shows how the effective lifespan is distributed, and how it is correlated with other network characteristics. Our results show that central unbalanced channels have a drastic effect on network performance.
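A minimal Monte-Carlo sketch of this view (toy parameters; the paper derives expectations analytically from the network topology) models the channel balance as a random walk and measures the time until the channel becomes unbalanced:

```python
import random

def expected_lifespan(capacity=100, start=50, p_receive=0.5, trials=2000):
    # The channel balance performs a (possibly biased) random walk on
    # [0, capacity]; the effective lifespan ends when one direction runs
    # out of balance, i.e., the walk hits 0 or capacity.
    total = 0
    for _ in range(trials):
        balance, steps = start, 0
        while 0 < balance < capacity:
            balance += 1 if random.random() < p_receive else -1
            steps += 1
        total += steps
    return total / trials

# For unbiased traffic this is gambler's ruin: the expected lifespan is
# about start * (capacity - start) payments (2500 here).
print(expected_lifespan())
```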
Last updated:  2023-04-23
From the Hardness of Detecting Superpositions to Cryptography: Quantum Public Key Encryption and Commitments
Minki Hhan, Tomoyuki Morimae, Takashi Yamakawa
Recently, Aaronson et al. (arXiv:2009.07450) showed that detecting interference between two orthogonal states is as hard as swapping these states. While their original motivation was from quantum gravity, we show its applications in quantum cryptography. 1. We construct the first public key encryption scheme from cryptographic non-abelian group actions. Interestingly, ciphertexts of our scheme are quantum even if messages are classical. This resolves an open question posed by Ji et al. (TCC ’19). We construct the scheme through a new abstraction called swap-trapdoor function pairs, which may be of independent interest. 2. We give a simple and efficient compiler that converts the flavor of quantum bit commitments. More precisely, for any prefix X, Y $\in$ {computationally,statistically,perfectly}, if the base scheme is X-hiding and Y-binding, then the resulting scheme is Y-hiding and X-binding. Our compiler calls the base scheme only once. Previously, all known compilers call the base schemes polynomially many times (Crépeau et al., Eurocrypt ’01 and Yan, Asiacrypt ’22). For the security proof of the conversion, we generalize the result of Aaronson et al. by considering quantum auxiliary inputs.
Last updated:  2022-10-12
Efficient Public Key Searchable Encryption Schemes from Standard Hard Lattice Problems for Cloud Computing
Lijun Qi, Jincheng Zhuang
Cloud storage and computing offers significant convenience and management efficiency in the information era. Privacy protection is a major challenge in cloud computing. Public key encryption with keyword search (PEKS) is an ingenious tool for ensuring privacy and functionality in certain scenarios, such as private data retrieval in cloud computing. Despite the considerable attention it has received, PEKS still faces several challenges in practical applications, such as low computational efficiency, high end-to-end delay, vulnerability to inside keyword guessing attacks (IKGA), and key management defects in the multi-user environment. In this work, we introduce three Ring-LWE/ISIS based PEKS schemes: (1) Our basic PEKS scheme achieves a high level of security in the standard model. (2) Our PAEKS scheme uses the sender's private key to generate an authentication value during encryption, which resists IKGA. (3) Our IB-PAEKS scheme not only resists IKGA, but also significantly reduces the complexity of key management in practical applications. Experimental results indicate that the first scheme provides lower end-to-end delay and higher computational efficiency compared to similar ones, and that our last two schemes provide stronger security properties with little additional overhead.
Last updated:  2022-10-12
ZKBdf: A ZKBoo-based Quantum-Secure Verifiable Delay Function with Prover-secret
Teik Guan Tan, Vishal Sharma, Zengpeng Li, Pawel Szalachowski, Jianying Zhou
Since the formalization of Verifiable Delay Functions (VDF) by Boneh et al. in 2018, VDFs have been adopted for use in blockchain consensus protocols and random beacon implementations. However, the impending threat to VDF-based applications comes in the form of Shor's algorithm running on future quantum computers, which can break the discrete logarithm and integer factorization problems that existing VDFs are based on. Clearly, there is a need for quantum-secure VDFs. In this paper, we propose ZKBdf, which uses ZKBoo, a zero-knowledge proof system for verifiable computation, as the basis for realizing a quantum-secure VDF. We describe the algorithm, provide the security proofs, implement the scheme, and measure the execution and size requirements. In addition, as ZKBdf extends the standard VDF with an extra “Prover-secret” feature, new VDF use-cases are also explored.
Last updated:  2022-10-17
Security and Quantum Computing: An Overview
Prasanna Ravi, Anupam Chattopadhyay, Shivam Bhasin
The promise of scalable quantum computing is causing major upheaval in the domain of cryptography and security. In this perspective paper, we review the progress towards the realization of large-scale quantum computing. We further summarize the imminent threats towards existing cryptographic primitives. To address these challenges, there is a consolidated effort towards the standardization of new cryptographic primitives, namely post-quantum cryptography (PQC). We discuss the underlying mathematical problems that define different classes of PQC candidates, and their resistance to an adversary with access to a large quantum computer. In parallel to this thread of research, several classical cryptographic primitives have been ported to the quantum world as well. In that context, we discuss Quantum Key Distribution (QKD), Physically Unclonable Functions (PUF) and True Random Number Generators (TRNG), and take a brief look at their implementation-related vulnerabilities.
Last updated:  2023-09-13
On the Security of KOS
Benjamin E. Diamond
We study the security of the random oblivious transfer extension protocol of Keller, Orsini, and Scholl (CRYPTO '15), whose security proof was recently invalidated by Roy (CRYPTO '22). We show that KOS is asymptotically secure. Our proof involves a subtle analysis of the protocol's "correlation check", and introduces several new techniques. We also study the protocol's concrete security. We establish concrete security for security parameter values on the order of 5,000. We present evidence that a stronger result than ours—if possible—is likely to require radically new ideas.
Last updated:  2022-11-23
A New Post-Quantum Key Agreement Protocol and Derived Cryptosystem Based on Rectangular Matrices
Hugo Daniel Scolnik, Juan Pedro Hecht
In this paper, we present an original algorithm to generate session keys and a subsequent generalized ElGamal-type cryptosystem. The scheme presented here has been designed to prevent both linear and brute-force attacks using rectangular matrices and to achieve high complexity. Our algorithm includes a new generalized Diffie-Hellman scheme based on rectangular matrices and polynomial field operations. Two variants are presented, the first with a double exchange between the parties and the second with a single exchange, thus speeding up the generation of session keys.
Last updated:  2023-09-26
Network-Agnostic Security Comes (Almost) for Free in DKG and MPC
Renas Bacho, Daniel Collins, Chen-Da Liu-Zhang, and Julian Loss
Distributed key generation (DKG) protocols are an essential building block for threshold cryptosystems. Many DKG protocols tolerate up to $t_s<n/2$ corruptions assuming a well-behaved synchronous network, but become insecure as soon as the network delay becomes unstable. On the other hand, solutions in the asynchronous model operate under arbitrary network conditions, but only tolerate $t_a<n/3$ corruptions, even when the network is well-behaved. In this work, we ask whether one can design a protocol that achieves security guarantees in either scenario. We show a complete characterization of network-agnostic DKG protocols, showing that the tight bound is $t_a+2t_s <n$. As a second contribution, we provide an optimized version of the network-agnostic MPC protocol by Blum, Liu-Zhang and Loss [CRYPTO'20] which improves over the communication complexity of their protocol by a linear factor. Moreover, using our DKG protocol, we can instantiate our MPC protocol in the plain PKI model, i.e., without the need to assume an expensive trusted setup. Our protocols incur the same communication complexity as state-of-the-art DKG and MPC protocols with optimal resilience in their respective purely synchronous and asynchronous settings, thereby showing that network-agnostic security comes (almost) for free.
Last updated:  2023-02-28
Functional Commitments for All Functions, with Transparent Setup and from SIS
Leo de Castro, Chris Peikert
A *functional commitment* scheme enables a user to concisely commit to a function from a specified family, then later concisely and verifiably reveal values of the function at desired inputs. Useful special cases, which have seen applications across cryptography, include vector commitments and polynomial commitments. To date, functional commitments have been constructed (under falsifiable assumptions) only for functions that are essentially *linear*, with one recent exception that works for arbitrarily complex functions. However, that scheme operates in a strong and non-standard model, requiring an online, trusted authority to generate special keys for any opened function inputs. In this work, we give the first functional commitment scheme for nonlinear functions---indeed, for *all functions* of any bounded complexity---under a standard setup and a falsifiable assumption. Specifically, the setup is ``transparent,'' requiring only public randomness (and not any trusted entity), and the assumption is the hardness of the standard Short Integer Solution (SIS) lattice problem. Our construction also has other attractive features, including: *stateless updates* via generic composability; excellent *asymptotic efficiency* for the verifier, and also for the committer in important special cases like vector and polynomial commitments, via preprocessing; and *post-quantum security*, since it is based on SIS.
Last updated:  2023-09-26
Agile Cryptography: A Universally Composable Approach
Christian Badertscher, Michele Ciampi, and Aggelos Kiayias
Being capable of updating cryptographic algorithms is an inevitable and essential practice in cryptographic engineering. This cryptographic agility, as it has been called, is a fundamental desideratum for long term cryptographic system security that still poses significant challenges from a modeling perspective. For instance, current formulations of agility fail to express the fundamental security that is expected to stem from timely implementation updates, namely the fact that the system retains some of its security properties provided that the update is performed prior to the deprecated implementation becoming exploited. In this work we put forth a novel framework for expressing updateability in the context of cryptographic primitives within the universal composition model. Our updatable ideal functionality framework provides a general template for expressing the security we expect from cryptographic agility capturing in a fine-grained manner all the properties that can be retained across implementation updates. We exemplify our framework over two basic cryptographic primitives, digital signatures and non-interactive zero-knowledge (NIZK), where we demonstrate how to achieve updateability with consistency and backwards-compatibility across updates in a composable manner. We also illustrate how our notion is a continuation of a much broader scope of the concept of agility introduced by Acar, Belenkiy, Bellare, and Cash in Eurocrypt 2010 in the context of symmetric cryptographic primitives.
Last updated:  2022-10-11
Two remarks on the vectorization problem
Wouter Castryck, Natan Vander Meeren
We share two small but general observations on the vectorization problem for group actions, which appear to have been missed by the existing literature. The first observation is pre-quantum: explicit examples show that, for classical adversaries, the vectorization problem cannot in general be reduced to the parallelization problem. The second observation is post-quantum: by combining a method for solving systems of linear disequations due to Ivanyos with a Kuperberg-style sieve, one can solve the hidden shift problem, and therefore the vectorization problem, for any finite abelian $2^tp^k$-torsion group in polynomial time and using mostly classical work; here $t, k$ are any fixed non-negative integers and $p$ is any fixed prime number.
Last updated:  2023-06-15
Chainable Functional Commitments for Unbounded-Depth Circuits
David Balbás, Dario Catalano, Dario Fiore, Russell W. F. Lai
A functional commitment (FC) scheme allows one to commit to a vector $\vec{x}$ and later produce a short opening proof of $(f, f(\vec{x}))$ for any admissible function $f$. Since their inception, FC schemes supporting ever more expressive classes of functions have been proposed. In this work, we introduce a novel primitive that we call chainable functional commitment (CFC), which extends the functionality of FCs by allowing one to 1) open to functions of multiple inputs $f(\vec x_1, \ldots, \vec x_m)$ that are committed independently, 2) while preserving the output also in committed form. We show that CFCs for quadratic polynomial maps generically imply FCs for circuits. Then, we efficiently realize CFCs for quadratic polynomials over pairing groups and lattices, resulting in the first FC schemes for circuits of unbounded depth based on either pairing-based or lattice-based falsifiable assumptions. Our FCs require fixing a-priori only the maximal width of the circuit to be evaluated, and have opening proofs whose size only depends on the depth of the circuit. Additionally, our FCs feature other nice properties such as being additively homomorphic and supporting sublinear-time verification after offline preprocessing. Using a recent transformation that constructs homomorphic signatures (HS) from FCs, we obtain the first pairing- and lattice-based realisations of HS for bounded-width, but unbounded-depth, circuits. Prior to this work, the only HS for general circuits is lattice-based and requires bounding the circuit depth at setup time.
Last updated:  2023-11-18
On Polynomial Functions Modulo $p^e$ and Faster Bootstrapping for Homomorphic Encryption
Robin Geelen, Ilia Iliashenko, Jiayi Kang, and Frederik Vercauteren
In this paper, we perform a systematic study of functions $f: \mathbb{Z}_{p^e} \to \mathbb{Z}_{p^e}$ and categorize those functions that can be represented by a polynomial with integer coefficients. More specifically, we cover the following properties: necessary and sufficient conditions for the existence of an integer polynomial representation; computation of such a representation; and the complete set of equivalent polynomials that represent a given function. As an application, we use the newly developed theory to speed up bootstrapping for the BGV and BFV homomorphic encryption schemes. The crucial ingredient underlying our improvements is the existence of null polynomials, i.e. non-zero polynomials that evaluate to zero at every point. We exploit the rich algebraic structure of these null polynomials to find better representations of the digit extraction function, which is the main bottleneck in bootstrapping. As such, we obtain sparse polynomials that have 50% fewer coefficients than the original ones. In addition, we propose a new method to decompose digit extraction as a series of polynomial evaluations. This lowers the time complexity from $\mathcal{O}(\sqrt{pe})$ to $\mathcal{O}(\sqrt{p}\sqrt[4]{e})$ for digit extraction modulo $p^e$, at the cost of a slight increase in multiplicative depth. Overall, our implementation in HElib shows a significant speedup of a factor up to 2.6 over the state-of-the-art.
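A minimal example of such a null polynomial, checked in Python (illustrative only; the paper develops the complete algebraic structure): since one of any $p$ consecutive integers is divisible by $p$, the polynomial $\big(x(x-1)\cdots(x-p+1)\big)^e$ evaluates to $0$ modulo $p^e$ at every point.

```python
from math import prod

def is_null_poly(f, p, e):
    # A null polynomial evaluates to 0 mod p^e at every point of Z_{p^e}.
    return all(f(x) % p**e == 0 for x in range(p**e))

p, e = 3, 2
f = lambda x: prod(x - i for i in range(p)) ** e
print(is_null_poly(f, p, e))   # True: adding f to any polynomial leaves the
                               # represented function unchanged modulo p^e
```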
Last updated:  2023-11-18
Bootstrapping for BGV and BFV Revisited
Robin Geelen and Frederik Vercauteren
We unify the state-of-the-art bootstrapping algorithms for BGV and BFV in a single framework, and show that both schemes can be bootstrapped with identical complexity. This result corrects a claim by Chen and Han (Eurocrypt 2018) that BFV is more efficient to bootstrap than BGV. We also fix an error in their optimized procedure for power-of-two cyclotomics, which occurs for some parameter sets. Our analysis is simpler, yet more general than earlier work, in that it simultaneously covers both BGV and BFV. Furthermore, we also design and implement a high-level open source software library for bootstrapping in the Magma Computer Algebra System. It is the first library to support both BGV and BFV bootstrapping in full generality, with all recent techniques (including the above fixes) and trade-offs.
Last updated:  2024-01-05
ALLOSAUR: Accumulator with Low-Latency Oblivious Sublinear Anonymous credential Updates with Revocations
Samuel Jaques, Michael Lodder, and Hart Montgomery
A cryptographic accumulator is a space- and time-efficient data structure with associated algorithms used for secure membership testing. In the growing space of digital credentials, accumulators have found use in managing a set of valid credentials, giving efficient and anonymous methods for credential holders to prove their validity. Unlike traditional credentials like digital signatures, one can easily revoke credentials with an accumulator; however, each revocation forces existing credential holders to engage in an expensive update process. Previous works make this faster and easier by sacrificing anonymity. To improve performance without compromising privacy, we present ALLOSAUR, a multi-party accumulator based on pairings. In ALLOSAUR, we eliminate the cost of accumulating new credentials, let "credential managers" manage the accumulator values with secure multiparty computation, and allow anonymous credential updates with a square-root reduction in communication costs as compared to existing work. A deployed digital credential system is a vast and complicated system, and existing formalisms do not fully address the scope or power of a real-world adversary. We develop a thorough UC-style formalism that allows arbitrary malicious behaviour from an adversary controlling a minority of credential managers and arbitrary numbers of users, credentials, and verifiers. In our new formalism we present a novel definition of privacy that captures as much anonymity as possible while accounting for inevitable losses from interaction with the system. The detail in our formalism reveals real-world issues in existing accumulator constructions, all of which ALLOSAUR avoids. Our proof-of-concept implementation can update over 1000 revocations with less than half a second of total computation and 16 kB of communication, at least a 5x improvement over the previous state of the art in both metrics.
Last updated:  2022-10-11
Correlation Electromagnetic Analysis on an FPGA Implementation of CRYSTALS-Kyber
Rafael Carrera Rodriguez, Florent Bruguier, Emanuele Valea, Pascal Benoit
Post-quantum cryptography represents a category of cryptosystems resistant to quantum algorithms. Recently, NIST launched a process to standardize one or more such algorithms in the key encapsulation mechanism and signature categories. Such schemes are under scrutiny regarding their mathematical security, but they are not side-channel secure at the algorithm level. That is why their side-channel vulnerabilities must be assessed by the research community. In this paper, we present a non-profiled correlation electromagnetic analysis against an FPGA implementation of the chosen NIST key-encapsulation mechanism standard, CRYSTALS-Kyber. The attack correlates an electromagnetic radiation model of the polynomial multiplication execution with the captured traces. With 166,620 traces, this attack correctly recovers 100% of the subkeys. Furthermore, a countermeasure is presented for securing the target implementation against the presented attack.
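The core of such a non-profiled attack is classical correlation analysis: predict the Hamming-weight leakage of a key-dependent intermediate and rank key guesses by their Pearson correlation against the traces. The sketch below demonstrates this on synthetic traces (the intermediate and the noise model are illustrative stand-ins for the paper's Kyber-specific leakage target):

```python
import numpy as np

rng = np.random.default_rng(0)
Q = 3329                                    # Kyber modulus
hw = np.array([bin(v).count("1") for v in range(Q)])

def intermediate(coeff, secret):
    # Key-dependent intermediate of the polynomial multiplication; a
    # stand-in for the implementation's actual leakage point.
    return (coeff * secret) % Q

true_secret, n_traces = 1234, 5000
coeffs = rng.integers(0, Q, n_traces)
# Synthetic EM traces: Hamming weight of the intermediate plus Gaussian noise.
traces = hw[intermediate(coeffs, true_secret)] + rng.normal(0, 2.0, n_traces)

def corr(guess):
    model = hw[intermediate(coeffs, guess)]
    return abs(np.corrcoef(model, traces)[0, 1])

# Rank candidate subkeys by correlation; the correct one stands out.
cands = [17, 600, 1234, 2022, 3000]
print(max(cands, key=corr))                 # 1234
```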
Last updated:  2023-12-22
One for All, All for One: A Unified Evaluation Framework for Univariate DPA Attacks
Jiangshan Long, Chenxu Wang, Changhai Ou, Zhu Wang, Yongbin Zhou, and Ming Tang
Success Rate (SR) is one of the most popular security metrics measuring the efficiency of side-channel attacks. Its theoretical expression reveals the functional dependency on critical parameters such as the number of measurements and the Signal-to-Noise Ratio (SNR), helping evaluators understand the threat of an attack as well as how one can mitigate it with proper countermeasures. So far, however, existing works have exposed fundamental problems: (i) the evaluations are restricted to a very limited number of distinguishers, and the methods in the literature seem specialized (i.e., hard to extend); (ii) the evaluations assume an a-priori perfect leakage model, which lacks practical relevance and ignores the fact that inaccurate profiling may lead to information loss and a distorted SR. In this paper, we tackle the above problems by providing an evaluation framework where different univariate DPA distinguishers are intuitively unified as a linear maximum likelihood attack seeking the closest `distance' between vectors in Euclidean space. We argue that this is an intrinsic property of the DPA mechanism and is independent of the leakage model. Then, we abstract the concept of SR and derive its theoretical expression in a geometric way. Finally, the theory allows a further study of the leakage model, where we formalize a criterion explaining the impact of model errors as well as guaranteeing robust performance. We transfer the model effects to a degraded SNR parameter. Experimental results are in line with the theory, confirming that our theoretical expression coincides with the empirical ones.
Last updated:  2024-02-08
Probabilistic Hash-and-Sign with Retry in the Quantum Random Oracle Model
Haruhisa Kosuge and Keita Xagawa
A hash-and-sign signature based on a preimage-sampleable function (Gentry et al., STOC 2008) is secure in the quantum random oracle model if the preimage-sampleable function is collision-resistant (Boneh et al., ASIACRYPT 2011) or one-way (Zhandry, CRYPTO 2012). However, trapdoor functions in code-based and multivariate-quadratic-based signatures are not preimage-sampleable functions; for example, underlying trapdoor functions of the Courtois-Finiasz-Sendrier, Unbalanced Oil and Vinegar (UOV), and Hidden Field Equations (HFE) signatures are not surjections. Thus, such signature schemes adopt probabilistic hash-and-sign with retry. While Sakumoto et al. in PQCRYPTO 2011 showed the security of this paradigm in the classical random oracle model, their proof contains an error. Also, there is currently no known security proof for the probabilistic hash-and-sign with retry in the quantum random oracle model. We correct the proof in the random oracle model and give the first security proof in the quantum random oracle model for the probabilistic hash-and-sign with retry, assuming that the underlying trapdoor function is non-invertible, that is, it is hard to find a preimage of a given random value in the range. Our reduction from the non-invertibility assumption is tighter than the existing ones that apply only to signature schemes based on preimage-sampleable functions. We apply the security proof to code-based and multivariate-quadratic-based signatures. Additionally, we extend the proof into the multi-key setting and propose a generic method that provides security reduction without any security loss in the number of keys.
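For concreteness, the paradigm under analysis can be outlined as follows (a generic Python sketch assuming an abstract, non-surjective trapdoor function; `has_preimage`, `invert` and `apply_T` are illustrative placeholders, not any particular scheme):

```python
import os, hashlib

def sign(sk, msg: bytes, has_preimage, invert):
    # Probabilistic hash-and-sign with retry: keep resampling the salt until
    # the digest lies in the range of the trapdoor function T, then invert.
    # This is the structure used by the CFS, UOV and HFE signatures.
    while True:
        salt = os.urandom(16)
        y = hashlib.shake_256(salt + msg).digest(32)
        if has_preimage(sk, y):           # does y have a preimage under T?
            return salt, invert(sk, y)    # signature is (salt, preimage)

def verify(pk, msg: bytes, sig, apply_T) -> bool:
    salt, x = sig
    return apply_T(pk, x) == hashlib.shake_256(salt + msg).digest(32)
```

The retry loop is exactly what complicates the random-oracle proofs: rejected salts also query the oracle, and the reduction must handle those queries consistently.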
Last updated:  2022-11-04
Commitments to Quantum States
Sam Gunn, Nathan Ju, Fermi Ma, Mark Zhandry
What does it mean to commit to a quantum state? In this work, we propose a simple answer: a commitment to quantum messages is binding if, after the commit phase, the committed state is hidden from the sender's view. We accompany this new definition with several instantiations. We build the first non-interactive succinct quantum state commitments, which can be seen as an analogue of collision-resistant hashing for quantum messages. We also show that hiding quantum state commitments (QSCs) are implied by any commitment scheme for classical messages. All of our constructions can be based on quantum-cryptographic assumptions that are implied by but are potentially weaker than one-way functions. Commitments to quantum states open the door to many new cryptographic possibilities. Our flagship application of a succinct QSC is a quantum-communication version of Kilian's succinct arguments for any language that has quantum PCPs with constant error and polylogarithmic locality. Plugging in the PCP theorem, this yields succinct arguments for NP under significantly weaker assumptions than required classically; moreover, if the quantum PCP conjecture holds, this extends to QMA. At the heart of our security proof is a new rewinding technique for extracting quantum information.
Last updated:  2023-09-21
A Theory of Composition for Differential Obliviousness
Mingxun Zhou, Elaine Shi, T-H. Hubert Chan, and Shir Maimon
Differential obliviousness (DO) is a privacy notion for access patterns which guarantees that the access patterns of a program satisfy differential privacy. Differential obliviousness was studied in a sequence of recent works as a relaxation of full obliviousness. Earlier works showed that DO not only allows us to circumvent the logarithmic-overhead barrier of fully oblivious algorithms; in many cases, it also allows us to achieve polynomial speedup over full obliviousness, since it avoids the "padding to the worst-case" behavior of fully oblivious algorithms. Despite the promises of differential obliviousness (DO), a significant barrier that hinders its broad application is the lack of composability. In particular, when we apply one DO algorithm to the output of another DO algorithm, the composed algorithm may no longer be DO (with reasonable parameters). More specifically, the outputs of the first DO algorithm on two neighboring inputs may no longer be neighboring, and thus we cannot directly benefit from the DO guarantee of the second algorithm. In this work, we are the first to explore a theory of composition for differentially oblivious algorithms. We propose a refinement of the DO notion called $(\epsilon, \delta)$-neighbor-preserving-DO, or $(\epsilon, \delta)$-NPDO for short, and we prove that our new notion indeed provides nice compositional guarantees. In this way, the algorithm designer can easily track the privacy loss when composing multiple DO algorithms. We give several example applications to showcase the power and expressiveness of our new NPDO notion. One of these examples is a result of independent interest: we use the compositional framework to prove an optimal privacy amplification theorem for the differentially oblivious shuffle model. In other words, we show that for a class of distributed differentially private mechanisms in the shuffle model, one can replace the perfectly secure shuffler with a DO shuffler and nonetheless enjoy almost the same privacy amplification enabled by a shuffler.
Last updated:  2022-10-10
A fully classical LLL algorithm for modules
Gabrielle De Micheli, Daniele Micciancio
The celebrated LLL algorithm for Euclidean lattices is central to cryptanalysis of well-known and deployed protocols as it provides approximate solutions to the Shortest Vector Problem (SVP). Recent interest in algebraically structured lattices (e.g., for the efficient implementation of lattice-based cryptography) has prompted adaptations of LLL to such structured lattices, and, in particular, to module lattices, i.e., lattices that are modules over algebraic ring extensions of the integers. One of these adaptations is a quantum algorithm proposed by Lee, Pellet-Mary, Stehlé and Wallet (Asiacrypt 2019). In this work, we dequantize the algorithm of Lee et al., and provide a fully classical LLL-type algorithm for arbitrary module lattices that achieves the same SVP approximation factors, single-exponential in the rank of the input module. Just like the algorithm of Lee et al., our algorithm runs in polynomial time given an oracle that solves the Closest Vector Problem (CVP) in a certain, fixed lattice $L_K$ that depends only on the number field $K$.
Last updated:  2023-12-21
HyperPlonk: Plonk with Linear-Time Prover and High-Degree Custom Gates
Binyi Chen, Benedikt Bünz, Dan Boneh, and Zhenfei Zhang
Plonk is a widely used succinct non-interactive proof system that uses univariate polynomial commitments. Plonk is quite flexible: it supports circuits with low-degree ``custom'' gates as well as circuits with lookup gates (a lookup gate ensures that its input is contained in a predefined table). For large circuits, the bottleneck in generating a Plonk proof is the need for computing a large FFT. We present HyperPlonk, an adaptation of Plonk to the boolean hypercube, using multilinear polynomial commitments. HyperPlonk retains the flexibility of Plonk but provides several additional benefits. First, it avoids the need for an FFT during proof generation. Second, and more importantly, it supports custom gates of much higher degree than Plonk without harming the running time of the prover. Both of these can dramatically speed up the prover's running time. Since HyperPlonk relies on multilinear polynomial commitments, we revisit two elegant constructions: one from Orion and one from Virgo. We show how to reduce the Orion opening proof size to less than 10 KB (almost a factor-1000 improvement) and show how to make the Virgo FRI-based opening proof simpler and shorter.
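At the heart of the move from univariate polynomials to the boolean hypercube is the multilinear extension (MLE) of the vector of wire values. A short sketch of MLE evaluation over a toy prime field (illustrative, not HyperPlonk's code):

```python
def mle_eval(values, point, p=2**61 - 1):
    # Evaluate the multilinear extension of `values` (indexed by {0,1}^n) at
    # `point` in F_p^n by folding one variable at a time:
    #   f(x, rest) = (1 - x) * f(0, rest) + x * f(1, rest),
    # where each point coordinate folds the low-order index bit first.
    vals = list(values)
    for x in point:
        vals = [((1 - x) * lo + x * hi) % p
                for lo, hi in zip(vals[0::2], vals[1::2])]
    return vals[0]

vals = [3, 1, 4, 1, 5, 9, 2, 6]            # 2^3 values on the cube
# At a boolean point the MLE recovers the table entry: coordinates (1, 1, 0)
# with the first coordinate as the low-order bit select index 0b011 = 3.
assert mle_eval(vals, [1, 1, 0]) == vals[3]
```

Sum-check-based arguments such as HyperPlonk then reduce claims about all $2^n$ gate constraints to a single evaluation of such an MLE at a random point.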
Last updated:  2022-10-10
Embracing Hellman: A Simple Proof-of-Space Search consensus algorithm with stable block times using Logarithmic Embargo
Marijn F. Stollenga
Cryptocurrencies have become tremendously popular since the creation of Bitcoin. However, its central Proof-of-Work consensus mechanism is very power-hungry. As an alternative, Proof-of-Space (PoS) was introduced, which uses storage instead of computation to create a consensus. However, current PoS implementations are complex and sensitive to the Nothing-at-Stake problem, and use mitigations that affect their permissionless and decentralised nature. We introduce Proof-of-Space Search (PoSS), which embraces Hellman's time-memory trade-off to create a much simpler algorithm that avoids the Nothing-at-Stake problem. Additionally, we greatly stabilise block times using a novel dynamic Logarithmic Embargo (LE) rule. Combined, we show that PoSSLE is a simple and stable alternative to PoW with many of its properties, while being an estimated 10 times more energy efficient and sustaining consistent block times.
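The time-memory trade-off flavor can be sketched in a few lines (toy parameters and a simplified quality measure; the actual PoSS challenge-response and difficulty rules are more involved):

```python
import hashlib

def H(x: int) -> int:
    return int.from_bytes(hashlib.sha256(x.to_bytes(8, "big")).digest()[:4], "big")

# "Plotting": spend storage once so that later searches are cheap, in the
# spirit of Hellman's time-memory trade-off.
PREFIX_BITS = 12
table = {}
for nonce in range(1 << 16):
    table.setdefault(H(nonce) >> (32 - PREFIX_BITS), nonce)

def prove(challenge_prefix: int):
    # Answer a challenge by table lookup instead of brute-force hashing; a
    # larger plot yields longer matching prefixes, i.e., better proofs.
    return table.get(challenge_prefix)

challenge = H(20221010) >> (32 - PREFIX_BITS)
print(prove(challenge))
```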
Last updated:  2023-09-21
Anonymous Permutation Routing
Paul Bunn, Eyal Kushilevitz, and Rafail Ostrovsky
The Non-Interactive Anonymous Router (NIAR) model was introduced by Shi and Wu [SW21] as an alternative to conventional solutions to the anonymous routing problem, in which a set of senders wish to send messages to a set of receivers. In contrast to most known approaches to support anonymous routing (e.g. mix-nets, DC-nets, etc.) which rely on a network of routers communicating with users via interactive protocols, the NIAR model assumes a $single$ router and is inherently $non$-$interactive$ (after an initial setup phase). In addition to being non-interactive, the NIAR model is compelling due to the security it provides: instead of relying on the honesty of some subset of the routers, the NIAR model requires anonymity even if the router (as well as an arbitrary subset of senders/receivers) is corrupted. In this paper, we present a protocol for the NIAR model that improves upon the results from [SW21] in two ways: - Improved computational efficiency (quadratic to near linear): Our protocol matches the communication complexity of [SW21] for each sender/receiver, while reducing the computational overhead for the router to polylog overhead instead of linear overhead. - Relaxation of assumptions: Security of the protocol in [SW21] relies on the Decisional Linear assumption in bilinear groups; while security for our protocol follows from the existence of any rate-1 oblivious transfer (OT) protocol (instantiations of this primitive are known to exist under DDH, QR and LWE [DGI19,GHO20]).
Last updated:  2022-11-10
aPlonK: Aggregated PlonK from Multi-Polynomial Commitment Schemes
Miguel Ambrona, Marc Beunardeau, Anne-Laure Schmitt, Raphaël R. Toledo
PlonK is a prominent universal and updatable zk-SNARK for general circuit satisfiability. We present aPlonK, a variant of PlonK that reduces the proof size and verification time when multiple statements are proven in a batch. Both the aggregated proof size and the verification complexity of aPlonK are logarithmic in the number of aggregated statements. Our main building block, inspired by the techniques developed in SnarkPack (Gailly, Maller, Nitulescu, FC 2022), is a multi-polynomial commitment scheme, a new primitive that generalizes polynomial commitment schemes. Our techniques also include a mechanism for involving committed data into PlonK statements very efficiently, which can be of independent interest. We also implement an open-source industrial-grade library for zero-knowledge PlonK proofs with support for aPlonK. Our experimental results show that our techniques are suitable for real-world applications (such as blockchain rollups), achieving significant performance improvements in proof size and verification time.
Last updated:  2023-02-21
Better Steady than Speedy: Full break of SPEEDY-7-192
Christina Boura, Nicolas David, Rachelle Heim Boissier, Maria Naya-Plasencia
Differential attacks are among the most important families of cryptanalysis against symmetric primitives. Since their introduction in 1990, several improvements to the basic technique as well as many dedicated attacks against symmetric primitives have been proposed. Most of the proposed improvements concern the key-recovery part. However, when designing a new primitive, the security analysis regarding differential attacks is often limited to finding the best trails over a limited number of rounds with branch-and-bound techniques, and a poor heuristic is then applied to deduce the total number of rounds a differential attack could reach. In this work we analyze the security of the SPEEDY family of block ciphers against differential cryptanalysis and show how to optimize many of the steps of the key-recovery procedure for this type of attack. For this, we implemented a search for finding optimal trails for this cipher and their associated multiple probabilities under some constraints, and applied non-trivial techniques to obtain optimal data and key-sieving. This permitted us to fully break SPEEDY-7-192, the 7-round variant of SPEEDY that is supposed to provide 192-bit security. Our work demonstrates, among other things, the need to better understand the subtleties of differential cryptanalysis in order to get meaningful estimates on the security offered by a cipher against these attacks.
Last updated:  2023-02-24
Rai-Choo! Evolving Blind Signatures to the Next Level
Lucjan Hanzlik, Julian Loss, Benedikt Wagner
Blind signatures are a fundamental tool for privacy-preserving applications. Known constructions of concurrently secure blind signature schemes either are prohibitively inefficient or rely on non-standard assumptions, even in the random oracle model. A recent line of work (ASIACRYPT `21, CRYPTO `22) initiated the study of concretely efficient schemes based on well-understood assumptions in the random oracle model. However, these schemes still have several major drawbacks: 1) The signer is required to keep state; 2) The computation grows linearly with the number of signing interactions, making the schemes impractical; 3) The schemes require at least five moves of interaction. In this paper, we introduce a blind signature scheme that eliminates all of the above drawbacks at the same time. Namely, we show a round-optimal, concretely efficient, concurrently secure, and stateless blind signature scheme in which communication and computation are independent of the number of signing interactions. Our construction also naturally generalizes to the partially blind signature setting. Our scheme is based on the CDH assumption in the asymmetric pairing setting and can be instantiated using a standard BLS curve. We obtain signature and communication sizes of 9KB and 36KB, respectively. To further improve the efficiency of our scheme, we show how to obtain a scheme with better amortized communication efficiency. Our approach batches the issuing of signatures for multiple messages.
Last updated:  2022-10-10
Invertibility of multiple random functions and its application to symmetric ciphers
Xiutao Feng, Xiaoshan GAO, Zhangyi WANG, Xiangyong ZENG
The invertibility of a random function (IRF, in short) is an important problem and has wide applications in cryptography. For example, searching for a preimage of a hash function, recovering a key of a block cipher under the known-plaintext-attack model, solving discrete logarithms over a prime field with large prime, and so on, can be viewed as its instances. In this work we describe the invertibility of multiple random functions (IMRF, in short), which is a generalization of the IRF. In order to solve the IMRF, we generalize the birthday theorem. Based on the generalized birthday theorem and the time-memory tradeoff (TMTO, in short) method, we present an efficient TMTO method of solving an IMRF, which can be viewed as a generalization of three main TMTO attacks, that is, Hellman's attack, Biryukov and Shamir's attack with BSW sampling, and Biryukov, Mukhopadhyay and Sarkar's time-memory-key tradeoff attack. Our method is highly parallel and suitable for distributed computing environments. As a generalization of Hellman's attack, our method overcomes its shortcoming of using only one pair of known plaintext and ciphertext and first admits more than one datum in a TMTO on block ciphers in the single-key scenario. As a generalization of Biryukov and Shamir's attack with BSW sampling, our method overcomes its shortcoming of using only a few data with specific prefix in stream ciphers and can utilize all data without any waste. As applications, we get two new tradeoff curves: $N^2 = TM^2D^3$, $N = PD$ and $D = \tau$ for block ciphers, and $N^2 = \tau^3 TM^2D^2$, $N = \tau PD$ and $D \geq \tau$ for stream ciphers, where $\tau$ is the number of random functions, that is, the number of independent computing units available to an attacker, $N$ is the size of the key space (for block ciphers) or state space (for stream ciphers), $D$ is the number of data captured by the attacker, and $T$, $M$, $P$ are the time/memory/precomputation cost consumed at each computing unit respectively. As examples, assume that 4096 computing units are available to the attacker. Denote by the 5-tuple $(\tau, T, M, D, P)$ the cost of our method. Then the cost of breaking DES, AES-128 and A5/1 is $(2^{12}, 2^{25.3}, 2^{25.3}, 2^{12}, 2^{44})$, $(2^{12}, 2^{73.3}, 2^{73.3}, 2^{12}, 2^{116})$ and $(2^{12}, 2^{22.7}, 2^{17.3}, 2^{17.3}, 2^{34.7})$ respectively.
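As a quick consistency check of the DES figures against the block-cipher curve: with $N = 2^{56}$, $\tau = D = 2^{12}$ and $P = 2^{44}$ we have $N = PD$, and $N^2 = TM^2D^3$ gives $TM^2 = 2^{112-36} = 2^{76}$, matched (up to rounding) by $T = M = 2^{25.3}$ since $3 \times 25.3 = 75.9$. The A5/1 tuple likewise satisfies the stream-cipher curve: $\tau PD = 2^{12+34.7+17.3} = 2^{64} = N$.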
Last updated:  2022-10-09
ABE for DFA from LWE against Bounded Collusions, Revisited
Hoeteck Wee
We present a new public-key ABE for DFA based on the LWE assumption, achieving security against collusions of a-priori bounded size. Our scheme achieves ciphertext size $\tilde{O}(\ell + B)$ for attributes of length $\ell$ and collusion size $B$. Prior LWE-based schemes have either larger ciphertext size $\tilde{O}(\ell \cdot B)$, or are limited to the secret-key setting. Along the way, we introduce a new technique for lattice trapdoor sampling, which we believe to be of independent interest. Finally, we present a simple candidate public-key ABE for DFA for the unbounded collusion setting.
Last updated:  2023-03-29
Broadcast, Trace and Revoke with Optimal Parameters from Polynomial Hardness
Shweta Agrawal, Simran Kumari, Anshu Yadav, Shota Yamada
A broadcast, trace and revoke system generalizes broadcast encryption as well as traitor tracing. In such a scheme, an encryptor can specify a list $L \subseteq N$ of revoked users so that (i) users in $L$ can no longer decrypt ciphertexts, (ii) ciphertext size is independent of $L$, (iii) a pirate decryption box supports tracing of compromised users. The ``holy grail'' of this line of work is a construction which resists unbounded collusions, achieves all parameters (including public and secret key) sizes independent of $|L|$ and $|N|$, and is based on polynomial hardness assumptions. In this work we make the following contributions: 1. Public Trace Setting: We provide a construction which (i) achieves optimal parameters, (ii) supports embedding identities (from an exponential space) in user secret keys, (iii) relies on polynomial hardness assumptions, namely compact functional encryption (${\sf FE}$) and a key-policy attribute based encryption (${\sf ABE}$) with special efficiency properties, and (iv) enjoys adaptive security with respect to the revocation list. The previous best known construction by Nishimaki, Wichs and Zhandry (Eurocrypt 2016) which achieved optimal parameters and embedded identities, relied on indistinguishability obfuscation, which is considered an inherently subexponential assumption and achieved only selective security with respect to the revocation list. 2. Secret Trace Setting: We provide the first construction with optimal ciphertext, public and secret key sizes and embedded identities from any assumption outside Obfustopia. In detail, our construction relies on Lockable Obfuscation which can be constructed using ${\sf LWE}$ (Goyal, Koppula and Waters; Wichs and Zirdelis; FOCS 2017) and two ${\sf ABE}$ schemes: (i) the key-policy scheme with special efficiency properties by Boneh et al. (Eurocrypt 2014) and (ii) a ciphertext-policy ${\sf ABE}$ for ${\sf P}$ which was recently constructed by Wee (Eurocrypt 2022) using a new assumption called {\it evasive and tensor} ${\sf LWE}$. This assumption, introduced to build an ${\sf ABE}$, is believed to be much weaker than lattice based assumptions underlying ${\sf FE}$ or ${\sf iO}$ -- in particular it is required even for lattice based broadcast, without trace. Moreover, by relying on subexponential security of ${\sf LWE}$, both our constructions can also support a super-polynomial sized revocation list, so long as it allows efficient representation and membership testing. Ours is the first work to achieve this, to the best of our knowledge.
Last updated:  2022-10-09
Generic Signature from Noisy Systems
Trey Li
This paper provides a cryptographic application to our previous paper [Li22h], where we considered noisy systems of discrete exponential equations over a land, which is a monoid without the requirement of associativity. In this paper we give a general methodology for signature scheme construction from noisy systems.
Last updated:  2023-07-07
Revisiting Security Estimation for LWE with Hints from a Geometric Perspective
Dana Dachman-Soled, Huijing Gong, Tom Hanson, Hunter Kippen
The Distorted Bounded Distance Decoding Problem (DBDD) was introduced by Dachman-Soled et al. [Crypto ’20] as an intermediate problem between LWE and unique-SVP (uSVP). They presented an approach that reduces an LWE instance to a DBDD instance, integrates side information (or “hints”) into the DBDD instance, and finally reduces it to a uSVP instance, which can be solved via lattice reduction. They showed that this principled approach can lead to algorithms for side-channel attacks that perform better than ad-hoc algorithms that do not rely on lattice reduction. The current work focuses on new methods for integrating hints into a DBDD instance. We view hints from a geometric perspective, as opposed to the distributional perspective from the prior work. Our approach provides the rigorous promise that, as hints are integrated into the DBDD instance, the correct solution remains a lattice point contained in the specified ellipsoid. We instantiate our approach with two new types of hints: (1) Inequality hints, corresponding to the region of intersection of an ellipsoid and a halfspace; (2) Combined hints, corresponding to the region of intersection of two ellipsoids. Since the regions in (1) and (2) are not necessarily ellipsoids, we replace them with ellipsoidal approximations that circumscribe the region of intersection. Perfect hints are reconsidered as the region of intersection of an ellipsoid and a hyperplane, which is itself an ellipsoid. The compatibility of “approximate,” “modular,” and “short vector” hints from the prior work is examined. We apply our techniques to the decryption failure and side-channel attack settings. We show that “inequality hints” can be used to model decryption failures, and that our new approach yields a geometric analogue of the “failure boosting” technique of D'Anvers et al. [ePrint, ’18]. We also show that “combined hints” can be used to fuse information from a decryption failure and a side-channel attack, and provide rigorous guarantees despite the data being non-Gaussian. We provide experimental data for both applications. The code that we have developed to implement the integration of hints and hardness estimates extends the Toolkit from prior work and has been released publicly.
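The classical central-cut ellipsoid update gives a self-contained flavor of how an inequality hint can shrink the search region while provably keeping the solution inside. The sketch below is the special case of a halfspace passing through the ellipsoid's center; the paper's circumscribing ellipsoids handle general halfspaces and intersections of two ellipsoids:

```python
import numpy as np

def central_cut(c, P, a):
    # Smallest ellipsoid containing the half-ellipsoid
    #   {x : (x - c)^T P^{-1} (x - c) <= 1,  a^T x <= a^T c}.
    # This is the standard ellipsoid-method update: the solution provably
    # stays inside the new, strictly smaller ellipsoid.
    n = len(c)
    b = P @ a / np.sqrt(a @ P @ a)
    c_new = c - b / (n + 1)
    P_new = (n**2 / (n**2 - 1)) * (P - (2 / (n + 1)) * np.outer(b, b))
    return c_new, P_new

c, P = np.zeros(3), np.eye(3)
c2, P2 = central_cut(c, P, np.array([1.0, 0.0, 0.0]))
print(np.linalg.det(P2) < np.linalg.det(P))   # True: the volume shrinks
```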
Last updated:  2022-10-08
Discrete Exponential Equations and Noisy Systems
Trey Li
The history of equations dates back to thousands of years ago, though the equals sign "=" was only invented in 1557. We formalize the processes of "decomposition" and "restoration" in mathematics and physics by defining "discrete exponential equations" and "noisy equation systems" over an abstract structure called a "land", which is more general than fields, rings, groups, and monoids. Our abstract equations and systems provide general languages for many famous computational problems such as integer factorization, ideal factorization, isogeny factorization, learning parity with noise, learning with errors, learning with rounding, etc. From the abstract equations and systems we deduce a list of new decomposition problems and noisy learning problems. We also give algorithms for discrete exponential equations and systems over algebraic integers. Our motivations are to develop a theory of decomposition and restoration; to unify the scattered studies of decomposition problems and noisy learning problems; and to further permeate the ideas of decomposition and restoration into all possible branches of mathematics. A direct application is a methodology for finding new hardness assumptions for cryptography.
Last updated:  2023-10-15
Improved Progressive BKZ with Lattice Sieving and a Two-Step Mode for Solving uSVP
Wenwen Xia, Leizhang Wang, Geng Wang, Dawu Gu, and Baocang Wang
The unique Shortest Vector Problem (uSVP) is one of the core hard problems in lattice-based cryptography. In the NIST PQC standardization (Kyber, Dilithium), the leaky-LWE-Estimator is used to estimate the hardness of LWE-based cryptosystems by reducing LWE to uSVP, and it considers the primal attack using Progressive BKZ (ProBKZ). ProBKZ trivially increases the blocksize β and lifts the shortest vector in the final BKZ block to find the unique shortest vector in the full lattice. In this paper, we show that a ProBKZ algorithm as above (which we call a BKZ-only mode) is not the best way to solve uSVP. Instead, we present a two-step mode to solve it, where the ProBKZ algorithm is followed by a sieving algorithm with dimension larger than the blocksize of BKZ. Instantiating our two-step mode with the sieving algorithm Pump and the Pump-and-jump BKZ (PnjBKZ) presented in G6K, which are the state-of-the-art sieving and BKZ implementations, we show that our algorithm is not only better than the BKZ-only mode but also better than the heuristic uSVP solving algorithm in G6K. However, a ProBKZ with the heuristic parameter selection in the leaky-LWE-Estimator or the optimized parameter selection in the literature (Yoshinori Aono et al., Asiacrypt 2016) is insufficient for optimizing the efficiency of a two-step solving algorithm. To find the best parameters, we design a PnjBKZ simulator which allows the value of jump to be more than 1. Based on the newly designed simulator, we give a blocksize and jump strategy selection algorithm, which achieves the best simulated efficiency in solving uSVP instances. Combining all of the above, we get a new lattice solving algorithm called Improved Progressive PnjBKZ (ProPnjBKZ for short). We test the efficiency of our ProPnjBKZ with the TU Darmstadt LWE Challenge. The experiment results show that our ProPnjBKZ is 7.6∼12.9 times more efficient than the heuristic uSVP solving algorithm in G6K. Besides, we break the TU Darmstadt LWE Challenges with (n, α) ∈ {(40, 0.035), (40, 0.040), (50, 0.025), (55, 0.020), (90, 0.005)}. Finally, we give a newly refined security estimator for LWE. The evaluation results indicate that the concrete hardness of the lattice-based NIST candidate schemes against the LWE primal attack decreases by 1.9∼4.2 bits when using our optimized blocksize and jump selection strategy and two-step solving mode. In addition, when using the list-decoding technique proposed by MATZOV in 2022, it decreases by a further 8∼10.7 bits.
Last updated:  2023-06-24
Block Cipher Doubling for a Post-Quantum World
Ritam Bhaumik, André Chailloux, Paul Frixons, Bart Mennink, María Naya-Plasencia
In order to maintain a similar security level in a post-quantum setting, many symmetric primitives would have to double their key and state sizes. So far, no generic way of doing this is known that would provide convincing quantum security guarantees. In this paper we propose a new generic construction, QuEME, that allows one to double the key and the state size of a block cipher. The QuEME design is inspired by the ECB-Mix-ECB (EME) construction, but uses a different choice of mixing function that withstands our new quantum superposition attack, which exploits a periodic property found in collisions and breaks EME and a large class of its variants. We prove that QuEME achieves $n$-bit security in the classical setting, where $n$ is the block size of the underlying block cipher, and at least $n/6$-bit security in the quantum setting. We propose a concrete instantiation of this construction, called Double-AES, that is built with variants of AES-128.
Last updated:  2022-10-07
LaBRADOR: Compact Proofs for R1CS from Module-SIS
Ward Beullens, Gregor Seiler
The most compact quantum-safe proof systems for large circuits are PCP-type systems such as Ligero, Aurora, and Shockwave, which only use weak cryptographic assumptions, namely hash functions modeled as random oracles. One would expect that by allowing for stronger assumptions, such as the hardness of Module-SIS, it should be possible to design more compact proof systems. But alas, despite considerable progress in lattice-based proofs, no such proof system has been known until now. We rectify this situation by introducing a Lattice-Based Recursively Amortized Demonstration Of R1CS (LaBRADOR), with more compact proof sizes than known hash-based proof systems, both asymptotically and concretely for all relevant circuit sizes. LaBRADOR proves knowledge of a solution for an R1CS mod $2^{64}+1$ with $2^{20}$ constraints, with a proof size of only 58 KB, an order of magnitude more compact than previous quantum-safe proofs.
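For context on the statement being proven: a rank-1 constraint system (R1CS) over a ring consists of matrices $A$, $B$, $C$ and is satisfied by a witness vector $z$ with $(Az) \circ (Bz) = Cz$ entrywise. A minimal Python sketch of this standard definition (not LaBRADOR's proof system; the single constraint shown is a hypothetical toy):

```python
MOD = 2**64 + 1   # the modulus of LaBRADOR's benchmark statement

# One constraint encoding x * y = out, with witness z = (1, x, y, out):
A = [[0, 1, 0, 0]]            # row selects x
B = [[0, 0, 1, 0]]            # row selects y
C = [[0, 0, 0, 1]]            # row selects out
z = [1, 3, 5, 15]             # satisfies 3 * 5 = 15

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) % MOD for row in M]

# R1CS satisfiability: (A z) * (B z) == (C z) entrywise mod MOD.
Az, Bz, Cz = mat_vec(A, z), mat_vec(B, z), mat_vec(C, z)
assert all((a * b) % MOD == c for a, b, c in zip(Az, Bz, Cz))
print("R1CS instance satisfied")
```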
Last updated:  2023-05-23
Understanding the Duplex and Its Security
Bart Mennink
At SAC 2011, Bertoni et al. introduced the keyed duplex construction as a tool to build permutation-based authenticated encryption schemes. The construction was generalized to full-state absorption by Mennink et al. (ASIACRYPT 2015). Daemen et al. (ASIACRYPT 2017) generalized it further to cover many more use cases and proved security of this general construction, and Dobraunig and Mennink (ASIACRYPT 2019) derived a leakage resilience security bound for it. Due to its generality, the full-state keyed duplex construction that we know today has a plethora of applications, but the flip side of the coin is that the general construction is hard to grasp and the corresponding security bounds are very complex. Consequently, the state-of-the-art results on the full-state keyed duplex construction are not used to the fullest. In this work, we revisit the history of the duplex construction, give a comprehensive discussion of its possibilities and limitations, and demonstrate how the two security bounds (of Daemen et al. and of Dobraunig and Mennink) can be interpreted in particular applications of the duplex.
Last updated:  2023-11-21
CCA-1 Secure Updatable Encryption with Adaptive Security
Huanhuan Chen, Yao Jiang Galteland, and Kaitai Liang
Updatable encryption (UE) enables a cloud server to update ciphertexts using client-generated tokens. There are two types of UE: ciphertext-independent (c-i) and ciphertext-dependent (c-d). In terms of construction and efficiency, c-i UE utilizes a single token to update all ciphertexts. The update mechanism relies mainly on the homomorphic properties of exponentiation, which limits the efficiency of encryption and updating. Although c-d UE may seem inconvenient, as it requires downloading parts of the ciphertexts during token generation, it allows for easy implementation of the Dec-then-Enc structure. This methodology significantly simplifies the construction of the update mechanism. Notably, the c-d UE scheme proposed by Boneh et al. (ASIACRYPT '20) has been reported to be 200 times faster than prior UE schemes based on DDH hardness, which is the case for most existing c-i UE schemes. Furthermore, c-d UE ensures a high level of security, as the token does not reveal any information about the key, which is difficult for c-i UE to achieve. However, previous security studies on c-d UE only addressed selective security; adaptive security remains an open problem. In this study, we make three significant contributions to ciphertext-dependent updatable encryption (c-d UE). Firstly, we provide stronger security notions compared to previous work, which capture adaptive security and also consider the adversary's decryption capabilities under the adaptive corruption setting. Secondly, we propose a new c-d UE scheme that achieves the proposed security notions. The token generation technique differs significantly from the previous Dec-then-Enc structure, while still preventing key leakage. Finally, we introduce a packing technique that enables the simultaneous encryption and updating of multiple messages within a single ciphertext. This technique helps reduce the cost of c-d UE by reducing the need to download partial ciphertexts during token generation.
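To make the exponentiation-based c-i update mechanism mentioned above concrete, here is a minimal Python sketch of an ElGamal-style ciphertext-independent update (in the spirit of known c-i schemes such as RISE, not the c-d scheme proposed in this paper). The tiny group, the fixed keys, and the omission of re-randomization are all illustrative simplifications:

```python
# Toy ElGamal-style c-i updatable encryption: the update token is the
# exponent ratio k'/k, applied homomorphically to the ciphertext.
import random

p = 467                       # toy prime; real schemes use large groups
g = 5

def enc(k, m):
    r = random.randrange(1, p - 1)
    return (pow(g, r * k, p), (pow(g, r, p) * m) % p)   # (g^{rk}, g^r * m)

def dec(k, ct):
    C1, C2 = ct
    k_inv = pow(k, -1, p - 1)                 # exponent inverse mod group order
    return (C2 * pow(pow(C1, k_inv, p), -1, p)) % p     # m = C2 / g^r

def token(k_old, k_new):
    return (k_new * pow(k_old, -1, p - 1)) % (p - 1)    # delta = k'/k

def update(delta, ct):
    C1, C2 = ct
    return (pow(C1, delta, p), C2)            # g^{rk} -> g^{rk'}; no re-rand here

k, k_new, m = 3, 7, 42
ct = enc(k, m)
ct2 = update(token(k, k_new), ct)
assert dec(k_new, ct2) == m == dec(k, ct)
print("updated ciphertext decrypts under the new key")
```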
Last updated:  2022-10-07
Privacy-Preserving Authenticated Key Exchange: Stronger Privacy and Generic Constructions
Sebastian Ramacher, Daniel Slamanig, Andreas Weninger
Authenticated key-exchange (AKE) protocols are an important class of protocols that allow two parties to establish a common session key over an insecure channel such as the Internet to then protect their communication. They are widely deployed in security protocols such as TLS, IPsec and SSH. Besides the confidentiality of the communicated data, an orthogonal but increasingly important goal is the protection of the confidentiality of the identities of the involved parties (aka privacy). For instance, the Encrypted Client Hello (ECH) mechanism for TLS 1.3 has been designed for exactly this reason. Recently, a series of works (Zhao CCS'16, Arfaoui et al. PoPETS'19, Schäge et al. PKC'20) studied privacy guarantees of (existing) AKE protocols by integrating privacy into AKE models. We observe that these so-called privacy-preserving AKE (PPAKE) models are typically strongly tailored to the specific setting, i.e., the concrete protocols they investigate. Moreover, the privacy guarantees in these models might be too weak (or even non-existent) when facing active adversaries. In this work, our goal is to provide a single PPAKE model that captures privacy guarantees against different types of attacks, thereby covering previously proposed notions as well as privacy guarantees not achieved so far. In doing so, we obtain different "degrees" of privacy within a single model, which, in its strongest forms, also captures privacy guarantees against powerful active adversaries. We then proceed to investigate (generic) constructions of AKE protocols that provide strong privacy guarantees in our PPAKE model. This includes classical Diffie-Hellman type protocols as well as protocols based on generic building blocks, thus covering post-quantum instantiations.
Last updated:  2023-08-25
How to Enumerate LWE Keys as Narrow as in Kyber/Dilithium
Timo Glaser and Alexander May
In the Learning with Errors (LWE) problem we are given a matrix $A \in \mathbb{Z}_q^{N \times N}$ and a target vector $\vec{t} \in \mathbb{Z}_q^N$ such that there exist small-norm $\vec{s}, \vec{e}\in \mathbb{Z}_q^N$ satisfying $A\cdot\vec{s} = \vec{t} + \vec{e} \bmod q$. Modern cryptosystems often sample $\vec{s}, \vec{e}$ from narrow distributions that take integer values in a small range $[-\eta, \eta]$. Kyber and Dilithium both choose $\eta=2$ and $\eta=3$, using either a Centered Binomial distribution (Kyber) or a uniform distribution (Dilithium). In this work, we address the fundamental question of how hard it is to enumerate LWE secret keys for narrow distributions with $\eta \leq 3$. At Crypto 2021, May proposed a representation-based algorithm for enumerating ternary keys, i.e., the case $\eta = 1$, with a fixed number of $\pm 1$ entries. In this work, we extend May's algorithm in several ways. First, we show how to deal with keys sampled from a probability distribution, as in many modern systems like Kyber and Dilithium, rather than with keys having a fixed number of entries. Second, we generalize to larger values $\eta = 2, 3$, thereby achieving asymptotic key guess complexities that are not far off from lattice estimates. E.g., for Kyber's Centered Binomial distribution we achieve heuristic time/memory complexities of $\mathcal{O}(2^{0.36N})$ for $\eta=2$, and $\mathcal{O}(2^{0.37N})$ for $\eta=3$. For Dilithium's uniform distribution we achieve heuristic complexity $\mathcal{O}(2^{0.38N})$ for $\eta=2$. Let $S$ be the Shannon entropy of Kyber/Dilithium keys. Then our algorithm runs in time about ${S}^{\frac 16}$, which greatly improves over the standard combinatorial Meet-in-the-Middle attack with complexity ${S}^{\frac 1 2}$. Our results also compare well to current lattice asymptotics of $2^{0.29 \beta}$, where the lattice parameter $\beta$ is roughly of size $\frac 4 5 N$. Thus, our analysis supports that Kyber secret keys are indeed hard to enumerate. Yet, we find it remarkable that a purely combinatorial key search is almost competitive with highly evolved lattice sieving techniques.
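The narrow key distributions discussed above are easy to make concrete. The following toy Python sketch (illustrative parameters, not the paper's enumeration algorithm) samples a small LWE instance with a Kyber-style centered binomial secret and error for $\eta = 2$:

```python
# Toy LWE instance with a centered binomial secret/error (eta = 2),
# following the convention A*s = t + e (mod q) used above.
import numpy as np

rng = np.random.default_rng(0)
N, q, eta = 8, 3329, 2

def centered_binomial(size):
    # difference of two sums of eta coin flips -> values in [-eta, eta]
    a = rng.integers(0, 2, size=(size, eta)).sum(axis=1)
    b = rng.integers(0, 2, size=(size, eta)).sum(axis=1)
    return a - b

A = rng.integers(0, q, size=(N, N))
s = centered_binomial(N)
e = centered_binomial(N)
t = (A @ s - e) % q                 # so that A*s = t + e (mod q)

assert np.array_equal((A @ s) % q, (t + e) % q)
print("secret entries:", s)
```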
Last updated:  2024-05-08
One-Wayness in Quantum Cryptography
Tomoyuki Morimae and Takashi Yamakawa
The existence of one-way functions is one of the most fundamental assumptions in classical cryptography. In the quantum world, on the other hand, there is evidence that some cryptographic primitives can exist even if one-way functions do not [Morimae and Yamakawa, CRYPTO 2022; Ananth, Qian, and Yuen, CRYPTO 2022]. We therefore have the following important open problem in quantum cryptography: What is the most fundamental element in quantum cryptography? In this direction, Brakerski, Canetti, and Qian [arXiv:2209.04101] recently defined a notion called EFI pairs, which are pairs of efficiently generatable states that are statistically distinguishable but computationally indistinguishable, and showed its equivalence with some cryptographic primitives including commitments, oblivious transfer, and general multi-party computation. However, their work focuses on decision-type primitives and does not cover search-type primitives like quantum money and digital signatures. In this paper, we study properties of one-way state generators (OWSGs), which are a quantum analogue of one-way functions proposed by Morimae and Yamakawa. We first revisit the definition of OWSGs and generalize it by allowing mixed output states. Then we show the following results. (1) We define a weaker version of OWSGs, which we call weak OWSGs, and show that they are equivalent to OWSGs. This is a quantum analogue of the amplification theorem for classical weak one-way functions. (2) (Bounded-time-secure) quantum digital signatures with quantum public keys are equivalent to OWSGs. (3) Private-key quantum money schemes (with pure money states) imply OWSGs. (4) Quantum pseudo one-time pad schemes imply both OWSGs and EFI pairs. For EFI pairs, single-copy security suffices. (5) We introduce an incomparable variant of OWSGs, which we call secretly-verifiable and statistically-invertible OWSGs, and show that they are equivalent to EFI pairs.
Last updated:  2023-09-20
Revisiting Higher-Order Differential-Linear Attacks from an Algebraic Perspective
Kai Hu, Thomas Peyrin, Quan Quan Tan, and Trevor Yap
The Higher-order Differential-Linear (HDL) attack was introduced by Biham et al. at FSE 2005, where a linear approximation is appended to a Higher-order Differential (HD) transition. It is a natural generalization of the Differential-Linear (DL) attack. Due to some practical restrictions, however, HDL cryptanalysis has unfortunately attracted much less attention than its DL counterpart since its proposal. In this paper, we revisit HD/HDL cryptanalysis from an algebraic perspective and provide two novel tools for detecting possible HD/HDL distinguishers: (a) the Higher-order Algebraic Transitional Form (HATF) for probabilistic HD/HDL attacks; (b) the Differential Supporting Function (DSF) for deterministic HD attacks. In general, the HATF can estimate the biases of $\ell$-th order HDL approximations with complexity $\mathcal{O}(2^{\ell+d2^\ell})$, where $d$ is the algebraic degree of the function studied. If the function is quadratic, the complexity can be further reduced to $\mathcal{O}(2^{3.8\ell})$. HATF is therefore very useful in HDL cryptanalysis of ciphers with quadratic round functions, such as Ascon and Xoodyak. DSF provides a convenient way to find good linearizations of the input of a permutation, which facilitates the search for HD distinguishers. Unsurprisingly, HD/HDL attacks have the potential to be more effective than their simpler differential/DL counterparts. Using HATF, we found many HDL approximations for round-reduced Ascon and Xoodyak initializations, with significantly larger biases than DL ones. For instance, there are deterministic 2nd-order/4th-order HDL approximations for the Ascon/Xoodyak initializations, respectively (which is believed to be impossible in the simple DL case). We derived highly biased HDL approximations for 5-round Ascon up to 8th order, which improves the complexity of the distinguishing attack on 5-round Ascon from $2^{16}$ to $2^{12}$ calls. We also propose HDL approximations for 6-round Ascon and 5-round Xoodyak (under the single-key model), which could not be reached with simple DL so far. For key recovery, HDL attacks are also more efficient than DL attacks, thanks to the larger biases of HDL approximations. Additionally, HATF works well for DL (1st-order HDL) attacks, and some well-known DL biases of Ascon and Xoodyak that could only be obtained experimentally before can now be predicted theoretically. With DSF, we propose a new distinguishing attack on the 8-round Ascon permutation, with a complexity of $2^{48}$. Also, we provide a new zero-sum distinguisher for the full 12-round Ascon permutation with $2^{55}$ time/data complexity. We highlight that our cryptanalyses do not threaten the security of Ascon or Xoodyak.
Last updated:  2022-10-07
Post-Quantum Signature from Subset Product with Errors
Trey Li
We propose a new identification scheme and a new signature scheme from the multiple modular subset product with errors problem, as well as a new identification scheme and a new signature scheme from the multiple modular subset sum with errors problem.
Last updated:  2023-07-21
Fast Fully Oblivious Compaction and Shuffling
Sajin Sasy, Aaron Johnson, Ian Goldberg
Several privacy-preserving analytics frameworks have been proposed that use trusted execution environments (TEEs) like Intel SGX. Such frameworks often use compaction and shuffling as core primitives. However, due to advances in TEE side-channel attacks, these primitives, and the applications that use them, should be _fully oblivious_; that is, perform instruction sequences and memory accesses that do not depend on the secret inputs. Such obliviousness would eliminate the threat of leaking private information through memory or timing side channels, but achieving it naively can result in a significant performance cost. In this work, we present fast, fully oblivious algorithms for compaction and shuffling. We implement and evaluate our designs to show that they are practical and outperform the state of the art. Our oblivious compaction algorithm, ORCompact, is always faster than the best alternative and can yield up to a 5x performance improvement. For oblivious shuffling, we provide two novel algorithms: ORShuffle and BORPStream. ORShuffle outperforms prior fully oblivious shuffles in all experiments, and it provides the largest speed increases—up to 1.8x—when shuffling a large number of small items. BORPStream outperforms all other algorithms when shuffling a large number of large items, with a speedup of up to 1.4x in such cases. It can obtain even larger performance improvements in application settings where the items to shuffle arrive incrementally over time, obtaining a speedup of as much as 4.2x. We additionally give parallel versions of all of our algorithms, prove that they have low parallel step complexity, and experimentally show a 5–6x speedup on an 8-core processor. Finally, ours is the first work with the explicit goal of ensuring full obliviousness of complex functionalities down to the implementation level. To this end, we design Fully Oblivious Assembly Verifier (FOAV), a tool that verifies the binary has no secret-dependent conditional branches.
Last updated:  2023-06-01
On the Classic Protocol for MPC Schnorr Signatures
Nikolaos Makriyannis
In this paper, we prove that the classic three-round protocol for MPC Schnorr Signatures is fully-adaptive UC-secure. Furthermore, we show that a simple variant of the Classic protocol achieves tight security, i.e., the security of the resulting, modified, protocol tightly reduces to the security of the underlying non-MPC scheme.
Last updated:  2022-10-06
Additive-Homomorphic Functional Commitments and Applications to Homomorphic Signatures
Dario Catalano, Dario Fiore, Ida Tucker
Functional Commitments (FC) allow one to reveal functions of committed data in a succinct and verifiable way. In this paper we put forward the notion of additive-homomorphic FC and show two efficient, pairing-based realizations of this primitive, supporting multivariate polynomials of constant degree and monotone span programs, respectively. We also show applications of the new primitive in the context of homomorphic signatures: additive-homomorphic FCs can be used to realize homomorphic signatures (supporting the same class of functionalities as the underlying FC) in a simple and elegant way. Using our new FCs as underlying building blocks, this leads to the (seemingly) first expressive realizations of multi-input homomorphic signatures not relying on lattices or multilinear maps.
Last updated:  2022-12-23
Hybrid Dual and Meet-LWE Attack
Lei Bi, Xianhui Lu, Junjie Luo, Kunpeng Wang
The Learning with Errors (LWE) problem is one of the most prominent problems in lattice-based cryptography. Many practical LWE-based schemes, including Fully Homomorphic Encryption (FHE), use sparse ternary secrets for the sake of efficiency. Several (hybrid) attacks have been proposed that benefit from such sparseness, so the security of schemes with sparse ternary secrets is believed to be not yet well understood. Recently, May [Crypto 2021] proposed an efficient meet-in-the-middle attack named Meet-LWE for LWE with ternary secrets, which significantly improves Odlyzko's algorithm. In this work, we generalize May's Meet-LWE and then introduce a new hybrid attack which combines Meet-LWE with the lattice dual attack. We apply our algorithm to FHE-type parameters of the LWE problem and compare it with previous hybrid dual attacks. The results show that our attack outperforms other attacks in a large range of parameters. We note that our attack has no impact on the LWE-based schemes in the PQC standardization held by NIST, as their secrets are not sparse and/or ternary.
Last updated:  2022-10-06
New Time-Memory Trade-Offs for Subset Sum -- Improving ISD in Theory and Practice
Andre Esser, Floyd Zweydinger
We propose new time-memory trade-offs for the random subset sum problem defined on $(a_1,\ldots,a_n,t)$ over $\mathbb{Z}_{2^n}$. Our trade-offs yield significant running time improvements for every fixed memory limit $M\geq2^{0.091n}$. Furthermore, we interpolate to the running times of the fastest known algorithms when memory is not limited. Technically, our design introduces a pruning strategy to the construction by Becker-Coron-Joux (BCJ) that allows for an exponentially small success probability. We compensate for this reduced probability by multiple randomized executions. Our main improvement stems from the clever reuse of parts of the computation in subsequent executions to reduce the time complexity per iteration. As an application of our construction, we derive the first non-trivial time-memory trade-offs for Information Set Decoding (ISD) algorithms. Our new algorithms improve on previous (implicit) trade-offs asymptotically as well as practically. Moreover, our optimized implementation also improves on running time, due to reduced memory access costs. We demonstrate this by obtaining a new record computation in decoding quasi-cyclic codes (QC-3138). Using our newly obtained data points we then extrapolate the hardness of suggested parameter sets for the NIST PQC fourth round candidates McEliece, BIKE and HQC, lowering previous estimates by up to 6 bits and further increasing their reliability.
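As a baseline for the trade-offs above, the classical meet-in-the-middle algorithm of Horowitz and Sahni solves random subset sum over $\mathbb{Z}_{2^n}$ in time and memory roughly $2^{n/2}$ by splitting the instance in half. A minimal Python sketch of this standard baseline (not the paper's BCJ-with-pruning construction; parameters are illustrative):

```python
# Meet-in-the-middle for subset sum over Z_{2^n}: tabulate all sums of
# the left half, then look up the complement of each right-half sum.
import itertools, random

n = 16
MOD = 2 ** n
a = [random.randrange(MOD) for _ in range(n)]
secret = [random.randrange(2) for _ in range(n)]
t = sum(ai for ai, bi in zip(a, secret) if bi) % MOD

left, right = a[: n // 2], a[n // 2:]
table = {}
for bits in itertools.product([0, 1], repeat=len(left)):
    sm = sum(x for x, b in zip(left, bits) if b) % MOD
    table.setdefault(sm, bits)          # 2^{n/2} left-half sums
for bits in itertools.product([0, 1], repeat=len(right)):
    sm = sum(x for x, b in zip(right, bits) if b) % MOD
    if (t - sm) % MOD in table:         # matching complement found
        sol = table[(t - sm) % MOD] + bits
        break

assert sum(x for x, b in zip(a, sol) if b) % MOD == t
print("recovered a valid subset")
```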
Last updated:  2023-06-20
Revisiting Nearest-Neighbor-Based Information Set Decoding
Andre Esser
The syndrome decoding problem lies at the heart of code-based cryptographic constructions. Information Set Decoding (ISD) algorithms are commonly used to assess the security of these systems. The most efficient ISD algorithms rely heavily on nearest neighbor search techniques. However, the runtime result of the fastest known ISD algorithm by Both-May (PQCrypto '17) was recently challenged by Carrier et al. (Asiacrypt '22), who themselves introduce a new technique called RLPN decoding that yields improvements over ISD for codes with small rates $\frac{k}{n}\leq 0.3$. In this work we first revisit the Both-May algorithm, giving a clean exposition and a corrected analysis. In this context we confirm the result by Carrier et al. that the initial analysis is flawed, and we conclude with the same runtime exponent. Our work aims at fully substantiating the corrected runtime exponent by a detailed analysis. Furthermore, we show that the Both-May algorithm significantly improves on memory complexity over previous algorithms. Our first main contribution is therefore to give the correct perspective on the significance of the Both-May algorithm and to clarify any remaining doubts about the corrected baseline. As a second main contribution we detail a possible strategy for future improvements of the Both-May algorithm. This strategy is based on a novel technique that combines the list construction step and the list filtering step commonly applied by ISD algorithms. We therefore treat the nearest neighbor routine in a non-blackbox fashion, which allows us to embed the filtering into the nearest neighbor search. In this context we introduce the fixed-weight nearest neighbor problem and propose a first algorithm to solve it. Even though our current analysis does not yet yield a gain in complexity over the Both-May algorithm, we point out different ways to further improve the proposed technique.
Last updated:  2022-10-06
Post-Quantum Public Key Cryptosystem from Subset Product with Errors
Trey Li
We give a new post-quantum public key cryptosystem from the multiple modular subset product with errors problem.
Last updated:  2022-10-06
Survey: Non-malleable code in the split-state model
Divesh Aggarwal, Marshall Ball, Maciej Obremski
Non-malleable codes are a natural relaxation of error correction and error detection codes applicable in scenarios where error-correction or error-detection is impossible. Over the last decade, non-malleable codes have been studied for a wide variety of tampering families. Among the most well studied of these is the split-state family of tampering channels, where the codeword is split into two or more parts and each part is tampered independently. We survey various constructions and applications of non-malleable codes in the split-state model.
Last updated:  2022-10-05
Efficient and Complete Formulas for Binary Curves
Thomas Pornin
Binary elliptic curves are elliptic curves defined over finite fields of characteristic 2. On software platforms that offer carryless multiplication opcodes (e.g. pclmul on x86), they have very good performance. However, they suffer from some drawbacks, in particular that non-supersingular binary curves have an even order, and that most known formulas for point operations have exceptional cases that are detrimental to safe implementation. In this paper, we show how to make a prime order group abstraction out of standard binary curves. We describe a new canonical compression scheme that yields a canonical and compact encoding. We also describe complete formulas for operations on the group. The formulas have no exceptional case, and are furthermore faster than previously known complete and incomplete formulas (general point addition in cost 8M+2S+2mb on all curves, 7M+2S+2mb on half of the curves). We also show how the same formulas can be applied to computations on the entire original curve, if full backward compatibility with standard curves is needed. Finally, we implemented our method over the standard NIST curves B-233 and K-233. Our strictly constant-time code achieves generic point multiplication by a scalar on curve K-233 in as little as 29600 clock cycles on an Intel x86 CPU (Coffee Lake core).
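The carryless multiplication mentioned above is simply polynomial multiplication over GF(2). A minimal Python model of what an opcode like pclmul computes (illustrative only; the paper's formulas build field multiplication in GF(2^233) on top of such an opcode plus a modular reduction, which we omit here):

```python
# Carryless multiply: like schoolbook multiplication, but partial
# products are combined with XOR instead of addition, i.e. polynomial
# multiplication over GF(2) with bits as coefficients.
def clmul(x: int, y: int) -> int:
    acc = 0
    while y:
        if y & 1:
            acc ^= x       # XOR in a shifted copy of x
        x <<= 1
        y >>= 1
    return acc

# (x^2 + x + 1) * (x + 1) = x^3 + 1 over GF(2):
assert clmul(0b111, 0b11) == 0b1001
print(bin(clmul(0b111, 0b11)))
```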
Last updated:  2022-10-05
Adaptive Multiparty NIKE
Venkata Koppula, Brent Waters, Mark Zhandry
We construct adaptively secure multiparty non-interactive key exchange (NIKE) from polynomially hard indistinguishability obfuscation and other standard assumptions. This improves on all prior such protocols, which required sub-exponential hardness. Along the way, we establish several compilers which simplify the task of constructing new multiparty NIKE protocols, and also establish a close connection with a particular type of constrained PRF.
Last updated:  2023-12-11
On Constructing One-Way Quantum State Generators, and More
Shujiao Cao and Rui Xue
As a quantum analogue of the one-way function, the notion of a one-way quantum state generator was recently proposed by Morimae and Yamakawa (CRYPTO '22); it is implied by pseudorandom states and can be used to devise one-time secure digital signatures. Due to Kretschmer's result (TQC '20), it is believed that pseudorandom state generators require less than post-quantum secure one-way functions. Unfortunately, it remains unknown how to achieve one-way quantum state generators without the existence of post-quantum secure one-way functions. In this paper, we mainly study that problem and obtain the following results: Two variants of the one-way quantum state generator are proposed, called the weak one-way quantum state generator and the distributionally one-way quantum state generator. We show the equivalence between weak and strong one-way state generators, and the equivalence between weak and distributionally one-way quantum state generators in the symmetric setting. We construct a symmetric distributionally one-way quantum state generator from the average-case hardness of a promise problem belonging to $\textsf{QSZK}$. We construct quantum bit commitment with statistical binding (sum-binding) and computational hiding directly from the average-case hardness of $\textsf{QSZK}$. To show the non-triviality of the constructions above, a quantum oracle $\mathcal{U}$ is devised relative to which such a promise problem in $\textsf{QSZK}$ does not belong to $\mathsf{QMA}^{\mathcal{U}}$. Our results present the first non-trivial construction of a one-way quantum state generator from a complexity-class hardness assumption, and give further evidence that one-way quantum state generators probably require less than post-quantum secure one-way functions.
Last updated:  2023-11-19
Efficient Linkable Ring Signature from Vector Commitment inexplicably named Multratug
Anton A. Sokolov
In this paper we revise the idea of our previous work 'Lin2-Xor Lemma and Log-size Linkable Threshold Ring Signature' and introduce another lemma, called Lin2-Choice, which extends the Lin2-Xor lemma. Using a novel zero-knowledge membership proof argument defined in the Lin2-Choice lemma, we create a compact general-purpose trusted-setup-free log-size linkable threshold ring signature called EFLRSL. The signature size is 2log(n+1)+3l+1, where n is the ring size and l is the threshold. By extending the membership argument of the Lin2-Choice lemma, we create a multifunctional version of the EFLRSL signature, aliased as Multratug, of size 2log(n+l+1)+7l+4. In addition to signing a message, Multratug simultaneously proves balance and allows for easy multiparty signing. We use an arbitrary vector commitment argument as the pivotal building block for both versions of our signature, considering it as a black box. Only the black-boxed pivot contributes components that depend on the ring size n to the signature sizes. This makes both EFLRSL and Multratug combinable with other proofs, with overall size reduction. All of this takes place in a prime-order group without bilinear pairings, under the decisional Diffie-Hellman assumption in the random oracle model. Both versions of our signature are proved unforgeable w.r.t. insider corruption and existentially unforgeable under chosen-message attack. They remain anonymous even for non-uniformly distributed and malformed keys, which makes it possible to use them as a log-size drop-in replacement for LSAG-based schemes.
Last updated:  2023-04-15
cuZK: Accelerating Zero-Knowledge Proof with A Faster Parallel Multi-Scalar Multiplication Algorithm on GPUs
Tao Lu, Chengkun Wei, Ruijing Yu, Chaochao Chen, Wenjing Fang, Lei Wang, Zeke Wang, Wenzhi Chen
Zero-knowledge proof is a critical cryptographic primitive. Its most practical type, called zero-knowledge Succinct Non-interactive ARgument of Knowledge (zkSNARK), has been deployed in various privacy-preserving applications such as cryptocurrencies and verifiable machine learning. Unfortunately, zkSNARKs like Groth16 incur a high overhead in the proof generation step, which consists of several time-consuming operations, including large-scale matrix-vector multiplication (MUL), number-theoretic transform (NTT), and multi-scalar multiplication (MSM). Therefore, this paper presents cuZK, an efficient GPU implementation of zkSNARK with the following three techniques to achieve high performance. First, we propose a new parallel MSM algorithm. This MSM algorithm achieves nearly perfect linear speedup over the Pippenger algorithm, a well-known serial MSM algorithm. Second, we parallelize the MUL operation. Along with our self-designed MSM scheme and the well-studied NTT scheme, cuZK achieves the parallelization of all operations in the proof generation step. Third, cuZK reduces the latency overhead caused by CPU-GPU data transfer by 1) reducing redundant data transfer and 2) overlapping data transfer and device computation. The evaluation results show that our MSM module provides over 2.08× (up to 2.94×) speedup versus the state-of-the-art GPU implementation. cuZK achieves over 2.65× (up to 4.86×) speedup on standard benchmarks and 2.18× speedup on a GPU-accelerated cryptocurrency application, Filecoin.
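For readers unfamiliar with the serial baseline that cuZK parallelizes, here is a minimal Python sketch of the bucket method behind Pippenger-style MSM, using integers mod p under addition as a stand-in for an elliptic-curve group (the window size, group, and input sizes are all illustrative assumptions, and this is not cuZK's GPU scheme):

```python
# Bucket-method MSM: split scalars into c-bit windows, accumulate
# points into buckets per window, then combine buckets with the
# running-sum trick instead of multiplying each bucket by its index.
import random

p = 2**31 - 1                      # toy additive group: integers mod p

def msm_bucket(scalars, points, c=4):
    maxbits = max(k.bit_length() for k in scalars)
    windows = (maxbits + c - 1) // c
    result = 0
    for w in reversed(range(windows)):
        result = (result << c) % p                 # "double" c times
        buckets = [0] * (1 << c)
        for k, P in zip(scalars, points):
            idx = (k >> (w * c)) & ((1 << c) - 1)
            buckets[idx] = (buckets[idx] + P) % p  # accumulate into bucket
        running, total = 0, 0
        for b in reversed(buckets[1:]):            # sum_j j * buckets[j]
            running = (running + b) % p
            total = (total + running) % p
        result = (result + total) % p
    return result

scalars = [random.randrange(p) for _ in range(50)]
points = [random.randrange(p) for _ in range(50)]
assert msm_bucket(scalars, points) == sum(k * P for k, P in zip(scalars, points)) % p
print("bucket MSM matches the naive computation")
```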
Last updated:  2023-03-28
Boosting Batch Arguments and RAM Delegation
Yael Tauman Kalai, Alex Lombardi, Vinod Vaikuntanathan, Daniel Wichs
We show how to generically improve the succinctness of non-interactive publicly verifiable batch argument ($\mathsf{BARG}$) systems. In particular, we show (under a mild additional assumption) how to convert a $\mathsf{BARG}$ that generates proofs of length $\mathsf{poly} (m)\cdot k^{1-\epsilon}$, where $m$ is the length of a single instance and $k$ is the number of instances being batched, into one that generates proofs of length $\mathsf{poly} (m)\cdot \mathsf{poly} \log k$, which is the gold standard for succinctness of $\mathsf{BARG}$s. By prior work, such $\mathsf{BARG}$s imply the existence of $\mathsf{SNARG}$s for deterministic time $T$ computation with optimal succinctness $\mathsf{poly}\log T$. Our result reduces the long-standing challenge of building publicly-verifiable delegation schemes to a much easier problem: building a batch argument system that beats the trivial construction. It also immediately implies new constructions of $\mathsf{BARG}$s and $\mathsf{SNARG}$s with polylogarithmic succinctness based on either bilinear maps or a combination of the $\mathsf{DDH}$ and $\mathsf{QR}$ assumptions. Along the way, we prove an equivalence between $\mathsf{BARG}$s and a new notion of $\mathsf{SNARG}$s for (deterministic) $\mathsf{RAM}$ computations that we call ``flexible $\mathsf{RAM}$ $\mathsf{SNARG}$s with partial input soundness." This is the first demonstration that $\mathsf{SNARG}$s for deterministic computation (of any kind) imply $\mathsf{BARG}$s. Our $\mathsf{RAM}$ $\mathsf{SNARG}$ notion is of independent interest and has already been used in a recent work on constructing rate-1 $\mathsf{BARG}$s (Devadas et al., FOCS 2022).
Last updated:  2022-10-05
Post-Quantum Key Exchange from Subset Product With Errors
Trey Li
We introduce a new direction for post-quantum key exchange based on the multiple modular subset product with errors problem.
Last updated:  2022-10-08
General Partially Fair Multi-Party Computation with VDFs
Bolton Bailey, Andrew Miller, Or Sattath
Gordon and Katz, in "Partial Fairness in Secure Two-Party Computation", present a protocol for two-party computation with partial fairness that relies on assumptions about the size of the input or output of the functionality. They also show that for some other functionalities, this notion of partial fairness is impossible to achieve. In this work, we get around this impossibility result using verifiable delay functions (VDFs), a primitive which brings in an assumption on the inability of an adversary to compute a certain function in a specified time. We present a gadget using VDFs which allows any MPC to be carried out with ≈ 1/R partial fairness, where R is the number of communication rounds.
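The delay assumption underlying VDFs is typically instantiated with iterated squaring in a group of unknown order. A minimal Python sketch of that evaluation step (toy parameters for illustration; the succinct verification proofs that make this a full VDF, and the paper's fairness gadget, are omitted):

```python
# Iterated squaring modulo N: computing x^(2^T) mod N is believed to
# require T sequential squarings when the factorization of N is unknown.
# The toy modulus below is trivially factorable -- illustration only.
N = 1009 * 1013

def vdf_eval(x: int, T: int) -> int:
    for _ in range(T):
        x = x * x % N
    return x

# pow() computes the same value, but also via ~T squarings; there is
# no known shortcut without knowing phi(N).
assert vdf_eval(5, 100) == pow(5, 2**100, N)
print(vdf_eval(5, 100))
```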
Last updated:  2023-11-01
On the Optimal Succinctness and Efficiency of Functional Encryption and Attribute-Based Encryption
Aayush Jain, Huijia Lin, and Ji Luo
We investigate the optimal (asymptotic) efficiency of functional encryption (FE) and attribute-based encryption (ABE) by proving inherent space-time trade-offs and constructing nearly optimal schemes. We consider the general notion of partially hiding functional encryption (PHFE), capturing both FE and ABE, and the most efficient computation model of random-access machines (RAM). In PHFE, a secret key $\mathsf{sk}_f$ is associated with a function $f$, whereas a ciphertext $\mathsf{ct}_x(y)$ is tied to a public input $x$ and encrypts a private input $y$. Decryption reveals $f(x,y)$ and nothing else about $y$. We present the first PHFE for RAM solely based on the necessary assumption of FE for circuits. Significantly improving upon the efficiency of prior schemes, our construction achieves nearly optimal succinctness and computation time: - Its secret key $\mathsf{sk}_f$ is of *constant size* (optimal), independent of the function description length $|f|$, i.e., ${|\mathsf{sk}_f|=\operatorname{poly}(\lambda)}$. - Its ciphertext $\mathsf{ct}_x(y)$ is *rate-2* in the private input length $|y|$ (nearly optimal) and *independent* of the public input length $|x|$ (optimal), i.e., ${|\mathsf{ct}_x(y)|=2|y|+\operatorname{poly}(\lambda)}$. - Decryption time is *linear* in the *instance* RAM running time $T$, plus the function and public/private input lengths, i.e., ${T_{\mathsf{Dec}}=(T+|f|+|x|+|y|)\operatorname{poly}(\lambda)}$. As a corollary, we obtain the first ABE with both keys and ciphertexts being constant-size, while enjoying the best-possible decryption time matching the lower bound by Luo [ePrint '22]. We also separately achieve several other PHFE and ABE schemes. We study the barriers to further efficiency improvements. We prove the first unconditional space-time trade-offs for (PH-)FE: - *No* secure (PH-)FE can have $|\mathsf{sk}_f|$ and $T_{\mathsf{Dec}}$ *both* sublinear in $|f|$. - *No* secure PHFE can have $|\mathsf{ct}_x(y)|$ and $T_{\mathsf{Dec}}$ *both* sublinear in $|x|$. Our lower bounds apply even to the weakest secret-key 1-key 1-ciphertext selective schemes. Furthermore, we demonstrate a conditional barrier towards the optimal decryption time ${T_{\mathsf{Dec}}=T\operatorname{poly}(\lambda)}$ while keeping linear size dependency — any such (PH-)FE scheme implies doubly efficient private information retrieval (DE-PIR) with ideal efficiency, for which so far there is no satisfactory candidate.
Last updated:  2022-10-04
TurboPack: Honest Majority MPC with Constant Online Communication
Daniel Escudero, Vipul Goyal, Antigoni Polychroniadou, Yifan Song
We present a novel approach to honest majority secure multiparty computation in the preprocessing model with information theoretic security that achieves the best online communication complexity. The online phase of our protocol requires $12$ elements in total per multiplication gate with circuit-dependent preprocessing, or $20$ elements in total with circuit-independent preprocessing. Prior works achieved linear online communication complexity in $n$, the number of parties, with the best prior existing solution involving $1.5n$ elements per multiplication gate. Only one recent work (Goyal et al, CRYPTO'22) achieves constant online communication complexity, but the constants are large ($108$ elements for passive security, and twice that for active security). That said, our protocol offers a very efficient information theoretic online phase for any number of parties. The total end-to-end communication cost with the preprocessing phase is linear in $n$, i.e., $10n + 44$, which is larger than the $4n$ complexity of the state-of-the-art protocols. The gap is not significant when the online phase must be optimized as a priority and a reasonably large number of parties is involved. Unlike previous works based on packed secret-sharing to reduce communication complexity, we further reduce the communication by avoiding the use of complex and expensive network routing or permutations tools. Furthermore, we also allow for a maximal honest majority adversary, while most previous works require the set of honest parties to be strictly larger than a majority. Our protocol is simple and offers concrete efficiency. To illustrate this we present a full-fledged implementation together with experimental results that show improvements in online phase runtimes that go up to $5\times$ in certain settings (e.g. $45$ parties, LAN network, circuit of depth $10$ with $1$M gates).
Last updated:  2022-10-04
Hitchhiker’s Guide to a Practical Automated TFHE Parameter Setup for Custom Applications
Jakub Klemsa
Also referred to as the Holy Grail of Cryptography, Fully Homomorphic Encryption (FHE) allows for arbitrary calculations over encrypted data. As a basic use-case, FHE enables a User to delegate a computation over her sensitive data to a semi-trusted Cloud: in such a setup, the User provides her data $x$ encrypted to the Cloud, the Cloud evaluates a function $f$ over the encrypted data without ever decrypting it, and sends the (encrypted) result back to the User, who finally decrypts it and obtains the desired value $f(x)$. However, even after more than twelve years of advances in this field, FHE schemes are still fairly slow in evaluation, therefore any optimization is welcome. Among existing FHE schemes, in this contribution we focus on the TFHE Scheme by Chillotti et al., which currently achieves the best evaluation times for generic functions. To be instantiated, TFHE however requires an extensive set of parameters. These parameters affect several aspects, including but not limited to the cleartext size, the bit-security level, the probability of errors and also the evaluation time. We propose, implement and evaluate a (semi-)automated approach to generate a set of TFHE parameters with particular respect to the evaluation time, given just the desired security level, cleartext precision, and a parameter that relates to the properties of the evaluated function $f$. With our tool, we re-generate some of the existing TFHE parameters, while achieving up to 39% better execution times in an equivalent setup.
Last updated:  2022-10-04
Hash Gone Bad: Automated discovery of protocol attacks that exploit hash function weaknesses
Vincent Cheval, Cas Cremers, Alexander Dax, Lucca Hirschi, Charlie Jacomme, Steve Kremer
Most cryptographic protocols use cryptographic hash functions as a building block. The security analyses of these protocols typically assume that the hash functions are perfect (such as in the random oracle model). However, in practice, most widely deployed hash functions are far from perfect -- and as a result, the analysis may miss attacks that exploit the gap between the model and the actual hash function used. We develop the first methodology to systematically discover attacks on security protocols that exploit weaknesses in widely deployed hash functions. We achieve this by revisiting the gap between theoretical properties of hash functions and the weaknesses of real-world hash functions, from which we develop a lattice of threat models. For all of these threat models, we develop fine-grained symbolic models. Our methodology's fine-grained models cannot be directly encoded in existing state-of-the-art analysis tools by just using their equational reasoning. We therefore develop extensions for the two leading tools, Tamarin and Proverif. In extensive case studies using our methodology, the extended tools rediscover all attacks that were previously reported for these protocols and discover several new variants.
Last updated:  2023-05-26
Bounded Surjective Quadratic Functions over $\mathbb F_p^n$ for MPC-/ZK-/FHE-Friendly Symmetric Primitives
Lorenzo Grassi
Motivated by new applications such as secure Multi-Party Computation (MPC), Fully Homomorphic Encryption (FHE), and Zero-Knowledge proofs (ZK), many MPC-, FHE- and ZK-friendly symmetric-key primitives that minimize the number of multiplications over $\mathbb F_p$ for a large prime $p$ have recently been proposed in the literature. These symmetric primitives are usually defined via invertible functions, including (i) Feistel and Lai-Massey schemes and (ii) SPN constructions instantiated with invertible non-linear S-Boxes. However, the "invertibility" property is actually never required in any of the mentioned applications. In this paper, we discuss the possibility of setting up MPC-/FHE-/ZK-friendly symmetric primitives instantiated with non-invertible bounded surjective functions. In contrast to one-to-one functions, any output of an $l$-bounded surjective function admits at most $l$ pre-images. The simplest example is the square map $x\mapsto x^2$ over $\mathbb F_p$ for a prime $p\ge 3$, which is (obviously) $2$-bounded surjective. When working over $\mathbb F_p^n$ for $n\ge 2$, we set up bounded surjective functions by re-considering the recent results proposed by Grassi, Onofri, Pedicini and Sozzi at FSE/ToSC 2022 as starting points. Given a quadratic local map $F:\mathbb F_p^m \rightarrow \mathbb F_p$ for $m\in\{1,2,3\}$, they proved that the shift-invariant non-linear function over $\mathbb F_p^n$ defined as $\mathcal S_F(x_0, x_1, \ldots, x_{n-1}) = y_0\| y_1\| \ldots \| y_{n-1}$ where $y_i := F(x_i, x_{i+1})$ is never invertible for any $n\ge 2\cdot m-1$. Here, we prove that - the quadratic function $F:\mathbb F_p^m \rightarrow \mathbb F_p$ for $m\in\{1,2\}$ that minimizes the probability of having a collision for $\mathcal S_F$ over $\mathbb F_p^n$ is of the form $F(x_0, x_1) = x_0^2 + x_1$ (or equivalent); - the function $\mathcal S_F$ over $\mathbb F_p^n$ defined as before via $F(x_0, x_1) = x_0^2 + x_1$ (or equivalent) is $2^n$-bounded surjective. As concrete applications, we propose modified versions of the MPC-friendly schemes MiMC, HadesMiMC, and (partially) Hydra, and of the FHE-friendly schemes Masta, Pasta, and Rubato. By instantiating them with the bounded surjective quadratic functions proposed in this paper, we are able to improve the security and/or the performance in the target applications/protocols.
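The $2^n$-bounded surjectivity of $\mathcal S_F$ is easy to verify empirically for tiny parameters. A minimal Python sketch (assuming cyclic indexing $y_i = F(x_i, x_{(i+1) \bmod n})$, with illustrative choices of $p$ and $n$) exhaustively counts pre-images; each choice of $x_0$ determines the rest of a pre-image, and consistency reduces to a degree-$2^n$ polynomial equation in $x_0$, which is why at most $2^n$ pre-images exist:

```python
# Exhaustively count pre-images of S_F over F_p^n for F(x0,x1) = x0^2 + x1.
from itertools import product
from collections import Counter

p, n = 7, 3

def S_F(xs):
    return tuple((xs[i] ** 2 + xs[(i + 1) % n]) % p for i in range(n))

preimages = Counter(S_F(xs) for xs in product(range(p), repeat=n))
print("max #pre-images:", max(preimages.values()), "<= 2^n =", 2 ** n)
assert max(preimages.values()) <= 2 ** n
```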
Last updated:  2022-10-04
Multiple Modular Unique Factorization Domain Subset Product with Errors
Trey Li
We propose the multiple modular subset product with errors problem over unique factorization domains and give a search-to-decision reduction as well as an average-case-solution to worst-case-solution reduction for it.
Last updated:  2023-02-15
Fully Adaptive Decentralized Multi-Authority ABE
Pratish Datta, Ilan Komargodski, Brent Waters
Decentralized multi-authority attribute-based encryption (MA-ABE) is a distributed generalization of standard (ciphertext-policy) attribute-based encryption where there is no trusted central authority: any party can become an authority and issue private keys, and there is no requirement for any global coordination other than the creation of an initial set of common reference parameters. We present the first multi-authority attribute-based encryption schemes that are provably fully adaptively secure. Namely, our construction is secure against an attacker that may corrupt some of the authorities as well as perform key queries adaptively throughout the life-time of the system. Our main construction relies on a prime order bilinear group where the $k$-linear assumption holds as well as on a random oracle. Along the way, we present a conceptually simpler construction relying on a composite order bilinear group with standard subgroup decision assumptions as well as on a random oracle. Prior to this work, there was no construction that could resist adaptive corruptions of authorities, no matter the assumptions used. In fact, we point out that even standard complexity leveraging style arguments do not work in the multi-authority setting.
Last updated:  2022-10-03
Power Residue Symbol Order Detecting Algorithm for Subset Product over Algebraic Integers
Trey Li
We give a probabilistic polynomial time algorithm for the high $\mathbb{F}_\ell$-rank subset product problem over the ring of integers $O_K$ of any algebraic number field $K$ such that $O_K$ is a principal ideal domain and the $\ell$-th power residue symbol in $O_K$ is computable in polynomial time, for some rational prime $\ell$.
Last updated:  2022-11-06
MPC as a service using Ethereum Registry Smart Contracts - dCommon CIP
Matt Shams (Anis), Bingsheng Zhang, Justinas Zaliaduonis
In this paper we introduce dCommon, auditable and programmable MPC as a service for solving multichain governance coordination problems throughout DeFi and Web3, along with its on-chain part, the Common Interest Protocol (CIP), an autonomous and immutable registry smart contract suite. CIP enables arbitrary business logic for off-chain computations using dCommon's network/subnetworks with Ethereum smart contracts. In Stakehouse, CIP facilitates trustless recovery of signing keys and key management for Ethereum validators on demand. The paper gives a formal overview of the MPC system's cryptographic mechanics and of the smart contract business logic for the Stakehouse CIP (SH-CIP) application implementation.
Last updated:  2022-10-02
Jacobi Symbol Parity Checking Algorithm for Subset Product
Trey Li
It is well known that the subset product problem is NP-hard. We give a probabilistic polynomial time algorithm for the special case of high $\mathbb{F}_2$-rank.
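The starting point of such a parity check is that the Jacobi symbol is completely multiplicative in its top argument and depends only on that argument mod N, so each symbol evaluation yields one $\mathbb{F}_2$-linear constraint on the subset's indicator bits. A minimal Python sketch of this standard fact (not the paper's full high-rank algorithm; the modulus and set are illustrative):

```python
# jacobi(prod_i a_i^{b_i} mod N, N) = prod_i jacobi(a_i, N)^{b_i}, so
# mapping symbols {+1, -1} to F_2 values {0, 1} gives the parity
# constraint: sum_i b_i*[jacobi(a_i,N) == -1] = [jacobi(t,N) == -1] mod 2.
from sympy.ntheory import jacobi_symbol
from math import gcd, prod
import random

N = 1009 * 1013                        # odd modulus (toy)
a = []
while len(a) < 8:                      # sample elements coprime to N
    x = random.randrange(2, N)
    if gcd(x, N) == 1:
        a.append(x)
b = [random.randrange(2) for _ in a]   # hidden indicator bits
t = prod(x for x, bit in zip(a, b) if bit) % N

lhs = jacobi_symbol(t, N)
rhs = prod(jacobi_symbol(x, N) ** bit for x, bit in zip(a, b))
assert lhs == rhs
print("one F_2-linear parity constraint on the bits b_i holds")
```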
Last updated:  2022-10-03
BLOOM: Bimodal Lattice One-Out-of-Many Proofs and Applications
Vadim Lyubashevsky, Ngoc Khanh Nguyen
We give a construction of an efficient one-out-of-many proof system, in which a prover shows that he knows the pre-image of one element in a set, based on the hardness of lattice problems. The construction employs the recent zero-knowledge framework of Lyubashevsky et al. (Crypto 2022) together with a recursive procedure improving over prior lattice-based one-out-of-many proofs, and a novel rejection sampling proof that allows using the efficient bimodal rejection sampling throughout the protocol. Using these new primitives and techniques, we give instantiations of the most compact lattice-based ring and group signature schemes. The improvement in signature sizes over prior works ranges between $25\%$ and $2$X. Perhaps of even more significance, the size of the user public keys, which need to be stored somewhere publicly accessible in order for ring signatures to be meaningful, is reduced by factors ranging from $7$X to $15$X. In what could be of independent interest, we also provide noticeably improved proofs for integer relations which, together with one-out-of-many proofs, are key components of confidential payment systems.
Last updated:  2022-10-01
Single-shuffle Full-open Card-based Protocols Imply Private Simultaneous Messages Protocols
Kazumasa Shinagawa, Koji Nuida
In this note, we introduce a class of card-based protocols called single-shuffle full-open (SSFO) protocols and show that any SSFO protocol for a function $f: \{0,1\}^n \rightarrow [d]$ using $k$ cards can be generically converted to a private simultaneous messages (PSM) protocol for $f$ with $(nk)$-bit communication. As an example application, we obtain an 18-bit PSM protocol for the three-bit equality function from the six-card trick (Heather-Schneider-Teague, Formal Aspects of Computing 2014), which is an SSFO protocol in our terminology. We then generalize this result to another class of protocols, which we name single-shuffle single-branch (SSSB) protocols, and which contains SSFO protocols as a subclass. As an example application, we obtain an 8-bit PSM protocol for the two-bit AND function from the four-card trick (Mizuki-Kumamoto-Sone, ASIACRYPT 2012), which is an SSSB protocol in our terminology.
Last updated:  2022-10-01
Subset Product with Errors over Unique Factorization Domains and Ideal Class Groups of Dedekind Domains
Trey Li
It has been half a century since the first several NP-complete problems were discovered by Cook, Karp and Levin in the early 1970s. To date, thousands of NP-complete problems have been found, most of them of a combinatorial flavor. We discover new possibilities in purer mathematics and introduce more structure to the theory of computation. We propose a family of abstract problems related to the subset product problem. To describe the hardness of abstract problems, we propose a new hardness notion called global-case hardness, which is stronger than worst-case hardness and incomparable with average-case hardness. It asks whether all prespecified subproblems of a problem are NP-hard. We prove that our problems are generally NP-hard in all (or a wide range of) unique factorization domains with efficient multiplication and in all (or a wide range of) ideal class groups of Dedekind domains with efficient ideal multiplication.
Last updated:  2022-09-30
Unifying Quantum Verification and Error-Detection: Theory and Tools for Optimisations
Theodoros Kapourniotis, Elham Kashefi, Dominik Leichtle, Luka Music, Harold Ollivier
With the recent availability of cloud quantum computing services, the question of verifying quantum computations delegated by a client to a quantum server is becoming of practical interest. While Verifiable Blind Quantum Computing (VBQC) has emerged as one of the key approaches to address this challenge, current protocols still need to be optimised before they are truly practical. To this end, we establish a fundamental correspondence between error-detection and verification and provide sufficient conditions to both achieve security in the Abstract Cryptography framework and optimise resource overheads of all known VBQC-based protocols. As a direct application, we demonstrate how to systematise the search for new efficient and robust verification protocols for $\mathsf{BQP}$ computations. While we have chosen Measurement-Based Quantum Computing (MBQC) as the working model for the presentation of our results, one could expand the domain of applicability of our framework via direct known translation between the circuit model and MBQC.
Last updated:  2023-10-06
Fast and Clean: Auditable high-performance assembly via constraint solving
Amin Abdulrahman, Hanno Becker, Matthias J. Kannwischer, and Fabien Klein
Handwritten assembly is a widely used tool in the development of high-performance cryptography: By providing full control over instruction selection, instruction scheduling, and register allocation, highest performance can be unlocked. On the flip side, developing handwritten assembly is not only time-consuming, but the artifacts produced also tend to be difficult to review and maintain – threatening their suitability for use in practice. In this work, we present SLOTHY (Super (Lazy) Optimization of Tricky Handwritten assemblY), a framework for the automated superoptimization of assembly with respect to instruction scheduling, register allocation, and loop optimization (software pipelining): With SLOTHY, the developer controls and focuses on algorithm and instruction selection, providing a readable “base” implementation in assembly, while SLOTHY automatically finds optimal and traceable instruction scheduling and register allocation strategies with respect to a model of the target (micro)architecture. We demonstrate the flexibility of SLOTHY by instantiating it with models of the Cortex-M55, Cortex-M85, Cortex-A55 and Cortex-A72 microarchitectures, implementing the Armv8.1-M+Helium and AArch64+Neon architectures. We use the resulting tools to optimize three workloads: First, for Cortex-M55 and Cortex-M85, a radix-4 complex Fast Fourier Transform (FFT) in fixed-point and floating-point arithmetic, fundamental in Digital Signal Processing. Second, on Cortex-M55, Cortex-M85, Cortex-A55 and Cortex-A72, the instances of the Number Theoretic Transform (NTT) underlying CRYSTALS-Kyber and CRYSTALS-Dilithium, two recently announced winners of the NIST Post-Quantum Cryptography standardization project. Third, for Cortex-A55, the scalar multiplication for the elliptic curve key exchange X25519. The SLOTHY-optimized code matches or beats the performance of prior art in all cases, while maintaining compactness and readability.
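At a much smaller scale, the scheduling task SLOTHY hands to a constraint solver can be illustrated by brute force: pick the dependency-respecting instruction order that minimizes stalls under a latency model. The following Python toy uses a hypothetical four-instruction DAG and a uniform latency-3, single-issue, in-order model; none of this reflects SLOTHY's actual microarchitecture models, only the shape of the search problem:

```python
# Instruction scheduling as search: enumerate topological orders of a
# tiny dependency DAG and score each by its finish cycle.
from itertools import permutations

deps = {0: [], 1: [0], 2: [0], 3: []}   # instrs 1,2 consume instr 0's result
LATENCY = 3                              # result ready 3 cycles after issue

def finish_cycle(order):
    ready, t = {}, 0
    for ins in order:
        start = max([t] + [ready[d] for d in deps[ins]])  # wait for operands
        ready[ins] = start + LATENCY
        t = start + 1                                     # in-order, 1 issue/cycle
    return max(ready.values())

valid = [o for o in permutations(deps)
         if all(o.index(d) < o.index(i) for i in deps for d in deps[i])]
best = min(valid, key=finish_cycle)
print("best order:", best, "finishes at cycle", finish_cycle(best))
print("naive order finishes at cycle", finish_cycle((0, 1, 2, 3)))
```

Filling the stall after instruction 0 with the independent instruction 3 saves a cycle over the naive order; SLOTHY performs this kind of optimization, plus register allocation and software pipelining, over real kernels with hundreds of instructions.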
Last updated:  2022-09-30
Private Certifier Intersection
Bishakh Chandra Ghosh, Sikhar Patranabis, Dhinakaran Vinayagamurthy, Venkatraman Ramakrishna, Krishnasuri Narayanam, Sandip Chakraborty
We initiate the study of Private Certifier Intersection (PCI), which allows mutually distrusting parties to establish a trust basis for cross-validation of claims if they have one or more trust authorities (certifiers) in common. This is one of the essential requirements for verifiable presentations in Web 3.0, since it provides additional privacy without compromising on decentralization. A PCI protocol allows two or more parties holding certificates to identify a common set of certifiers while additionally validating the certificates issued by such certifiers, without leaking any information about the certifiers not in the output intersection. In this paper, we formally define the notion of multi-party PCI in the Simplified-UC framework for two different settings depending on whether certificates are required for any of the claims (called PCI-Any) or all of the claims (called PCI-All). We then design and implement two provably secure and practically efficient PCI protocols supporting validation of digital signature-based certificates: a PCI-Any protocol for ECDSA-based certificates and a PCI-All protocol for BLS-based certificates. The technical centerpiece of our proposals is the first secret-sharing-based MPC framework supporting efficient computation of elliptic curve-based arithmetic operations, including elliptic curve pairings, in a black-box way. We implement this framework by building on top of the well-known MP-SPDZ library using OpenSSL and RELIC for elliptic curve operations, and use this implementation to benchmark our proposed PCI protocols in the LAN and WAN settings. In an intercontinental WAN setup with parties located in different continents, our protocols execute in less than a minute on input sets of size 40, which demonstrates the practicality of our proposed solutions.
Last updated:  2022-10-19
On the Invalidity of Lin16/Lin17 Obfuscation Schemes
Hu Yupu, Dong Siyue, Wang Baocang, Dong Xingting
Indistinguishability obfuscation (IO) is at the frontier of cryptography research. The Lin16/Lin17 obfuscation schemes are well-known progress towards simplifying the obfuscation mechanism. Their basic structure can be described as follows: to obfuscate a polynomial-time-computable Boolean function $c(x)$, first divide it into a group of component functions with low degree and low locality by using randomized encoding, and then hide the shapes of these component functions by using constant-degree multilinear maps (rather than polynomial-degree ones). In this short paper we point out that the Lin16/Lin17 schemes are invalid. In more detail, they cannot achieve reusability; therefore they are not true IO schemes, but rather garbling schemes, which are one-time schemes. Besides, this short paper presents more observations to show that component functions cannot be overly simple.
Last updated:  2022-10-04
Garrison: A Novel Watchtower Scheme for Bitcoin
Arash Mirzaei, Amin Sakzad, Jiangshan Yu, Ron Steinfeld
In this paper, we propose Garrison, a payment channel with watchtower for Bitcoin. In this scheme, the storage requirements of both channel parties and their watchtower are O(log N), with N being the maximum number of channel updates. Furthermore, using properties of the adaptor signature, Garrison avoids state duplication, meaning that both parties store the same version of transactions for each state, and hence the number of transactions does not grow exponentially with the number of applications built on top of the channel.
Last updated:  2023-02-15
Addax: A fast, private, and accountable ad exchange infrastructure
Ke Zhong, Yiping Ma, Yifeng Mao, Sebastian Angel
This paper proposes Addax, a fast, verifiable, and private online ad exchange. When a user visits an ad-supported site, Addax runs an auction similar to those of leading exchanges: Addax requests bids, selects the winner, collects payment, and displays the ad to the user. A key distinction is that bids in Addax’s auctions are kept private and the outcome of the auction is publicly verifiable. Addax achieves these properties by adding public verifiability to the affine aggregatable encodings in Prio (NSDI’17) and by building an auction protocol out of them. Our implementation of Addax over WAN with hundreds of bidders can run roughly half as many auctions per second as a non-private and non-verifiable exchange, while delivering ads to users in under 600 ms with little additional bandwidth overhead. This efficiency makes Addax the first architecture capable of bringing transparency to this otherwise opaque ecosystem.
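For intuition, the privacy core can be illustrated with plain additive secret sharing, the basic idea underlying Prio-style affine aggregatable encodings. The Python sketch below is illustrative only: it omits Addax's auction logic and verifiability machinery, and the modulus and bid values are invented.

    # Toy additive sharing: each bidder splits its bid into random shares,
    # one per server; servers aggregate shares locally, and only the sum
    # of all bids is ever reconstructed. Illustrative only.
    import secrets

    P = 2**61 - 1  # field modulus (toy choice)

    def share(x, n=2):
        parts = [secrets.randbelow(P) for _ in range(n - 1)]
        parts.append((x - sum(parts)) % P)
        return parts

    bids = [120, 300, 95]  # hypothetical bids
    totals = [0, 0]
    for bid in bids:
        for i, s in enumerate(share(bid)):
            totals[i] = (totals[i] + s) % P
    print(sum(totals) % P == sum(bids))  # True: only the aggregate leaks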
Last updated:  2022-10-18
BLEACH: Cleaning Errors in Discrete Computations over CKKS
Nir Drucker, Guy Moshkowich, Tomer Pelleg, Hayim Shaul
Approximated homomorphic encryption (HE) schemes such as CKKS are commonly used to perform computations over encrypted real numbers. It is commonly assumed that these schemes are not “exact” and thus cannot execute circuits of unbounded depth over discrete sets, such as binary or integer numbers, without error overflows. Such circuits are usually executed using BGV and B/FV for integers and TFHE for binary numbers. This artificial separation can cause users to favor one scheme over another for a given computation, without even exploring other, perhaps better, options. We show that by treating step functions as “clean-up” utilities and by leveraging the SIMD capabilities of CKKS, we can extend the homomorphic encryption toolbox with efficient tools. These tools use CKKS to run unbounded circuits that operate over binary and small-integer elements, and even combine these circuits with fixed-point real-number circuits. We demonstrate the results using the Turing-complete Conway’s Game of Life. In our evaluation, for boards of size 128x128, these tools achieved an order-of-magnitude lower latency than previous implementations using other HE schemes. We argue and demonstrate that for large enough real-world inputs, performing binary circuits over CKKS, while treating it as an “exact” scheme, results in comparable or even better performance than using other schemes tailored for similar inputs.
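The "clean-up" idea can be simulated in the clear: a polynomial with attracting fixed points at 0 and 1 pushes a noise-perturbed bit back toward the nearest valid value. The polynomial below is a standard smooth step chosen for illustration; it is not necessarily the approximation the paper uses.

    # Plaintext simulation of bit clean-up: h(x) = 3x^2 - 2x^3 has
    # attracting fixed points at 0 and 1, so iterating it snaps a noisy
    # encrypted bit back to {0, 1}, stopping error from accumulating.
    def h(x):
        return 3 * x**2 - 2 * x**3

    noisy_bit = 0.93  # an encrypted "1" drifted by approximation noise
    for _ in range(3):
        noisy_bit = h(noisy_bit)
    print(noisy_bit)  # ~0.9999998, effectively restored to 1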
Last updated:  2022-12-07
Toward a Post-Quantum Zero-Knowledge Verifiable Credential System for Self-Sovereign Identity
Simone Dutto, Davide Margaria, Carlo Sanna, Andrea Vesco
The advent of quantum computers has brought great interest in post-quantum cryptography and in the migration to quantum-resistant systems. Protocols for Self-Sovereign Identity (SSI) are among the fundamental scenarios touched by this need. The core concept of SSI is to move the control of digital identity from third-party identity providers directly to individuals. This is achieved through Verifiable Credentials (VCs) supporting anonymity and selective disclosure. In turn, the implementation of VCs requires cryptographic signature schemes compatible with a proper Zero-Knowledge Proof (ZKP) framework. We describe the two main ZKP VC schemes based on classical cryptographic assumptions, that is, the signature scheme with efficient protocols of Camenisch and Lysyanskaya, which is based on the strong RSA assumption, and the BBS+ scheme of Boneh, Boyen and Shacham, which is based on the strong Diffie-Hellman assumption. Since these schemes are not quantum-resistant, we select as one possible post-quantum alternative a lattice-based scheme proposed by Jeudy, Roux-Langlois, and Sander, and we try to identify the open problems for achieving VCs suitable for selective disclosure, non-interactive renewal mechanisms, and efficient revocation.
Last updated:  2023-10-07
Efficient Asymmetric Threshold ECDSA for MPC-based Cold Storage
Constantin Blokh, Nikolaos Makriyannis, and Udi Peled
Motivated by applications to cold-storage solutions for ECDSA-based cryptocurrencies, we present a new threshold ECDSA protocol between $n$ ``online'' parties and a single ``offline'' (aka.~cold) party. The primary objective of this protocol is to minimize the exposure of the offline party in terms of connected time and bandwidth. This is achieved through a unique asymmetric signing phase, in which the majority of computation, communication, and interaction is handled by the online parties. Our protocol supports a very efficient non-interactive pre-signing stage; the parties calculate preprocessed data for future signatures where each party (offline or online) sends a single independently-generated short message per future signature. Then, to calculate the signature, the offline party simply receives a single short message (approx.~300B) and outputs the signature. All previous ECDSA protocols either have high exposure for all parties, or rely on non-standard coding assumptions. (We assume strong RSA, DCR, DDH and enhanced unforgeability of ECDSA.) To achieve the above, we present a new batching technique for proving in zero-knowledge that the plaintexts of practically any number of Paillier ciphertexts all lie in a given range. The cost of the resulting batch proof is very close to that of the non-batch proof for a single ciphertext, and the technique is applicable to arbitrary Schnorr-style protocols.
Last updated:  2022-10-01
Daric: A Storage Efficient Payment Channel With Penalization Mechanism
Arash Mirzaei, Amin Sakzad, Jiangshan Yu, Ron Steinfeld
Lightning Network (LN), the most widely deployed payment channel for Bitcoin, requires channel parties to generate and store distinct revocation keys for all n payments of a channel to resolve fraudulent channel closures. To reduce the required storage in a payment channel, eltoo introduces a new signature type for Bitcoin to enable payment versioning. This allows a channel party to revoke all old payments by using a payment with a higher version number, reducing the storage complexity from O(n) to O(1). However, eltoo fails to achieve bounded closure, enabling a dishonest channel party to significantly delay the channel closure process. Eltoo also lacks a punishment mechanism, which may incentivize profit-driven channel parties to close a payment channel with an old state, to their own advantage. This paper introduces Daric, a payment channel with unlimited lifetime for Bitcoin that achieves optimal storage and bounded closure. Moreover, Daric implements a punishment mechanism and simultaneously avoids the methods other schemes commonly use to enable punishment: 1) state duplication, which leads to an exponential increase in the number of transactions as applications are stacked on top of each other, or 2) dedicated designs of adaptor signatures, which introduce compatibility issues with BLS or most post-quantum digital signatures. We also formalise Daric and prove its security in the Universal Composability model.
Last updated:  2023-02-19
What Can Cryptography Do For Decentralized Mechanism Design?
Elaine Shi, Hao Chung, Ke Wu
Recent works of Roughgarden (EC'21) and Chung and Shi (SODA'23) initiate the study of a new decentralized mechanism design problem called transaction fee mechanism design (TFM). Unlike the classical mechanism design literature, in the decentralized environment, even the auctioneer (i.e., the miner) can be a strategic player, and it can even collude with a subset of the users facilitated by binding side contracts. Chung and Shi showed two main impossibility results that rule out the existence of a dream TFM. First, any TFM that provides incentive compatibility for individual users and miner-user coalitions must always have zero miner revenue, no matter whether the block size is finite or infinite. Second, assuming finite block size, no non-trivial TFM can simultaneously provide incentive compatibility for any individual user, and for any miner-user coalition. In this work, we explore what new models and meaningful relaxations can allow us to circumvent the impossibility results of Chung and Shi. Besides today’s model that does not employ cryptography, we introduce a new MPC-assisted model where the TFM is implemented by a joint multi-party computation (MPC) protocol among the miners. We prove several feasibility and infeasibility results for achieving strict and approximate incentive compatibility, respectively, in the plain model as well as the MPC-assisted model. We show that while cryptography is not a panacea, it indeed allows us to overcome some impossibility results pertaining to the plain model, leading to non-trivial mechanisms with useful guarantees that are otherwise impossible in the plain model. Our work is also the first to characterize the mathematical landscape of transaction fee mechanism design under approximate incentive compatibility, as well as in a cryptography-assisted model.
Last updated:  2022-09-28
Improving the Efficiency of Report and Trace Ring Signatures
Xavier Bultel, Ashley Fraser, Elizabeth A. Quaglia
Ring signatures allow signers to produce verifiable signatures and remain anonymous within a set of signers (i.e., the ring) while doing so. They are well-suited to protocols that target anonymity as a primary goal, for example, anonymous cryptocurrencies. However, standard ring signatures do not ensure that signers are held accountable if they act maliciously. Fraser and Quaglia (CANS'21) introduced a ring signature variant that they called report and trace ring signatures which balances the anonymity guarantee of standard ring signatures with the need to hold signers accountable. In particular, report and trace ring signatures introduce a reporting system whereby ring members can report malicious message/signature pairs. A designated tracer can then revoke the signer's anonymity if, and only if, a ring member submits a report to the tracer. Fraser and Quaglia present a generic construction of a report and trace ring signature scheme and outline an instantiation for which it is claimed that the complexity of signing is linear in the size of the ring $|R|$. In this paper, we introduce a new instantiation of Fraser and Quaglia's generic report and trace ring signature construction. Our instantiation uses a pairing-based variant of ElGamal that we define. We demonstrate that our instantiation is more efficient. In fact, we highlight that the efficiency of Fraser and Quaglia's instantiation omits a scaling factor of $\lambda$ where $\lambda$ is a security parameter. As such, the complexity of signing for their instantiation grows linearly in $\lambda \cdot |R|$. Our instantiation, on the other hand, achieves signing complexity linear in $|R|$. We also introduce a new pairing-free report and trace ring signature construction reaching a similar signing complexity. Whilst this construction requires some additional group exponentiations, it can be instantiated over any prime order group for which the Decisional Diffie-Hellman assumption holds.
Last updated:  2022-09-28
Bet-or-Pass: Adversarially Robust Bloom Filters
Moni Naor, Noa Oved
A Bloom filter is a data structure that maintains a succinct and probabilistic representation of a set $S\subseteq U$ of elements from a universe $U$. It supports approximate membership queries. The price of this succinctness is allowing some error, namely false positives: for any $x\notin S$, it might answer `Yes' but with a small (non-negligible) probability. When dealing with such data structures in adversarial settings, we need to define the correctness guarantee and formalize the requirement that bad events happen infrequently and that false positives are appropriately distributed. Recently, several papers investigated this topic, suggesting different robustness definitions. In this work we unify this line of research and propose several robustness notions for Bloom filters that allow for adaptive queries. The goal is that a robust Bloom filter should behave like a random biased coin even against an adaptive adversary. The robustness definitions are expressed by the type of test that the Bloom filter should withstand. We explore the relationships between these notions and highlight the notion of Bet-or-Pass as capturing the desired properties of such a data structure.
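For readers unfamiliar with the data structure, a minimal Bloom filter and the adaptive attack surface look as follows in Python; the sizes and hash choices are arbitrary, and this is not the paper's construction.

    # Minimal Bloom filter: k hashes set k bits per element; a query says
    # 'Yes' iff all k bits are set, so non-members can collide with the
    # stored set -> false positives. Illustrative parameters only.
    import hashlib

    M, K = 64, 3  # bit-array size and number of hash functions

    def idx(x, i):
        d = hashlib.sha256(f"{i}:{x}".encode()).digest()
        return int.from_bytes(d[:4], "big") % M

    class Bloom:
        def __init__(self):
            self.bits = [0] * M
        def add(self, x):
            for i in range(K):
                self.bits[idx(x, i)] = 1
        def query(self, x):
            return all(self.bits[idx(x, i)] for i in range(K))

    b = Bloom()
    for x in ("alice", "bob", "carol"):
        b.add(x)
    print(b.query("alice"))  # True: members always answer 'Yes'
    # An adaptive adversary can search for false positives and replay
    # them -- the behaviour the robustness notions must withstand.
    hits = [x for x in map(str, range(2000)) if b.query(x)]
    print(len(hits), "false positives found by adaptive search")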
Last updated:  2022-09-28
sMGM: parameterizable AEAD-mode
Liliya Akhmetzyanova, Evgeny Alekseev, Alexandra Babueva, Andrey Bozhko, Stanislav Smyshlyaev
The paper introduces a new AEAD mode called sMGM (strong Multilinear Galois Mode). The proposed construction can be treated as an extension of the Russian standardized MGM mode and its modification, the MGM2 mode, presented at the CTCrypt'21 conference. The distinctive feature of the new mode is that it provides an interface allowing one to choose the specific security properties required for a certain application. Namely, the mode has additional parameters allowing one to switch misuse-resistance or re-keying mechanisms on or off. The sMGM mode consists of two main "building blocks": a CTR-style gamma generation function with incorporated re-keying, and a multilinear function that lies at the core of the original MGM mode. Different ways of using these functions lead to different sets of security properties. Such an approach to constructing a parameterizable AEAD mode allows for a reduced code size, which can be crucial for constrained devices. We provide security bounds for the proposed mode. We focus on proving the misuse-resistance of the sMGM mode, since the standard security properties were already analyzed during the development of the original MGM and MGM2 modes.
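The multilinear core can be illustrated in a few lines: the tag is a multilinear function of the message blocks under secret coefficients. The Python sketch below works over a prime field for brevity; the actual MGM-family modes work over GF(2^128) with coefficients derived from a block cipher, so this is not the standardized construction.

    # Toy multilinear authentication core: tag = sum(H_i * M_i) mod p.
    # Illustrative only; MGM/sMGM work in GF(2^128) with block-cipher-
    # derived coefficients and combine this with CTR-style encryption.
    import secrets

    P = 2**61 - 1  # prime field standing in for GF(2^128)

    def multilinear_tag(coeffs, blocks):
        assert len(coeffs) == len(blocks)
        return sum(h * m for h, m in zip(coeffs, blocks)) % P

    msg = [1001, 2002, 3003]                  # message blocks (toy)
    hs = [secrets.randbelow(P) for _ in msg]  # secret coefficients
    print(multilinear_tag(hs, msg))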
Last updated:  2022-09-28
Bool Network: An Open, Distributed, Secure Cross-chain Notary Platform
Zeyuan Yin, Bingsheng Zhang, Jingzhong Xu, Kaiyu Lu, Kui Ren
With the advancement of blockchain technology, hundreds of cryptocurrencies have been deployed. The bloom of heterogeneous blockchain platforms brings an emerging problem: blockchains are typically isolated systems, so how can digital assets be securely identified and/or transferred across them? There are three main kinds of cross-chain approaches: sidechains/relays, notaries, and hashed time-lock contracts. Among them, notary-based cross-chain solutions have the best compatibility and user-friendliness, but they are typically centralized. To resolve this issue, we present Bool Network -- an open, distributed, secure cross-chain notary platform powered by MPC-based distributed key management over evolving hidden committees. More specifically, to protect the identities of the committee members, we propose a ring verifiable random function (Ring VRF) protocol, where the real public key of a VRF instance can be hidden among a ring; this protocol may be of independent interest to other cryptographic protocols. Furthermore, all key management procedures are executed in a TEE, such as Intel SGX, to ensure the privacy and integrity of partial key components. A prototype of the proposed Bool Network is implemented in Rust, using the Polkadot Substrate framework.
Last updated:  2022-12-22
Exploring RNS for Isogeny-based Cryptography
David Jacquemin, Ahmet Can Mert, Sujoy Sinha Roy
Isogeny-based cryptography suffers from long running times because it requires a great amount of large-integer arithmetic. The Residue Number System (RNS) can compensate for that drawback by making computation more efficient via parallelism. However, performing a modular reduction by a large prime which is not part of the RNS base is very expensive. In this paper, we propose a new fast and efficient modular reduction algorithm using RNS. We evaluate our modular reduction method by realizing a cryptoprocessor for isogeny-based SIDH key exchange. On a Xilinx Ultrascale+ FPGA, the proposed cryptoprocessor consumes 151,009 LUTs, 143,171 FFs and 1,056 DSPs. It achieves a 250 MHz clock frequency and finishes the key exchange for SIDH in 3.8 and 4.9 ms.
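The RNS principle, and the difficulty the paper addresses, can be seen in a short Python sketch; the moduli below are toy choices. Channel-wise arithmetic parallelizes, but reduction modulo a large prime outside the base has no such channel-wise form, which is exactly the step the proposed algorithm accelerates.

    # Toy RNS: represent integers by residues modulo pairwise-coprime
    # moduli; add/multiply channel-wise, rebuild with the CRT. Reduction
    # modulo a prime outside the base is NOT channel-wise -- that is the
    # expensive operation the paper targets. Illustrative moduli only.
    from math import prod

    BASE = [2**13 - 1, 2**17 - 1, 2**19 - 1]  # pairwise coprime (primes)

    def to_rns(x):
        return [x % m for m in BASE]

    def rns_mul(a, b):
        return [(x * y) % m for x, y, m in zip(a, b, BASE)]

    def from_rns(res):
        M = prod(BASE)
        return sum(r * (M // m) * pow(M // m, -1, m)
                   for r, m in zip(res, BASE)) % M

    a, b = 123456789, 987654321
    assert from_rns(rns_mul(to_rns(a), to_rns(b))) == (a * b) % prod(BASE)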
Last updated:  2022-09-28
Round-Optimal Black-Box Secure Computation from Two-Round Malicious OT
Yuval Ishai, Dakshita Khurana, Amit Sahai, Akshayaram Srinivasan
We give round-optimal black-box constructions of two-party and multiparty protocols in the common random/reference string (CRS) model, with security against malicious adversaries, based on any two-round oblivious transfer (OT) protocol in the same model. Specifically, we obtain two types of results. 1) Two-party protocol: we give a (two-round) two-sided NISC protocol that makes black-box use of two-round (malicious-secure) OT in the CRS model. In contrast to the standard setting of non-interactive secure computation (NISC), two-sided NISC allows communication from both parties in each round and delivers the output to both parties at the end of the protocol. Prior black-box constructions of two-sided NISC relied on idealized setup assumptions such as OT correlations, or were proven secure in the random oracle model. 2) Multiparty protocol: we give a three-round secure multiparty computation protocol for an arbitrary number of parties making black-box use of a two-round OT in the CRS model. The round optimality of this construction follows from a black-box impossibility proof of Applebaum et al. (ITCS 2020). Prior constructions either required the use of random oracles, or were based on two-round malicious-secure OT protocols that satisfied additional security properties.
Last updated:  2022-10-03
On a Conjecture From a Failed CryptoAnalysis
Shengtong Zhang
Let $P(x, y)$ be a bivariate polynomial with coefficients in $\mathbb{C}$. Form the $n \times n$ matrices $L_n$ whose elements are defined by $P(i, j)$. Define the matrices $M_n = I_n - L_n $. We show that $\mu_n = \det(M_n)$ is a polynomial in $n$, thus answering a conjecture of Naccache and Yifrach-Stav.
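The statement is easy to sanity-check numerically. The following sketch (using SymPy; the choice $P(x, y) = x + 2y$ and the degree bound are ours, for illustration) interpolates $\mu_n$ from small $n$ and verifies that further determinants lie on the same polynomial.

    # Numeric sanity check (not a proof): build M_n = I_n - L_n with
    # (L_n)_{ij} = P(i, j) for 1 <= i, j <= n, interpolate det(M_n) at
    # small n, and confirm extra points lie on the same polynomial.
    import sympy as sp

    n_ = sp.symbols("n")
    Pxy = lambda i, j: i + 2 * j  # example bivariate polynomial

    def mu(n):
        L = sp.Matrix(n, n, lambda i, j: Pxy(i + 1, j + 1))  # 1-indexed
        return (sp.eye(n) - L).det()

    pts = [(n, mu(n)) for n in range(1, 9)]
    poly = sp.interpolate(pts[:6], n_)  # degree <= 5 fit to first 6
    assert all(poly.subs(n_, n) == v for n, v in pts)
    print(sp.expand(poly))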
Last updated:  2023-06-20
ZEBRA: SNARK-based Anonymous Credentials for Practical, Private and Accountable On-chain Access Control
Deevashwer Rathee, Guru Vamsi Policharla, Tiancheng Xie, Ryan Cottone, Dawn Song
Restricting access to certified users is not only desirable for many blockchain applications, it is also legally mandated for decentralized finance (DeFi) applications to counter malicious actors. Existing solutions, however, either (i) are non-private, i.e., they reveal the link between users and their wallets to the authority granting credentials, or (ii) introduce additional trust assumptions by relying on a decentralized oracle to verify anonymous credentials (ACs). To remove additional trust in the latter approach, we propose verifying credentials on-chain in this work. We find that this approach has impractical costs with prior AC schemes, and propose a new AC scheme ZEBRA that crucially relies on zkSNARKs to provide efficient on-chain verification for the first time. In addition to the standard unlinkability property that provides privacy for users, ZEBRA also supports auditability, revocation, traceability, and theft detection, which adds accountability for malicious users and convenience for honest users to our access control solution. Even with these properties, ZEBRA reduces the gas cost incurred on the Ethereum Virtual Machine (EVM) by 14.3x when compared to Coconut [NDSS 2019], the state-of-the-art AC scheme for blockchains that only provides unlinkability. This improvement translates to a reduction in transaction fees from 176 USD to 12 USD on Ethereum in May 2023. Since 12 USD is still high for most applications, ZEBRA further drives down credential verification costs through batched verification. For a batch of 512 layer-1 and layer-2 wallets, the transaction fee on Ethereum is reduced to just 0.44 USD and 0.02 USD, respectively, which is comparable to the minimum transaction costs on Ethereum.
Last updated:  2022-09-27
Lower Bounds for the Number of Decryption Updates in Registration-Based Encryption
Mohammad Mahmoody, Wei Qi, Ahmadreza Rahimi
Registration-based encryption (RBE; Garg, Hajiabadi, Mahmoody, Rahimi, TCC'18) aims to offer what identity-based encryption (IBE) offers without the key-escrow problem, which refers to the ability of the private-key generator to obtain parties' decryption keys at will. In RBE, parties generate their own secret and public keys and register their public keys with the key curator (KC), who updates a compact public parameter after each registration. The updated public parameter can then be used to securely encrypt messages to registered identities. A major drawback of RBE, compared with IBE, is that in order to decrypt, parties might need to periodically request so-called decryption updates from the KC. Current RBE schemes require $\Omega(\log n)$ updates after $n$ registrations, while the public parameter is of length $\text{poly}(\log n)$. Clearly, it would be highly desirable to have RBEs with only, say, a constant number of updates. This leads to the following natural question: are so many (logarithmic) updates necessary for RBE schemes, or can we decrease the frequency of updates significantly? In this paper, we prove an almost tight lower bound on the number of updates in RBE schemes, as long as the times at which parties receive updates depend only on the registration times of the parties, a natural property that holds for all known RBE constructions. More generally, we prove a trade-off between the number of updates in RBEs and the length of the public parameter for any scheme with fixed update times. Indeed, we prove that for any such RBE scheme, if there are $n \geq \binom{k+d}{d+1}$ identities that receive at most $d$ updates, the public parameter needs to be of length $\Omega(k)$. As a corollary, we find that RBE systems with fixed update times and public parameters of length $\text{poly}(\log n)$ require $\Omega(\log n/\log\log n)$ decryption updates, which is optimal up to an $O(\log\log n)$ factor.
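The trade-off can be made concrete with a few lines of Python (the parameter values are ours, for illustration): reading the bound loosely, a scheme with public-parameter length about $k$ and at most $d$ updates supports at most roughly $\binom{k+d}{d+1}$ identities, so for a fixed number of identities, shrinking $k$ forces $d$ up.

    # Illustrative reading of the trade-off (not the paper's proof):
    # the smallest d with binom(k + d, d + 1) >= n, i.e. the update
    # count forced by a public parameter of length ~k for n identities.
    from math import comb

    def min_updates(n, k):
        d = 0
        while comb(k + d, d + 1) < n:
            d += 1
        return d

    n = 2**30  # about a billion identities
    for k in (16, 64, 256, 1024):
        print(k, min_updates(n, k))  # required d shrinks as k grows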
Last updated:  2023-10-28
(Inner-Product) Functional Encryption with Updatable Ciphertexts
Valerio Cini, Sebastian Ramacher, Daniel Slamanig, Christoph Striecks, and Erkan Tairi
We propose a novel variant of functional encryption which supports ciphertext updates, dubbed ciphertext-updatable functional encryption (CUFE). Such a feature further broadens the practical applicability of the functional-encryption paradigm and allows for fine-grained access control even after a ciphertext is generated. Updating ciphertexts is carried out via so-called update tokens which a dedicated party can use to convert ciphertexts. However, allowing update tokens requires some care for the security definition. Our contribution is three-fold: a) We define our new primitive with a security notion in the indistinguishability setting. Within CUFE, functional decryption keys and ciphertexts are labeled with tags such that only if the tags of the decryption key and the ciphertext match, then decryption succeeds. Furthermore, we allow ciphertexts to switch their tags to any other tag via update tokens. Such tokens are generated by the holder of the main secret key and can only be used in the desired direction. b) We present a generic construction of CUFE for any functionality as well as predicates different from equality testing on tags which relies on the existence of indistinguishability obfuscation (iO). c) We present a practical construction of CUFE for the inner-product functionality from standard assumptions (i.e., LWE) in the random-oracle model. On the technical level, we build on the recent functional-encryption schemes with fine-grained access control and linear operations on encrypted data (Abdalla et al., AC'20) and introduce an additional ciphertext-updatability feature. Proving security for such a construction turned out to be non-trivial, particularly when revealing keys for the updated challenge ciphertext is allowed. Overall, such construction enriches the set of known inner-product functional-encryption schemes with the additional updatability feature of ciphertexts.
Last updated:  2022-09-27
A Note on Reimplementing the Castryck-Decru Attack and Lessons Learned for SageMath
Rémy Oudompheng, Giacomo Pope
This note describes the implementation of the Castryck-Decru key recovery attack on SIDH using the computer algebra system SageMath. We describe in detail alternate computation methods for the isogeny steps of the original attack ($(2,2)$-isogenies from a product of elliptic curves and from a Jacobian), using explicit formulas to compute values of these isogenies at given points, motivated both by performance considerations and by working around SageMath limitations. A performance analysis is provided, with focus given to the various algorithmic and SageMath-specific improvements made during development, which in total amounted to approximately an eight-fold performance improvement compared with a naïve reimplementation of the proof of concept.
Last updated:  2023-04-18
Comparing Key Rank Estimation Methods
Rebecca Young, Luke Mather, Elisabeth Oswald
Recent works on key rank estimation methods claim that algorithmic key rank estimation is too slow, and suggest two new ideas: replacing repeat attacks with simulated attacks (PS-TH-GE rank estimation), and a shortcut rank estimation method that works directly on distinguishing vector distributions (GEEA). We take these ideas and provide a comprehensive comparison between them and a performant implementation of a classical, algorithmic ranking approach, as well as some earlier work on estimating distinguisher distributions. Our results show, in contrast to the recent work, that the algorithmic ranking approach outperforms GEEA, and that simulation based ranks are unreliable.
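For context, key rank itself is simple to state, and brute force computes it exactly at toy sizes, as in the Python sketch below (the scores are random stand-ins for real side-channel attack outputs); estimation methods exist precisely because this count is infeasible for full-size keys.

    # Exact key rank by brute force (toy sizes only): the rank of the
    # true key is the number of full keys whose combined per-subkey
    # score exceeds the true key's score. Random scores for illustration.
    import itertools, random

    random.seed(1)
    NSUB, VALS = 2, 16  # two 4-bit subkeys (a real key has far more)
    scores = [[random.random() for _ in range(VALS)] for _ in range(NSUB)]
    true_key = (3, 7)

    true_score = sum(scores[i][k] for i, k in enumerate(true_key))
    rank = sum(
        sum(scores[i][k] for i, k in enumerate(key)) > true_score
        for key in itertools.product(range(VALS), repeat=NSUB)
    )
    print(rank)  # keys an optimal enumeration would try first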