Papers updated in the last 183 days (1491 results)

Last updated:  2024-05-29
On the practical CPAD security of “exact” and threshold FHE schemes and libraries
Marina Checri, Renaud Sirdey, Aymen Boudguiga, and Jean-Paul Bultel
In their 2021 seminal paper, Li and Micciancio presented a passive attack against the CKKS approximate FHE scheme and introduced the notion of CPAD security. The current status quo is that this line of attacks does not apply to ``exact'' FHE. In this paper, we challenge this status quo by exhibiting a CPAD key recovery attack on the linearly homomorphic Regev cryptosystem which easily generalizes to other xHE schemes such as BFV, BGV and TFHE, showing that these cryptosystems are not CPAD secure in their basic form. We also show that existing threshold variants of BFV, BGV and CKKS are particularly exposed to CPAD attackers and would be CPAD-insecure without smudging noise addition after partial decryption. Finally, we successfully implement our attack against several mainstream FHE libraries and discuss a number of natural countermeasures as well as their consequences in terms of FHE practice, security and efficiency. The attack itself is quite practical as it typically takes less than an hour on an average laptop PC, requiring a few thousand ciphertexts as well as up to around a million evaluations/decryptions to perform a full key recovery.
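To see the shape of such an attack, consider a toy Regev-style scheme in which a CPAD-style decryption oracle is queried on honestly derived ciphertexts; the scaling factor at which decryption flips reveals the noise magnitude. The sketch below is purely illustrative (toy parameters of our choosing, not the paper's actual attack):

```python
import random

q, n = 257, 8                                  # toy parameters, far from secure
s = [random.randrange(q) for _ in range(n)]    # secret key

def encrypt(m):                                # m in {0, 1}
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(-4, 5)                # small noise (may be 0)
    b = (sum(x * y for x, y in zip(a, s)) + e + m * (q // 2)) % q
    return a, b

def decrypt(a, b):                             # the attacker's CPAD oracle
    phase = (b - sum(x * y for x, y in zip(a, s))) % q
    return 1 if q // 4 <= phase < 3 * q // 4 else 0

# Scaling a known encryption of 0 by k scales its noise to k*e, so the
# first k at which the oracle answers 1 leaks |e| -- per-ciphertext noise
# information that exact-FHE security arguments assume stays hidden.
a0, b0 = encrypt(0)
for k in range(1, q):
    if decrypt([x * k % q for x in a0], b0 * k % q) == 1:
        print(f"oracle flipped at k = {k}: |e| is about q/(4k) = {q // (4 * k)}")
        break
```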
Last updated:  2024-05-29
Fast Secure Computations on Shared Polynomials and Applications to Private Set Operations
Pascal Giorgi, Fabien Laguillaumie, Lucas Ottow, and Damien Vergnaud
Secure multi-party computation aims to allow a set of players to compute a given function on their secret inputs without revealing any other information than the result of the computation. In this work, we focus on the design of secure multi-party protocols for shared polynomial operations. We consider the classical model where the adversary is honest-but-curious, and where the coefficients (or any secret values) are either encrypted using an additively homomorphic encryption scheme or shared using a threshold linear secret-sharing scheme. Our protocols terminate after a constant number of rounds and minimize the number of secure multiplications. In their seminal article at PKC 2006, Mohassel and Franklin proposed constant-rounds protocols for the main operations on (shared) polynomials. In this work, we improve the fan-in multiplication of nonzero polynomials, the multi-point polynomial evaluation and the polynomial interpolation (on secret points) to reach a quasi-linear complexity (instead of quadratic in Mohassel and Franklin's work) in the degree of shared input/output polynomials. Computing with shared polynomials is a core component of designing multi-party protocols for privacy-preserving operations on private sets, like private disjointness test or private set intersection. Using our new protocols, we are able to improve the complexity of such protocols and to design the first variant which always returns a correct result.
Last updated:  2024-05-29
$\mathsf{Cougar}$: Cubic Root Verifier Inner Product Argument under Discrete Logarithm Assumption
Hyeonbum Lee, Seunghun Paik, Hyunjung Son, and Jae Hong Seo
An inner product argument (IPA) is a cryptographic primitive used to construct a zero-knowledge proof system, which is a notable privacy-enhancing technology. We propose a novel efficient IPA called $\mathsf{Cougar}$. $\mathsf{Cougar}$ features a cubic root verifier and logarithmic communication under the discrete logarithm (DL) assumption. At Asiacrypt 2022, Kim et al. proposed two square root verifier IPAs under the DL assumption. Our main objective is to overcome the limitation of square root complexity in the DL setting. To achieve this, we combine two distinct square root IPAs from Kim et al.: one with pairing ($\mathsf{Protocol3}$, later named $\mathsf{Leopard}$) and one without pairing ($\mathsf{Protocol4}$). To construct $\mathsf{Cougar}$, we first revisit $\mathsf{Protocol4}$ and reconstruct it to make it compatible with the proof system for the homomorphic commitment scheme. Next, we utilize $\mathsf{Protocol3}$ as the proof system for the reconstructed $\mathsf{Protocol4}$. Finally, to facilitate proving the relation between elliptic curve points appearing in $\mathsf{Protocol4}$, we introduce a novel $\mathsf{Plonkish}$-based proof system equipped with custom gates for mixed elliptic curve addition. We show that $\mathsf{Cougar}$ indeed satisfies all the claimed features, along with providing a soundness proof under the DL assumption. In addition, we implemented $\mathsf{Cougar}$ in Rust, demonstrating that the verification time of $\mathsf{Cougar}$ grows much more slowly as the length of the witness $N$ increases, compared to other IPAs under the DL assumption and a transparent setup: BulletProofs and $\mathsf{Leopard}$. Concretely, $\mathsf{Cougar}$ takes 0.346s for verification in our setting when $N = 2^{20}$, which is a $50\times$ speed-up over BulletProofs.
Last updated:  2024-05-29
Keeping Up with the KEMs: Stronger Security Notions for KEMs and automated analysis of KEM-based protocols
Cas Cremers, Alexander Dax, and Niklas Medinger
Key Encapsulation Mechanisms (KEMs) are a critical building block for hybrid encryption and modern security protocols, notably in the post-quantum setting. Given the asymmetric public key of a recipient, the primitive establishes a shared secret key between sender and recipient. In recent years, a large number of abstract designs and concrete implementations of KEMs have been proposed, e.g., in the context of the NIST process for post-quantum primitives. In this work, we (i) establish stronger security notions for KEMs, and (ii) develop a symbolic analysis method to analyze security protocols that use KEMs. First, we generalize existing security notions for KEMs in the computational setting, introduce several stronger security notions, and prove their relations. Our new properties formalize in which sense outputs of the KEM uniquely determine, i.e., bind, other values. Our new binding properties can be used, e.g., to prove the absence of attacks that were not captured by prior security notions, such as re-encapsulation attacks. Second, we develop a family of fine-grained symbolic models that correspond to our hierarchy of computational security notions, and are suitable for the automated analysis of KEM-based security protocols. We encode our models as a library in the framework of the Tamarin prover. Given a KEM-based protocol, our approach can automatically derive the minimal binding properties required from the KEM; or, if also given a concrete KEM, can analyze whether the protocol meets its security goals. In case studies, Tamarin automatically discovers, e.g., that the key exchange protocol proposed in the original Kyber paper requires stronger properties from the KEM than were proven in the paper.
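As a toy illustration of the binding notions (an assumed strawman KEM, not Kyber or any NIST candidate): if the shared key depends only on the encapsulated seed, two different ciphertexts under two different public keys can decapsulate to the same key, which is exactly the kind of re-encapsulation behaviour the new notions rule out.

```python
import hashlib, os

def h(*parts: bytes) -> str:
    return hashlib.sha256(b"|".join(parts)).hexdigest()[:16]

# Strawman KEM: the shared key is H(m), ignoring pk and ciphertext.
def encaps_weak(pk: bytes, m: bytes):
    ct = h(pk, m).encode()        # stand-in for public-key encryption of m
    return ct, h(m)

m = os.urandom(16)
ct_a, k_a = encaps_weak(b"pk_A", m)
ct_b, k_b = encaps_weak(b"pk_B", m)   # "re-encapsulation" under another pk
assert ct_a != ct_b and k_a == k_b    # one key, bound to neither ct nor pk

# Repair in the spirit of the binding notions: derive the key from the
# ciphertext (and public key) as well, so the key determines both.
def encaps_bound(pk: bytes, m: bytes):
    ct = h(pk, m).encode()
    return ct, h(m, ct, pk)
```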
Last updated:  2024-05-29
FlexHi: A Flexible Hierarchical Threshold Signature Scheme
Muhammed Ali Bingol, Sermin Kocaman, Ali Dogan, and Sibel Kurt Toplu
Threshold signature schemes have gained prominence in enhancing the security and flexibility of digital signatures, allowing a group of participants to collaboratively create signatures while maintaining a predefined threshold of participants for validity. However, conventional threshold signatures treat all participants equally, lacking the capability to accommodate hierarchical structures often seen in real-world applications. Hierarchical Threshold Signature Schemes (HTSS) naturally extend the concept of simple threshold signatures, offering a solution that aligns with hierarchical organizational structures. Our paper introduces a novel, efficient, and flexible HTSS that employs independent polynomials at each hierarchical level, removing limitations on threshold values. This adaptability enables us to tailor the scheme to diverse requirements, whether signing requires only top-level nodes or lower-level participants' involvement. Based on our analysis, FlexHi integrated into the FROST scheme outperforms Tassa's hierarchical scheme on FROST, operating approximately 30% to 40% faster, depending on the number of participants and the chosen threshold values. This demonstrates that, in addition to flexibility, our scheme offers practical benefits through improved performance.
Last updated:  2024-05-29
On the Guaranteed Number of Activations in XS-circuits
Sergey Agievich
XS-circuits describe cryptographic primitives that utilize 2 operations on binary words of fixed length: X) bitwise modulo 2 addition and S) substitution. The words are interpreted as elements of a field of characteristic 2. In this paper, we develop a model of XS-circuits according to which several instances of a simple round circuit containing only one S operation are linked together and form a compound circuit called a cascade. S operations of a cascade are interpreted as independent round oracles. When a cascade processes a pair of different inputs, some round oracles get different queries; these oracles are said to be activated. The more activations, the stronger the security guarantees the cascade provides against differential cryptanalysis. We introduce the notion of the guaranteed number of activations, that is, the minimum number of activations over all choices of the base field, round oracles and pairs of inputs. We show that the guaranteed number of activations is related to the minimum distance of the linear code associated with the cascade. It is also related to the minimum number of occurrences of units in segments of binary linear recurrence sequences whose characteristic polynomial is determined by the round circuit. We provide an algorithm for calculating the guaranteed number of activations. We show how to use the algorithm to deal with linear activations related to linear cryptanalysis.
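As a worked toy of the code-theoretic link (the generator matrix below is chosen arbitrarily for illustration), the quantity the guaranteed number of activations is tied to is the minimum distance of a binary linear code, which for small codes can be found by brute force:

```python
from itertools import product

G = [[1, 0, 1, 1],          # toy generator matrix over F_2 (assumed example)
     [0, 1, 1, 0]]

def min_distance(G):
    k = len(G)
    best = len(G[0])
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue                       # skip the zero codeword
        cw = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        best = min(best, sum(cw))
    return best

print(min_distance(G))      # -> 2 for this toy code
```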
Last updated:  2024-05-29
XS-circuits in Block Ciphers
Sergey Agievich
XS-circuits describe block ciphers that utilize 2 operations: X) bitwise modulo 2 addition of binary words and S) substitution of words using key-dependent S-boxes with possibly complicated internal structure. We propose a model of XS-circuits which, despite its simplicity, covers a rather wide range of block ciphers. In our model, several instances of a simple round circuit, which contains only one S operation, are linked together and form a compound circuit called a cascade. S operations of a cascade are interpreted as independent round oracles. We deal with diffusion characteristics of cascades. These characteristics are related to the cryptographic strength of the corresponding block ciphers. We obtain results on invertibility, transitivity and 2-transitivity of mappings induced by round circuits and their cascades. We provide estimates on the first and second activation times, where the i-th activation time is the minimum number of rounds which guarantees that at least i round oracles get different queries while processing two different cascade inputs. The activation times are related to differential cryptanalysis. We introduce the similarity and duality relations between round circuits. Cascades of related circuits have the same or dual diffusion characteristics. We find canonical representatives of classes of similar circuits and show that the duality between circuits is related to the duality between differential and linear attacks against the corresponding block ciphers. We discuss families of circuits with a growing number of inputs. Such families can be used to build wide-block ciphers.
Last updated:  2024-05-29
MPC in the head using the subfield bilinear collision problem
Janik Huth and Antoine Joux
In this paper, we introduce the subfield bilinear collision problem and use it to construct an identification protocol and a signature scheme. This construction is based on the MPC-in-the-head paradigm and uses the Fiat-Shamir transformation to obtain a signature.
Last updated:  2024-05-28
Attacks Against the INDCPA-D Security of Exact FHE Schemes
Jung Hee Cheon, Hyeongmin Choe, Alain Passelègue, Damien Stehlé, and Elias Suvanto
A recent security model for fully homomorphic encryption (FHE), called IND-CPA^D security and introduced by Li and Micciancio [Eurocrypt'21], strengthens IND-CPA security by giving the attacker access to a decryption oracle for ciphertexts for which it should know the underlying plaintexts. This includes ciphertexts that it (honestly) encrypted and those obtained from the latter by evaluating circuits that it chose. Li and Micciancio singled out the CKKS FHE scheme for approximate data [Asiacrypt'17] by giving an IND-CPA^D attack on it and claiming that IND-CPA security and IND-CPA^D security coincide for exact FHE schemes. We correct the widespread belief according to which IND-CPA^D attacks are specific to approximate homomorphic computations. Indeed, the equivalence formally proved by Li and Micciancio assumes that the schemes have a negligible probability of incorrect decryption. However, almost all competitive implementations of exact FHE schemes give away strong correctness by analyzing correctness heuristically and allowing noticeable probabilities of incorrect decryption. We exploit this imperfect correctness to mount efficient non-adaptive indistinguishability and key-recovery attacks against all major exact FHE schemes. We illustrate their strength by implementing them for BFV using OpenFHE and simulating an attack for the default parameter set of the CGGI implementation of TFHE-rs (the attack experiment is too expensive to be run on commodity desktops, because of the cost of CGGI bootstrapping). Our attacks extend to CKKS for discrete data, and threshold versions of the exact FHE schemes, when the correctness is similarly loose.
Last updated:  2024-05-28
Round Optimal Fully Secure Distributed Key Generation
Jonathan Katz
Protocols for distributed (threshold) key generation (DKG) in the discrete-logarithm setting have received a tremendous amount of attention in the past few years. Several synchronous DKG protocols have been proposed, but most such protocols are not fully secure: they either allow corrupted parties to bias the key, or are not robust and allow malicious parties to prevent successful generation of a key. We explore the round complexity of fully secure DKG in the honest-majority setting where it is feasible. We show the impossibility of one-round, unbiased DKG protocols (even satisfying weaker notions of security), regardless of any prior setup. On the positive side, we show various round-optimal protocols for fully secure DKG offering tradeoffs in terms of their efficiency, necessary setup, and required assumptions.
Last updated:  2024-05-28
How to Redact the Bitcoin Backbone Protocol
Mehmet Sabir Kiraz, Enrique Larraia, and Owen Vaughan
We explain how to extend the Bitcoin backbone model of Garay et al. (Eurocrypt, 2015) to accommodate redactable blockchains. Our extension captures fluid blockchain-based databases (with mutability requirements) and compliance with existing legislation, such as the GDPR right to be forgotten, or the need to erase offending data from nodes’ databases that would otherwise provoke legal shutdowns. Our redactable backbone protocol retains the essential properties of blockchains. Leveraging zero-knowledge proofs, old data can be erased without requiring trusted third parties or heuristics about past chain validation. Our solution can be implemented on Bitcoin immediately without hard-forks, and it is scalable. It allows the redaction of data from UTXOs or unconfirmed transactions that have not yet flooded the network, while guaranteeing invariance of the Bitcoin state. Thus, offending data does not need to persist in the system, not even temporarily.
Last updated:  2024-05-28
On vectorial functions mapping strict affine subspaces of their domain into strict affine subspaces of their co-domain, and the strong D-property
Claude Carlet and Enrico Piccione
Given three positive integers $n<N$ and $M$, we study those functions $\mathcal{F}$ from the vector space $\mathbb{F}_2^N$ (possibly endowed with the field structure) to $\mathbb{F}_2^M$, which map at least one $n$-dimensional affine subspace of $\mathbb{F}_2^N$ into an affine subspace whose dimension is less than $M$, possibly equal to $n$. This provides functions from $\mathbb{F}_2^n$ to $\mathbb{F}_2^m$ for some $m$ (and, in some cases, permutations) that have a simple representation over $\mathbb{F}_2^N$ or over $\mathbb{F}_{2^N}$. We show that the nonlinearity of $\mathcal{F}$ must not be too large for allowing this, and we observe that if it is zero, there automatically exists a strict affine subspace of its domain that is mapped by $\mathcal{F}$ into a strict affine subspace of its co-domain. In this case, we show that the nonlinearity of the restriction may be large. We study the other cryptographic properties of such a restriction, viewed as an $(n,m)$-function (resp. an $(n,n)$-permutation). We then focus on the case of an $(N,N)$-function $\mathcal{F}$ of the form $\psi(\mathcal{G}(x))$ where $\mathcal{G}$ is almost perfect nonlinear (APN) and $\psi$ is a linear function with a kernel of dimension $1$. We observe that the restriction of $\mathcal{G}$ to an affine hyperplane $A$ has the D-property (introduced by Taniguchi after a result from Dillon) as an $(N-1,N)$-function, if and only if, for every such $\psi$, the restriction of $\mathcal{F}(x)=\psi(\mathcal{G}(x))$ to $A$ is not an APN $(N-1,N-1)$-function. If this holds for all affine hyperplanes $A$, we say that $\mathcal{G}$ has the strong D-property. We note that not satisfying this cryptographically interesting property also has a positive aspect, since it allows constructing APN $(N-1,N-1)$-functions from $\mathcal{G}$. We give a characterization of the strong D-property for crooked functions (a particular case of APN functions) and their compositional inverse (if it exists) by means of their ortho-derivatives, and we prove that the Gold APN function and its inverse both have the strong D-property for odd $N$ large enough. We also prove, in a simpler way than Taniguchi in 2023, that the strong D-property of the Gold APN function holds for $N\geq 6$ even. In conclusion, the Gold APN function has the strong D-property if and only if $N=6$ or $N\geq 8$, and the inverse of the Gold APN function has the strong D-property if and only if $N\geq 7$ is odd. We then give a partial result on the Dobbertin APN power function and, on the basis of this result, we conjecture that it has the strong D-property as well. We then move our focus to two known infinite families of differentially 4-uniform $(N-1,N-1)$-permutations constructed as the restrictions of $(N,N)$-functions $\mathcal{F}(x)=\psi(\mathcal{G}(x))$ or $\mathcal{F}(x)=\psi(\mathcal{G}(x))+x$, where $\psi$ is linear with a kernel of dimension $1$ and $\mathcal{G}$ is an APN permutation. After a deeper investigation of these classes, we provide proofs (which were missing) that they are not APN in even dimension $n=N-1$. Then we present our own construction, obtained by relaxing some hypotheses on $\psi$ and $\mathcal{G}$.
Last updated:  2024-05-28
Don’t Forget Pairing-Friendly Curves with Odd Prime Embedding Degrees
Yu Dai, Fangguo Zhang, and Chang-an Zhao
Pairing-friendly curves with odd prime embedding degrees at the 128-bit security level, such as BW13-310 and BW19-286, have sparked interest in the field of public-key cryptography due to the small sizes of their prime fields. However, compared to mainstream pairing-friendly curves at the same security level, i.e., BN446 and BLS12-446, the performance of pairing computations on BW13-310 and BW19-286 is usually considered inefficient. In this paper we investigate high-performance software implementations of pairing computation on BW13-310 and the corresponding building blocks used in pairing-based protocols, including hashing, group exponentiations and membership testings. Firstly, we propose efficient explicit formulas for pairing computation on this curve. Moreover, we also exploit state-of-the-art techniques to implement hashing in G1 and G2, group exponentiations and membership testings. In particular, for exponentiations in G2 and GT, we present new optimizations to speed up computational efficiency. Our implementation results on a 64-bit processor show that the gap in the performance of pairing computation between BW13-310 and BN446 (resp. BLS12-446) is only up to 4.9% (resp. 26%). More importantly, compared to BN446 and BLS12-446, BW13-310 is about 109.1%–227.3%, 100%–192.6%, 24.5%–108.5% and 68.2%–145.5% faster in terms of hashing to G1, exponentiations in G1 and GT, and membership testing for GT, respectively. These results reveal that BW13-310 would be an interesting candidate in pairing-based cryptographic protocols.
Last updated:  2024-05-28
Stateless Deterministic Multi-Party EdDSA Signatures with Low Communication
Qi Feng, Kang Yang, Kaiyi Zhang, Xiao Wang, Yu Yu, Xiang Xie, and Debiao He
EdDSA, standardized by both IRTF and NIST, is a variant of the well-known Schnorr signature scheme based on Edwards curves, benefiting from stateless and deterministic derivation of nonces (i.e., it does not require a reliable source of randomness or state continuity). Recently, NIST called for multi-party threshold EdDSA signatures, with one mode requiring the verification of such nonce derivation via zero-knowledge (ZK) proofs. However, it is challenging to translate the stateless and deterministic benefits of EdDSA to the multi-party threshold setting, as no fresh randomness is available for signing the same message. In this paper, we present a new stateless and deterministic multi-party EdDSA protocol in the full-threshold setting, tolerating all-but-one malicious corruptions. Compared to the state-of-the-art multi-party EdDSA protocol by Garillot et al. (Crypto'21), we improve the communication cost by a factor of 56× with the same three rounds, at the cost of increasing the computational cost by about 2.25×. We adopt information-theoretic message authentication codes (IT-MACs) in the multi-verifier setting to authenticate values, and convert them from a Boolean domain to an arithmetic domain by refining multi-verifier extended doubly-authenticated bits (edaBits). We adopt a pseudorandom correlation function (PCF) to generate IT-MACs statelessly and deterministically. Together, we design a multi-verifier zero-knowledge (MVZK) protocol to derive nonces statelessly and deterministically.
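For context, the single-party rule the protocol ports to the multi-party setting is EdDSA's deterministic nonce derivation, sketched below with the ed25519 group order (a toy sketch; real implementations derive the nonce key from the hashed secret key):

```python
import hashlib

L = 2**252 + 27742317777372353535851937790883648493   # ed25519 group order

def ed_nonce(nonce_key: bytes, msg: bytes) -> int:
    # r = H(nonce_key || msg): no randomness source, no state to keep
    digest = hashlib.sha512(nonce_key + msg).digest()
    return int.from_bytes(digest, "little") % L

r1 = ed_nonce(b"k" * 32, b"message")
r2 = ed_nonce(b"k" * 32, b"message")
assert r1 == r2     # same message -> same nonce; the challenge is keeping
                    # this property once the key is secret-shared
```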
Last updated:  2024-05-28
Security–Throughput Tradeoff of Nakamoto Consensus under Bandwidth Constraints
Lucianna Kiffer, Joachim Neu, Srivatsan Sridhar, Aviv Zohar, and David Tse
For Nakamoto’s longest-chain consensus protocol, whose proof-of-work (PoW) and proof-of-stake (PoS) variants power major blockchains such as Bitcoin and Cardano, we revisit the classic problem of the security–performance tradeoff: Given a network of nodes with limited capacities, against what fraction of adversary power is Nakamoto consensus (NC) secure for a given block production rate? State-of-the-art analyses of Nakamoto’s protocol fail to answer this question because their bounded-delay model does not capture realistic constraints such as limited communication and computation resources. We develop a new analysis technique to prove a refined security–performance tradeoff for PoW Nakamoto consensus in a bounded-bandwidth model. In this model, we show that, in contrast to the classic bounded-delay model, Nakamoto’s private attack is no longer the worst attack, and a new attack strategy we call the teasing strategy, that exploits the network congestion caused by limited bandwidth, is strictly worse. In PoS, equivocating blocks can exacerbate congestion, making the traditional PoS Nakamoto consensus protocol insecure except at very low block production rates. To counter such equivocation spamming, we present a variant of the PoS NC protocol we call Blanking NC (BlaNC), which achieves the same resilience as PoW NC.
Last updated:  2024-05-28
Relating Code Equivalence to Other Isomorphism Problems
Huck Bennett and Kaung Myat Htay Win
We study the complexity of the Code Equivalence Problem on linear error-correcting codes by relating its variants to isomorphism problems on other discrete structures---graphs, lattices, and matroids. Our main results are a fine-grained reduction from the Graph Isomorphism Problem to the Linear Code Equivalence Problem over any field $\mathbb{F}$, and a reduction from the Linear Code Equivalence Problem over any field $\mathbb{F}_p$ of prime, polynomially bounded order $p$ to the Lattice Isomorphism Problem. Both of these reductions are simple and natural. We also give reductions between variants of the Code Equivalence Problem, and study the relationship between isomorphism problems on codes and linear matroids.
Last updated:  2024-05-27
New Solutions to Delsarte's Dual Linear Programs
André Chailloux and Thomas Debris-Alazard
Understanding the maximum size of a code with a given minimum distance is a major question in computer science and discrete mathematics. The most fruitful approach for finding asymptotic bounds on such codes is by using Delsarte's theory of association schemes. With this approach, Delsarte constructs a linear program such that its maximum value is an upper bound on the maximum size of a code with a given minimum distance. Bounding this value can be done by finding solutions to the corresponding dual linear program. Delsarte's theory is very general and goes way beyond binary codes. In this work, we provide universal bounds in the framework of association schemes that generalize the Elias-Bassalygo bound, which can be applied to any association scheme constructed from a distance function. These bounds are obtained by constructing new solutions to Delsarte's dual linear program. We instantiate these results and we recover known bounds for $q$-ary codes and for constant-weight binary codes. Our other contribution is to recover, for essentially any $Q$-polynomial scheme, MRRW-type solutions to Delsarte's dual linear program which are inspired by the Laplacian approach of Friedman and Tillich instead of using the Christoffel-Darboux formulas. We show in particular how the second linear programming bound can be interpreted in this framework.
Last updated:  2024-05-27
On Quantum Secure Compressing Pseudorandom Functions
Ritam Bhaumik, Benoît Cogliati, Jordan Ethan, and Ashwin Jha
In this paper we characterize all $2n$-bit-to-$n$-bit Pseudorandom Functions (PRFs) constructed with the minimum number of calls to $n$-bit-to-$n$-bit PRFs and an arbitrary number of linear functions. First, we show that all two-round constructions are either classically insecure, or vulnerable to quantum period-finding attacks. Second, we categorize three-round constructions depending on their vulnerability to these types of attacks. This allows us to identify classes of constructions that could be proven secure. We then proceed to show the security of the following three candidates against any quantum distinguisher that asks at most $2^{n/4}$ (possibly superposition) queries \[ \begin{array}{rcl} \mathsf{TNT}(x_1,x_2) &:=& f_3(x_2 \oplus f_2(x_2 \oplus f_1(x_1)))\\ \mathsf{LRQ}(x_1,x_2) &:=& f_2(x_2) \oplus f_3(x_2 \oplus f_1(x_1))\\ \mathsf{LRWQ}(x_1,x_2) &:=& f_3( f_1(x_1) \oplus f_2(x_2)). \end{array} \] Note that the first construction is a classically secure tweakable block cipher due to Bao et al., and the third construction is shown to be a quantum-secure tweakable block cipher by Hosoyamada and Iwata with similar query limits. Of note is our proof framework, an adaptation of Chung et al.'s rigorous formulation of Zhandry's compressed oracle technique in the indistinguishability setup, which could be of independent interest. This framework gives very compact and mostly classical-looking proofs compared to Hosoyamada and Iwata's interpretation of Zhandry's compressed oracle.
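Written out with toy stand-in PRFs (keyed SHA-256 truncations, an assumption for illustration only), the three candidates are direct transcriptions of the formulas above:

```python
import hashlib

n = 16  # block size in bytes

def prf(key: bytes):
    return lambda x: hashlib.sha256(key + x).digest()[:n]

f1, f2, f3 = prf(b"k1"), prf(b"k2"), prf(b"k3")

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

def TNT(x1, x2):  return f3(xor(x2, f2(xor(x2, f1(x1)))))
def LRQ(x1, x2):  return xor(f2(x2), f3(xor(x2, f1(x1))))
def LRWQ(x1, x2): return f3(xor(f1(x1), f2(x2)))

print(TNT(b"A" * n, b"B" * n).hex())   # 2n bytes in, n bytes out
```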
Last updated:  2024-05-27
Fully Adaptive Schnorr Threshold Signatures
Elizabeth Crites, Chelsea Komlo, and Mary Maller
We prove adaptive security of a simple three-round threshold Schnorr signature scheme, which we call Sparkle+. The standard notion of security for threshold signatures considers a static adversary – one who must declare which parties are corrupt at the beginning of the protocol. The stronger adaptive adversary can at any time corrupt parties and learn their state. This notion is natural and practical, yet not proven to be met by most schemes in the literature. In this paper, we demonstrate that Sparkle+ achieves several levels of security based on different corruption models and assumptions. To begin with, Sparkle+ is statically secure under minimal assumptions: the discrete logarithm assumption (DL) and the random oracle model (ROM). If an adaptive adversary corrupts fewer than t/2 out of a threshold of t+1 signers, then Sparkle+ is adaptively secure under a weaker variant of the one-more discrete logarithm assumption (AOMDL) in the ROM. Finally, we prove that Sparkle+ achieves full adaptive security, with a corruption threshold of t, under AOMDL in the algebraic group model (AGM) with random oracles. Importantly, we show adaptive security without requiring secure erasures. Ours is the first proof achieving full adaptive security without exponential tightness loss for any threshold Schnorr signature scheme.
Last updated:  2024-05-27
Discrete Logarithm Factory
Haetham Al Aswad, Cécile Pierrot, and Emmanuel Thomé
The Number Field Sieve and its variants are the best algorithms to solve the discrete logarithm problem in finite fields. The Factory variant accelerates the computation when several prime fields are targeted. This article adapts the Factory variant to non-prime finite fields of medium and large characteristic. We combine this idea with two other variants of NFS, namely the tower and special variants. This combination leads to improvements in the asymptotic complexity. In addition, we lay out estimates of the practicality of this method for 1024-bit targets and extension degree $6$.
Last updated:  2024-05-27
Optimal Consensus in the Presence of Overlapping Faults and Total Omission
Julian Loss, Kecheng Shi, and Gilad Stern
Understanding the fault tolerance of Byzantine Agreement protocols is an important question in distributed computing. While the setting of Byzantine faults has been thoroughly explored in the literature, the (arguably more realistic) omission fault setting is far less studied. In this paper, we revisit the recent work of Loss and Stern, who gave the first protocol in the mixed fault model tolerating $t$ Byzantine faults, $s$ send faults, and $r$ receive faults, when $2t+r+s<n$ and omission faults do not overlap. We observe that their protocol makes no guarantees when omission faults can overlap, i.e., when parties can simultaneously have send and receive faults. We give the first protocol that overcomes this limitation and tolerates the same number of potentially overlapping faults. We then study, for the first time, the total omission setting where all parties can become omission faulty. This setting is motivated by real-world scenarios where every party may experience connectivity issues from time to time, yet agreement should still hold for the parties who manage to output values. We show the first agreement protocol in this setting with parameters $s<n$ and $s+r=n$. On the other hand, we prove that there is no consensus protocol for the total omission setting which tolerates even a single overlapping omission fault, i.e., where $s+r=n+1$ and $s>2$, or a broadcast protocol for $s+r=n$ and $s>1$ even without overlapping faults.
Last updated:  2024-05-27
Early Stopping Byzantine Agreement in $(1+\epsilon)\cdot f$ Rounds
Fatima Elsheimy, Julian Loss, and Charalampos Papamanthou
In this paper, we present two early stopping Byzantine agreement protocols in the authenticated setting against a corrupt minority $t < n/2$, where $t$ represents the maximum number of malicious parties. Early stopping protocols ensure termination within a number of rounds determined solely by the actual number of malicious nodes $f$ present during execution, irrespective of $t$. Our first protocol is deterministic and ensures early stopping termination in $ (d+5) \cdot (\lfloor f/d \rfloor +3)$ rounds, where $d$ is a fixed constant. For example, for all $d\ge 6$, our protocol runs in at most $(1+\epsilon )\cdot f$ rounds (where $0<\epsilon<1$), improving (for large $f$) upon the best previous early stopping deterministic broadcast protocol by Perry and Toueg, which terminates in $\min(2f+4, 2t+2)$ rounds. Additionally, our second protocol is randomized, ensuring termination in an expected constant number of rounds and achieving early stopping in $(d+9) \cdot (\lfloor f/d \rfloor +2)$ rounds in the worst case. This marks a significant improvement over a similar result by Goldreich and Petrank, which always requires an expected constant number of rounds and $O(t)$ rounds in the worst case, i.e., does not have the early stopping property.
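Plugging concrete numbers into the stated bounds shows the improvement over Perry and Toueg for large $f$ (all choices below are ours, for a sanity check only):

```python
d, f, t = 10, 100, 150          # constant d, actual faults f, bound t

det_rounds  = (d + 5) * (f // d + 3)      # this paper, deterministic: 195
perry_toueg = min(2 * f + 4, 2 * t + 2)   # prior broadcast bound:     204
rand_worst  = (d + 9) * (f // d + 2)      # this paper, randomized worst case: 228

print(det_rounds, perry_toueg, rand_worst)
# (d+5) * (f/d + 3) = (1 + 5/d) * f + O(d), hence (1+eps)*f for large d and f.
```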
Last updated:  2024-05-27
Scaling Lattice Sieves across Multiple Machines
Martin R. Albrecht and Joe Rowell
Lattice sieves are algorithms for finding short vectors in lattices. We present an implementation of two such sieves – known as “BGJ1” and “BDGL” in the literature – that scales across multiple servers (with varying success). This class of algorithms requires exponential memory which had put into question their ability to scale across sieving nodes. We discuss our architecture and optimisations and report experimental evidence of the efficiency of our approach.
Last updated:  2024-05-27
Efficient Linkable Ring Signature from Vector Commitment inexplicably named Multratug
Anton A. Sokolov
In this paper we revise the idea of our previous article “Lin2-Xor Lemma: an OR-proof that leads to the membership proof and signature” and introduce another lemma, called Lin2-Choice, which is a generalization of the Lin2-Xor lemma. With the Lin2-Choice lemma we obtain a compact general-purpose trusted-setup-free log-size linkable threshold ring signature called EFLRSL. The signature size is $2\log(n+1)+3l+1$, where $n$ is the ring size and $l$ is the threshold. By extending the set membership argument of the Lin2-Choice lemma we create a multifunctional version of the EFLRSL signature aliased as Multratug, of size $2\log(n+l+1)+7l+4$. In addition to signing a message, Multratug proves balance and allows for easy multiparty signing, which makes it suitable for blockchains. We use a black-boxed vector commitment argument as the pivotal building block for both EFLRSL and Multratug. Only the black-boxed pivot contributes components that depend on the ring size $n$ to the signature sizes. This makes both versions of our signature combinable with other proofs, thus allowing overall size reduction. All this takes place in a prime-order group without bilinear pairings under the decisional Diffie-Hellman assumption in the random oracle model. Both versions of our signature are unforgeable w.r.t. insider corruption and existentially unforgeable under chosen-message attack. They remain anonymous even for non-uniformly distributed and malformed keys, which makes it possible to use them as a log-size drop-in replacement for LSAG-based schemes.
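Reading the two size expressions as counts of group elements and rounding the logarithms up (our assumption, for a scale check only), concrete sizes come out as follows:

```python
from math import ceil, log2

n, l = 1024, 2                                     # ring size, threshold (example)
eflrsl    = 2 * ceil(log2(n + 1)) + 3 * l + 1      # -> 29 group elements
multratug = 2 * ceil(log2(n + l + 1)) + 7 * l + 4  # -> 40 group elements
print(eflrsl, multratug)
```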
Last updated:  2024-05-27
The Multiple Millionaires’ Problem
Tamir Tassa and Avishay Yanai
We study a fundamental problem in Multi-Party Computation, which we call the Multiple Millionaires’ Problem (MMP). Given a set of private integer inputs, the problem is to identify the subset of inputs that equal the maximum (or minimum) of that set, without revealing any further information on the inputs beyond what is implied by the desired output. Such a problem is a natural extension of the Millionaires’ Problem, which is the very first Multi-Party Computation problem that was presented in Andrew Yao’s seminal work [30]. A closely related problem is MaxP, in which the value of the maximum is sought. We study these fundamental problems and describe several algorithmic approaches and protocols for their solution. In addition, we compare the performance of the protocols under several selected settings. As applications of privacy-preserving computation are more and more commonly implemented in industrial systems, MMP and MaxP become important building blocks in privacy-preserving statistics, machine learning, auctions and other domains. One of the prominent advantages of the protocols that we present here is their simplicity. As they solve fundamental problems that are essential building blocks in various application scenarios, we believe that the presented solutions to those problems, and the comparison between them, will serve well future researchers and practitioners of secure distributed computing.
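The ideal functionality itself is a one-liner when computed in the clear (shown below only as a reference for what the protocols compute obliviously):

```python
def mmp(inputs):
    """Indices of all parties holding the maximum (the MMP output)."""
    m = max(inputs)
    return [i for i, x in enumerate(inputs) if x == m]

assert mmp([3, 7, 7, 1]) == [1, 2]   # MMP: who holds the max; MaxP: the value 7
```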
Last updated:  2024-05-27
Formal Definition and Verification for Combined Random Fault and Random Probing Security
Sonia Belaid, Jakob Feldtkeller, Tim Güneysu, Anna Guinet, Jan Richter-Brockmann, Matthieu Rivain, Pascal Sasdrich, and Abdul Rahman Taleb
In our highly digitalized world, an adversary is not constrained to purely digital attacks but can monitor or influence the physical execution environment of a target computing device. Such side-channel or fault-injection analysis poses a significant threat to otherwise secure cryptographic implementations. Hence, it is important to consider additional adversarial capabilities when analyzing the security of cryptographic implementations besides the default black-box model. For side-channel analysis, this is done by providing the adversary with knowledge of some internal values, while for fault-injection analysis the capabilities of the adversaries include manipulation of some internal values. In this work, we extend probabilistic security models for physical attacks, by introducing a general random probing model and a general random fault model to capture arbitrary leakage and fault distributions, as well as the combination of these models. Our aim is to enable a more accurate modeling of low-level physical effects. We then analyze important properties, such as the impact of adversarial knowledge on faults and compositions, and provide tool-based formal verification methods that allow the security assessment of design components. These methods are introduced as extensions of the previous tools VERICA and IronMask, and are implemented, evaluated and compared.
Last updated:  2024-05-27
A new stand-alone MAC construct called SMAC
Dachao Wang, Alexander Maximov, Patrik Ekdahl, and Thomas Johansson
In this paper, we present a new efficient stand-alone MAC construct based on processing using the FSM part of the stream cipher family SNOW, which in turn uses the AES round function. It offers a combination of very high speed in software and hardware with a truncatable tag. Two concrete versions of SMAC are proposed with different security levels, although other use cases are also possible. For example, SMAC can be combined with an external ciphering engine in AEAD mode. Every design choice is justified and supported by the results of our analysis and simulations. A novelty of the proposal is that it meets future performance requirements but is still not directly vulnerable to attacks using repeated nonce when the tag size is short, as is the case for other very fast MACs (MACs based on polynomial hashing). This can be an important aspect in practical applications.
Last updated:  2024-05-27
Improved Meet-LWE Attack via Ternary Trees
Eunmin Lee, Joohee Lee, and Yuntao Wang
The Learning with Errors (LWE) problem with its variants over structured lattices has been widely exploited in efficient post-quantum cryptosystems. Recently, May proposed the Meet-LWE attack, which represents a significant advancement in the line of work on the Meet-in-the-Middle approach to analyzing LWE with ternary secrets. In this work, we generalize and extend the idea of Meet-LWE by introducing ternary trees, which result in diverse representations of the secrets. More precisely, we split the secrets into three pieces with the same dimension and expand them into a ternary tree to leverage the increased representations to improve the overall attack complexity. We carefully analyze and optimize the time and memory costs of our attack algorithm exploiting ternary trees, and compare them to those of the Meet-LWE attack. With asymptotic and non-asymptotic comparisons, we observe that our attack provides improved estimations for all parameter settings, including those of the practical post-quantum schemes, compared to the Meet-LWE attack. We also evaluate the security of the Round 2 candidates of the KpqC competition, which aims to standardize post-quantum public key cryptosystems in the Republic of Korea, and report that the estimated complexities of our attack applied to SMAUG-T are lower than those claimed for some of the recommended parameters.
Last updated:  2024-05-27
Learning with Quantization: Construction, Hardness, and Applications
Shanxiang Lyu, Ling Liu, and Cong Ling
This paper presents a generalization of the Learning With Rounding (LWR) problem, initially introduced by Banerjee, Peikert, and Rosen, obtained by applying the perspective of vector quantization. In LWR, noise is induced by scalar quantization. By considering a new variant termed Learning With Quantization (LWQ), we explore large-dimensional fast-decodable lattices with superior quantization properties, aiming to enhance the compression performance over scalar quantization. We identify polar lattices as exemplary structures, effectively transforming LWQ into a problem akin to Learning With Errors (LWE), whose quantization-error distribution is statistically close to a discrete Gaussian. We present two applications of LWQ: Lily, a public-key encryption (PKE) scheme with smaller ciphertexts, and quancryption, a privacy-preserving secret-key encryption scheme. Lily achieves smaller ciphertext sizes without sacrificing security, while quancryption achieves a source-ciphertext ratio larger than $1$.
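The contrast between LWR's scalar rounding and LWQ's vector quantization can be seen with a toy quantizer, here the two-dimensional checkerboard lattice $D_2$ standing in for the paper's polar lattices (all parameters are illustrative):

```python
q, p = 3329, 16
a, s = [7, 11, 13], [2, 3, 5]

def lwr_sample(a, s):                    # LWR: noise comes from scalar rounding
    b = sum(x * y for x, y in zip(a, s)) % q
    return ((b * p + q // 2) // q) % p   # round b * p/q to the nearest integer

def quantize_d2(v):                      # D2: nearest point with even coord sum
    r = [round(x) for x in v]
    if sum(r) % 2:                       # fix parity at the worst coordinate
        i = max(range(len(v)), key=lambda j: abs(v[j] - r[j]))
        r[i] += 1 if v[i] > r[i] else -1
    return r

print(lwr_sample(a, s))                  # per-coordinate quantization (LWR)
print(quantize_d2([0.6, 0.2]))           # joint quantization -> [0, 0] (LWQ idea)
```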
Last updated:  2024-05-26
Securing Lattice-Based KEMs with Code-Based Masking: A Theoretical Approach
Pierre-Augustin Berthet, Yoan Rougeolle, Cédric Tavernier, Jean-Luc Danger, and Laurent Sauvage
The recent technological advances in Post-Quantum Cryptography (PQC) raise the question of robust implementation of new asymmetric cryptographic primitives in today’s technology. This is the case for the lattice-based Module Lattice-Key Encapsulation Mechanism (ML-KEM) algorithm, proposed by NIST as the first standard for Key Encapsulation Mechanisms (KEM) and inspired by CRYSTALS-Kyber. We notably have to make sure the ML-KEM implementation is resilient against physical attacks like Side-Channel Analysis (SCA) and Fault Injection Attacks (FIA). To reach this goal, we propose to adapt a masking countermeasure, more precisely the generic Direct Sum Masking method (DSM). Taking inspiration from a previous paper on AES, we extend the method to finite fields of prime characteristic other than 2 and to even-length codes. We also investigate its application to Keccak, the hash function used in ML-KEM. We propose masked conversions and use cost amortization to perform this hash. We provide the first masked implementation of ML-KEM with both SCA and FIA resilience that is capable of correcting errors. Our FIA resilience allows for fault correction even within the multiplicative gadget. Finally, we adapt a polynomial evaluation method to compute masked polynomials with public coefficients over finite fields of characteristic other than 2.
Last updated:  2024-05-26
Approximate Lower Bound Arguments
Pyrros Chaidos, Aggelos Kiayias, Leonid Reyzin, and Anatoliy Zinovyev
Suppose a prover, in possession of a large body of valuable evidence, wants to quickly convince a verifier by presenting only a small portion of the evidence. We define an Approximate Lower Bound Argument, or ALBA, which allows the prover to do just that: to succinctly prove knowledge of a large number of elements satisfying a predicate (or, more generally, elements of a sufficient total weight when a predicate is generalized to a weight function). The argument is approximate because there is a small gap between what the prover actually knows and what the verifier is convinced the prover knows. This gap enables very efficient schemes. We present noninteractive constructions of ALBA in the random oracle and uniform reference string models and show that our proof sizes are nearly optimal. We also show how our constructions can be made particularly communication-efficient when the evidence is distributed among multiple provers, which is of practical importance when ALBA is applied to a decentralized setting. We demonstrate two very different applications of ALBAs: for large-scale decentralized signatures and for proving universal composability of succinct proofs.
Last updated:  2024-05-26
SoK: Polynomial Multiplications for Lattice-Based Cryptosystems
Vincent Hwang
We survey various mathematical tools used in software works multiplying polynomials in $\mathbb{Z}_q[x]/\langle x^n - \alpha x - \beta \rangle$. In particular, we survey implementation works targeting polynomial multiplications in the lattice-based cryptosystems Dilithium, Kyber, NTRU, NTRU Prime, and Saber with the instruction set architectures/extensions Armv7-M, Armv7E-M, Armv8-A, and AVX2. There are three emphases in this paper: (i) modular arithmetic, (ii) homomorphisms, and (iii) vectorization. For modular arithmetic, we survey Montgomery, Barrett, and Plantard multiplications. For homomorphisms, we survey (a) various homomorphisms such as the Cooley–Tukey FFT, Bruun’s FFT, Rader’s FFT, Karatsuba, and Toom–Cook; (b) various algebraic techniques for adjoining nice properties to the coefficient rings, including injections, Schönhage’s FFT, Nussbaumer’s FFT, and localization; and (c) various algebraic techniques related to the polynomial moduli, including twisting, composed multiplication, evaluation at $\infty$, Good–Thomas FFT, truncation, incomplete transformation, and Toeplitz matrix-vector products. For vectorization, we survey the relations between homomorphisms and the support of vector arithmetic. We then go through several case studies: we compare the implementations of modular multiplications used in Dilithium and Kyber, explain how the matrix-to-vector structure was exploited in Saber, and review the design choices of transformations for NTRU and NTRU Prime with vectorization. Finally, we outline several interesting implementation projects.
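As an example of the modular-arithmetic tools surveyed, here is Montgomery multiplication modulo Kyber's $q = 3329$ with $R = 2^{16}$, written in plain Python as a reference sketch (parameter choices are ours):

```python
q, R = 3329, 1 << 16          # Kyber's modulus; R = 2^16 for 16-bit lanes
q_inv = pow(-q, -1, R)        # -q^{-1} mod R, precomputed (Python 3.8+)

def montgomery_reduce(c: int) -> int:
    """Return c * R^{-1} mod q for 0 <= c < q*R (classic REDC)."""
    t = ((c % R) * q_inv) % R
    return (c + t * q) // R   # exact division: c + t*q = 0 (mod R)

def mont_mul(a: int, b: int) -> int:
    return montgomery_reduce(a * b) % q   # a * b * R^{-1} mod q

a, b = 1234, 2345
assert mont_mul(a * R % q, b) == a * b % q   # Montgomery-form operand cancels R
```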
Last updated:  2024-05-26
Batched Distributed Point Function from Sparse LPN and Homomorphic Secret Sharing
Lucas Piske, Jaspal Singh, and Ni Trieu
A function secret sharing (FSS) scheme ($\mathsf{gen},\mathsf{eval}$) for a class of programs $\mathcal{F}$ allows a dealer to secret share any function $f \in \mathcal{F}$, such that each function share hides the function, and the shares can be used to non-interactively compute additive shares of $f(x)$ for any input $x$. FSS-related applications often require the dealer to generate and share secret sharings for a batch of functions. We initiate the study of batched function secret sharing, where the objective is to secret share a set of functions from the class $\mathcal{F}$ while minimizing the size of the collection of FSS keys. We use standard homomorphic secret sharing (HSS) schemes, a variant of the Learning Parity with Noise assumption and the Quadratic Residuosity assumption to construct batched FSS schemes for point functions with single-bit and multi-bit output. Our scheme is asymptotically superior to naively batching state-of-the-art FSS schemes for point functions. Concretely, our batch key sizes are smaller by a factor of $3$–$80\times$ as the batch size increases from $2^{13}$ to $2^{19}$. Although our protocol relies on public-key operations, which make it inefficient in a LAN setting, it demonstrates up to a 120-fold improvement in a WAN setting with slow network bandwidth. As a building block in our protocols, we introduce a new HSS ciphertext compression algorithm that can decompress a short compressed ciphertext into low-noise ciphertexts of an array of input message bits. This primitive might be of independent interest for other HSS-related applications.
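For orientation, the object being batched is a distributed point function; its naive linear-size form (which constructions like the above compress) is just two pseudorandom-looking tables that XOR to the truth table of $f_{\alpha,\beta}$:

```python
import os

def naive_dpf_gen(alpha: int, beta: int, domain_bits: int):
    N = 1 << domain_bits
    k0 = [os.urandom(1)[0] & 1 for _ in range(N)]   # uniformly random table
    k1 = list(k0)
    k1[alpha] ^= beta            # shares differ only at alpha; each key
    return k0, k1                # alone reveals nothing about (alpha, beta)

def naive_dpf_eval(key, x):
    return key[x]

k0, k1 = naive_dpf_gen(alpha=5, beta=1, domain_bits=4)
assert all(naive_dpf_eval(k0, x) ^ naive_dpf_eval(k1, x) == (1 if x == 5 else 0)
           for x in range(16))
```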
Last updated:  2024-05-26
CipherGPT: Secure Two-Party GPT Inference
Xiaoyang Hou, Jian Liu, Jingyu Li, Yuhan Li, Wen-jie Lu, Cheng Hong, and Kui Ren
ChatGPT is recognized as a significant revolution in the field of artificial intelligence, but it raises serious concerns regarding user privacy, as the data submitted by users may contain sensitive information. Existing solutions for secure inference face significant challenges in supporting GPT-like models due to the enormous number of model parameters and complex activation functions. In this paper, we develop CipherGPT, the first framework for secure two-party GPT inference, building upon a series of innovative protocols. First, we propose a secure matrix multiplication that is customized for GPT inference, achieving up to a 6.2× speedup and 4.1× bandwidth reduction over SOTA. We also propose a novel protocol for securely computing GELU, surpassing SOTA by 1.8× in runtime, 2.5× in communication and 7.4× in precision. Furthermore, we come up with the first protocol for secure top-k sampling. We provide a full-fledged implementation and comprehensive benchmark for CipherGPT. In particular, we measure the runtime and communication for each individual operation, along with their corresponding proportions. We believe this can serve as a reference for future research in this area.
Last updated:  2024-05-26
A General Framework for Lattice-Based ABE Using Evasive Inner-Product Functional Encryption
Yao-Ching Hsieh, Huijia Lin, and Ji Luo
We present a general framework for constructing attribute-based encryption (ABE) schemes for an arbitrary function class based on lattices from two ingredients: i) a noisy linear secret sharing scheme for the class, and ii) a new type of inner-product functional encryption (IPFE) scheme, termed *evasive* IPFE, which we introduce in this work. We propose lattice-based evasive IPFE schemes and establish their security under simple conditions based on variants of the evasive learning with errors (LWE) assumption recently proposed by Wee [EUROCRYPT ’22] and Tsabary [CRYPTO ’22]. Our general framework is modular and conceptually simple, reducing the task of constructing ABE to that of constructing noisy linear secret sharing schemes, a more lightweight primitive. The versatility of our framework is demonstrated by three new ABE schemes based on variants of the evasive LWE assumption. - We obtain two ciphertext-policy ABE schemes for all polynomial-size circuits with a predetermined depth bound. One of these schemes has *succinct* ciphertexts and secret keys, of size polynomial in the depth bound, rather than the circuit size. This eliminates the need for tensor LWE, another new assumption, from the previous state-of-the-art construction by Wee [EUROCRYPT ’22]. - We develop ciphertext-policy and key-policy ABE schemes for deterministic finite automata (DFA) and logspace Turing machines ($\mathsf{L}$). They are the first lattice-based public-key ABE schemes supporting uniform models of computation. Previous lattice-based schemes for uniform computation were limited to the secret-key setting or offered only weaker security against bounded collusion. Lastly, the new primitive of evasive IPFE serves as the lattice-based counterpart of pairing-based IPFE, enabling the application of techniques developed in pairing-based ABE constructions to lattice-based constructions. We believe it is of independent interest and may find other applications.
Last updated:  2024-05-26
Rate-1 Arithmetic Garbling from Homomorphic Secret-Sharing
Pierre Meyer, Claudio Orlandi, Lawrence Roy, and Peter Scholl
We present a new approach to garbling arithmetic circuits using techniques from homomorphic secret sharing, obtaining constructions with high rate that support free addition gates. In particular, we build upon non-interactive protocols for computing distributed discrete logarithms in groups with an easy discrete-log subgroup, further demonstrating the versatility of tools from homomorphic secret sharing. Relying on distributed discrete log for the Damgård-Jurik cryptosystem (Roy and Singh, Crypto '21), whose security follows from the decisional composite residuosity assumption (DCR), we get the following main results: [**Two ciphertexts per multiplication, from IND-CPA security of Damgård-Jurik.**] Assuming the Damgård-Jurik cryptosystem is semantically secure (which follows from DCR), there is a garbling scheme for circuits with $B$-bounded integer arithmetic using only two ciphertexts per multiplication. The total bit-size of the resulting garbled circuit is: $$(n + 2s_\times+2D_\times)\cdot (\zeta + 1) \cdot \log N$$ where $n$ is the number of inputs, $s_\times$ is the number of multiplications, $D_\times$ is the multiplicative depth of the circuit, $N$ is an RSA modulus and $N^{\zeta-1}$ is a rough bound on the magnitude of wire values in the computation. [**One ciphertext per multiplication, from KDM security of Damgård-Jurik.**] Assuming the Damgård-Jurik encryption scheme remains secure given encryption of the key and its inverse, the construction achieves rate-$1$. The total bit-size of the resulting garbled circuit is: $$(n + s_\times + 1) \cdot (\zeta + 1) \cdot \log N$$ where the parameters are as above, except $N^{\zeta-2}$ is the magnitude bound. As a side result, we show that our scheme based on IND-CPA security achieves rate $3/5$ for levelled circuits.
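Plugging illustrative numbers into the two size formulas (a 3072-bit modulus and a circuit with $10^4$ multiplications; our choices, for scale only) shows the near-halving the KDM-based variant buys:

```python
log_N = 3072                       # bits of the RSA modulus N
n, s_mul, D_mul, zeta = 10, 10**4, 30, 4

cpa_bits = (n + 2 * s_mul + 2 * D_mul) * (zeta + 1) * log_N   # two cts / mult
kdm_bits = (n + s_mul + 1) * (zeta + 1) * log_N               # one ct / mult

print(round(cpa_bits / 8e6, 1), "MB vs", round(kdm_bits / 8e6, 1), "MB")
```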
Last updated:  2024-05-26
Simulation-Secure Threshold PKE from Standard (Ring-)LWE
Hiroki Okada and Tsuyoshi Takagi
Threshold public key encryption (ThPKE) is PKE that can be decrypted by collecting “partial decryptions” from t (≤ N) out of N parties. ThPKE based on the learning with errors problem (LWE) is particularly important because it can be extended to threshold fully homomorphic encryption (ThFHE). ThPKE and ThFHE are fundamental tools for constructing multiparty computation (MPC) protocols: In 2023, NIST initiated a project (NIST IR 8214C) to establish guidelines for implementing threshold cryptosystems. Because MPC often requires simulation-security (SS), ThPKE schemes that satisfy SS (SS-ThPKE) are also important. Recently, Micciancio and Suhl (ePrint 2023/1728) presented an efficient SS-ThPKE scheme based on LWE with a polynomial modulus. However, the scheme requires the use of a nonstandard problem called “known-norm LWE” for the security proof because the norm ∥e∥ of the error of the public key is leaked from the partial decryptions. This leads to the following two challenges: 1) The construction based on LWE relies on a nontight reduction from known-norm LWE to LWE. 2) No construction based on (standard) Ring-LWE has been presented. In this paper, we address both of these challenges: we propose an efficient SS-ThPKE scheme whose security is (directly) reduced from standard LWE/Ring-LWE with a polynomial modulus. The core technique of our construction is what we call “error sharing”: We distribute shares of a small error ζ via secret sharing, and use them to prevent leakage of ∥e∥ from partial decryptions.
Last updated:  2024-05-26
Functional Bootstrapping for Packed Ciphertexts via Homomorphic LUT Evaluation
Dongwon Lee, Seonhong Min, and Yongsoo Song
Fully Homomorphic Encryption (FHE) enables the computation of an arbitrary function over encrypted data without decrypting them. In particular, bootstrapping is a core building block of FHE which reduces the noise of a ciphertext, thereby recovering the computational capability. This paper introduces a new bootstrapping framework for the Fan-Vercauteren (FV) scheme, called functional bootstrapping, providing more generic and advanced functionality than the ordinary bootstrapping method. More specifically, functional bootstrapping allows us to evaluate an arbitrary function while removing the error of an input ciphertext. Therefore, we achieve better depth consumption and computational complexity as the evaluation of a circuit can be integrated as part of the functional bootstrapping procedure. In particular, our approach extends the functionality of FV since it is even applicable to functions between different plaintext spaces. At the heart of our functional bootstrapping framework is a homomorphic Look-Up Table (LUT) evaluation method where we represent any LUT using only the operations supported by the FV scheme. Finally, we provide a proof-of-concept implementation and present benchmarks of the functional bootstrapping. In concrete examples, such as the delta and sign functions, our functional bootstrapping takes about 46.5s or 171.4s for a 9-bit or 13-bit plaintext modulus, respectively.
Last updated:  2024-05-26
Towards Achieving Asynchronous MPC with Linear Communication and Optimal Resilience
Vipul Goyal, Chen-Da Liu-Zhang, and Yifan Song
Secure multi-party computation (MPC) allows a set of $n$ parties to jointly compute a function over their private inputs. The seminal works of Ben-Or, Canetti and Goldreich [STOC '93] and Ben-Or, Kelmer and Rabin [PODC '94] settled the feasibility of MPC over asynchronous networks. Despite the significant line of work devoted to improving the communication complexity, current protocols with information-theoretic security and optimal resilience $t<n/3$ communicate $\Omega(n^4C)$ field elements for a circuit with $C$ multiplication gates. In contrast, synchronous MPC protocols with $O(nC)$ communication have long been known. In this work we make progress towards closing this gap. We provide a novel MPC protocol that makes black-box use of an asynchronous complete secret-sharing (ACSS) protocol, where the cost per multiplication reduces to the cost of distributing a constant number of sharings via ACSS, improving by a linear factor over the state of the art by Choudhury and Patra [IEEE Trans. Inf. Theory '17]. Instantiating ACSS with the protocol by Choudhury and Patra [J. Crypto '23] we achieve an MPC protocol with $\mathcal{O}(n^3C)$ communication. Moreover, with a recent concurrent work achieving ACSS with linear cost per sharing, we achieve an MPC with $\mathcal{O}(nC)$ communication.
Last updated:  2024-05-26
The Brave New World of Global Generic Groups and UC-Secure Zero-Overhead SNARKs
Jan Bobolz, Pooya Farshim, Markulf Kohlweiss, and Akira Takahashi
The universal composability (UC) model provides strong security guarantees for protocols used in arbitrary contexts. While these guarantees are highly desirable, in practice, schemes with a standalone proof of security, such as the Groth16 proof system, are preferred. This is because UC security typically comes with undesirable overhead, sometimes making UC-secure schemes significantly less efficient than their standalone counterparts. We establish the UC security of Groth16 without any significant overhead. In the spirit of global random oracles, we design a global (restricted) observable generic group functionality that models a natural notion of observability: computations that trace back to group elements derived from generators of other sessions are observable. This notion turns out to be surprisingly subtle to formalize. We provide a general framework for proving protocols secure in the presence of global generic groups, which we then apply to Groth16.
Last updated:  2024-05-26
DVA: Dangerous Variations of ALTEQ
Arnaud Sipasseuth
In this paper, we present three types of variations of the ALTEQ cryptosystem, a recent submission to NIST's additional call for signatures. We name these Dangerous Variations of ALTEQ (DVA), as there is always a certain danger in stepping out of usual constructions, although we attempt to maintain heuristic security. First, we present DVA-GG (Graph Generalization), which offers a more abstract point of view on the operations done in ALTEQ and encourages more research on algebraic variants. In particular, we show that this approach can lead to a patch countering Beullens' recent seed-collision attack on ALTEQ that depends only on the primitive, and we showcase some fancy usages of the primitive for experimental protocols. Second, we present DVA-PC (Precomputations), which is ``likely'' as secure as ALTEQ in the random oracle model and drastically reduces the intermediate memory requirements of both the signature and verification processes through an easily parallelizable extra operation. In particular, this facilitates precomputation variants with online phases that depend only on the complexity of basic matrix operations. We can then choose between a tiny offline memory per signature and one of the fastest online signing speeds in post-quantum cryptography. Third, we present DVA-DM (Distinct Matrices), a set of cryptanalytic targets that deviate from ALTEQ's original algebraic structure. These structures can serve as plain computational accelerations or simply compress data sizes, and they provide good options to motivate the study of specialized cryptanalysis for ALTEQ: if they are safe, then ALTEQ gains safe variants; otherwise, we gain further understanding of the problems. In particular, the ideas can be applied beyond ALTEQ itself and hopefully extend to MEDS, LESS, and group-action-based cryptography.
Last updated:  2024-05-26
Zero-knowledge IOPs Approaching Witness Length
Noga Ron-Zewi and Mor Weiss
Interactive Oracle Proofs (IOPs) allow a probabilistic verifier interacting with a prover to verify the validity of an NP statement while reading only few bits from the prover messages. IOPs generalize standard Probabilistically-Checkable Proofs (PCPs) to the interactive setting, and in the few years since their introduction have already exhibited major improvements in main parameters of interest (such as the proof length and prover and verifier running times), which in turn led to significant improvements in constructions of succinct arguments. Zero-Knowledge (ZK) IOPs additionally guarantee that the view of any query-bounded (possibly malicious) verifier can be efficiently simulated. ZK-IOPs are the main building block of succinct ZK arguments which use the underlying cryptographic object (e.g., a collision-resistant hash function) as a black box. In this work, we construct the first ZK-IOPs approaching the witness length for a natural NP problem. More specifically, we design constant-query and constant-round IOPs for 3SAT in which the total communication is $(1+\gamma)m$, where $m$ is the number of variables and $\gamma>0$ is an arbitrarily small constant, and ZK holds against verifiers querying $m^\beta$ bits from the prover's messages, for a constant $\beta>0$. This gives a ZK variant of a recent result of Ron-Zewi and Rothblum (FOCS `20), who construct (non-ZK) IOPs approaching the witness length for a large class of NP languages. Previous constructions of ZK-IOPs incurred an (unspecified) large constant multiplicative overhead in the proof length, even when restricting to ZK against the honest verifier. We obtain our ZK-IOPs by improving the two main building blocks underlying most ZK-IOP constructions, namely ZK codes and ZK-IOPs for sumcheck. More specifically, we give the first ZK-IOPs for sumcheck that achieve both sublinear communication for sumchecking a general tensor code, and a ZK guarantee. We also show a strong ZK preservation property for tensors of ZK codes, which extends a recent result of Bootle, Chiesa, and Liu (EC `22). Given the central role of these objects in designing ZK-IOPs, these results might be of independent interest.
Last updated:  2024-05-26
Faster verifications and smaller signatures: Trade-offs for ALTEQ using rejections
Arnaud Sipasseuth
In this paper, we introduce a new probability-function parameter in the instantiations of the Goldreich-Micali-Wigderson protocol with Fiat-Shamir and unbalanced challenges used in ALTEQ, a recent NIST PQC candidate in the call for additional signatures. Setting this probability to 100% leaves the scheme unchanged; below 100%, it modifies the public challenge generation process by injecting potential rejections into otherwise completely valid inputs. From a theoretical point of view, this does not improve the asymptotic hardness of the scheme, negatively affects the efficiency of the signatory, and might itself seem trivial. However, from a practical, implementation- and performance-oriented point of view, this triviality allows an extra degree of freedom in optimizing parameters, as the heuristic security level against forgers is also increased: previously valid combinations can now be deemed invalid. This allows us to make trade-offs that reduce the computational load on verifiers, accelerating verification and marginally reducing the signature size, at the cost of making signing slower and unlikely to be constant-time. In particular, this extra degree of freedom enables implementation choices that yield smoother and faster executions of the aforementioned protocols, especially in the context of parallelization using vectorized instructions. We also demonstrate the usefulness of our proposal to ALTEQ for other options, when slowing down the signing process is not an issue: significantly smaller signatures at the cost of longer verification, or lower public key sizes. The ideas presented apply to any such primitive and can be used beyond ALTEQ.
Last updated:  2024-05-26
Proofs for Deep Thought: Accumulation for large memories and deterministic computations
Benedikt Bünz and Jessica Chen
An important part of proving machine computation is proving the correctness of the read and write operations performed on the memory, which we term memory-proving. Previous methodologies required proving Merkle tree openings or multi-set hashes, resulting in relatively large proof circuits. We construct an efficient memory-proving Incrementally Verifiable Computation (IVC) scheme from accumulation, which is particularly useful for machine computations with large memories and deterministic steps. In our scheme, the IVC prover $P_{IVC}$ has cost entirely independent of the memory size $T$ and only needs to commit to approximately 15 field elements per read/write operation, marking a more than $100\times$ improvement over prior work. We further reduce this cost by employing a modified, accumulation-friendly version of the GKR protocol. In the optimized version, $P_{IVC}$ only needs to commit to 6 small memory-table elements per read/write. If the table stores 32-bit values, this is equivalent to committing to less than a single field element per read and write. Our modified GKR protocol is also valuable for proving other deterministic computations within the context of IVC. Our memory-proving protocol can be extended to support key-value stores.
Last updated:  2024-05-26
On Maximum Size Simultaneous Linear Approximations in Ascon and Keccak and Related Translation and Differential Properties
Nicolas T. Courtois, Frédéric Amiel, and Alexandre Bonnard de Fonvillars
In this paper we study the S-box known as Chi or $\chi$, initially proposed by Daemen in 1995 and very widely used ever since in Keccak, Ascon, and many other designs. Ciphers of this type are typically analyzed in recent research in terms of subspace trail attacks [TeDi19] and vector space invariants. An interesting question is then when different spaces are mapped to each other by translation by a constant. In this paper we relax this fundamental question and consider arbitrary sets of points and their translations. We generalize previous S-box partial linearization results on Keccak and Ascon from Eurocrypt 2017. We introduce new ways to linearize the Ascon S-box to the maximum possible extent. On this basis we show further remarkable properties and some surprising connections between simultaneous linear and prominent differential properties. We exhibit a new family of maximum-size and optimal approximation properties with 11 points, beyond the maximum size of any set in the DDT table. We prove a theorem which guarantees that properties of this type are stable under arbitrary input-side translations, which holds for all quadratic permutations. All this is placed in the context of previous work on the classification of 5-bit quadratic permutations.
Last updated:  2024-05-26
Admissible Parameter Sets and Complexity Estimation of Crossbred Algorithm
Shuhei Nakamura
The Crossbred algorithm, proposed by Joux and Vitse in 2017, is an algorithm for solving systems of polynomial equations. It has been implemented for the Fukuoka MQ challenge, which relates to the security of multivariate cryptography, and holds several records. A framework for estimating its complexity was provided by Chen et al. in 2017. However, it is generally unknown which parameters are actually admissible. This paper investigates how to select admissible parameters for the Crossbred algorithm. As a result, we provide formulae that give an admissible parameter set and estimate the complexity of the Crossbred algorithm.
Last updated:  2024-05-25
Improved Alternating-Moduli PRFs and Post-Quantum Signatures
Navid Alamati, Guru-Vamsi Policharla, Srinivasan Raghuraman, and Peter Rindal
We revisit the alternating moduli paradigm for constructing symmetric key primitives with a focus on constructing highly efficient protocols to evaluate them using secure multi-party computation (MPC). The alternating moduli paradigm of Boneh et al. (TCC 2018) enables the construction of various symmetric key primitives with the common characteristic that the inputs are multiplied by two linear maps over different moduli, first over $\mathbb{F}_2$ and then over $\mathbb{F}_3$. The first contribution focuses on efficient two-party evaluation of alternating moduli PRFs, effectively building an oblivious pseudorandom function. We present a generalization of the PRF proposed by Boneh et al. (TCC 18) along with methods to lower the communication and computation. We then provide several variants of our protocols, with different computation and communication tradeoffs, for evaluating the PRF. Most are in the OT/VOLE hybrid model while one is based on specialized garbling. Our most efficient protocol is effectively about $3\times$ faster and requires $1.3\times$ less communication. Our next contribution is the efficient evaluation of the OWF $f(x)=B\cdot_3 (A\cdot_2 x)$ proposed by Dinur et al. (CRYPTO 21) where $A \in \mathbb{F}^{m\times n}_2, B\in\mathbb{F}^{t\times m}_3$ and $\cdot_p$ is multiplication mod $p$. This surprisingly simple OWF can be evaluated within MPC by secret sharing $[\hspace{-3px}[x]\hspace{-3px}]$ over $\mathbb{F}_2$, locally computing $[\hspace{-3px}[v]\hspace{-3px}]=A\cdot_2 [\hspace{-3px}[x]\hspace{-3px}]$, performing a modulus switching protocol to $\mathbb{F}_3$ shares, followed by locally computing the output shares $[\hspace{-3px}[y]\hspace{-3px}]=B\cdot_3 [\hspace{-3px}[v]\hspace{-3px}]$. We design a bespoke MPC-in-the-Head (MPCitH) signature scheme that evaluates the OWF, achieving state-of-the-art performance. The resulting signature has a size ranging from 4.0-5.5 KB, achieving a $2\text{-}3\times$ reduction compared to Dinur et al. To the best of our knowledge, this is only $\approx 5\%$ larger than the smallest signature based on symmetric key primitives, including the latest NIST PQC competition submissions. We additionally show that our core techniques can be extended to build very small post-quantum ring signatures for small- to medium-sized rings that are competitive with state-of-the-art lattice-based schemes. Our techniques are in fact more generally applicable to set membership in MPCitH.
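The mechanics of the shared OWF evaluation sketched above can be pinned down in a few lines. Below is a minimal, non-secure Python sketch (toy dimensions, clear-text modulus switch) illustrating why the two linear maps are local on shares; the names and parameter values are our own, not the paper's.

```python
# Plain evaluation of f(x) = B ._3 (A ._2 x) plus the share-wise structure
# used inside MPC: XOR-shares of x pass through A for free, are switched to
# mod-3 values, then pass through B. The modulus switch here is done in the
# clear only to show the data flow; a real protocol replaces it with a
# secure subprotocol. Dimensions are toy assumptions.

import secrets

def matvec(M, x, p):
    """M (list of rows) times vector x, all arithmetic mod p."""
    return [sum(m * v for m, v in zip(row, x)) % p for row in M]

def owf(A, B, x):
    v = matvec(A, x, 2)      # A ._2 x over F_2
    return matvec(B, v, 3)   # B ._3 v over F_3

def xor_share(x):
    r = [secrets.randbelow(2) for _ in x]
    return r, [a ^ b for a, b in zip(x, r)]

if __name__ == "__main__":
    n, m, t = 8, 6, 4  # toy dimensions (assumptions)
    A = [[secrets.randbelow(2) for _ in range(n)] for _ in range(m)]
    B = [[secrets.randbelow(3) for _ in range(n - 2)] for _ in range(t)][:t]
    B = [[secrets.randbelow(3) for _ in range(m)] for _ in range(t)]
    x = [secrets.randbelow(2) for _ in range(n)]

    x0, x1 = xor_share(x)                        # secret share x over F_2
    v0, v1 = matvec(A, x0, 2), matvec(A, x1, 2)  # local linear maps, "free"
    v = [a ^ b for a, b in zip(v0, v1)]          # modulus switch (in the clear here)
    assert matvec(B, v, 3) == owf(A, B, x)
    print("shared evaluation matches direct evaluation")
```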
Last updated:  2024-05-25
Probabilistically Checkable Arguments for all NP
Shany Ben-David
A probabilistically checkable argument (PCA) is a computational relaxation of PCPs, where soundness is guaranteed to hold only for false proofs generated by a computationally bounded adversary. The advantage of PCAs is that they are able to overcome the limitations of PCPs. A succinct PCA has a proof length that is polynomial in the witness length (and is independent of the non-deterministic verification time), which is impossible for PCPs, under standard complexity assumptions. Bronfman and Rothblum (ITCS 2022) constructed succinct PCAs for NC that are publicly-verifiable and have constant query complexity under the sub-exponential hardness of LWE. We construct a publicly-verifiable succinct PCA with constant query complexity for all NP in the adaptive security setting. Our PCA scheme offers several improvements compared to the Bronfman and Rothblum construction: (1) it applies to all problems in NP, (2) it achieves adaptive security, and (3) it can be realized under any of the following assumptions: the polynomial hardness of LWE; $O(1)$-LIN on bilinear maps; or sub-exponential DDH. Moreover, our PCA scheme has a succinct prover, which means that for any NP relation that can be verified in time $T$ and space $S$, the proof can be generated in time $O_{\lambda,m}(T\cdot\mathrm{polylog}(T))$ and space $O_{\lambda,m}(S\cdot\mathrm{polylog}(T))$. Here, ${O}_{\lambda,m}$ accounts for polynomial factors in the security parameter and in the size of the witness. En route, we construct a new complexity-preserving RAM delegation scheme that is used in our PCA construction and may be of independent interest.
Last updated:  2024-05-25
Generic MitM Attack Frameworks on Sponge Constructions
Xiaoyang Dong, Boxin Zhao, Lingyue Qin, Qingliang Hou, Shun Zhang, and Xiaoyun Wang
This paper proposes general meet-in-the-middle (MitM) attack frameworks for preimage and collision attacks on hash functions based on the (generalized) sponge construction. As the first contribution, our MitM preimage attack framework covers a wide range of sponge-based hash functions, especially those whose claimed preimage security level is lower than their output size. Such hash functions have been very widely standardized (e.g., Ascon-Hash and PHOTON), but are rarely studied against preimage attacks. Even the recent MitM attack framework on the sponge construction by Qin et al. (EUROCRYPT 2023) cannot attack those hash functions. As the second contribution, our MitM collision attack framework provides a different tool for collision cryptanalysis of the sponge construction, while previous collision attacks on the sponge construction are mainly based on differential attacks. Most of the results in this paper are the first third-party cryptanalysis results. Where previous cryptanalysis exists, our new results significantly improve on it, e.g., improving the previous 2-round collision attack on Ascon-Hash to 4 rounds, and improving the previous 3.5-round quantum preimage attack on SPHINCS$^+$-Haraka to a 4-round classical preimage attack.
Last updated:  2024-05-25
Nonadaptive One-Way to Hiding Implies Adaptive Quantum Reprogramming
Joseph Jaeger
An important proof technique in the random oracle model involves reprogramming it on hard-to-predict inputs and arguing that an attacker cannot detect that this occurred. In the quantum setting, a particularly challenging version of this considers adaptive reprogramming, wherein the points to be reprogrammed (or the output values they should be programmed to) depend on choices made by the adversary. Frameworks for analyzing adaptive reprogramming were given, e.g., by Unruh (CRYPTO 2014), Grilo-Hövelmanns-Hülsing-Majenz (ASIACRYPT 2021), and Pan-Zeng (PKC 2024). We show, counterintuitively, that these adaptive results follow directly from the non-adaptive one-way to hiding theorem of Ambainis-Hamburg-Unruh (CRYPTO 2019). These implications contradict beliefs (whether stated explicitly or implicitly) that some properties of the adaptive frameworks cannot be provided by the Ambainis-Hamburg-Unruh result.
Last updated:  2024-05-25
Measure-Rewind-Extract: Tighter Proofs of One-Way to Hiding and CCA Security in the Quantum Random Oracle Model
Jiangxia Ge, Heming Liao, and Rui Xue
The One-Way to Hiding (O2H) theorem, first given by Unruh (J ACM 2015) and then restated by Ambainis et al. (CRYPTO 2019), is a crucial technique for solving the reprogramming problem in the quantum random oracle model (QROM). It provides an upper bound $d\cdot\sqrt{\epsilon}$ for the distinguisher's advantage, where $d$ is the query depth and $\epsilon$ denotes the advantage of a one-wayness attacker. Later, in order to obtain a tighter upper bound, Kuchta et al. (EUROCRYPT 2020) proposed the Measure-Rewind-Measure (MRM) technique and then proved the Measure-Rewind-Measure O2H (MRM-O2H) theorem, which provides the upper bound $d\cdot\epsilon$. They also proposed an open question: Can we combine their MRM technique with Ambainis et al.'s semi-classical oracle technique (CRYPTO 2019) or Zhandry's compressed oracle technique (CRYPTO 2019) to prove a new O2H theorem with an upper bound even tighter than $d\cdot\epsilon$? In this paper, we give an affirmative answer to the above question. We propose a new technique named Measure-Rewind-Extract (MRE) by combining the MRM technique with the semi-classical oracle technique. Using the MRE technique, we prove the Measure-Rewind-Extract O2H (MRE-O2H) theorem, which provides the upper bound $\sqrt{d}\cdot\epsilon$. As an important application of our MRE-O2H theorem, for the $FO^{\cancel{\bot}}$, $FO_m^{\cancel{\bot}}$, $FO^{\bot}$ and $FO_m^\bot$ proposed by Hofheinz et al. (TCC 2017), i.e., the key encapsulation mechanism (KEM) variants of the Fujisaki-Okamoto transformation, we prove the following results in the QROM: Their IND-CCA security can be reduced to the IND-CPA security of the underlying public key encryption (PKE) scheme without the square-root advantage loss. In particular, compared with the IND-CCA proof of $FO^{\cancel{\bot}}$ given by Kuchta et al. (EUROCRYPT 2020), ours removes the injectivity assumption and has a tighter security bound. Under the assumption that the underlying PKE scheme is unique randomness recoverable, we prove for the first time that their IND-CCA security can be reduced to the OW-CPA security of the underlying PKE scheme without the square-root advantage loss.
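For intuition about how the three bounds above compare, here is a tiny numeric check of $d\cdot\sqrt{\epsilon}$, $d\cdot\epsilon$, and $\sqrt{d}\cdot\epsilon$ on illustrative parameters (our choice, not values from the paper).

```python
# Compare the three O2H-style advantage bounds discussed above on toy
# parameters. Since eps < 1, sqrt(d)*eps <= d*eps <= d*sqrt(eps), so the
# MRE bound is the tightest of the three.

from math import sqrt

d, eps = 2**20, 2**-40  # illustrative query depth and attacker advantage

print("d*sqrt(eps) =", d * sqrt(eps))   # Ambainis et al. (CRYPTO 2019)
print("d*eps       =", d * eps)         # MRM-O2H (EUROCRYPT 2020)
print("sqrt(d)*eps =", sqrt(d) * eps)   # MRE-O2H (this paper)
```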
Last updated:  2024-05-25
Minimize the Randomness in Rasta-Like Designs: How Far Can We Go?
Lorenzo Grassi, Fukang Liu, Christian Rechberger, Fabian Schmid, Roman Walch, and Qingju Wang
The Rasta design strategy allows building low-round ciphers due to its efficient prevention of statistical attacks and algebraic attacks by randomizing the cipher, which makes it especially suitable for hybrid homomorphic encryption (HHE), also known as transciphering. Such randomization is obtained by pseudorandomly sampling new invertible matrices for each round of each new cipher evaluation. However, naively sampling a random invertible matrix for each round significantly impacts the plain evaluation runtime, though it does not impact the homomorphic evaluation cost. To address this issue, Dasta was proposed at ToSC 2020 to reduce the cost of generating the random matrices. In this work, we address this problem from a different perspective: How far can the randomness in Rasta-like designs be reduced in order to minimize the plain evaluation runtime without sacrificing security? To answer this question, we carefully studied the main threats to Rasta-like ciphers and the role of random matrices in ensuring security. We apply our results to the recently proposed cipher $\text{PASTA}$, proposing a modified version called $\text{PASTA}_\text{v2}$ instantiated with one initial random matrix and fixed linear layers (obtained by combining two MDS matrices with the Kronecker product) for the other rounds. Compared with $\text{PASTA}$, the state-of-the-art cipher for BGV- and BFV-style HHE, our evaluation shows that $\text{PASTA}_\text{v2}$ is up to 100% faster in plain evaluation while having the same homomorphic runtime in the SEAL homomorphic encryption library, and up to 30% faster evaluation time in HElib.
Last updated:  2024-05-24
Succinct Homomorphic Secret Sharing
Damiano Abram, Lawrence Roy, and Peter Scholl
This work introduces homomorphic secret sharing (HSS) with succinct share size. In HSS, private inputs are shared between parties, who can then homomorphically evaluate a function on their shares, obtaining a share of the function output. In succinct HSS, a portion of the inputs can be distributed using shares whose size is sublinear in the number of such inputs. The parties can then locally evaluate a function $f$ on the shares, with the restriction that $f$ must be linear in the succinctly shared inputs. We construct succinct, two-party HSS for branching programs, based on either the decisional composite residuosity assumption, a DDH-like assumption in class groups, or learning with errors with a superpolynomial modulus-to-noise ratio. We then give several applications of succinct HSS, which were only previously known using fully homomorphic encryption, or stronger tools: - Succinct vector oblivious linear evaluation (VOLE): Two parties can obtain secret shares of a long, arbitrary vector $\vec x$, multiplied by a scalar $\Delta$, with communication sublinear in the size of the vector. - Batch, multi-party distributed point functions: A protocol for distributing a batch of secret, random point functions among $N$ parties, for any polynomial $N$, with communication sublinear in the number of DPFs. - Sublinear MPC for any number of parties: Two new constructions of MPC with sublinear communication complexity, with $N$ parties for any polynomial $N$: (1) For general layered Boolean circuits of size $s$, with communication $O(N s/\log\log s)$, and (2) For layered, sufficiently wide Boolean circuits, with communication $O(N s/\log s)$.
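To fix notation for the succinct VOLE application above, the target correlation can be written out directly: one party ends up with $(\vec x, \vec b)$ and the other with $(\Delta, \vec a)$ such that $\vec a = \Delta\vec x + \vec b$ componentwise. The Python sketch below generates this correlation in the clear just to pin down the relation; the paper's contribution is realizing it with communication sublinear in the vector length.

```python
# Generate a VOLE correlation a = Delta*x + b over a toy prime field,
# in the clear, purely to illustrate the target relation of succinct VOLE.
# Field size and vector length are illustrative assumptions.

import secrets

P = 2**61 - 1  # toy prime field (assumption)

def vole_correlation(x, p=P):
    delta = secrets.randbelow(p)
    b = [secrets.randbelow(p) for _ in x]
    a = [(delta * xi + bi) % p for xi, bi in zip(x, b)]
    return delta, a, b

if __name__ == "__main__":
    x = [secrets.randbelow(P) for _ in range(8)]
    delta, a, b = vole_correlation(x)
    assert all(ai == (delta * xi + bi) % P for xi, ai, bi in zip(x, a, b))
    print("VOLE relation a = Delta*x + b holds componentwise")
```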
Last updated:  2024-05-24
Privacy-Preserving Blueprints via Succinctly Verifiable Computation over Additively-Homomorphically Encrypted Data
Scott Griffy, Markulf Kohlweiss, Anna Lysyanskaya, and Meghna Sengupta
Introduced by Kohlweiss, Lysyanskaya, and Nguyen (Eurocrypt'23), an $f$-privacy-preserving blueprint (PPB) system allows an auditor with secret input $x$ to create a public encoding of the function $f(x,\cdot)$ that verifiably corresponds to a commitment $C_x$ to $x$. The auditor will then be able to derive $f(x,y)$ from an escrow $Z$ computed by a user on input the user's private data $y$ corresponding to a commitment $C_y$. $Z$ verifiably corresponds to the commitment $C_y$ and reveals no other information about $y$. PPBs provide an abuse-resistant escrow mechanism: for example, if $f$ is the watchlist function where $f(x,y)$ outputs $y$ only in the event that $y$ is on the list $x$, then an $f$-PPB allows the auditor to trace watchlisted users in an otherwise anonymous system. Yet, the auditor's $x$ must correspond to a publicly available $C_x$ (authorized by a transparent, lawful process), and the auditor will learn nothing except $f(x,y)$. In this paper, we build on the original PPB results in three ways: (1) We define and satisfy a stronger notion of security where a malicious auditor cannot frame a user in a transaction to which this user was not a party. (2) We provide efficient schemes for a bigger class of functions $f$; for example, for the first time, we show how to realize $f$ that would allow the auditor to trace e-cash transactions of a criminal suspect. (3) For the watchlist and related functions, we reduce the size of the escrow $Z$ from linear in the size of the auditor's input $x$, to logarithmic. Of independent interest, we develop a new framework for succinctly verifiable computation over additively-homomorphically encrypted data.
Last updated:  2024-05-24
Logstar: Efficient Linear* Time Secure Merge
Suvradip Chakraborty, Stanislav Peceny, Srinivasan Raghuraman, and Peter Rindal
Secure merge considers the problem of combining two sorted lists into a single sorted secret-shared list. Merge is a fundamental building block for many real-world applications. For example, secure merge can implement a large number of SQL-like database joins, which are essential for almost any data processing task such as privacy-preserving fraud detection, ad conversion rates, data deduplication, and many more. We present two constructions that trade off communication bandwidth against round complexity. Logstar, our bandwidth-optimized construction, takes inspiration from Falk and Ostrovsky (ITC, 2021) and runs in $O(n\log^*n)$ time and communication with $O(\log n)$ rounds. In particular, for all conceivable $n$, the $\log^*n$ factor will be equal to the constant $2$, and therefore we achieve a near-linear running time. Median, our rounds-optimized construction, builds on the classic parallel medians-based insecure merge approach of Valiant (SIAM J. Comput., 1975), later explored in the secure setting by Blunk et al. (2022), and requires $O(n \log^c n)$, $c \approx 1.71$, communication with $O(\log \log n)$ rounds. We introduce two additional constructions that merge input lists of different sizes. SquareRootMerge merges lists of sizes $n^{\frac{1}{2}}$ and $n$, and runs in $O(n)$ time and communication with $O(\log n)$ rounds. CubeRootMerge is closely inspired by Blunk et al.'s (2022) construction and merges lists of sizes $n^{\frac{1}{3}}$ and $n$. It runs in $O(n)$ time and communication with $O(1)$ rounds. We optimize our constructions for concrete efficiency. Today, concretely efficient secure merge protocols rely on standard techniques such as Batcher's merging network or generic sorting. These approaches require an $O(n \log n)$ size circuit of $O(\log n)$ depth. Despite significant research effort, no work has been able to reduce their concrete costs. Our constructions are the first to be more efficient, improving the asymptotics while maintaining small constants. We analytically benchmark against these constructions and show that Logstar reduces bandwidth costs $\approx1.43\times$ and Median reduces rounds $\approx1.62\times$.
Last updated:  2024-05-24
Exponential Quantum Speedup for the Traveling Salesman Problem
Anant Sharma, Nupur Deshpande, Sanchita Ghosh, Sreetama Das, and Shibdas Roy
The traveling salesman problem is the problem of finding the shortest route in a network of cities that a salesman needs to travel to cover all the cities, without visiting the same city more than once. This problem is known to be $NP$-hard, with a brute-force complexity of $O(N^N)$ or $O(N^{2N})$ for $N$ cities. It is equivalent to finding the shortest Hamiltonian cycle in a given graph, if at least one Hamiltonian cycle exists in it. Quantum algorithms for this problem typically provide only a quadratic speedup, using Grover's search, and thereby have a complexity of $O(N^{N/2})$ or $O(N^N)$. We present a bounded-error quantum polynomial-time (BQP) algorithm for solving the problem, providing an exponential speedup. The overall complexity of our algorithm is $O(N^3\log(N)\kappa/\epsilon + 1/\epsilon^3)$, where the errors $\epsilon$ are $O(1/{\rm poly}(N))$, and $\kappa$ is the not-too-large condition number of the matrix encoding all Hamiltonian cycles.
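For reference, the brute-force baseline the abstract quotes simply tries all orderings of the cities; the Python sketch below does this on a toy instance, making the factorial-type scaling explicit.

```python
# Classical brute-force TSP baseline: enumerate all tours starting at city 0
# and keep the cheapest. Runtime grows like N!, the exponential scaling the
# quantum algorithm above aims to beat. Toy distance matrix (assumption).

from itertools import permutations

def brute_force_tsp(dist):
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):  # fix city 0 as the start
        tour = (0,) + perm
        cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

if __name__ == "__main__":
    dist = [[0, 2, 9, 10],
            [1, 0, 6, 4],
            [15, 7, 0, 8],
            [6, 3, 12, 0]]
    print(brute_force_tsp(dist))
```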
Last updated:  2024-05-24
Detecting Rogue Decryption in (Threshold) Encryption via Self-Incriminating Proofs
James Hsin-yu Chiang, Bernardo David, Tore Kasper Frederiksen, Arup Mondal, and Esra Yeniaras
Keeping decrypting parties accountable in public key encryption is notoriously hard since the secret key owner can decrypt any arbitrary ciphertext. Threshold encryption aims to solve this issue by distributing the power to decrypt among a set of parties, who must interact via a decryption protocol. However, such parties can employ cryptographic tools such as Multiparty Computation (MPC) to decrypt arbitrary ciphertexts without being detected. We introduce the notion of (threshold) encryption with Self-Incriminating Proofs, where parties must produce a self-incriminating proof of decryption when decrypting every ciphertext. In the standard public key encryption case, the adversary could destroy these proofs, so we strengthen our notion to guarantee that the proofs are published when decryption succeeds. This creates a decryption audit trail, which is useful in scenarios where decryption power is held by a single trusted party (e.g., a Trusted Execution Environment) who must be kept accountable. In the threshold case, we ensure that at least one of the parties who execute the decryption protocol will learn a self-incriminating proof, even if they employ advanced tools such as MPC. The fact that a party learns the proof and may leak it at any moment functions as a deterrent for parties who do not wish to be identified as malicious decryptors (e.g., a commercial operator of a service based on threshold encryption). We investigate the (im)possibility and applications of our notions while providing matching constructions under appropriate assumptions. In the threshold case, we build on recent results on Individual Cryptography (CRYPTO 2023).
Last updated:  2024-05-24
Relations among new CCA security notions for approximate FHE
Sébastien Canard, Caroline Fontaine, Duong Hieu Phan, David Pointcheval, Marc Renard, and Renaud Sirdey
In a recent Eurocrypt'24 paper, Manulis and Nguyen have proposed a new CCA security notion, vCCA, and associated construction blueprints to leverage both CPA-secure and correct FHE beyond the CCA1 security barrier. However, because their approach is only valid under the correctness assumption, it leaves a large part of the FHE spectrum uncovered as many FHE schemes used in practice turn out to be approximate and, as such, do not satisfy the correctness assumption. In this paper, we improve their work by defining and investigating a variant of their security notion which is suitable for a more general case where approximate FHE schemes are included. As the passive security of approximate FHE schemes is more appropriately captured by CPAD rather than CPA security, we start from the former notion to define our new vCCAD security notion. Although we show that vCCA and vCCAD are equivalent when the correctness assumption holds, we establish that vCCAD security is strictly stronger than vCCA security in the general case. In doing so, we interestingly establish several new separation results between variants of CPAD security of increasing strength. This allows us to clarify the relationship between vCCA security and CPAD security, and to reveal that the security notions landscape is much simpler for exact FHE than when approximate ones are included --- in which case, for example, we establish that multiple-challenge security notions are strictly stronger than single-challenge ones for both CPAD and vCCAD security. Lastly, we also give concrete construction blueprints, showing how to leverage some of the blueprints proposed by Manulis and Nguyen to achieve vCCAD security. As a result, vCCAD security is the strongest CCA security notion so far known to be achievable by both correct and approximate FHE schemes.
Last updated:  2024-05-24
Traceable Secret Sharing Based on the Chinese Remainder Theorem
Charlotte Hoffmann
Traceable threshold secret sharing schemes, introduced by Goyal, Song and Srinivasan (CRYPTO'21), allow one to provably trace leaked shares to the parties that leaked them. The authors give the first definition and construction of traceable secret sharing schemes. However, the size of the shares in their construction is quadratic in the size of the secret. Boneh, Partap and Rotem (CRYPTO'24) recently proposed a new definition of traceable secret sharing and the first practical constructions. In their definition, one considers a reconstruction box $R$ that contains $f$ leaked shares and, on input $t-f$ additional shares, outputs the secret $s$. A scheme is traceable if one can identify the leaked shares inside the box $R$ given only black-box access to $R$. Boneh, Partap and Rotem give constructions from Shamir's secret sharing and Blakley's secret sharing. The constructions are efficient as the size of the secret shares is only twice the size of the secret. In this work we present the first traceable secret sharing scheme based on the Chinese remainder theorem. This was stated as an open problem by Boneh, Partap and Rotem, as it gives rise to traceable secret sharing with weighted threshold access structures. The scheme is based on Mignotte's secret sharing and increases the size of the shares of the standard Mignotte secret sharing scheme by a factor of $2$.
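To illustrate the underlying primitive, here is a minimal Python sketch of Mignotte's $(t, n)$ threshold scheme over toy moduli: shares are residues of the secret, and any $t$ of them reconstruct it via the Chinese remainder theorem. This shows the base scheme only, not the paper's traceability mechanism.

```python
# Mignotte (t, n) threshold secret sharing over toy pairwise-coprime moduli.
# The secret lives in the "Mignotte gap" between the product of the t-1
# largest moduli and the product of the t smallest. Toy values (assumptions).

from math import prod

def crt(residues, moduli):
    """Chinese remainder theorem for pairwise-coprime moduli."""
    M, x = prod(moduli), 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

def mignotte_share(s, moduli):
    return [s % m for m in moduli]

if __name__ == "__main__":
    moduli = [11, 13, 17, 19, 23]          # pairwise coprime; a toy (3, 5) setup
    t = 3
    lo = prod(sorted(moduli)[-(t - 1):])   # product of the t-1 largest moduli
    hi = prod(sorted(moduli)[:t])          # product of the t smallest moduli
    assert lo < hi
    s = (lo + hi) // 2                     # a secret inside the Mignotte gap
    shares = mignotte_share(s, moduli)
    # any t shares reconstruct s, since the product of their moduli exceeds s
    assert crt(shares[:t], moduli[:t]) == s
    assert crt(shares[2:5], moduli[2:5]) == s
    print("secret", s, "reconstructed from any", t, "shares")
```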
Last updated:  2024-05-24
SoK: Public Randomness
Alireza Kavousi, Zhipeng Wang, and Philipp Jovanovic
Public randomness is a fundamental component in many cryptographic protocols and distributed systems and often plays a crucial role in ensuring their security, fairness, and transparency properties. Driven by the surge of interest in blockchain and cryptocurrency platforms and the usefulness of such a building block in those areas, designing secure protocols to generate public randomness in a distributed manner has received considerable attention in recent years. This paper presents a systematization of knowledge on the topic of public randomness with a focus on cryptographic tools providing public verifiability and key themes underlying these systems. We provide concrete insights on how state-of-the-art protocols achieve this task efficiently in an adversarial setting and present various research gaps that may be of interest for future research.
Last updated:  2024-05-24
The Perils of Limited Key Reuse: Adaptive and Parallel Mismatch Attacks with Post-processing Against Kyber
Qian Guo, Erik Mårtensson, and Adrian Åström
In this paper, we study the robustness of Kyber, the Learning With Errors (LWE)-based Key Encapsulation Mechanism (KEM) chosen for standardization by NIST, against key mismatch attacks. We demonstrate that Kyber's security levels can be compromised with a few mismatch queries by striking a balance between the parallelization level and the cost of lattice reduction for post-processing. This highlights the imperative need to strictly prohibit key reuse in CPA-secure Kyber. We further propose an adaptive method to enhance parallel mismatch attacks, initially proposed by Shao et al. at AsiaCCS 2024, thereby significantly reducing query complexity. This method combines the adaptive attack with post-processing via lattice reduction to retrieve the final secret key entries. Our method proves its efficacy by reducing query complexity by 14.6% for Kyber512 and 7.5% for Kyber768/Kyber1024. Furthermore, this approach has the potential to substantially improve multi-value Plaintext-Checking (PC) oracle-based side-channel attacks against the CCA-secure version of Kyber KEM.
Last updated:  2024-05-24
Closing the Efficiency Gap between Synchronous and Network-Agnostic Consensus
Giovanni Deligios and Mose Mizrahi Erbes
In the consensus problem, $n$ parties want to agree on a common value, even if some of them are corrupt and arbitrarily misbehave. If the parties have a common input $m$, then they must agree on $m$. Protocols solving consensus assume either a synchronous communication network, where messages are delivered within a known time, or an asynchronous network with arbitrary delays. Asynchronous protocols only tolerate $t_a < n/3$ corrupt parties. Synchronous ones can tolerate $t_s < n/2$ corruptions with setup, but their security completely breaks down if the synchrony assumptions are violated. Network-agnostic consensus protocols, as introduced by Blum, Katz, and Loss [TCC'19], are secure regardless of network conditions, tolerating up to $t_s$ corruptions with synchrony and $t_a$ without, under provably optimal assumptions $t_a \leq t_s$ and $2t_s + t_a < n$. Despite efforts to improve their efficiency, all known network-agnostic protocols fall short of the asymptotic complexity of state-of-the-art purely synchronous protocols. In this work, we introduce a novel technique to compile any synchronous and any asynchronous consensus protocols into a network-agnostic one. This process only incurs a small constant number of overhead rounds, so that the compiled protocol matches the optimal round complexity for synchronous protocols. Our compiler also preserves under a variety of assumptions the asymptotic communication complexity of state-of-the-art synchronous and asynchronous protocols. Hence, it closes the current efficiency gap between synchronous and network-agnostic consensus. As a plus, our protocols support $\ell$-bit inputs, and can be extended to achieve communication complexity $O(n^2\kappa + \ell n)$ under the assumptions for which this is known to be possible for purely synchronous protocols.
Last updated:  2024-05-24
Linear-Communication Asynchronous Complete Secret Sharing with Optimal Resilience
Xiaoyu Ji, Junru Li, and Yifan Song
Secure multiparty computation (MPC) allows a set of $n$ parties to jointly compute a function on their private inputs. In this work, we focus on the information-theoretic MPC in the \emph{asynchronous network} setting with optimal resilience ($t<n/3$). The best-known result in this setting is achieved by Choudhury and Patra [J. Cryptol '23], which requires $O(n^4\kappa)$ bits per multiplication gate, where $\kappa$ is the size of a field element. An asynchronous complete secret sharing (ACSS) protocol allows a dealer to share a batch of Shamir sharings such that all parties eventually receive their shares. ACSS is an important building block in AMPC. The best-known result of ACSS is due to Choudhury and Patra [J. Cryptol '23], which requires $O(n^3\kappa)$ bits per sharing. On the other hand, in the synchronous setting, it is known that distributing Shamir sharings can be achieved with $O(n\kappa)$ bits per sharing. There is a gap of $n^2$ in the communication between the synchronous setting and the asynchronous setting. Our work closes this gap by presenting the first ACSS protocol that achieves $O(n\kappa)$ bits per sharing. When combined with the compiler from ACSS to AMPC by Choudhury and Patra [IEEE Trans. Inf. Theory '17], we obtain an AMPC with $O(n^2\kappa)$ bits per multiplication gate, improving the previously best-known result by a factor of $n^2$. Moreover, with a concurrent work that improves the compiler by Choudhury and Patra by a factor of $n$, we obtain the first AMPC with $O(n\kappa)$ bits per multiplication gate.
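For comparison with the $O(n\kappa)$ figure above, recall that a single Shamir sharing costs the dealer one $\kappa$-bit share per party: the dealer evaluates a random degree-$t$ polynomial with constant term $s$ at $n$ points. A minimal Python sketch over a toy prime field (parameter choices are ours):

```python
# Shamir sharing and Lagrange reconstruction over a toy prime field.
# One field element (~kappa bits) per party is the communication baseline
# referenced above. Field and parameters are illustrative assumptions.

import secrets

P = 2**61 - 1  # toy prime; each share is roughly kappa ~ 61 bits

def shamir_share(s, t, n, p=P):
    """Degree-t polynomial with constant term s, evaluated at 1..n."""
    coeffs = [s] + [secrets.randbelow(p) for _ in range(t)]
    return [(i, sum(c * pow(i, j, p) for j, c in enumerate(coeffs)) % p)
            for i in range(1, n + 1)]

def shamir_reconstruct(shares, p=P):
    """Lagrange interpolation at 0 from t+1 shares."""
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        s = (s + yi * num * pow(den, -1, p)) % p
    return s

if __name__ == "__main__":
    n, t, secret = 7, 2, 424242  # t < n/3, matching the optimal-resilience setting
    shares = shamir_share(secret, t, n)
    assert shamir_reconstruct(shares[:t + 1]) == secret
    print("reconstructed from", t + 1, "of", n, "shares")
```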
Last updated:  2024-05-24
Multiple Group Action Dlogs with(out) Precomputation
Alexander May and Massimo Ostuzzi
Let $\star: G \times X \rightarrow X$ be the action of a group $G$ of size $N=|G|$ on a set $X$. Let $y = g \star x \in X$ be a group action dlog instance, where our goal is to compute the unknown group element $g \in G$ from the known set elements $x,y \in X$. The Galbraith-Hess-Smart (GHS) collision finding algorithm solves the group action dlog in $N^{\frac 1 2}$ steps with polynomial memory. We show that group action dlogs are suitable for precomputation attacks. More precisely, for any $s,t$ our precomputation algorithm computes within $st$ steps a hint of space complexity $s$, which allows solving any group action dlog in an online phase within $t$ steps. A typical instantiation is $s=t=N^{\frac 1 3}$, which gives precomputation time $N^{\frac 2 3}$ and space $N^{\frac 1 3}$, and online time only $N^{\frac 1 3}$. Moreover, we show that solving multiple group action dlog instances $y_1, \ldots , y_m$ allows for speedups. Namely, our collision finding algorithm solves $m$ group action dlogs in $\sqrt{m}N^{\frac 1 2}$ steps, instead of the straightforward $mN^{\frac 1 2}$ steps required for running GHS $m$ times. Interestingly, our multi-instance algorithm (with precomputation) can be seen as a special case of our precomputation algorithm. Our multi-instance approach can be freely combined with our precomputations, allowing for a variety of tradeoffs. Technically, our precomputation and multi-instance group action dlog attacks are adaptations of the techniques from the standard dlog setting in abelian groups. While such an adaptation seems natural, it is per se unclear which techniques transfer from the dlog to the more general group action dlog setting, for which $X$ does not offer a group structure. Our algorithms have direct implications for all group-action-based cryptosystems, such as CSIDH and its variants. We provide experimental evidence that our techniques work well in the CSIDH setting.
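The offline/online structure above can be illustrated with a deliberately oversimplified stand-in action ($\mathbb{Z}_N$ acting on itself by addition); the Python sketch below stores a hint of $s$ points offline and then solves an instance online in roughly $N/s$ expected steps. Toy parameters, not a cryptographic instantiation.

```python
# Precomputation tradeoff for a group action dlog, on a toy action:
# Z_N acting on itself by addition. Offline: store s points r * x with
# known r. Online: sample h until h * y hits the table; a hit at r * x
# means r * x == h * (g * x), i.e. g = r - h. Toy parameters (assumptions).

import secrets

N = 10_007  # toy group size

def act(g, x):               # the stand-in group action: addition mod N
    return (g + x) % N

def precompute(x, s):
    """Offline phase: a hint of s known-randomizer points."""
    table = {}
    for _ in range(s):
        r = secrets.randbelow(N)
        table[act(r, x)] = r
    return table

def online_solve(y, table):
    """Online phase: expected ~N/s trials until a table hit."""
    while True:
        h = secrets.randbelow(N)
        point = act(h, y)
        if point in table:
            return (table[point] - h) % N

if __name__ == "__main__":
    x = secrets.randbelow(N)
    g = secrets.randbelow(N)
    y = act(g, x)
    table = precompute(x, s=100)
    assert online_solve(y, table) == g
    print("recovered g from the precomputed hint")
```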
Last updated:  2024-05-24
Reducing Overdefined Systems of Polynomial Equations Derived from Small Scale Variants of the AES via Data Mining Methods
Jana Berušková, Martin Jureček, and Olha Jurečková
This paper deals with reducing the secret-key computation time of small-scale variants of the AES cipher using algebraic cryptanalysis accelerated by data mining methods. This work is based on the known-plaintext attack and aims to speed up the calculation of the secret key by processing the polynomial equations extracted from plaintext-ciphertext pairs. Specifically, we propose to transform the overdefined system of polynomial equations over GF(2) into a new system so that the computation of the Gröbner basis using the F4 algorithm takes less time than for the original system. The main idea is to group similar polynomials into clusters and, for each cluster, sum the two most similar polynomials, resulting in simpler polynomials. We compare different data mining techniques for finding similar polynomials, such as clustering and locality-sensitive hashing (LSH). Experimental results show that using the LSH technique, we obtain a system of equations for which the Gröbner basis can be calculated fastest, compared to the other methods considered in this work. They also show that the time to calculate the Gröbner basis for the transformed system of equations is significantly reduced compared to computing it from the original, non-transformed system. This paper demonstrates that reducing an overdefined system of equations reduces the computation time for finding a secret key.
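Here is a small sketch of the reduction step described above, with Jaccard similarity standing in for the paper's clustering/LSH choices: polynomials over GF(2) are represented as sets of monomials, and the most similar pair is replaced by one of the pair plus their sum, which cancels the shared monomials. The data and similarity measure are our illustrative assumptions.

```python
# Sum the two most similar GF(2) polynomials, represented as sets of
# monomials: addition over GF(2) is symmetric difference, so shared
# monomials cancel and the result is simpler. Toy data (assumptions).

from itertools import combinations

def jaccard(p, q):
    return len(p & q) / len(p | q) if p | q else 1.0

def reduce_once(polys):
    """Replace the most similar pair by (one of the pair, their GF(2) sum)."""
    i, j = max(combinations(range(len(polys)), 2),
               key=lambda ij: jaccard(polys[ij[0]], polys[ij[1]]))
    summed = polys[i] ^ polys[j]   # symmetric difference = sum over GF(2)
    rest = [p for k, p in enumerate(polys) if k not in (i, j)]
    # keeping polys[i] alongside the sum preserves the span of the system
    return rest + [polys[i], summed]

if __name__ == "__main__":
    # polynomials as sets of monomials, e.g. {"x1x2", "x3", "1"}
    polys = [{"x1x2", "x2x3", "x1", "1"},
             {"x1x2", "x2x3", "x3"},
             {"x1x3", "x2"}]
    reduced = reduce_once(polys)
    print(reduced[-1])  # the simpler summed polynomial: x1 + x3 + 1
```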
Last updated:  2024-05-24
Efficient Instances of Docked Double Decker With AES, and Application to Authenticated Encryption
Christoph Dobraunig, Krystian Matusiewicz, Bart Mennink, and Alexander Tereschenko
A tweakable wide blockcipher is a construction which behaves in the same way as a tweakable blockcipher, with the difference that the actual block size is flexible. Due to this feature, a tweakable wide blockcipher can be directly used as a strong encryption scheme that provides full diffusion when encrypting plaintexts to ciphertexts and vice versa. Furthermore, it can be the basis of authenticated encryption schemes fulfilling the strongest security notions. In this paper, we present three instantiations of the docked double decker tweakable wide blockcipher: $\mathit{ddd}\text{-}\mathit{AES}$, $\mathit{ddd}\text{-}\mathit{AES}^+$, and $\mathit{bbb}\text{-}\mathit{ddd}\text{-}\mathit{AES}$. These instances exclusively use building blocks similar to those of AES-GCM (AES and finite field multiplication), are designed for maximal parallelism, and hence, can make efficient use of existing hardware accelerators. $\mathit{ddd}\text{-}\mathit{AES}$ is a birthday-bound secure scheme, and $\mathit{ddd}\text{-}\mathit{AES}^+$ is an immediate generalization to allow for variable-length tweaks. $\mathit{bbb}\text{-}\mathit{ddd}\text{-}\mathit{AES}$ achieves security beyond the birthday bound provided that the same tweak is not used too often. Moreover, $\mathit{bbb}\text{-}\mathit{ddd}\text{-}\mathit{AES}$ builds upon a novel conditionally beyond-birthday-bound secure pseudorandom function, a tweakable variant of the XOR of permutations, accommodating the need to include a tweak in the AES evaluations without sacrificing the flexibility of docked double decker. We furthermore introduce an authenticated encryption mode $\mathit{aaa}$ specifically tailored to be instantiated with $\mathit{ddd}\text{-}\mathit{AES}$ and $\mathit{bbb}\text{-}\mathit{ddd}\text{-}\mathit{AES}$, where special attention is given to how the nonce and associated data can be processed. We prove that this mode is secure in the nonce-respecting setting, in the nonce-misuse setting, as well as in the setting where random nonces are used.
Last updated:  2024-05-24
Arma: Byzantine Fault Tolerant Consensus with Horizontal Scalability
Yacov Manevich, Hagar Meir, Kaoutar Elkhiyaoui, Yoav Tock, and May Buzaglo
Arma is a Byzantine Fault Tolerant (BFT) consensus system designed to achieve horizontal scalability across all hardware resources: network bandwidth, CPU, and disk I/O. As opposed to preceding BFT protocols, Arma separates the dissemination and validation of client transactions from the consensus process, restricting the latter to totally ordering only metadata of batches of transactions. This separation enables each party to distribute compute and storage resources for transaction validation, dissemination, and disk I/O among multiple machines, resulting in horizontal scalability. Additionally, Arma ensures censorship resistance by imposing a maximum time limit on the inclusion of client transactions. We built and evaluated two Arma prototypes. The first is an independent system handling over 200,000 transactions per second; the second is integrated into Hyperledger Fabric, speeding up its consensus by an order of magnitude.
Last updated:  2024-05-24
A Simple Noncommutative UOV Scheme
Lih-Chung Wang, Po-En Tseng, Yen-Liang Kuan, and Chun-Yen Chou
In this paper, we propose a simple noncommutative-ring-based UOV signature scheme with key-randomness alignment: Simple NOVA, which can be viewed as a simplified version of NOVA [48]. We simplify the design of NOVA by skipping the perturbation trick used in NOVA, thus shortening the key generation process and accelerating signing and verification. Together with a corresponding small modification, this alternative version of NOVA is also secure and may be more suitable for practical use. We also use Magma to implement the scheme and give a detailed security analysis against known major attacks.
Last updated:  2024-05-24
Resettable Statistical Zero-Knowledge for NP
Susumu Kiyoshima
Resettable statistical zero-knowledge [Garg--Ostrovsky--Visconti--Wadia, TCC 2012] is a strong privacy notion that guarantees statistical zero-knowledge even when the prover uses the same randomness in multiple proofs. In this paper, we show an equivalence between resettable statistical zero-knowledge arguments for $NP$ and witness encryption schemes for $NP$. - Positive result: For any $NP$ language $L$, a resettable statistical zero-knowledge argument for $L$ can be constructed from a witness encryption scheme for $L$ under the assumption of the existence of one-way functions. - Negative result: The existence of even resettable statistical witness-indistinguishable arguments for $NP$ implies the existence of witness encryption schemes for $NP$ under the assumption of the existence of one-way functions. The positive result is obtained by naturally extending existing techniques (and is likely to be already well-known among experts). The negative result is our main technical contribution. To explore workarounds for the negative result, we also consider resettable security in a model where the honest party's randomness is only reused with fixed inputs. We show that resettable statistically hiding commitment schemes are impossible even in this model.
Last updated:  2024-05-24
DiTRU: A Resurrection of NTRU over Dihedral Group
Ali Raya, Vikas Kumar, and Sugata Gangopadhyay
NTRU-like cryptosystems are among the most studied lattice-based post-quantum candidates. While most NTRU proposals have been introduced over a commutative ring of quotient polynomials, other rings can be used. Noncommutative algebra was endorsed as a direction for building new variants of NTRU a long time ago. The first attempt to construct a noncommutative variant was due to Hoffstein and Silverman, motivated by greater resistance to lattice attacks. Their scheme was built over the group ring of a dihedral group. However, its design differed from standard NTRU and was soon found vulnerable to algebraic attacks. In this work, we revive the group-ring NTRU over the dihedral group as an instance of the GR-NTRU framework. Unlike many proposals of noncommutative variants in the literature, our work focuses on putting the scheme into practice. We address all the aspects needed to make our scheme implementable: we propose an efficient inversion algorithm over the new setting of the noncommutative ring, describe the decryption failure model, and analyze the lattice associated with our instantiation. Finally, we discuss the best-known attacks against our scheme and provide an implementation targeting the 128-bit, 192-bit, and 256-bit security levels as proof of its practicality.
Last updated:  2024-05-24
Differential Cryptanalysis on Quantum Computers
Kyungbae Jang, Yujin Oh, and Hwajeong Seo
As quantum computing progresses, extensive research has been conducted to find quantum advantages in the field of cryptography. Combining quantum algorithms with classical cryptographic analysis methods, such as differential cryptanalysis and linear cryptanalysis, has the potential to reduce complexity. In this paper, we present a quantum differential-finding circuit for differential cryptanalysis. In our quantum circuit, both the plaintext and the input difference are in a superposition state. While our method cannot achieve a direct speedup with quantum computing, it offers a different perspective by relying on quantum probability in a superposition state. For the quantum simulation, given the limited number of qubits, we simulate our quantum circuit by implementing the Toy-ASCON quantum circuit.
Last updated:  2024-05-24
Analysis on Sliced Garbling via Algebraic Approach
Taechan Kim
Recent improvements to garbled circuits are mainly focused on reducing their size. The state-of-the-art construction of Rosulek and Roy~(Crypto~2021) requires $1.5\kappa$ bits for garbling AND gates in the free-XOR setting. This is below the previously proven lower bound $2\kappa$ in the linear garbling model of Zahur, Rosulek, and Evans~(Eurocrypt~2015). Recently, Ashur, Hazay, and Satish~(eprint 2024/389) proposed a scheme that requires $4/3\kappa + O(1)$ bits for garbling AND gates. Precisely, they extended the idea of \emph{slicing} introduced by Rosulek and Roy to garble 3-input gates of the form $g(u,v,w) := u(v+w)$. By setting $w = 0$, it can be used to garble AND gates with improved communication costs. However, in this paper, we observe that the scheme proposed by Ashur, Hazay, and Satish leaks information on the permute bits, thereby allowing the evaluator to reveal information on the private inputs. To be precise, we show that in their garbling scheme, the evaluator can compute the bits $\alpha$ and $\beta + \gamma$, where $\alpha$, $\beta$, and $\gamma$ are the private permute bits of the input labels $A$, $B$, and $C$, respectively.
Last updated:  2024-05-24
Sailfish: Towards Improving the Latency of DAG-based BFT
Nibesh Shrestha, Rohan Shrothrium, Aniket Kate, and Kartik Nayak
The traditional leader-based BFT protocols often lead to unbalanced work distribution among participating parties, with a single leader carrying out the majority of the tasks. Recently, Directed Acyclic Graph (DAG) based BFT protocols have emerged as a solution to balance consensus efforts across parties, typically resulting in higher throughput compared to traditional protocols. However, existing DAG-based BFT protocols exhibit long latency to commit decisions. The primary reason for such long latency is having a leader only every 2 or more ``rounds''. Even under honest leaders, these protocols require two or more reliable broadcast (RBC) instances to commit the proposal submitted by the leader (leader vertex), and additional RBCs to commit other proposals (non-leader vertices). In this work, we present Sailfish, the first DAG-based BFT protocol that supports a leader vertex in each round. Under honest leaders, Sailfish maintains a commit latency of one RBC round plus $1\delta$ to commit the leader vertex (where $\delta$ is the actual transmission latency of a message) and only an additional RBC round to commit non-leader vertices. Furthermore, we extend Sailfish to a multi-leader variant, which facilitates multiple leaders within a single round and commits all leader vertices in a round with a latency of one RBC round plus $1\delta$. Through experimental evaluation, we demonstrate that our protocols achieve significantly better latency compared to state-of-the-art DAG-based protocols, with slightly better throughput.
Last updated:  2024-05-24
Can We Beat Three Halves Lower Bound?: (Im)Possibility of Reducing Communication Cost for Garbled Circuits
Chunghun Baek and Taechan Kim
Recent improvements to garbled circuits are mainly focused on reducing their size. The state-of-the-art construction of Rosulek and Roy (Crypto 2021) requires $1.5\kappa$ bits for garbling AND gates in the free-XOR setting. This is below the previously proven lower bound $2\kappa$ in the linear garbling model of Zahur, Rosulek, and Evans (Eurocrypt 2015). Whether their construction is optimal in a more inclusive model than the linear garbling model remains open. This paper begins by providing a comprehensive model for a large class of practical garbling schemes and proves the lower bound for the size of the garbled AND gates in our model. We show that garbled AND gates require at least $1.5\kappa$ bits in our new model with the free-XOR setting. It is remarkable that the construction by Rosulek and Roy is already optimal, given that our model possibly captures any potential extension of their construction.
Last updated:  2024-05-23
Quantum Public-Key Encryption with Tamper-Resilient Public Keys from One-Way Functions
Fuyuki Kitagawa, Tomoyuki Morimae, Ryo Nishimaki, and Takashi Yamakawa
We construct quantum public-key encryption from one-way functions. In our construction, public keys are quantum, but ciphertexts are classical. Quantum public-key encryption from one-way functions (or weaker primitives such as pseudorandom function-like states) has also been proposed in some recent works [Morimae-Yamakawa, eprint:2022/1336; Coladangelo, eprint:2023/282; Barooti-Grilo-Malavolta-Sattath-Vu-Walter, eprint:2023/877]. However, they have a huge drawback: they are secure only when quantum public keys can be transmitted to the sender (who runs the encryption algorithm) without being tampered with by the adversary, which seems to require unsatisfactory physical setup assumptions such as secure quantum channels. Our construction is free from such a drawback: it guarantees the secrecy of the encrypted messages even if we assume only unauthenticated quantum channels. Thus, the encryption is done with adversarially tampered quantum public keys. Our construction is the first quantum public-key encryption that achieves the goal of classical public-key encryption, namely, to establish secure communication over insecure channels, based only on one-way functions. Moreover, we show a generic compiler to upgrade security against chosen plaintext attacks (CPA security) into security against chosen ciphertext attacks (CCA security) only using one-way functions. As a result, we obtain CCA secure quantum public-key encryption based only on one-way functions.
Last updated:  2024-05-23
On Efficient and Secure Compression Modes for Arithmetization-Oriented Hashing
Elena Andreeva, Rishiraj Bhattacharyya, Arnab Roy, and Stefano Trevisani
ZK-SNARKs, a fundamental component of privacy-oriented payment systems, identity protocols, and anonymous voting systems, are advanced cryptographic protocols for verifiable computation: modern SNARKs allow encoding the invariants of a program, expressed as an arithmetic circuit, in an appropriate constraint language from which short, zero-knowledge proofs for correct computations can be constructed. One of the most important computations run through SNARK systems is the verification of Merkle tree (MT) opening proofs, which relies on the evaluation of a fixed-input-length (FIL) cryptographic compression function over binary MTs. As classical, bit-oriented hash functions like SHA-2 are not compactly representable in SNARK frameworks, Arithmetization-Oriented (AO) cryptographic designs have emerged as an alternative, efficient solution. Today, the majority of AO compression functions are built from the Sponge permutation-based hashing mode. While this approach saves costs compared to blockcipher-based modes, as it requires no key schedule, the key schedules of AO blockciphers are often cheap to compute. Furthermore, classical bit-oriented cryptography has long studied how to construct provably secure compression functions from blockciphers, following the Preneel-Govaerts-Vandewalle (PGV) framework. The potential efficiency gains, together with the strong provable-security foundations in the classic setting, motivate the study of AO blockcipher-based compression functions. In this work, we propose PGV-LC and PGV-ELC, two AO blockcipher-based FIL compression modes inspired by and extending the classical PGV approach, offering flexible input and output sizes and coming with provable security guarantees in the AO setting. We prove their collision and preimage resistance in the ideal cipher model, and give bounds for collision and opening resistance over MTs of arbitrary arity. We experimentally compare the AO PGV-ELC mode over the Hades blockcipher with its popular and widely adopted Sponge instantiation, Poseidon, and its improved variant Poseidon2. Our resulting constructions are up to \(3\times \) faster than Poseidon and \(2\times \) faster than Poseidon2 in native x86 execution, and up to \(50\% \) faster in the Groth16 SNARK framework. Finally, we study the benefits of using MTs of arity wider than two, proposing a new strategy to obtain a compact R1CS constraint system in that case. In fact, by combining an efficient parametrization of the Hades blockcipher over the PGV-ELC mode with an optimal choice of the MT arity, we measured an improvement of up to \(9\times \) in native MT construction time, and up to \(2.5\times \) in proof generation time, compared to Poseidon over binary MTs.
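As background for the PGV framework cited above, a reminder of the classical bit-oriented setting (not of the paper's new AO modes): the best-known provably secure PGV compression functions built from a blockcipher $E$ are

\begin{align*}
\text{Davies--Meyer:} \quad & H_i = E_{m_i}(H_{i-1}) \oplus H_{i-1},\\
\text{Matyas--Meyer--Oseas:} \quad & H_i = E_{H_{i-1}}(m_i) \oplus m_i,\\
\text{Miyaguchi--Preneel:} \quad & H_i = E_{H_{i-1}}(m_i) \oplus m_i \oplus H_{i-1},
\end{align*}

where $H_{i-1}$ is the chaining value and $m_i$ the message block.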
Last updated:  2024-05-23
$\Pi$: A Unified Framework for Verifiable Secret Sharing
Karim Baghery
An $(n, t)$-Verifiable Secret Sharing (VSS) scheme allows a dealer to share a secret among $n$ parties, s.t. all the parties can verify the validity of their shares and only a set of them, i.e., more than $t$, can access the secret. In this paper, we present $\Pi$, a unified framework for building VSS schemes in the honest majority setting. Notably, $\Pi$ does not rely on homomorphic commitments; instead, it requires a random oracle and any commitment scheme that, in addition to its core hiding and binding attributes, may be homomorphic and/or post-quantum (PQ) secure. - When employing Discrete Logarithm (DL)-based commitments, $\Pi$ enables the construction of two novel VSS schemes in the RO model, named $\Pi_P$ and $\Pi_F$. Compared to the well-known Pedersen and Feldman VSS schemes, both $\Pi_P$ and $\Pi_F$ require $O(1)$ (resp. $O(t)$) exponentiations in the verification (resp. reconstruction) process, as opposed to $O(t)$ (resp. $O(t^2)$), albeit at the expense of a constant factor slower sharing and increased communication. - By instantiating $\Pi$ with a hash-based commitment, we obtain a novel PQ-secure VSS scheme, labeled $\Pi_{LA}$. $\Pi_{LA}$ outperforms the recent protocol by Atapoor, Baghery, Cozzo, and Pedersen from Asiacrypt'23 by a constant factor in all metrics. $\Pi_{LA}$ can also be seen as an amplified version of the simple VSS scheme proposed by Gennaro, Rabin, and Rabin at PODC'98. - Building upon $\Pi_F$, we construct a Publicly VSS (PVSS) scheme, labeled $\Pi_S$, which can be seen as a new variant of Schoenmakers' scheme from Crypto'99. To this end, we first define the Polynomial Discrete Logarithm (PDL) problem as a generalization of DL, and then build a variant of the Schnorr Proof of Knowledge (PoK) scheme based on the new hardness assumption. We think the PDL relation and the associated PoK scheme can be independently interesting for Shamir-based threshold protocols. We believe $\Pi$ is general enough to be employed in various contexts such as lattices, isogenies, and an extensive array of practical use cases.
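For readers less familiar with the baseline, the following minimal Python sketch shows plain $(n, t)$ Shamir sharing over a prime field, i.e., the non-verifiable core that VSS frameworks such as the one above augment with commitments; the field size and parameters are arbitrary illustration choices.

    import random

    P = 2**61 - 1  # a Mersenne prime field, illustration only

    def share(secret: int, n: int, t: int) -> list[tuple[int, int]]:
        """Degree-t polynomial f with f(0) = secret; party i gets the share (i, f(i))."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t)]
        return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
                for i in range(1, n + 1)]

    def reconstruct(shares: list[tuple[int, int]]) -> int:
        """Lagrange interpolation at 0 from t+1 shares."""
        s = 0
        for j, (xj, yj) in enumerate(shares):
            num, den = 1, 1
            for m, (xm, _) in enumerate(shares):
                if m != j:
                    num = num * (-xm) % P
                    den = den * (xj - xm) % P
            s = (s + yj * num * pow(den, P - 2, P)) % P
        return s

    # Any t+1 = 3 of the n = 5 shares recover the secret.
    shares = share(12345, n=5, t=2)
    assert reconstruct(shares[:3]) == 12345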
Last updated:  2024-05-23
LegoSNARK: Modular Design and Composition of Succinct Zero-Knowledge Proofs
Matteo Campanelli, Dario Fiore, and Anaïs Querol
We study the problem of building SNARKs modularly by linking small specialized “proof gadgets” SNARKs in a lightweight manner. Our motivation is both theoretical and practical. On the theoretical side, modular SNARK designs would be flexible and reusable. In practice, specialized SNARKs have the potential to be more efficient than general-purpose schemes, on which most existing works have focused. If a computation naturally presents different “components” (e.g. one arithmetic circuit and one boolean circuit), a general-purpose scheme would homogenize them to a single representation with a subsequent cost in performance. Through a modular approach one could instead exploit the nuances of a computation and choose the best gadget for each component. Our contribution is LegoSNARK, a "toolbox" (or framework) for commit-and-prove zkSNARKs (CP-SNARKs) that includes: 1) General composition tools: build new CP-SNARKs from proof gadgets for basic relations $\mathit{simply}$. 2) A "lifting" tool: add commit-and-prove capabilities to a broad class of existing zkSNARKs $\mathit{efficiently}$. This makes them interoperable (linkable) within the same computation. For example, one QAP-based scheme can be used to prove one component; another GKR-based scheme can be used to prove another. 3) A collection of succinct proof gadgets for a variety of relations. Additionally, through our framework and gadgets, we are able to obtain new succinct proof systems. Notably: – $\mathsf{LegoGro16}$, a commit-and-prove version of the Groth16 zkSNARK, that operates over data committed with a classical Pedersen vector commitment, and that achieves a 5000$\times$ speed-up in proving time. – $\mathsf{LegoUAC}$, a pairing-based SNARK for arithmetic circuits that has a universal, circuit-independent, CRS, and proving time linear in the number of circuit gates (vs. the recent scheme of Groth et al. (CRYPTO'18) with quadratic CRS and quasilinear proving time). – CP-SNARKs for matrix multiplication that achieve optimal proving complexity. 4) A codebase written in C$\mathsf{++}$ for highly composable zkSNARKs with commit-and-prove capabilities$^*$. _______________ $^*$ Available at https://github.com/imdea-software/legosnark .
Last updated:  2024-05-23
Hardness of Non-Interactive Differential Privacy from One-Way Functions
Lucas Kowalczyk, Tal Malkin, Jonathan Ullman, and Daniel Wichs
A central challenge in differential privacy is to design computationally efficient non-interactive algorithms that can answer large numbers of statistical queries on a sensitive dataset. That is, we would like to design a differentially private algorithm that takes a dataset $D \in X^n$ consisting of some small number of elements $n$ from some large data universe $X$, and efficiently outputs a summary that allows a user to efficiently obtain an answer to any query in some large family $Q$. Ignoring computational constraints, this problem can be solved even when $X$ and $Q$ are exponentially large and $n$ is just a small polynomial; however, all algorithms with remotely similar guarantees run in exponential time. There have been several results showing that, under the strong assumption of indistinguishability obfuscation (iO), no efficient differentially private algorithm exists when $X$ and $Q$ can be exponentially large. However, there are no strong separations between information-theoretic and computationally efficient differentially private algorithms under any standard complexity assumption. In this work we show that, if one-way functions exist, there is no general purpose differentially private algorithm that works when $X$ and $Q$ are exponentially large, and $n$ is an arbitrary polynomial. In fact, we show that this result holds even if $X$ is just subexponentially large (assuming only polynomially-hard one-way functions). This result solves an open problem posed by Vadhan in his recent survey.
Last updated:  2024-05-23
Lookup Arguments: Improvements, Extensions and Applications to Zero-Knowledge Decision Trees
Matteo Campanelli, Antonio Faonio, Dario Fiore, Tianyu Li, and Helger Lipmaa
Lookup arguments allow proving that the elements of a committed vector come from a (bigger) committed table. They enable novel approaches to reduce the prover complexity of general-purpose zkSNARKs, implementing ``non-arithmetic operations'' such as range checks, XOR and AND more efficiently. We extend the notion of lookup arguments along two directions and improve their efficiency: (1) we extend vector lookups to matrix lookups (where we can prove that a committed matrix is a submatrix of a committed table); (2) we consider the notion of zero-knowledge lookup argument that keeps the privacy of both the sub-vector/sub-matrix and the table; (3) we present new zero-knowledge lookup arguments, dubbed cq+, zkcq+ and cq++, more efficient than the state of the art, namely the recent work by Eagen, Fiore and Gabizon named cq. Finally, we give a novel application of our zero-knowledge matrix lookup argument to the domain of zero-knowledge decision trees, where the model provider releases a commitment to a decision tree and can prove zero-knowledge statistics over the committed data structure. Our scheme based on lookup arguments has succinct verification, prover time complexity asymptotically better than the state of the art, and is secure in a strong security model where the commitment to the decision tree can be malicious.
Last updated:  2024-05-23
Algebraic Structure of the Iterates of $\chi$
Björn Kriepke and Gohar Kyureghyan
We consider the map $\chi:\mathbb{F}_2^n\to\mathbb{F}_2^n$ for $n$ odd given by $y=\chi(x)$ with $y_i=x_i+x_{i+2}(1+x_{i+1})$, where the indices are computed modulo $n$. We suggest a generalization of the map $\chi$ which we call generalized $\chi$-maps. We show that these maps form an Abelian group which is isomorphic to the group of units in $\mathbb{F}_2[X]/(X^{(n+1)/2})$. Using this isomorphism we easily obtain closed-form expressions for iterates of $\chi$ and explain their properties.
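Since the map is fully specified above, a minimal Python sketch can make it concrete (over $\mathbb{F}_2$, addition is XOR and multiplication is AND; the choice $n = 5$ is an arbitrary example):

    def chi(x):
        # y_i = x_i + x_{i+2} * (1 + x_{i+1}) over F_2, indices mod n;
        # in F_2, addition is XOR and multiplication is AND.
        n = len(x)
        return [x[i] ^ (x[(i + 2) % n] & (1 ^ x[(i + 1) % n])) for i in range(n)]

    # n must be odd for the results above; n = 5 here is an arbitrary example.
    x = [1, 0, 1, 1, 0]
    print(chi(x))        # one application of chi
    print(chi(chi(x)))   # the second iterate chi^2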
Last updated:  2024-05-23
Knot-based Key Exchange protocol
Silvia Sconza and Arno Wildi
We propose a new key exchange protocol based on the Generalised Diffie-Hellman Key Exchange. In the latter, instead of using a group action, we consider a semigroup action. In our proposal, the semigroup is the set of oriented knots in $\mathbb{S}^3$ with the operation of connected sum. As a semigroup action, we choose the action of the semigroup on itself through the connected sum. For the protocol to work, we need to use knot invariants, which allow us to create the shared secret key starting from the same knot represented in two different ways. In particular, we use finite type invariants. The security of the protocol is guaranteed by the hardness of decomposing knots in the semigroup.
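As a hedged toy analogue of this semigroup-action Diffie-Hellman template (ours, not the paper's actual knot machinery), one can model a knot as the multiset of its prime components, so that the connected sum becomes multiset union and a canonical form plays the role of the invariant:

    from collections import Counter

    # Toy analogue: the connected sum # becomes multiset union, a commutative
    # semigroup operation, and a canonical form stands in for the knot invariant.
    def connected_sum(k1: Counter, k2: Counter) -> Counter:
        return k1 + k2

    def invariant(k: Counter) -> tuple:
        return tuple(sorted(k.items()))

    public = Counter({"trefoil": 1})                # public starting knot
    alice = Counter({"figure-eight": 2})            # Alice's secret knot
    bob = Counter({"cinquefoil": 1, "trefoil": 1})  # Bob's secret knot

    msg_a = connected_sum(alice, public)   # Alice -> Bob
    msg_b = connected_sum(bob, public)     # Bob -> Alice
    # Each party acts on the received message with its own secret;
    # commutativity makes both results represent the same "knot".
    assert invariant(connected_sum(alice, msg_b)) == invariant(connected_sum(bob, msg_a))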
Last updated:  2024-05-23
A Note on Zero-Knowledge for NP and One-Way Functions
Yanyi Liu, Noam Mazor, and Rafael Pass
We present a simple alternative exposition of the recent result of Hirahara and Nanashima (STOC’24) showing that one-way functions exist if (1) every language in NP has a zero-knowledge proof/argument and (2) ZKA contains non-trivial languages. Our presentation does not rely on meta-complexity and we hope it may be useful for didactic purposes.
Last updated:  2024-05-23
Symmetric Signcryption and E2EE Group Messaging in Keybase
Joseph Jaeger, Akshaya Kumar, and Igors Stepanovs
We introduce a new cryptographic primitive called symmetric signcryption, which differs from traditional signcryption because the sender and recipient share a secret key. We prove that a natural composition of symmetric encryption and signatures achieves strong notions of security against attackers that can learn and control many keys. We then identify that the core encryption algorithm of the Keybase encrypted messaging protocol can be modeled as a symmetric signcryption scheme. We prove the security of this algorithm, though our proof requires assuming non-standard, brittle security properties of the underlying primitives.
Last updated:  2024-05-23
Leverage Staking with Liquid Staking Derivatives (LSDs): Opportunities and Risks
Xihan Xiong, Zhipeng Wang, Xi Chen, William Knottenbelt, and Michael Huth
In the Proof of Stake (PoS) Ethereum ecosystem, users can stake ETH on Lido to receive stETH, a Liquid Staking Derivative (LSD) that represents staked ETH and accrues staking rewards. LSDs improve the liquidity of staked assets by facilitating their use in secondary markets, such as for collateralized borrowing on Aave or asset exchanges on Curve. The composability of Lido, Aave, and Curve enables an emerging strategy known as leverage staking, where users supply stETH as collateral on Aave to borrow ETH and then acquire more stETH. This can be done directly by initially staking ETH on Lido or indirectly by swapping ETH for stETH on Curve. While this iterative process enhances financial returns, it also introduces potential risks. This paper explores the opportunities and risks of leverage staking. We establish a formal framework for leverage staking with stETH and identify $442$ such positions on Ethereum over $963$ days. These positions represent a total volume of $537{,}123$ ETH ($877$m USD). Our data reveal that the majority ($81.7\%$) of leverage staking positions achieved an Annual Percentage Rate (APR) higher than that of conventional staking on Lido. Despite the high returns, we also recognize the risks of leverage staking. From the Terra crash incident, we understand that token devaluation can greatly impact the market. Therefore, we conduct stress tests under extreme conditions, particularly during stETH devaluations, to thoroughly evaluate the associated risks. Our simulations indicate that leverage staking can exacerbate the risk of cascading liquidations by introducing additional selling pressures from liquidation and deleveraging activities. Moreover, this strategy poses broader systemic risks as it undermines the stability of ordinary positions by intensifying their liquidations.
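As a hedged numerical illustration of the iterative loop described above (a simplified model, not the paper's framework), the exposure obtained by repeatedly supplying stETH and borrowing at a fixed loan-to-value ratio approaches the geometric-series bound $1/(1 - \mathrm{ltv})$, assuming a 1:1 stETH/ETH rate:

    def leverage_loop(initial_eth: float, ltv: float, rounds: int) -> float:
        """Total stETH exposure after `rounds` supply-borrow-restake iterations."""
        total_staked = 0.0
        fresh = initial_eth
        for _ in range(rounds):
            total_staked += fresh  # stake ETH for stETH, supply it as collateral
            fresh = fresh * ltv    # borrow ETH against it (1 stETH ~ 1 ETH assumed)
        return total_staked

    # With ltv = 0.7, ten rounds give ~3.24x exposure,
    # close to the 1/(1 - 0.7) ~ 3.33x geometric-series limit.
    print(leverage_loop(1.0, 0.7, 10))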
Last updated:  2024-05-23
Time-Based Cryptography From Weaker Assumptions: Randomness Beacons, Delay Functions and More
Damiano Abram, Lawrence Roy, and Mark Simkin
The assumption that certain computations inherently require some sequential time has established itself as a powerful tool for cryptography. It allows for security and liveness guarantees in distributed protocols that are impossible to achieve with classical hardness assumptions. Unfortunately, many constructions from the realm of time-based cryptography are based on new and poorly understood hardness assumptions, which tend not to stand the test of time (cf. Leurent et al. 2023, Peikert & Tang 2023). In this work, we make progress on several fronts. We formally define the concept of a delay function and present a construction thereof from minimal assumptions. We show that these functions, in combination with classical cryptographic objects that satisfy certain efficiency criteria, would allow for constructing delay encryption, which is otherwise only known to exist based on a new hardness assumption about isogenies. We formally define randomness beacons as they are used in the context of blockchains, and we show that (linearly homomorphic) time-lock puzzles allow for efficiently constructing them. Our work puts time-based cryptography on a firmer theoretical footing, provides new constructions from simpler assumptions, and opens new avenues for constructing delay encryption.
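To make the notion concrete, here is a toy Python sketch of the folklore iterated-hashing candidate for a delay function (an illustration of the sequentiality idea only; the paper's construction and assumptions differ, and real delay functions aim for much faster verification):

    import hashlib

    def delay(x: bytes, t: int) -> bytes:
        # Iterated hashing: each step depends on the previous output,
        # so the computation is believed to require ~t sequential steps.
        for _ in range(t):
            x = hashlib.sha256(x).digest()
        return x

    def verify(x: bytes, t: int, y: bytes) -> bool:
        # Naive verification recomputes the whole chain.
        return delay(x, t) == y

    out = delay(b"challenge", 1_000_000)
    assert verify(b"challenge", 1_000_000, out)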
Last updated:  2024-05-23
Zero-Knowledge Proofs for Set Membership: Efficient, Succinct, Modular
Daniel Benarroch, Matteo Campanelli, Dario Fiore, Kobi Gurkan, and Dimitris Kolonelos
We consider the problem of proving in zero knowledge that an element of a public set satisfies a given property without disclosing the element, i.e., for some $u$, ``$u \in S$ and $P(u)$ holds''. This problem arises in many applications (anonymous cryptocurrencies, credentials or whitelists) where, for privacy or anonymity reasons, it is crucial to hide certain data while ensuring properties of such data. We design new \textit{modular} and \textit{efficient} constructions for this problem through new \textit{commit-and-prove zero-knowledge systems for set membership}, i.e. schemes proving $u \in S$ for a value $u$ that is in a public commitment $c_u$. We also extend our results to support {\em non-membership proofs}, i.e. proving $u \notin S$. Being commit-and-prove, our solutions can act as plug-and-play modules in statements of the form ``$u \in S$ and $P(u)$ holds'' by combining our set (non-)membership systems with any other commit-and-prove scheme for $P(u)$. Also, they work with Pedersen commitments over prime order groups which makes them compatible with popular systems such as Bulletproofs or Groth16. We implemented our schemes as a software library, and tested experimentally their performance. Compared to previous work that achieves similar properties---the clever techniques combining zkSNARKs and Merkle Trees in Zcash---our solutions offer more flexibility, shorter public parameters and $3.7 \times$--$30\times$ faster proving time for a set of size $2^{64}$.
Last updated:  2024-05-23
Permutation-Based Hash Chains with Application to Password Hashing
Charlotte Lefevre and Bart Mennink
Hash chain based password systems are a useful way to guarantee authentication with one-time passwords. The core idea is specified in RFC 1760 as S/Key. At CCS 2017, Kogan et al. introduced T/Key, an improved password system where one-time passwords are only valid for a limited time period. They proved security of their construction in the random oracle model under a basic modeling of the adversary. In this work, we make various advances in the analysis and instantiation of hash chain based password systems. Firstly, we describe a slight generalization called U/Key that allows for more flexibility in the instantiation and analysis, and we develop a security model that refines the adversarial strength into offline and online complexity, that can be used beyond the random oracle model, and that allows to argue multi-user security directly. Secondly, we derive a new security proof of U/Key in the random oracle model, as well as dedicated and tighter security proofs of U/Key instantiated with a sponge construction and a truncated permutation. These dedicated security proofs, in turn, solve a problem of understanding the preimage resistance of a cascaded evaluation of the sponge construction. When applied to T/Key, these results improve significantly over the earlier results: whereas the originally suggested instantiation using SHA-256 uses a compression function that maps 768 bits into 256 bits, with a truncated permutation construction one can generically achieve 128 bits of security already with a permutation of size 256 bits.
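To ground the discussion, the following minimal Python sketch shows the hash-chain one-time-password idea underlying S/Key and T/Key (SHA-256 and the function names are our illustrative choices): passwords are revealed backwards along the chain, and the server verifies each one with a single hash evaluation.

    import hashlib

    def h(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def chain(seed: bytes, n: int) -> bytes:
        """Compute h^n(seed); the server initially stores only the head h^n(seed)."""
        v = seed
        for _ in range(n):
            v = h(v)
        return v

    def verify(stored: bytes, otp: bytes) -> bool:
        """One hash evaluation links the submitted password to the stored value."""
        return h(otp) == stored

    # Usage: the server stores h^1000(seed); the first OTP is h^999(seed).
    seed = b"secret seed"
    head = chain(seed, 1000)
    otp1 = chain(seed, 999)
    assert verify(head, otp1)
    # After acceptance, the server replaces `head` with `otp1`.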
Last updated:  2024-05-23
Sprints: Intermittent Blockchain PoW Mining
Michael Mirkin, Lulu Zhou, Ittay Eyal, and Fan Zhang
Cryptocurrencies and decentralized platforms are rapidly gaining traction since Nakamoto's discovery of Bitcoin's blockchain protocol. These systems use Proof of Work (PoW) to achieve unprecedented security for digital assets. However, the significant power consumption and ecological impact of PoW are leading policymakers to consider stark measures against them and prominent systems to explore alternatives. But these alternatives imply stepping away from key security aspects of PoW. We present Sprints, a blockchain protocol that achieves almost the same security guarantees as PoW blockchains, but with an order-of-magnitude lower ecological impact, taking into account both power and hardware. To achieve this, Sprints forces miners to mine intermittently. It interleaves Proof of Delay (PoD, e.g., using a Verifiable Delay Function) and PoW, where only the latter incurs a significant resource expenditure. We prove that in Sprints the attacker's success probability is the same as that of legacy PoW. To evaluate practical performance, we analyze the effect of shortened PoW duration, showing a minor reduction in resilience (49% instead of 50%). We confirm the results with a full implementation using 100 patched Bitcoin clients in an emulated network.
Last updated:  2024-05-23
Incompressible Functional Encryption
Rishab Goyal, Venkata Koppula, Mahesh Sreekumar Rajasree, and Aman Verma
Incompressible encryption (Dziembowski, Crypto'06; Guan, Wichs, Zhandry, Eurocrypt'22) protects from attackers that learn the entire decryption key, but cannot store the full ciphertext. In incompressible encryption, the attacker must try to compress a ciphertext within pre-specified memory bound $S$ before receiving the secret key. In this work, we generalize the notion of incompressibility to functional encryption. In incompressible functional encryption, the adversary can corrupt non-distinguishing keys at any point, but receives the distinguishing keys only after compressing the ciphertext to within $S$ bits. An important efficiency measure for incompressible encryption is the ciphertext-rate (i.e., $\mathsf{rate} = \frac{|m|}{|\mathsf{ct}|}$). We give many new results for incompressible functional encryption: 1. Incompressible attribute-based encryption for circuits from standard assumptions, with $\mathsf{ct}$-rate-$\frac{1}{2}$ and short secret keys, 2. Incompressible functional encryption for circuits from (non-incompressible) functional encryption, with (a) $\mathsf{ct}$-rate-$\frac{1}{2}$ and short secret keys, (b) $\mathsf{ct}$-rate-$1$ and large secret keys. Our results achieve optimal efficiency, as incompressible attribute-based/functional encryption with $\mathsf{ct}$-rate-$1$ as well as short secret keys has strong implausibility barriers. Moreover, our assumptions are minimal as incompressible attribute-based/functional encryption are strictly stronger than their non-incompressible counterparts.
Last updated:  2024-05-23
Weak Consistency mode in Key Transparency: OPTIKS
Esha Ghosh and Melissa Chase
The need for third-party auditors in privacy-preserving Key Transparency (KT) systems presents a deployment challenge. In this short note, we take a simple privacy-preserving KT system that provides strong security guarantees in the presence of an honest auditor (OPTIKS) and show how to add an auditor-free mode to it. The auditor-free mode offers slightly weaker security. We formalize this security property and prove that our proposed protocol satisfies our security definition.
Last updated:  2024-05-22
New Limits of Provable Security and Applications to ElGamal Encryption
Sven Schäge
We provide new results showing that ElGamal encryption cannot be proven CCA1-secure – a long-standing open problem in cryptography. Our result follows from a very broad, meta-reduction-based impossibility result on random self-reducible relations with efficiently re-randomizable witnesses. The techniques that we develop allow, for the first time, to provide impossibility results for very weak security notions where the challenger outputs fresh challenge statements at the end of the security game. This can be used to finally tackle encryption-type definitions that have remained elusive in the past. We show that our results have broad applicability by casting several known cryptographic setups as instances of random self-reducible and re-randomizable relations. These setups include general semi-homomorphic PKE and the large class of certified homomorphic one-way bijections. As a result, we also obtain new impossibility results for the IND-CCA1 security of the PKEs of Paillier and Damgård–Jurik, and many one-more inversion assumptions like the one-more DLOG or the one-more RSA assumption.
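The re-randomizability that the meta-reduction exploits is, for ElGamal, the textbook homomorphic property: multiplying a ciphertext by a fresh encryption of $1$ yields a fresh ciphertext of the same message. A toy Python sketch (insecure, tiny parameters chosen only for illustration):

    import random

    # Toy group: Z_p^* with a small prime p and base g (illustration only).
    p, g = 467, 2
    sk = random.randrange(1, p - 1)
    pk = pow(g, sk, p)

    def encrypt(m: int) -> tuple[int, int]:
        r = random.randrange(1, p - 1)
        return pow(g, r, p), (m * pow(pk, r, p)) % p

    def rerandomize(c1: int, c2: int) -> tuple[int, int]:
        # Multiply componentwise by a fresh encryption of 1, i.e. (g^s, pk^s).
        s = random.randrange(1, p - 1)
        return (c1 * pow(g, s, p)) % p, (c2 * pow(pk, s, p)) % p

    def decrypt(c1: int, c2: int) -> int:
        # c2 * c1^{-sk} mod p, using Fermat's little theorem for the inverse.
        return (c2 * pow(c1, p - 1 - sk, p)) % p

    m = 42
    c = encrypt(m)
    assert decrypt(*rerandomize(*c)) == m  # same plaintext, fresh randomness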
Last updated:  2024-05-22
Precio: Private Aggregate Measurement via Oblivious Shuffling
F. Betül Durak, Chenkai Weng, Erik Anderson, Kim Laine, and Melissa Chase
We introduce Precio, a new secure aggregation method for computing layered histograms and sums over secret shared data in a client-server setting. Precio is motivated by ad conversion measurement scenarios, where online advertisers and ad networks want to measure the performance of ad campaigns without requiring privacy-invasive techniques, such as third-party cookies. Precio has linear (time and communication) complexity in the number of data points and guarantees differentially private outputs. We formally analyze its security and privacy and present a thorough performance evaluation. The protocol supports much larger domains than Prio. It supports much more flexible aggregates than the DPF-based solution and in some settings has up to four orders of magnitude better performance.
Last updated:  2024-05-22
Anamorphic Encryption, Revisited
Fabio Banfi, Konstantin Gegier, Martin Hirt, Ueli Maurer, and Guilherme Rito
An anamorphic encryption scheme allows two parties who share a so-called double key to embed covert messages in ciphertexts of an established PKE scheme. This protects against a dictator that can force the receiver to reveal the secret keys for the PKE scheme, but who is oblivious about the existence of the double key. We identify two limitations of the original model by Persiano, Phan, and Yung (EUROCRYPT 2022). First, in their definition a double key can only be generated once, together with a key-pair. This has the drawback that a receiver who wants to use the anamorphic mode after a dictator comes to power, needs to deploy a new key-pair, a potentially suspicious act. Second, a receiver cannot distinguish whether or not a ciphertext contains a covert message. In this work we propose a new model that overcomes these limitations. First, we allow associating multiple double keys with a key-pair after its deployment. This also enables deniability in case the double key only depends on the public key. Second, we propose a natural robustness notion, which guarantees that anamorphically decrypting a regularly encrypted message results in a special symbol indicating that no covert message is contained, which also eliminates certain attacks. Finally, to instantiate our new, stronger definition of anamorphic encryption, we provide generic and concrete constructions. Concretely, we show that ElGamal and Cramer-Shoup satisfy a new condition, selective randomness recoverability, which enables robust anamorphic extensions, and we also provide a robust anamorphic extension for RSA-OAEP.
Last updated:  2024-05-22
Hide-and-Seek and the Non-Resignability of the BUFF Transform
Jelle Don, Serge Fehr, Yu-Hsuan Huang, Jyun-Jie Liao, and Patrick Struck
The BUFF transform, due to Cremers et al. (S&P'21), is a generic transformation for digital signature schemes, with the purpose of obtaining additional security guarantees beyond unforgeability: exclusive ownership, message-bound signatures, and non-resignability. Non-resignability (which essentially challenges an adversary to re-sign an unknown message for which it only obtains the signature) turned out to be a delicate matter, as recently Don et al. (CRYPTO'24) showed that the initial definition is essentially unachievable; in particular, it is not achieved by the BUFF transform. This led to the introduction of new, weakened versions of non-resignability, which are (potentially) achievable. In particular, it was shown that a salted variant of the BUFF transform does achieve some weakened version of non-resignability. However, the salting requires additional randomness and leads to slightly larger signatures. Whether the original BUFF transform also achieves some meaningful notion of non-resignability remained a natural open question. In this work, we answer this question in the affirmative. We show that the BUFF transform satisfies the (almost) strongest notions of non-resignability one can hope for, facing the known impossibility results. Our results cover both the statistical and the computational case, and both the classical and the quantum setting. At the core of our analysis lies a new security game for random oracles that we call Hide-and-Seek. While seemingly innocent at first glance, it turns out to be surprisingly challenging to rigorously analyze.
Last updated:  2024-05-22
Stickel's Key Agreement Algebraic Variation
Daniel Nager
In this document we present a further development of the non-commutative algebra based key agreement due to E. Stickel, and a way to deal with the algebraic break due to V. Shpilrain.
Last updated:  2024-05-22
Suboptimality in DeFi
Aviv Yaish, Maya Dotan, Kaihua Qin, Aviv Zohar, and Arthur Gervais
The decentralized finance (DeFi) ecosystem has proven to be popular in facilitating financial operations, such as token exchange and lending. The public availability of DeFi platforms’ code, together with real-time data on all user interactions with them, has given rise to complex tools that find and seize profit opportunities on behalf of users. In this work, we show that both users and the aforementioned tools sometimes act suboptimally: their profits can be increased by more than 100%, with the highest amount of missed revenue by a suboptimal action reaching 428.14 ETH ($517K). To reach these findings, we examine core DeFi primitives used by over 850 platforms that are responsible for a daily volume of more than 100 million USD in Ethereum alone: (1) lending and borrowing funds, (2) using flashswaps to close arbitrage opportunities between decentralized exchanges (DEXs), (3) liquidation of insolvent loans using flashswaps, which combines the previous two. The profit that can be made from each primitive is then cast as an optimization problem that can be solved. We show that missed opportunities to make a profit are noticed by others and are sometimes followed by back-running transactions that extract profits using similar actions. By analyzing these events, we find that some transactions are circumstantially tied to specific miners and hypothesize that they use their knowledge of private orderflow for a profit. Essentially, this is an instance of miner-extractable value (MEV) "in action".
Last updated:  2024-05-22
On the (In)Security of the BUFF Transform
Jelle Don, Serge Fehr, Yu-Hsuan Huang, and Patrick Struck
The BUFF transform is a generic transformation for digital signature schemes, with the purpose of obtaining additional security properties beyond standard unforgeability, e.g., exclusive ownership and non-resignability. In the call for additional post-quantum signatures, these were explicitly mentioned by the NIST as ``additional desirable security properties'', and some of the submissions indeed refer to the BUFF transform with the purpose of achieving them, while some other submissions follow the design of the BUFF transform without mentioning it explicitly. In this work, we show the following negative results regarding the non-resignability property in general, and the BUFF transform in particular. In the plain model, we observe by means of a simple attack that any signature scheme for which the message has a high entropy given the signature does not satisfy the non-resignability property (while non-resignability is trivially not satisfied if the message can be efficiently computed from its signature). Given that, for the BUFF transform, the message has high entropy given the signature, it follows that the BUFF transform does not achieve non-resignability whenever the random oracle is instantiated with a hash function, no matter what hash function. When considering the random oracle model (ROM), the matter becomes slightly more delicate since prior works did not rigorously define the non-resignability property in the ROM. For the natural extension of the definition to the ROM, we observe that our impossibility result still holds, despite there having been positive claims about the non-resignability of the BUFF transform in the ROM. Indeed, prior claims of the non-resignability of the BUFF transform rely on faulty argumentation. On the positive side, we prove that a salted version of the BUFF transform satisfies a slightly weaker variant of non-resignability in the ROM, covering both classical and quantum attacks, if the entropy requirement in the (weakened) definition of non-resignability is statistical; for the computational variant, we show yet another negative result.
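For reference, the BUFF transform under discussion signs a hash that binds the message to the public key and appends that hash to the signature; schematically (our paraphrase):

\[ \mathsf{Sign}'(sk, m) := (\sigma, h), \qquad h = H(m, pk), \quad \sigma = \mathsf{Sign}(sk, h), \]

with verification recomputing $h = H(m, pk)$ and checking both the hash value and the signature on $h$.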
Last updated:  2024-05-22
Algebraically Structured LWE, Revisited
Chris Peikert and Zachary Pepin
In recent years, there has been a proliferation of *algebraically structured* Learning With Errors (LWE) variants, including Ring-LWE, Module-LWE, Polynomial-LWE, Order-LWE, and Middle-Product LWE, and a web of reductions to support their hardness, both among these problems themselves and from related worst-case problems on structured lattices. However, these reductions are often difficult to interpret and use, due to the complexity of their parameters and analysis, and most especially their (frequently large) blowup and distortion of the error distributions. In this paper we unify and simplify this line of work. First, we give a general framework that encompasses *all* proposed LWE variants (over commutative base rings), and in particular unifies all prior ``algebraic'' LWE variants defined over number fields. We then use this framework to give much simpler, more general, and tighter reductions from Ring-LWE to other algebraic LWE variants, including Module-LWE, Order-LWE, and Middle-Product LWE. In particular, all of our reductions have easy-to-analyze and frequently small error expansion; in most cases they even leave the error unchanged. A main message of our work is that it is straightforward to use the hardness of the original Ring-LWE problem as a foundation for the hardness of all other algebraic LWE problems defined over number fields, via simple and rather tight reductions.
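For orientation, recall the common template that all of these variants instantiate (our condensed reminder): a sample of an algebraic LWE problem over a ring $R$ with modulus $q$ has the form

\[ (a,\; b = a \cdot s + e) \in R_q \times R_q, \qquad R_q = R/qR, \]

with $a$ uniform, $s$ a secret ring (or module) element, and $e$ drawn from a small error distribution; the variants differ in the choice of the base ring $R$ and of the module structure over it, and the reductions above track how the distribution of $e$ is transformed.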
Last updated:  2024-05-22
Menhir: An Oblivious Database with Protection against Access and Volume Pattern Leakage
Leonie Reichert, Gowri R Chandran, Phillipp Schoppmann, Thomas Schneider, and Björn Scheuermann
Analyzing user data while protecting the privacy of individuals remains a big challenge. Trusted execution environments (TEEs) are a possible solution as they protect processes and Virtual Machines (VMs) against malicious hosts. However, TEEs can leak access patterns to code and to the data being processed. Furthermore, when data is stored in a TEE database, the data volume required to answer a query is another unwanted side channel that contains sensitive information. Both types of information leaks, access patterns and volume patterns, allow for database reconstruction attacks. In this paper, we present Menhir, an oblivious TEE database that hides access patterns with ORAM guarantees and volume patterns through differential privacy. The database allows range and point queries with SQL-like WHERE-clauses. It builds on the state-of-the-art oblivious AVL tree construction Oblix (S&P'18), which by itself does not protect against volume leakage. We show how volume leakage can be exploited in range queries and improve the construction to mitigate this type of attack. We prove the correctness and obliviousness of Menhir. Our evaluation shows that our approach is feasible and scales well with the number of rows and columns in the database.