Papers updated in last 31 days (356 results)

Last updated:  2024-02-26
Deep Learning Based Analysis of Key Scheduling Algorithm of Advanced Ciphers
Narendra Kumar Patel and Hemraj Shobharam Lamkuche
The advancements in information technology have made the Advanced Encryption Standard (AES) and the PRESENT cipher indispensable in ensuring data security and facilitating private transactions. AES is renowned for its flexibility and widespread use in various fields, while the PRESENT cipher excels in lightweight cryptographic situations. This paper delves into a dual examination of the Key Scheduling Algorithms (KSAs) of AES and the PRESENT cipher, which play a crucial role in generating round keys for their respective encryption techniques. By implementing deep learning methods, particularly a Neural Network model, our study aims to unravel the complexities of these KSAs and shed light on their inner workings.
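For concreteness, below is a minimal plain-Python sketch of the PRESENT-80 key scheduling algorithm, the simpler of the two KSAs the paper studies; it follows the published PRESENT specification, and all function and variable names are ours.

    SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
            0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # PRESENT 4-bit S-box

    def present80_round_keys(key):
        """Derive the 32 64-bit round keys from an 80-bit PRESENT master key."""
        k = key & ((1 << 80) - 1)
        round_keys = []
        for i in range(1, 33):
            round_keys.append(k >> 16)                          # leftmost 64 bits
            k = ((k << 61) | (k >> 19)) & ((1 << 80) - 1)       # rotate left by 61
            k = (SBOX[k >> 76] << 76) | (k & ((1 << 76) - 1))   # S-box on top nibble
            k ^= i << 15                                        # counter into bits 19..15
        return round_keys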
Last updated:  2024-02-26
Practical Two-party Computational Differential Privacy with Active Security
Fredrik Meisingseth, Christian Rechberger, and Fabian Schmid
In this work we revisit the problem of using general-purpose MPC schemes to emulate the trusted dataholder in central differential privacy, to achieve the same accuracy but without the need to trust one single dataholder. In particular, we consider the two-party model of having two computational parties (or dataholders), each with their own dataset, wishing to compute a canonical differentially private mechanism on their combined data and to do so with active security. We start by remarking that available definitions of computational DP (CDP) for protocols are somewhat ill-suited for such a use-case, due to them using formalisms that either are much weaker than one can typically get for MPC protocols, or are too strict in the sense that they need significant adjustment in order to be realisable using common DP and MPC techniques. With this in mind we propose a new version of simulation-based CDP, called SIM$^*$-CDP, specifically geared towards being easy to use for MPC practitioners and more closely capturing the guarantees granted by using state-of-the-art MPC schemes to compute standard DP mechanisms. We demonstrate the merit of the SIM$^*$-CDP definition by showing how to satisfy it using an available distributed protocol for sampling truncated geometric noise. Further, we use the protocol to compute two-party inner products with computational DP and with similar levels of accuracy as in the central model, being the first to do so. Finally, we provide an open-sourced implementation and benchmark its practical performance.
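As background, here is a plaintext (non-MPC) sketch of the kind of truncated geometric noise the cited distributed protocol samples; in the actual construction this sampling happens inside the actively secure two-party computation, and the parameter names are ours.

    import math, random

    def truncated_geometric_noise(eps, sensitivity=1.0, bound=50):
        """Sample z in [-bound, bound] with Pr[z] proportional to
        exp(-eps * |z| / sensitivity), by inverse-transform sampling."""
        alpha = math.exp(-eps / sensitivity)
        support = range(-bound, bound + 1)
        weights = [alpha ** abs(z) for z in support]
        r = random.random() * sum(weights)
        for z, w in zip(support, weights):
            r -= w
            if r <= 0:
                return z
        return bound

    # e.g. a differentially private inner product in the central model:
    noisy = sum(a * b for a, b in zip([1, 0, 1], [1, 1, 0])) + truncated_geometric_noise(0.5)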
Last updated:  2024-02-26
Twinkle: Threshold Signatures from DDH with Full Adaptive Security
Renas Bacho, Julian Loss, Stefano Tessaro, Benedikt Wagner, and Chenzhi Zhu
Sparkle is the first threshold signature scheme in the pairing-free discrete logarithm setting (Crites, Komlo, Maller, Crypto 2023) to be proven secure under adaptive corruptions. However, without using the algebraic group model, Sparkle's proof imposes an undesirable restriction on the adversary. Namely, for a signing threshold $t<n$, the adversary is restricted to corrupt at most $t/2$ parties. In addition, Sparkle's proof relies on a strong one-more assumption. In this work, we propose Twinkle, a new threshold signature scheme in the pairing-free setting which overcomes these limitations. Twinkle is the first pairing-free scheme to have a security proof under up to $t$ adaptive corruptions without relying on the algebraic group model. It is also the first such scheme with a security proof under adaptive corruptions from a well-studied non-interactive assumption, namely, the Decisional Diffie-Hellman (DDH) assumption. We achieve our result in two steps. First, we design a generic scheme based on a linear function that satisfies several abstract properties and prove its adaptive security under a suitable one-more assumption related to this function. In the context of this proof, we also identify a gap in the security proof of Sparkle and develop new techniques to overcome this issue. Second, we give a suitable instantiation of the function for which the corresponding one-more assumption follows from DDH.
Last updated:  2024-02-26
The Complexity of Algebraic Algorithms for LWE
Matthias Johann Steiner
Arora & Ge introduced a noise-free polynomial system to compute the secret of a Learning With Errors (LWE) instance via linearization. Albrecht et al. later utilized the Arora-Ge polynomial model to study the complexity of Gröbner basis computations on LWE polynomial systems under the assumption of semi-regularity. In this paper we revisit the Arora-Ge polynomial model and prove that it satisfies a genericity condition recently introduced by Caminata & Gorla, called being in generic coordinates. For polynomial systems in generic coordinates one can always estimate the complexity of DRL Gröbner basis computations in terms of the Castelnuovo-Mumford regularity and hence also via the Macaulay bound. Moreover, we generalize the Gröbner basis algorithm of Semaev & Tenti to arbitrary polynomial systems with a finite degree of regularity. In particular, existence of this algorithm yields another approach to estimate the complexity of DRL Gröbner basis computations in terms of the degree of regularity. In practice, the degree of regularity of LWE polynomial systems is not known, though one can always estimate the lowest achievable degree of regularity. Consequently, from a designer's worst case perspective this approach yields sub-exponential complexity estimates for general, binary secret and binary error LWE. In recent works by Dachman-Soled et al. the hardness of LWE in the presence of side information was analyzed. Utilizing their framework we discuss how hints can be incorporated into LWE polynomial systems and how they affect the complexity of Gröbner basis computations.
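To make the starting point concrete, here is a toy SymPy construction of the Arora-Ge polynomial for a single LWE sample: if $b = \langle a, s\rangle + e \bmod q$ with $|e| \le D$, then $\prod_{j=-D}^{D}(b - \langle a, x\rangle - j)$ vanishes at $x = s$. Parameters are illustrative only.

    import sympy as sp

    def arora_ge_poly(a, b, q, D):
        """Noise-free polynomial satisfied by the LWE secret for one sample."""
        x = sp.symbols(f"x0:{len(a)}")
        inner = sum(ai * xi for ai, xi in zip(a, x))
        f = sp.Mul(*[(b - inner - j) for j in range(-D, D + 1)])
        return sp.Poly(sp.expand(f), *x, modulus=q)

    # n=2, q=17, secret s=(3,5), a=(2,7), e=1  =>  b = (2*3 + 7*5 + 1) % 17 = 8
    print(arora_ge_poly((2, 7), 8, 17, 1))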
Last updated:  2024-02-26
Toothpicks: More Efficient Fork-Free Two-Round Multi-Signatures
Jiaxin Pan and Benedikt Wagner
Tightly secure cryptographic schemes can be implemented with standardized parameters, while still having a sufficiently high security level backed up by their analysis. In a recent work, Pan and Wagner (Eurocrypt 2023) presented the first tightly secure two-round multi-signature scheme without pairings, called Chopsticks. While this is an interesting first theoretical step, Chopsticks is much less efficient than its non-tight counterparts. In this work, we close this gap by proposing a new tightly secure two-round multi-signature scheme that is as efficient as non-tight schemes. Our scheme is based on the DDH assumption without pairings. Compared to Chopsticks, we reduce the signature size by more than a factor of 3 and the communication complexity by more than a factor of 2. Technically, we achieve this as follows: (1) We develop a new pseudorandom path technique, as opposed to the pseudorandom matching technique in Chopsticks. (2) We construct a more efficient commitment scheme with suitable properties, which is an important primitive in both our scheme and Chopsticks. Surprisingly, we observe that the commitment scheme does not have to be binding, enabling our efficient construction.
Last updated:  2024-02-26
Authenticated private information retrieval
Simone Colombo, Kirill Nikitin, Henry Corrigan-Gibbs, David J. Wu, and Bryan Ford
This paper introduces protocols for authenticated private information retrieval. These schemes enable a client to fetch a record from a remote database server such that (a) the server does not learn which record the client reads, and (b) the client either obtains the "authentic" record or detects server misbehavior and safely aborts. Both properties are crucial for many applications. Standard private-information-retrieval schemes either do not ensure this form of output authenticity, or they require multiple database replicas with an honest majority. In contrast, we offer multi-server schemes that protect security as long as at least one server is honest. Moreover, if the client can obtain a short digest of the database out of band, then our schemes require only a single server. Performing an authenticated private PGP-public-key lookup on an OpenPGP key server's database of 3.5 million keys (3 GiB), using two non-colluding servers, takes under 1.2 core-seconds of computation, essentially matching the time taken by unauthenticated private information retrieval. Our authenticated single-server schemes are 30-100$\times$ more costly than state-of-the-art unauthenticated single-server schemes, though they achieve incomparably stronger integrity properties.
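For orientation, below is the classical two-server XOR-based PIR that such schemes start from, with no authentication; the paper's contribution is precisely to add integrity on top of this kind of baseline. The code and names are ours.

    import secrets

    def queries(m, x):
        """Two query vectors that differ only at index x; each alone reveals nothing."""
        q0 = [secrets.randbits(1) for _ in range(m)]
        q1 = list(q0)
        q1[x] ^= 1
        return q0, q1

    def answer(db, q):
        """Each server XORs the records selected by its query vector."""
        a = 0
        for bit, record in zip(q, db):
            if bit:
                a ^= record
        return a

    # client reconstruction: all terms cancel except db[x]
    db = [0x11, 0x22, 0x33, 0x44]
    q0, q1 = queries(len(db), 2)
    assert answer(db, q0) ^ answer(db, q1) == db[2]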
Last updated:  2024-02-26
Practical Constructions for Single Input Functionality against a Dishonest Majority
Zhelei Zhou, Bingsheng Zhang, Hong-Sheng Zhou, and Kui Ren
Single Input Functionality (SIF) is a special case of MPC, where only one distinguished party, called the dealer, holds the secret input. SIF allows the dealer to complete a computation task and send to the other parties their respective outputs without revealing any additional information about its secret input. SIF has many applications, including multiple-verifier zero-knowledge and verifiable relation sharing. Recently, several works have been devoted to round-efficient realizations of SIF, achieving 2-round communication in the honest majority setting (Applebaum et al., Crypto 2022; Baum et al., CCS 2022; Yang and Wang, Asiacrypt 2022). In this work, we focus on concrete efficiency and propose \emph{the first} practical construction for SIF against \emph{a dishonest majority} in the preprocessing model; moreover, the online phase of our protocol is only 2-round and is highly efficient, as it requires no cryptographic operations and achieves information-theoretic security. For SIF among 5 parties, our scheme takes 152.34ms (total) to evaluate an AES-128 circuit with 7.36ms online time. Compared to the state-of-the-art (honest majority) solution (Baum et al., CCS 2022), our protocol is roughly 2$\times$ faster in the online phase, although more preprocessing time is needed. Compared to the state-of-the-art generic MPC against a dishonest majority (Wang et al., CCS 2017; Cramer et al., Crypto 2018), our protocol outperforms them with respect to both total running time and online running time.
Last updated:  2024-02-26
The Hardness of LPN over Any Integer Ring and Field for PCG Applications
Hanlin Liu, Xiao Wang, Kang Yang, and Yu Yu
Learning parity with noise (LPN) has been widely studied and used in cryptography. It was recently brought to new prosperity since Boyle et al. (CCS'18), putting LPN in a central role in designing secure multi-party computation, zero-knowledge proofs, private set intersection, and many other protocols. In this paper, we thoroughly study the security of LPN problems in this particular context. We find that some important aspects have long been ignored and many conclusions from classical LPN cryptanalysis do not apply to this new setting, due to the low noise rates, extremely high dimensions, various types (in addition to $\mathbb{F}_2$) and noise distributions. 1. For LPN over a field, we give a parameterized reduction from exact-noise LPN to regular-noise LPN. Compared to the recent result by Feneuil, Joux and Rivain (Crypto'22), we significantly reduce the security loss by paying only a small additive price in dimension and number of samples. 2. We analyze the security of LPN over a ring $\mathbb{Z}_{2^\lambda}$. Existing protocols based on LPN over integer rings use parameters as if they are over fields, but we find an attack that effectively reduces the noise weight by half compared to LPN over fields. Consequently, prior works that use LPN over $\mathbb{Z}_{2^\lambda}$ overestimate security by up to 40 bits. 3. We provide a complete picture of the hardness of LPN over integer rings by showing: 1) the equivalence between its search and decisional versions; 2) an efficient reduction from LPN over $\mathbb{F}_2$ to LPN over $\mathbb{Z}_{2^\lambda}$; and 3) generalization of our results to any integer ring. Finally, we provide an all-in-one estimator tool for the bit security of LPN parameters in the context of PCG, incorporating the recent advanced attacks.
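To illustrate the distinction at the heart of item 1, here is a small NumPy sketch generating LPN samples over $\mathbb{F}_2$ with either exact noise (a uniformly random weight-$t$ error vector) or regular noise (exactly one error per block); dimensions are toy-sized and the code is ours.

    import numpy as np

    rng = np.random.default_rng(0)

    def lpn_samples(n, m, t, regular):
        A = rng.integers(0, 2, size=(m, n), dtype=np.uint8)
        s = rng.integers(0, 2, size=n, dtype=np.uint8)
        e = np.zeros(m, dtype=np.uint8)
        if regular:
            block = m // t                                  # one error per block
            for i in range(t):
                e[i * block + rng.integers(block)] = 1
        else:
            e[rng.choice(m, size=t, replace=False)] = 1     # exact weight-t noise
        b = (A.astype(np.int64) @ s + e) % 2
        return A, b.astype(np.uint8), s

    A, b, s = lpn_samples(n=128, m=1024, t=16, regular=True)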
Last updated:  2024-02-26
Proofs for Deep Thought: Accumulation for large memories and deterministic computations
Benedikt Bünz and Jessica Chen
We construct two new accumulation schemes. The first one is for checking that $\ell$ read and write operations were performed correctly from a memory of size $T$. Unlike all prior work, the prover time is entirely independent of $T$ and only depends on $\ell$. The second one is for deterministic computations. It does not require committing to the intermediate wires of the computation but only the input and output. This is achieved by building an accumulation scheme for a modified version of the famous GKR protocol. We show that these schemes are highly compatible and that the accumulation for GKR can further reduce the cost of the memory-checking scheme. Using the BCLMS (Crypto 21) compiler, these protocols yield an efficient incrementally verifiable computation (IVC) scheme that is particularly useful for machine computations with large memories and deterministic steps.
Last updated:  2024-02-26
Information-Theoretic Homomorphic Encryption and 2-Party Computation
Jonathan Trostle
Homomorphic encryption has been an active area of research since Gentry's breakthrough results on fully homomorphic encryption. We present secret key somewhat homomorphic schemes where client privacy is information-theoretic (the server can be computationally unbounded). As the group order in our schemes gets larger, the entropy approaches maximal entropy (perfect security). Our basic scheme is additive somewhat homomorphic. In one scheme, the server handles circuit multiplication gates by returning the multiplicands to the client, which does the multiplication and sends back the encrypted product. We give a 2-party protocol that also incorporates server inputs, where the client privacy is information-theoretic. Server privacy is not information-theoretic, but rather depends on hardness of the subset sum problem. Correctness for the server in the malicious model can be verified by a 3rd party, where the client and server privacy are information-theoretically protected from the verifier. Scaling the 2PC protocol via separate encryption parameters for smaller subcircuits allows the ciphertext size to grow logarithmically as circuit size grows.
Last updated:  2024-02-25
Under What Conditions Is Encrypted Key Exchange Actually Secure?
Jake Januzelli, Lawrence Roy, and Jiayu Xu
A Password-Authenticated Key Exchange (PAKE) protocol allows two parties to agree upon a cryptographic key, in the setting where the only secret shared in advance is a low-entropy password. The standard security notion for PAKE is in the Universal Composability (UC) framework. In recent years there have been a large number of works analyzing the UC-security of Encrypted Key Exchange (EKE), the very first PAKE protocol, and its One-encryption variant (OEKE), both of which compile an unauthenticated Key Agreement (KA) protocol into a PAKE. In this work, we present a comprehensive and thorough study of the UC-security of both EKE and OEKE in the most general setting and using the most efficient building blocks: 1. We show that among the seven existing results on the UC-security of (O)EKE, six are flawed; 2. We show that for (O)EKE to be UC-secure, the underlying KA protocol needs to satisfy the properties of strong pseudorandomness, pseudorandom non-malleability, and collision resistance, all of which are missing in existing works; 3. We give UC-security proofs for EKE and OEKE using Programmable-Once Public Function (POPF), which is the most efficient instantiation to date and is around 4 times faster than the standard instantiation using Ideal Cipher (IC). Our results in particular allow for PAKE constructions from post-quantum KA protocols such as Kyber. We also give a security analysis of POPF in a new composition framework called almost UC, which we believe is interesting in its own right.
Last updated:  2024-02-25
A New Perspective on Key Switching for BGV-like Schemes
Johannes Mono and Tim Güneysu
Fully homomorphic encryption is a promising solution for privacy-preserving computation, especially involving sensitive data. For BFV, BGV, and CKKS, three state-of-the-art encryption schemes, the most costly homomorphic primitive is the so-called key switching. While a decent amount of research has been devoted to optimizing other aspects of these schemes, key switching has gone largely untouched. One exception has been a recent work by Kim et al. at CRYPTO 2023 [26] introducing a new double-decomposition technique for state-of-the-art key switching. While their contributions are interesting, the authors have a skewed perspective on the complexity of key switching, which results in a flawed parameter analysis and incorrect conclusions about the effectiveness of their approach. In this work, we correct their analysis with a new perspective on key switching and provide the new asymptotic bound $O(\omega\ell)$. More generally, we take a holistic look at key switching and parameter selection. We revisit an idea by Gentry, Halevi, and Smart [19] improving key switching performance by up to 63% and explore novel possibilities for parameter optimization. We also reduce the number of multiplications in key switching using new constant folding techniques, which speed up execution times by up to 11.6%. Overall, we provide an in-depth analysis of key switching, guidelines for optimal parameter selection, and novel ideas which speed up execution times significantly.
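For readers less familiar with the primitive: key switching hinges on a digit (gadget) decomposition, in which each coefficient is split into $\ell$ base-$\omega$ digits and each digit is multiplied by one component of the key-switching key; the $\omega$ and $\ell$ in the bound above are exactly these decomposition parameters. A minimal sketch with our own names:

    def gadget_decompose(c, omega, ell):
        """Digits d[0..ell-1] with 0 <= d[i] < omega and c == sum d[i]*omega**i."""
        digits = []
        for _ in range(ell):
            digits.append(c % omega)
            c //= omega
        return digits

    digits = gadget_decompose(1234, omega=7, ell=5)
    assert sum(d * 7 ** i for i, d in enumerate(digits)) == 1234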
Last updated:  2024-02-25
Phase-shift Fault Analysis of Grain-128
HRIDYA P R and Jimmy Jose
Phase-shift fault attack is a type of fault attack used for cryptanalysis of stream ciphers. It involves clocking a cipher's feedback shift registers out of phase in order to generate a faulted keystream. The Grain-128 cipher is a 128-bit modification of the Grain cipher, one of the finalists in the eSTREAM project. In this work, we propose a phase-shift fault attack against Grain-128 loaded with key-IV pairs that result in an all-zero LFSR after initialisation. We frame equations in terms of the input and output bits of the cipher and solve them using a SAT solver. By correctly guessing 40 internal state bits, we are able to recover the entire 128-bit key with just 2 phase-shift faults for keystreams of length 200 bits.
Last updated:  2024-02-25
Circuit Bootstrapping: Faster and Smaller
Ruida Wang, Yundi Wen, Zhihao Li, Xianhui Lu, Benqiang Wei, Kun Liu, and Kunpeng Wang
We present a novel circuit bootstrapping algorithm that outperforms the state-of-the-art TFHE method with 9.9× speedup and 15.6× key size reduction. These improvements can be attributed to two technical contributions. Firstly, we redesigned the circuit bootstrapping workflow to operate exclusively on the ring ciphertext type, which eliminates the need for conversion between LWE and RLWE ciphertexts. Secondly, we improve the LMKC+ blind rotation algorithm by reducing the number of automorphisms, and then propose the first automorphism-type multi-value functional bootstrapping. These automorphism-based techniques lead to further key size optimization, and are of independent interest beyond circuit bootstrapping. Based on our new circuit bootstrapping, we can evaluate AES-128 in 26.2s (single thread), achieving 10.3× speedup compared with the state-of-the-art TFHE-based approach.
Last updated:  2024-02-25
Your Reputation's Safe with Me: Framing-Free Distributed Zero-Knowledge Proofs
Carmit Hazay, Muthuramakrishnan Venkitasubramaniam, and Mor Weiss
Distributed Zero-Knowledge (dZK) proofs, recently introduced by Boneh et al. (CRYPTO`19), allow a prover $P$ to prove NP statements on an input $x$ which is distributed between $k$ verifiers $V_1,\ldots,V_k$, where each $V_i$ holds only a piece of $x$. As in standard ZK proofs, dZK proofs guarantee Completeness when all parties are honest; Soundness against a malicious prover colluding with $t$ verifiers; and Zero Knowledge against a subset of $t$ malicious verifiers, in the sense that they learn nothing about the NP witness and the input pieces of the honest verifiers. Unfortunately, dZK proofs provide no correctness guarantee for an honest prover against a subset of maliciously corrupted verifiers. In particular, such verifiers might be able to ``frame'' the prover, causing honest verifiers to reject a true claim. This is a significant limitation, since such scenarios arise naturally in dZK applications, e.g., for proving honest behavior, and such attacks are indeed possible in existing dZKs. We put forth and study the notion of strong completeness for dZKs, guaranteeing that true claims are accepted even when $t$ verifiers are maliciously corrupted. We then design strongly-complete dZK proofs using the ``MPC-in-the-head'' paradigm of Ishai et al. (STOC`07), providing a novel analysis that exploits the unique properties of the distributed setting. To demonstrate the usefulness of strong completeness, we present several applications in which it is instrumental in obtaining security. First, we construct a certifiable version of Verifiable Secret Sharing (VSS), which is a VSS in which the dealer additionally proves that the shared secret satisfies a given NP relation. Our construction withstands a constant fraction of corruptions, whereas a previous construction of Ishai et al. (TCC`14) could only handle $k^{\varepsilon}$ corruptions for a small $\varepsilon<1$. We also design a reusable version of certifiable VSS that we introduce, in which the dealer can prove an unlimited number of predicates on the same shared secret. Finally, we extend a compiler of Boneh et al. (CRYPTO`19), who used dZKs to transform a class of ``natural'' semi-honest protocols in the honest-majority setting into maliciously secure ones with abort. Our compiler uses strongly-complete dZKs to obtain identifiable abort.
Last updated:  2024-02-25
Authentication Key Recovery on Galois Counter Mode (GCM)
John Mattsson and Magnus Westerlund
GCM is used in a vast number of security protocols and is quickly becoming the de facto mode of operation for block ciphers due to its exceptional performance. In this paper we analyze the NIST-standardized version (SP 800-38D) of GCM, and in particular the use of short tag lengths. We show that feedback on successful or unsuccessful forgery attempts is almost always possible, contradicting the NIST assumptions for short tags. We also provide a complexity estimation of Ferguson's authentication key recovery method on short tags, and suggest several novel improvements to Ferguson's attacks that significantly reduce the security level for short tags. We show that for many truncated tag sizes, the security levels are far below not only the current NIST requirement of 112-bit security, but also the old NIST requirement of 80-bit security. We therefore strongly recommend NIST to revise SP 800-38D.
Last updated:  2024-02-25
Theoretical Explanation and Improvement of Deep Learning-aided Cryptanalysis
Weixi Zheng, Liu Zhang, and Zilong Wang
At CRYPTO 2019, Gohr demonstrated that differential-neural distinguishers (DNDs) for Speck32/64 can learn more features than classical cryptanalysis's differential distribution tables (DDT). Furthermore, a non-classical key recovery procedure is devised by combining the Upper Confidence Bound (UCB) strategy and the BayesianKeySearch algorithm. Consequently, the time complexity of 11-round key recovery attacks on Speck32/64 is significantly reduced compared with the state-of-the-art results in classical cryptanalysis. This advancement in deep learning-assisted cryptanalysis has opened up new possibilities. However, the specific encryption features exploited by DNDs remain unclear. In this paper, we begin by analyzing the features learned by DND based on the probability distribution of a ciphertext pair. Our analysis reveals that DND not only learns the differential features of the ciphertext pair but also captures the XOR information of the left and right branches of the ciphertext pair. This explains why the performance of DND can outperform DDT in certain cases. For other ciphers, we can also predict whether deep learning methods can achieve superior results to classical methods based on the probability distribution of the ciphertext pair. Next, we modify the input data format and network structure based on the specific features that can be learned to train DND specifically. With these modifications, it is possible to reduce the size of their parameters to only 1/16 of their previous networks while maintaining high precision. Additionally, the training time for the DNDs is significantly reduced. Finally, to improve the efficiency of deep learning-assisted cryptanalysis, we introduce Bayes-UCB to select promising ciphertext structures more efficiently. We also introduce an improved BayesianKeySearch algorithm to retain guessed keys with the highest scores in key guessing. We use both methods to launch 11-round, 12-round, and 13-round key recovery attacks on Speck32/64. The results show that under the same conditions, the success rate of 11-round key recovery attacks has increased from Gohr's 36.1% to 52.8%, the success rate of 12-round key recovery attacks has increased from Gohr's 39% to 50%, and the success rate of 13-round key recovery attacks has increased from Zhang et al.'s 21% to 24%. In addition, the time complexity of these experiments is also significantly reduced.
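For context, here are the Speck32/64 round function and the pair features in question; the feature map below is our own illustrative rendering of the abstract's observation (the differential of each branch plus the left-right XOR within each ciphertext), not the paper's exact network input.

    MASK = 0xFFFF  # Speck32/64 operates on 16-bit words

    def ror(x, r): return ((x >> r) | (x << (16 - r))) & MASK
    def rol(x, r): return ((x << r) | (x >> (16 - r))) & MASK

    def speck_round(x, y, k):
        x = ((ror(x, 7) + y) & MASK) ^ k
        y = rol(y, 2) ^ x
        return x, y

    def pair_features(xl, xr, yl, yr):
        """Features of a ciphertext pair (xl, xr), (yl, yr): branch differences
        and the XOR of left and right branches of each ciphertext."""
        return (xl ^ yl, xr ^ yr, xl ^ xr, yl ^ yr)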
Last updated:  2024-02-24
Formal Verification of Emulated Floating-Point Arithmetic in Falcon
Vincent Hwang
We show that there is a discrepancy between the emulated floating-point multiplications in the submission package of Falcon and the claimed behavior. In particular, we show that floating-point products with absolute values below the smallest normal positive floating-point number are incorrectly zeroized. However, we show that the discrepancy does not affect the complex fast Fourier transform by modeling the floating-point addition, subtraction, and multiplication in CryptoLine. We later implement our own floating-point multiplications in Armv7-M assembly and Jasmin and prove their equivalence with our model, demonstrating the possibility of transferring the challenging verification task (verifying highly-optimized assembly) to the presumably more readable code base (Jasmin).
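A self-contained toy reproduction of the class of discrepancy described (not Falcon's actual emulation code): a flush-to-zero multiplication zeroizes products whose magnitude falls below the smallest normal positive double, whereas IEEE-754 keeps them as subnormals.

    SMALLEST_NORMAL = 2.0 ** -1022   # smallest normal positive IEEE-754 double

    def mul_flush_to_zero(a, b):
        p = a * b
        return 0.0 if 0.0 < abs(p) < SMALLEST_NORMAL else p

    a, b = 2.0 ** -600, 2.0 ** -450  # exact product 2^-1050 is subnormal
    print(a * b)                     # IEEE-754: a nonzero subnormal
    print(mul_flush_to_zero(a, b))   # emulated variant: incorrectly zeroized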
Last updated:  2024-02-24
POPSTAR: Lightweight Threshold Reporting with Reduced Leakage
Hanjun Li, Sela Navot, and Stefano Tessaro
This paper proposes POPSTAR, a new lightweight protocol for the private computation of heavy hitters, also known as a private threshold reporting system. In such a protocol, the users provide input measurements, and a report server learns which measurements appear more than a pre-specified threshold. POPSTAR follows the same architecture as STAR (Davidson et al, CCS 2022) by relying on a helper randomness server in addition to a main server computing the aggregate heavy hitter statistics. While STAR is extremely lightweight, it leaks a substantial amount of information, consisting of an entire histogram of the provided measurements (but only reveals the actual measurements that appear beyond the threshold). POPSTAR shows that this leakage can be reduced at a modest cost ($\sim$7$\times$ longer aggregation time). Our leakage is closer to that of Poplar (Boneh et al, S&P 2021), which relies however on distributed point functions and a different model which requires interactions of two non-colluding servers (with equal workloads) to compute the heavy hitters.
Last updated:  2024-02-24
On the cryptosystems based on two Eulerian transformations defined over the commutative rings $\mathbb{Z}_{2^s}$, $s>1$.
Vasyl Ustimenko
We suggest the family of ciphers $sE_n$, $n=2,3,\ldots$, with the space of plaintexts $(\mathbb{Z}^*_{2^s})^n$, $s>1$, such that the encryption map is a composition of the kind $G=G_1A_1G_2A_2$, where the $A_i$ are affine transformations from $AGL_n(\mathbb{Z}_{2^s})$ preserving the variety $(\mathbb{Z}^*_{2^s})^n$, and the Eulerian endomorphisms $G_i$, $i=1,2$, of $K[x_1, x_2, \ldots, x_n]$ move each $x_i$ to a monomial term $\mu(x_1)^{d(1)}(x_2)^{d(2)}\cdots(x_n)^{d(n)}$, $\mu \in \mathbb{Z}^*_{2^s}$, and act on $(\mathbb{Z}^*_{2^s})^n$ as bijective transformations. The cipher is converted to a protocol-supported cryptosystem. Protocols of Noncommutative Cryptography implemented on the platform of Eulerian endomorphisms are used for the delivery of the $G_i$ and $A_i$ from Alice to Bob. One can use twisted Diffie-Hellman protocols, whose security rests on the complexity of the Conjugacy Power problem, or the hidden tame homomorphism protocol, whose security rests on the word decomposition problem. Instead of the delivery of the $G_i$, Alice and Bob can elaborate these transformations via the inverse twisted Diffie-Hellman protocol implemented on the platform of tame Eulerian transformations of $(\mathbb{Z}^*_{2^s})^n$. The cost of a single protocol is $O(n^3)$ and the cost of computing the preimage of the used nonlinear map is $O(n^2)$. So the verification of $n^t$, $t\geq 1$, signatures takes time $O(n^{t+2})$. Instead of the inverse twisted Diffie-Hellman protocol, correspondents can use the inverse hidden tame homomorphism protocol, which rests on the complexity of word decomposition for tame Eulerian transformations. We use natural bijections between $\mathbb{Z}^*_{2^s}$ and $\mathbb{Z}_{2^{s-1}}$, between $\mathbb{Z}^*_{2^s}$ and the finite field $\mathbb{F}_{2^{s-1}}$, and between $\mathbb{Z}^*_{2^s}$ and the Boolean ring $B_{s-1}$ of order $2^{s-1}$ to modify the family of ciphers or cryptosystems via the change of $AGL_n(\mathbb{Z}_{2^s})$ for $AGL_n(K)$, where $K$ is one of the rings $\mathbb{Z}_{2^{s-1}}$, $\mathbb{F}_{2^{s-1}}$ or $B_{s-1}$. The new ciphers are defined via the multiplications of two different commutative rings, $\mathbb{Z}_{2^s}$ and $K$. This does not allow one to treat them as stream ciphers of multivariate cryptography and to use the corresponding cryptanalytic techniques; the adversary is not able to use known cryptanalytic methods such as linearisation attacks. We discuss the option of changing the above-mentioned elements of $AGL_n(\mathbb{Z}_{2^s})$ or $AGL_n(K)$ for a nonlinear multivariate transformation $F$ of $(\mathbb{Z}_{2^s})^n$ or $K^n$ with a symmetric trapdoor accelerator $T$, i.e., a piece of information such that knowledge of $T$ allows one to compute the value $F(p)$ at an arbitrarily chosen $p \in P$ in time $O(n^2)$ and to solve an equation of the kind $F(x)=c$ for each $c$ from $C$ in time $O(n^2)$.
Last updated:  2024-02-24
Lightweight Authentication of Web Data via Garble-Then-Prove
Xiang Xie, Kang Yang, Xiao Wang, and Yu Yu
Transport Layer Security (TLS) establishes an authenticated and confidential channel to deliver data for almost all Internet applications. A recent work (Zhang et al., CCS'20) proposed a protocol to prove the TLS payload to a third party, without any modification of TLS servers, while ensuring the privacy and originality of the data in the presence of malicious adversaries. However, it required maliciously secure Two-Party Computation (2PC) for generic circuits, leading to significant computational and communication overhead. This paper proposes the garble-then-prove technique to achieve the same security requirement without using any heavy mechanism like generic malicious 2PC. Our end-to-end implementation shows 14$\times$ improvement in communication and an order of magnitude improvement in computation over the state-of-the-art protocol. We also show worldwide performance when using our protocol to authenticate payload data from Coinbase and Twitter APIs. Finally, we propose an efficient gadget to privately convert the above authenticated TLS payload to additively homomorphic commitments so that the properties of the payload can be proven efficiently using zkSNARKs.
Last updated:  2024-02-24
Interactive Oracle Arguments in the QROM and Applications to Succinct Verification of Quantum Computation
Islam Faisal
This work is motivated by the following question: can an untrusted quantum server convince a classical verifier of the answer to an efficient quantum computation using only polylogarithmic communication? We show how to achieve this in the quantum random oracle model (QROM), after a non-succinct instance-independent setup phase. We introduce and formalize the notion of post-quantum interactive oracle arguments for languages in QMA, a generalization of interactive oracle proofs (Ben-Sasson-Chiesa-Spooner). We then show how to compile any non-adaptive public-coin interactive oracle argument (with private setup) into a succinct argument (with setup) in the QROM. To conditionally answer our motivating question via this framework under the post-quantum hardness assumption of LWE, we show that the ZX local Hamiltonian problem with at least inverse-polylogarithmic relative promise gap has an interactive oracle argument with instance-independent setup, which we can then compile. Assuming a variant of the quantum PCP conjecture that we introduce called the weak ZX quantum PCP conjecture, we obtain a succinct argument for QMA (and consequently the verification of quantum computation) in the QROM (with non-succinct instance-independent setup) which makes only black-box use of the underlying cryptographic primitives.
Last updated:  2024-02-23
Plinko: Single-Server PIR with Efficient Updates via Invertible PRFs
Alexander Hoover, Sarvar Patel, Giuseppe Persiano, and Kevin Yeo
We study single-server private information retrieval (PIR) where a client wishes to privately retrieve the $x$-th entry from a database held by a server without revealing the index $x$. In our work, we focus on PIR with client pre-processing where the client may compute hints during an offline phase. The hints are then leveraged during queries to obtain sub-linear online time. We present Plinko that is the first single-server PIR with client pre-processing that obtains optimal trade-offs between client storage and query time for all parameters. Our scheme uses $t = \tilde{O}(n/r)$ query time for any client storage size $r$. This matches known lower bounds of $r \cdot t = \Omega(n)$ up to logarithmic factors for all parameterizations whereas prior works could only match the lower bound when $r = \tilde{O}(\sqrt{n})$. Moreover, Plinko is also the first updateable PIR scheme where an entry can be updated in worst-case $\tilde{O}(1)$ time. As our main technical tool, we define the notion of an invertible pseudorandom function (iPRF) that generalizes standard PRFs to be equipped with an efficient inversion algorithm. We present a construction of an iPRF from one-way functions where forward evaluation runs in $\tilde{O}(1)$ time and inversion runs in time linear in the inverse set (output) size. Furthermore, our iPRF construction is the first that remains efficient and secure for arbitrary domain and range sizes (including small domains and ranges). In the context of single-server PIR, we show that iPRFs may be used to construct the first hint set representation where finding a hint containing an entry $x$ may be done in $\tilde{O}(1)$ time.
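To convey the iPRF interface (not the paper's one-way-function construction), here is a toy instantiation: a small Feistel permutation whose output is truncated to the range, so forward evaluation is one permutation call and inversion enumerates the preimage set by running the permutation backwards on every completion of the truncated output. Everything below is illustrative.

    import hashlib

    def _f(key, r, x, half):
        h = hashlib.sha256(key + bytes([r]) + x.to_bytes(8, "big")).digest()
        return int.from_bytes(h, "big") % (1 << half)

    def perm(key, x, half, rounds=4):            # Feistel permutation on 2*half bits
        L, R = x >> half, x & ((1 << half) - 1)
        for r in range(rounds):
            L, R = R, L ^ _f(key, r, R, half)
        return (L << half) | R

    def perm_inv(key, y, half, rounds=4):
        L, R = y >> half, y & ((1 << half) - 1)
        for r in reversed(range(rounds)):
            L, R = R ^ _f(key, r, L, half), L
        return (L << half) | R

    def iprf(key, x, half, out_bits):            # forward evaluation: O(1)
        return perm(key, x, half) >> (2 * half - out_bits)

    def iprf_invert(key, y, half, out_bits):     # time linear in the preimage set
        drop = 2 * half - out_bits
        return [perm_inv(key, (y << drop) | low, half) for low in range(1 << drop)]

    key = b"toy-key"
    xs = iprf_invert(key, iprf(key, 1234, half=12, out_bits=16), half=12, out_bits=16)
    assert 1234 in xs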
Last updated:  2024-02-23
Closing the Efficiency Gap between Synchronous and Network-Agnostic Consensus
Giovanni Deligios and Mose Mizrahi Erbes
In the consensus problem, $n$ parties want to agree on a common value, even if some of them are corrupt and arbitrarily misbehave. If the parties have a common input $m$, then they must agree on $m$. Protocols solving consensus assume either a synchronous communication network, where messages are delivered within a known time, or an asynchronous network with arbitrary delays. Asynchronous protocols only tolerate $t_a < n/3$ corrupt parties. Synchronous ones can tolerate $t_s < n/2$ corruptions with setup, but their security completely breaks down if the synchrony assumptions are violated. Network-agnostic consensus protocols, as introduced by Blum, Katz, and Loss [TCC'19], are secure regardless of network conditions, tolerating up to $t_s$ corruptions with synchrony and $t_a$ without, under provably optimal assumptions $t_a \leq t_s$ and $2t_s + t_a < n$. Despite efforts to improve their efficiency, all known network-agnostic protocols fall short of the asymptotic complexity of state-of-the-art purely synchronous protocols. In this work, we introduce a novel technique to compile any synchronous and any asynchronous consensus protocols into a network-agnostic one. This process only incurs a small constant number of overhead rounds, so that the compiled protocol matches the optimal round complexity for synchronous protocols. Under a variety of assumptions, our compiler also preserves the asymptotic communication complexity of state-of-the-art synchronous and asynchronous protocols. Hence, it closes the current efficiency gap between synchronous and network-agnostic consensus. As a plus, our protocols support $\ell$-bit inputs, and can be extended to achieve communication complexity $O(n^2\kappa + \ell n)$ under the assumptions for which this is known to be possible for purely synchronous protocols.
Last updated:  2024-02-23
Threshold Garbled Circuits with Low Overhead
Schuyler Rosefield, abhi shelat, and LaKyah Tyner
The folklore approach to designing a threshold variant of symmetric cryptographic algorithms involves applying generic MPC methods to secret sharing techniques: the MPC first combines participant input shares using the secret sharing scheme, and then evaluates the cryptographic function on the reconstructed key. Hardening this to be secure against $n-1$ malicious parties requires some mechanism to ensure input consistency, e.g., adding MACs to inputs, which consequently increases the number of inputs and gates to the MPC. In many cases, this extra overhead is substantially more than the underlying cost of evaluating the symmetric cryptographic algorithm. We present a scheme that can convert any suitable maliciously secure dishonest majority boolean-circuit $F_{\mathsf{MPC}}$ into a threshold scheme $F_{\mathsf{thresh}}$ with almost no overhead. Specifically, we present an SUC-secure scheme that allows for reactive threshold $t$-of-$n$ boolean circuit evaluation amongst a group of $n$ parties $P$, for any $t \leq n$, against a malicious adversary that corrupts any number of parties less than the threshold $t$. Moreover, multiple circuits can be evaluated sequentially, with the secret-shared authenticated outputs of a circuit used subsequently as inputs for a new circuit by any $S \subseteq P$ of size $|S| \geq t$. Building upon the works of Wang et al., Hazay et al., and Yang et al. [WRK17, HSSV17, YWZ20] for dishonest majority $F_{\mathsf{MPC}}$, our key insight is to create threshold versions of the “authenticated bits” used to handle input in these recent $n$-party garbled circuits protocols. The resulting design incurs a small overhead to produce the reusable “threshold authenticated bits” during preprocessing, and adds no extra communication to evaluate with the authenticated input during the online phase. Using our methods, thresholdizing a boolean circuit has essentially no performance overhead. For example, to compute HMAC, a full Setup+Eval execution of the $(n-2)$-out-of-$n$ thresholdized version is approximately 4% more expensive than the state-of-the-art $n$-party MPC. In contrast, using the folklore method is approximately 100% more expensive. This is especially true for small circuits such as AES, which has 6800 gates and thus incurs the most overhead for thresholdizing. Simply considering the online Eval cost, our approach can evaluate AES blocks at 2.3 blocks/s with 16 parties, exceeding the baseline MPC cost without preprocessing, and surpassing the folklore method that only achieves 0.33 blocks/s. Ultimately, this result makes threshold boolean circuit MPC as feasible as any MPC application.
Last updated:  2024-02-23
Alternative Key Schedules for the AES
Christina Boura, Patrick Derbez, and Margot Funk
The AES block cipher is today the most important and analyzed symmetric algorithm. While all versions of the AES are known to be secure in the single-key setting, this is not the case in the related-key scenario. In this article we try to answer the question of whether the AES would better resist differential-like related-key attacks if its key schedule were different. For this, we search for alternative permutation-based key schedules by extending the work of Khoo et al. at ToSC 2017 and Derbez et al. at SAC 2018. We first show that the model of Derbez et al. was flawed. Then, we develop different approaches together with MILP-based tools to find good permutations that could be used as the key schedule for AES-128, AES-192 and AES-256. Our methods allowed us to find permutations that outperform the permutation exhibited by Khoo et al. for AES-128. Moreover, our new approach based on two MILP models that call one another allowed us to handle a larger search space and thus to search for alternative key schedules for the two bigger versions of AES. This method allowed us to find permutations for AES-192 and AES-256 that provide better resistance to related-key differential attacks. Most importantly, we show that these variants can resist full-round boomerang attacks.
Last updated:  2024-02-23
Keeping Up with the KEMs: Stronger Security Notions for KEMs and automated analysis of KEM-based protocols
Cas Cremers, Alexander Dax, and Niklas Medinger
Key Encapsulation Mechanisms (KEMs) are a critical building block for hybrid encryption and modern security protocols, notably in the post-quantum setting. Given the asymmetric public key of a recipient, the primitive establishes a shared secret key between sender and recipient. In recent years, a large number of abstract designs and concrete implementations of KEMs have been proposed, e.g., in the context of the NIST process for post-quantum primitives. In this work, we (i) establish stronger security notions for KEMs, and (ii) develop a symbolic analysis method to analyze security protocols that use KEMs. First, we generalize existing security notions for KEMs in the computational setting, introduce several stronger security notions, and prove their relations. Our new properties formalize in which sense outputs of the KEM uniquely determine, i.e., bind, other values. Our new binding properties can be used, e.g., to prove the absence of attacks that were not captured by prior security notions, such as re-encapsulation attacks. Second, we develop a family of fine-grained symbolic models that correspond to our hierarchy of computational security notions, and are suitable for the automated analysis of KEM-based security protocols. We encode our models as a library in the framework of the Tamarin prover. Given a KEM-based protocol, our approach can automatically derive the minimal binding properties required from the KEM; or, if also given a concrete KEM, can analyze whether the protocol meets its security goals. In case studies, Tamarin automatically discovers, e.g., that the key exchange protocol proposed in the original Kyber paper requires stronger properties from the KEM than were proven in the paper.
Last updated:  2024-02-23
Committing AE from Sponges: Security Analysis of the NIST LWC Finalists
Juliane Krämer, Patrick Struck, and Maximiliane Weishäupl
Committing security has gained considerable attention in the field of authenticated encryption (AE). This can be traced back to a line of recent attacks, which entail that AE schemes used in practice should not only provide confidentiality and authenticity, but also committing security. Roughly speaking, a committing AE scheme guarantees that ciphertexts will decrypt only for one key. Despite the recent research effort in this area, the finalists of the NIST lightweight cryptography standardization process have not been put under consideration yet. We close this gap by providing an analysis of these schemes with respect to their committing security. Despite the structural similarities the finalists exhibit, our results are of a quite heterogeneous nature: We break four of the schemes with effectively no costs, while for two schemes our attacks are costlier, yet still efficient. For the remaining three schemes ISAP, Ascon, and (a slightly modified version of) Schwaemm, we give formal security proofs. Our analysis reveals that sponges, due to their large states, are more favorable for committing security compared to block ciphers.
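As a reference point for what committing security demands, a folklore generic fix is to attach an explicit commitment to the key, so that a ciphertext verifies under exactly one key. A sketch of that transform, where ae_encrypt/ae_decrypt stand for an arbitrary AEAD and are assumptions of ours, not taken from any of the analyzed schemes:

    import hashlib, hmac

    def committing_encrypt(ae_encrypt, key, nonce, msg):
        com = hashlib.sha256(b"commit" + key + nonce).digest()
        return com + ae_encrypt(key, nonce, msg)

    def committing_decrypt(ae_decrypt, key, nonce, blob):
        com, ct = blob[:32], blob[32:]
        expected = hashlib.sha256(b"commit" + key + nonce).digest()
        if not hmac.compare_digest(com, expected):
            raise ValueError("key-commitment check failed")
        return ae_decrypt(key, nonce, ct)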
Last updated:  2024-02-23
Exploring the Advantages and Challenges of Fermat NTT in FHE Acceleration
Andrey Kim, Ahmet Can Mert, Anisha Mukherjee, Aikata Aikata, Maxim Deryabin, Sunmin Kwon, HyungChul Kang, and Sujoy Sinha Roy
Recognizing the importance of fast and resource-efficient polynomial multiplication in homomorphic encryption, in this paper we introduce a novel method that enables an integer-multiplier-less Number Theoretic Transform (NTT) for computing polynomial multiplication. First, we use a Fermat number as an auxiliary modulus of the NTT. However, this approach of using a Fermat number scales poorly with the polynomial degree. Hence, we propose a transformation of a large-degree univariate polynomial into small-degree multivariate polynomials. After that, we compute these NTTs on small-degree polynomials with a Fermat number as modulus. We design an accelerator architecture customized for the novel multivariate NTT and use it for benchmarking practical homomorphic encryption applications. The accelerator can achieve a 1,200× speed-up compared to software implementations. We further discuss the potential and limitations of the proposed polynomial multiplication method in the context of homomorphic encryption.
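The classical observation behind this approach is that modulo a Fermat number $F_t = 2^{2^t}+1$, the element 2 is a root of unity, so every NTT twiddle multiplication becomes a bit shift followed by a cheap folding reduction. A naive $O(n^2)$ Python sketch for $F_4 = 2^{16}+1$ and length-32 transforms with root 2; the paper's multivariate decomposition for large degrees is not shown.

    F = (1 << 16) + 1          # Fermat number F_4 = 65537; 2 has order 32 mod F

    def shift_mod(x, k):
        """x * 2**k mod F using shifts only: fold 16-bit chunks, since 2^16 = -1."""
        y, r, sign = x << (k % 32), 0, 1
        while y:
            r += sign * (y & 0xFFFF)
            y >>= 16
            sign = -sign
        return r % F

    def fermat_ntt(a):
        """Length-32 NTT mod F_4 with root 2: no integer multiplier needed."""
        n = len(a)
        assert n == 32
        return [sum(shift_mod(a[j], i * j) for j in range(n)) % F for i in range(n)]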
Last updated:  2024-02-23
flookup: Fractional decomposition-based lookups in quasi-linear time independent of table size
Ariel Gabizon and Dmitry Khovratovich
We present a protocol for checking that the values of a committed polynomial $\phi(X)$ over a multiplicative subgroup $H\subset \mathbb{F}$ of size $m$ are contained in a table $T\in \mathbb{F}^N$. After an $O(N \log^2 N)$ preprocessing step, the prover algorithm runs in *quasilinear* time $O(m\log ^2 m)$. We improve upon the recent breakthrough results Caulk [ZBK+22] and Caulk+ [PK22], which were the first to achieve complexity sublinear in the full table size $N$, with prover time being $O(m^2+m\log N)$ and $O(m^2)$, respectively. We pose further improving this complexity to $O(m\log m)$ as the next important milestone for efficient zk-SNARK lookups.
Last updated:  2024-02-23
Trapdoor Memory-Hard Functions
Benedikt Auerbach, Christoph U. Günther, and Krzysztof Pietrzak
Memory-hard functions (MHF) are functions whose evaluation provably requires a lot of memory. While MHFs are an unkeyed primitive, it is natural to consider the notion of trapdoor MHFs (TMHFs). A TMHF is like an MHF, but when sampling the public parameters one also samples a trapdoor which allows evaluating the function much cheaper. Biryukov and Perrin (Asiacrypt'17) were the first to consider TMHFs and put forth a candidate TMHF construction called Diodon that is based on the Scrypt MHF (Percival, BSDCan'09). To allow for a trapdoor, Scrypt's initial hash chain is replaced by a sequence of squares in a group of unknown order where the order of the group is the trapdoor. For a length $n$ sequence of squares and a group of order $N$, Diodon's cumulative memory complexity (CMC) is $O(n^2\log N)$ without the trapdoor and $O(n \log(n) \log(N)^2)$ with knowledge of it. While Scrypt is proven to be optimally memory-hard in the random oracle model (Alwen et al., Eurocrypt'17), Diodon's memory-hardness has not been proven so far. In this work, we fill this gap by rigorously analyzing a specific instantiation of Diodon. We show that its CMC is lower bounded by $\Omega(\frac{n^2}{\log n} \log N)$ which almost matches the upper bound. Our proof is based on Alwen et al.'s lower bound on Scrypt's CMC but requires non-trivial modifications due to the algebraic structure of Diodon. Most importantly, our analysis involves a more elaborate compression argument and a solvability criterion for certain systems of Diophantine equations.
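The trapdoor mechanism is easy to see in code: without it, Diodon-style evaluation walks an inherently sequential chain of squarings modulo an RSA-type $N$; with the group order as trapdoor, the whole chain collapses to a single exponentiation with exponent $2^n \bmod \varphi(N)$. A toy sketch with small-for-illustration parameters:

    from sympy import randprime

    p, q = randprime(2 ** 255, 2 ** 256), randprime(2 ** 255, 2 ** 256)
    N, phi = p * q, (p - 1) * (q - 1)           # phi is the trapdoor

    def chain(x, n):                            # public: n sequential squarings
        for _ in range(n):
            x = x * x % N
        return x

    def shortcut(x, n):                         # trapdoor: one exponentiation
        return pow(x, pow(2, n, phi), N)

    x = 0xC0FFEE
    assert chain(x, 10_000) == shortcut(x, 10_000)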
Last updated:  2024-02-23
Aggregating Falcon Signatures with LaBRADOR
Marius A. Aardal, Diego F. Aranha, Katharina Boudgoust, Sebastian Kolby, and Akira Takahashi
Several prior works have suggested to use non-interactive arguments of knowledge with short proofs to aggregate signatures of Falcon, which is part of the first post-quantum signatures selected for standardization by NIST. Especially LaBRADOR, based on standard structured lattice assumptions and published at CRYPTO’23, seems promising to realize this task. However, no prior work has tackled this idea in a rigorous way. In this paper, we thoroughly prove how to aggregate Falcon signatures using LaBRADOR. First, we improve LaBRADOR by moving from a low-splitting to a high-splitting ring, allowing for faster computations. This modification leads to some additional technical challenges for proving the knowledge soundness of LaBRADOR. Moreover, we provide the first complete knowledge soundness analysis for the non-interactive version of LaBRADOR. Here, the multi-round and recursive nature of LaBRADOR requires a complex and thorough analysis. For this purpose, we introduce the notion of predicate special soundness (PSS). This is a general framework for evaluating the knowledge error of complex Fiat-Shamir arguments of knowledge protocols in a modular fashion, which we believe to be of independent interest. Lastly, we explain the exact steps to take in order to adapt the LaBRADOR proof system for aggregating Falcon signatures and provide concrete estimates for proof sizes. Additionally, we formalize the folklore approach of obtaining aggregate signatures from the class of hash-then-sign signatures through arguments of knowledge.
Last updated:  2024-02-23
A Zero-Dimensional Gröbner Basis for Poseidon
Matthias Johann Steiner
In this paper we construct dedicated weight orders $>$ so that a $>$-Gröbner basis of Poseidon can be found via linear transformations for the preimage as well as the CICO problem. In particular, with our Gröbner bases we can exactly compute the $\mathbb{F}_q$-vector space dimension of the quotient space for all possible Poseidon configurations. This in turn resolves previous attempts to assess the security of Poseidon against Gröbner basis attacks, since the vector space dimension quantifies the complexity of computing the variety of a zero-dimensional polynomial system.
Last updated:  2024-02-23
PLONK: Permutations over Lagrange-bases for Oecumenical Noninteractive arguments of Knowledge
Ariel Gabizon, Zachary J. Williamson, and Oana Ciobotaru
zk-SNARK constructions that utilize an updatable universal structured reference string remove one of the main obstacles in deploying zk-SNARKs [GKMMM, Crypto 2018]. The important work of Maller et al. [MBKM, CCS 2019] presented $\mathsf{Sonic}$ - the first potentially practical zk-SNARK with fully succinct verification for general arithmetic circuits with such an SRS. However, the version of $\mathsf{Sonic}$ enabling fully succinct verification still requires relatively high proof construction overheads. We present a universal SNARK construction with fully succinct verification and significantly lower prover running time (roughly 7.5-20$\times$ fewer group exponentiations than [MBKM] in the fully succinct verifier mode, depending on circuit structure). Similarly to [MBKM], we rely on a permutation argument based on Bayer and Groth [Eurocrypt 2012]. However, we focus on ``evaluations on a subgroup rather than coefficients of monomials'', which enables simplifying both the permutation argument and the arithmetization step.
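The phrase "evaluations on a subgroup" means representing each polynomial by its values on a multiplicative subgroup $H = \{\omega^0,\ldots,\omega^{n-1}\}$ rather than by monomial coefficients, so that addition and multiplication of polynomials become pointwise operations. A toy illustration over $\mathbb{F}_{17}$ (our parameters, not PLONK's field):

    q, n, omega = 17, 4, 4                      # omega = 4 has order 4 in F_17
    H = [pow(omega, i, q) for i in range(n)]    # H = [1, 4, 16, 13]

    f = [(3 * h + 5) % q for h in H]            # f(X) = 3X + 5 in evaluation form
    g = [(h * h) % q for h in H]                # g(X) = X^2 in evaluation form

    f_plus_g = [(a + b) % q for a, b in zip(f, g)]   # pointwise addition
    f_times_g = [(a * b) % q for a, b in zip(f, g)]  # pointwise product on H
    # (for products the degree can exceed n - 1, so real systems first move to
    #  a larger evaluation domain)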
Last updated:  2024-02-23
On Sigma-Protocols and (packed) Black-Box Secret Sharing Schemes
Claudia Bartoli and Ignacio Cascudo
$\Sigma$-protocols are a widely utilized, relatively simple and well understood type of zero-knowledge proofs. However, the well known Schnorr $\Sigma$-protocol for proving knowledge of a discrete logarithm in a cyclic group of known prime order, and similar protocols working over this type of groups, are hard to generalize to other groups, in particular to hidden order groups, due to the inability of the knowledge extractor to invert elements modulo the order. In this paper, we introduce a universal construction of $\Sigma$-protocols designed to prove knowledge of preimages of group homomorphisms for any abelian finite group. In order to do this, we first establish a general construction of a $\Sigma$-protocol for $\mathfrak{R}$-module homomorphisms given only a linear secret sharing scheme over the ring $\mathfrak{R}$, where zero knowledge and special soundness can be related to the privacy and reconstruction properties of the secret sharing scheme. Then, we introduce a new construction of a 2-out-of-$n$ packed black-box secret sharing scheme capable of sharing $k$ elements of an arbitrary (abelian, finite) group where each share consists of $k+\log n-3$ group elements. From these two elements we obtain a generic ``batch'' $\Sigma$-protocol for proving knowledge of $k$ preimages of elements via the same group homomorphism, which communicates $k+\lambda-3$ elements of the group to achieve $2^{-\lambda}$ knowledge error. For the case of class groups, we show that our $\Sigma$-protocol improves in several aspects on existing proofs for knowledge of discrete logarithm and other related statements that have been used in a number of works. Finally, we extend our constructions from group homomorphisms to the case of ZK-ready functions, introduced by Cramer and Damgård in Crypto 09, which in particular include the case of proofs of knowledge of plaintext (and randomness) for some linearly homomorphic encryption schemes such as Joye-Libert encryption. However, in the case of Joye-Libert, we show an even better alternative, using Shamir secret sharing over Galois rings, which achieves $2^{-k}$ knowledge soundness by communicating $k$ ciphertexts to prove $k$ statements.
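For reference, here is the Schnorr $\Sigma$-protocol that this construction generalizes, over a toy prime-order subgroup ($p = 23$, $q = 11$, $g = 4$; insecure sizes, for illustration only):

    import secrets

    p, q, g = 23, 11, 4          # g generates the order-11 subgroup of Z_23*
    x = 7                        # prover's witness
    y = pow(g, x, p)             # public statement y = g^x

    r = secrets.randbelow(q)     # commit
    a = pow(g, r, p)
    e = secrets.randbelow(q)     # verifier's challenge
    z = (r + e * x) % q          # response

    assert pow(g, z, p) == (a * pow(y, e, p)) % p   # verify g^z == a * y^e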
Last updated:  2024-02-23
HaMAYO: A Fault-Tolerant Reconfigurable Hardware Implementation of the MAYO Signature Scheme
Oussama Sayari, Soundes Marzougui, Thomas Aulbach, Juliane Krämer, and Jean-Pierre Seifert
MAYO is a topical modification of the established multivariate signature scheme UOV. Signer and verifier locally enlarge the public key map, such that the dimension of the oil space and therefore the parameter sizes in general can be reduced. This significantly reduces the public key size while maintaining the appealing properties of UOV, like short signatures and fast verification. Therefore, MAYO is considered an attractive candidate in the NIST call for additional digital signatures and might be an adequate solution for real-world deployment in resource-constrained devices. When moving to hardware implementations of multivariate schemes, and specifically of MAYO, several challenges arise, notably resource utilization, which scales up with higher parameter sets. To accommodate this, we introduce a configurable hardware implementation designed for integration across various FPGA architectures. Our approach features adaptable configurations aligned with NIST-defined security levels and incorporates resource optimization modules. Our implementation is specifically tested on the Zynq ZedBoard with the Zynq-7020 SoC, with performance evaluations and comparisons made against previous hardware implementations of multivariate schemes. Furthermore, we conducted a security analysis of the MAYO implementation, highlighting potential physical attacks, and implemented lightweight countermeasures.
Last updated:  2024-02-23
NiLoPher: Breaking a Modern SAT-Hardened Logic-Locking Scheme via Power Analysis Attack
Prithwish Basu Roy, Johann Knechtel, Akashdeep Saha, Saideep Sreekumar, Likhitha Mankali, Mohammed Nabeel, Debdeep Mukhopadhyay, Ramesh Karri, and Ozgur Sinanoglu
LoPher brings, for the first time, cryptographic security promises to the field of logic locking, in a bid to break the game of cat-and-mouse seen in the field. Toward this end, LoPher embeds the circuitry to lock within multiple rounds of a block cipher, by carefully configuring all the S-Boxes. To realize general Boolean functionalities and to support varying interconnect topologies, LoPher also introduces additional layers of MUXes between S-Boxes and the permutation operations. The authors of LoPher claim resilience against SAT-based attacks in particular. Here, we show the first successful attack on LoPher. First, we uncover a significant limitation of LoPher's key-space configuration, resulting in large numbers of equivalent keys and, thus, a largely simplified search space for attackers in practice. Second, motivated by their well-proven effectiveness against ciphers, we employ a power side-channel attack against LoPher. We find that ISCAS-85 benchmarks locked with LoPher can all be broken within a few thousand traces. Finally, we also outline a simple and low-cost countermeasure to render LoPher more secure.
Last updated:  2024-02-23
C'est très CHIC: A compact password-authenticated key exchange from lattice-based KEM
Afonso Arriaga, Manuel Barbosa, Stanislaw Jarecki, and Marjan Skrobot
Several Password Authenticated Key Exchange (PAKE) protocols have been recently proposed that leverage a Key-Encapsulation Mechanism (KEM) to create an efficient and easy-to-implement post-quantum secure PAKE. This line of work is driven by the intention of the National Institute of Standards and Technology (NIST) to soon standardize a lattice-based post-quantum KEM called $\mathsf{Kyber}$. In two recent works, Beguinet et al. (ACNS 2023) and Pan and Zeng (ASIACRYPT 2023) proposed generic compilers that transform a KEM into a PAKE, relying on an Ideal Cipher (IC) defined over a group. However, although an IC on a group is often used in cryptographic protocols, special care must be taken to instantiate such objects in practice, especially when a low-entropy key is used. To address this concern, Dos Santos et al. (EUROCRYPT 2023) proposed a relaxation of the IC model under the Universal Composability (UC) framework called Half-Ideal Cipher (HIC). They demonstrate how to construct a UC-secure PAKE protocol, named $\mathsf{EKE\textrm{-}KEM}$, from a KEM and a modified 2-round Feistel construction called $\mathsf{m2F}$. Remarkably, $\mathsf{m2F}$ sidesteps the use of an IC over a group, instead employing an IC defined over a fixed-length bitstring domain, which is easier to instantiate. In this paper, we introduce a novel PAKE protocol called $\mathsf{CHIC}$ that improves the communication and computation efficiency of $\mathsf{EKE\textrm{-}KEM}$. We do so by opening the $\mathsf{m2F}$ construction in a white-box manner and avoiding the HIC abstraction in our analysis. We provide a detailed proof of the security of $\mathsf{CHIC}$ and establish precise security requirements for the underlying KEM, including one-wayness and anonymity of ciphertexts, and uniformity of public keys. Our analysis improves prior work by pinpointing the necessary and sufficient conditions for a tight security proof. Our findings extend to general KEM-based EKE-style protocols, under both game-based definitions (with Perfect Forward Secrecy) and UC PAKE definitions, and show that a passively secure KEM is not sufficient. In this respect, our results align with those of Pan and Zeng (ASIACRYPT 2023), but contradict the analyses of KEM-to-PAKE compilers by Beguinet et al. (ACNS 2023) and Dos Santos et al. (EUROCRYPT 2023). Finally, we provide an implementation of $\mathsf{CHIC}$, highlighting its minimal overhead compared to the underlying CCA-secure KEM, $\mathsf{Kyber}$. An interesting aspect of the implementation is that we reuse existing $\mathsf{Kyber}$ reference code to solve an open problem concerning instantiating the half-ideal cipher construction. Specifically, we reuse the rejection sampling procedure, originally designed for public-key compression, to implement the hash onto the public key space, which is a component of the half-ideal cipher. As of now, to the best of our knowledge, CHIC stands as the most efficient PAKE protocol from a black-box KEM that offers rigorously proven UC security.
Last updated:  2024-02-23
SweetPAKE: Key exchange with decoy passwords
Afonso Arriaga, Peter Y.A. Ryan, and Marjan Skrobot
Decoy accounts are often used as an indicator of the compromise of sensitive data, such as password files. An attacker targeting only specific known-to-be-real accounts might, however, remain undetected. A more effective method proposed by Juels and Rivest at CCS'13 is to maintain additional fake passwords associated with each account. An attacker who gains access to the password file is unable to tell apart real passwords from fake passwords, and the attempted usage of a false password immediately sets off an alarm indicating a password file compromise. Password-Authenticated Key Exchange (PAKE) has long been recognised for its strong security guarantees when it comes to low-entropy password authentication and secure channel establishment, without having to rely on the setup of a PKI. In this paper, we introduce SweetPAKE, a new cryptographic primitive that offers the same security guarantees as PAKE for key exchange, while allowing clients with a single password to authenticate against servers with $n$ candidate passwords for that account and establish a secure channel. Additional security properties are identified and formalized to ensure that (a) high-entropy session keys are indistinguishable from random, even if later on the long-term secret password becomes corrupted (forward secrecy); (b) upon password file leakage, an adversary cannot tell apart real from fake passwords; and (c) a malicious client cannot trigger a false alarm. We capture these properties by extending well-established game-based definitions of PAKE. Furthermore, we propose a new UC formulation that comprehensively unifies both SweetPAKE (session key indistinguishability and sugarword indistinguishability) and a related notion known as Oblivious-PAKE. Finally, we propose efficient SweetPAKE and Oblivious-PAKE protocols constructed from Password-Authenticated Public-Key Encryption (PAPKE) that satisfy all the proposed notions.
Last updated:  2024-02-23
Concretely Efficient Lattice-based Polynomial Commitment from Standard Assumptions
Intak Hwang, Jinyeong Seo, and Yongsoo Song
Polynomial commitment is a crucial cryptographic primitive in constructing zkSNARKs. To date, most practical constructions are either insecure against quantum adversaries or lack homomorphic properties, which are useful in recursive compositions of SNARKs. Recently, lattice-based constructions from functional commitments have drawn attention for possessing all the desirable properties, but they still lack concrete efficiency, and their extractability, which is essential for SNARKs, requires further analysis. In this paper, we propose a novel construction of an extractable polynomial commitment scheme based on standard lattice-based assumptions, which is transparent and publicly verifiable. Our polynomial commitment has square-root proof size and verification complexity, yet it provides concrete efficiency in proof size, proof generation, and verification. When compared with the recent code-based construction based on Brakedown (CRYPTO 2023), our construction provides comparable performance in all aspects.
Last updated:  2024-02-23
Single-Input Functionality against a Dishonest Majority: Practical and Round-Optimal
Zhelei Zhou, Bingsheng Zhang, Hong-Sheng Zhou, and Kui Ren
In this work, we focus on Single-Input Functionality (SIF), which can be viewed as a special case of MPC. In a SIF, only one distinguished party, called the dealer, holds a private input. SIF allows the dealer to perform a computation task with other parties without revealing any additional information about the private input. SIF has diverse applications, including multi-verifier zero-knowledge and verifiable relation sharing. As our main contribution, we propose the first 1-round SIF protocol against a dishonest majority in the preprocessing model, which is highly efficient. The only prior work that achieves 1-round online communication assumes an honest majority and is only a feasibility result (Applebaum et al., Crypto 2022). We implement our protocols and conduct extensive experiments to illustrate their practical efficiency. As a side product, we extend subfield Vector Oblivious Linear Evaluation (sVOLE) to the multi-party setting and propose a new primitive called multi-verifier sVOLE, which may be of independent interest.
Last updated:  2024-02-23
A generic algorithm for efficient key recovery in differential attacks – and its associated tool
Christina Boura, Nicolas David, Patrick Derbez, Rachelle Heim Boissier, and María Naya-Plasencia
Differential cryptanalysis is an old and powerful attack against block ciphers. While different techniques have been introduced throughout the years to improve the complexity of this attack, the key recovery phase remains a tedious and error-prone procedure. In this work, we propose a new algorithm and its associated tool that, given a distinguisher, outputs an efficient key-guessing strategy. Our tool can be applied to SPN ciphers whose linear layer consists of a bit-permutation and whose key schedule is linear or almost linear. It can be used not only to help cryptanalysts find the best differential attack on a given cipher but also to assist designers in their security analysis. We applied our tool to four targets: RECTANGLE, PRESENT-80, SPEEDY-7-192 and GIFT-64. We extend the previous best attack on RECTANGLE-128 by one round and the previous best differential attack against PRESENT-80 by 2 rounds. We improve a previous key recovery step in an attack against SPEEDY and present more efficient key recovery strategies for RECTANGLE-80 and GIFT. Our tool outputs its results within a second for most targets.
Last updated:  2024-02-23
A Two-Layer Blockchain Sharding Protocol Leveraging Safety and Liveness for Enhanced Performance
Yibin Xu, Jingyi Zheng, Boris Düdder, Tijs Slaats, and Yongluan Zhou
Sharding is a critical technique that enhances the scalability of blockchain technology. However, existing protocols often assume adversarial nodes in general terms, without considering the different types of attacks; this limits transaction throughput at runtime, because attacks on liveness could otherwise be mitigated. There have been attempts to increase transaction throughput by handling the two attack types separately; however, these have security vulnerabilities. This paper introduces Reticulum, a novel sharding protocol that overcomes these limitations and achieves enhanced scalability in a blockchain network without security vulnerabilities. Reticulum employs a two-phase design that dynamically adjusts transaction throughput based on runtime adversarial attacks on either or both liveness and safety. It consists of `control' and `process' shards in two layers corresponding to the two phases. Process shards are subsets of control shards, and each process shard is expected to contain at least one honest node with high confidence, while control shards are expected to have a majority of honest nodes with high confidence. Reticulum leverages unanimous voting in the first phase to involve fewer nodes in accepting/rejecting a block, allowing more parallel process shards. The control shard finalizes the decision made in the first phase and serves as a lifeline to resolve disputes when they surface. Experiments demonstrate that the unique design of Reticulum empowers high transaction throughput and robustness in the face of different types of attacks in the network, making it superior to existing sharding protocols for blockchain networks.
Last updated:  2024-02-22
Single Pass Client-Preprocessing Private Information Retrieval
Arthur Lazzaretti and Charalampos Papamanthou
Recently, many works have considered Private Information Retrieval (PIR) with client-preprocessing: In this model a client and a server jointly run a preprocessing phase, after which client queries can run in time sublinear in the size of the database. In addition, such approaches store no additional bits per client at the server, allowing us to scale PIR to a large number of clients. In this work, we propose the first client-preprocessing PIR scheme with ``single pass'' client-preprocessing. In particular, our scheme is concretely optimal with respect to preprocessing, in the sense that it requires exactly one linear pass over the database. This is in stark contrast with existing works, whose preprocessing is proportional to $\lambda \cdot N$, where $\lambda$ is the security parameter (e.g., $\lambda=128$). Our approach yields a preprocessing speedup of 45-100$\times$ and a query speedup of up to 20$\times$ when compared to previous state-of-the-art schemes (e.g., Checklist, USENIX 2021), making preprocessing PIR more attractive for a myriad of use cases that are ``session-based''. In addition to fast preprocessing, our scheme features extremely fast updates (additions and edits)---in constant time. Previously, the best known approach for handling updates in client-preprocessing PIR had time complexity $O(\log N)$, while also adding a $\log N$ factor to the bandwidth. We implement our update algorithm and show concrete speedups of about 20$\times$ in update time when compared to the previous state-of-the-art updatable scheme (e.g., Checklist, USENIX 2021).
Last updated:  2024-02-22
Pseudorandom unitaries with non-adaptive security
Tony Metger, Alexander Poremba, Makrand Sinha, and Henry Yuen
Pseudorandom unitaries (PRUs) are ensembles of efficiently implementable unitary operators that cannot be distinguished from Haar random unitaries by any quantum polynomial-time algorithm with query access to the unitary. We present a simple PRU construction that is a concatenation of a random Clifford unitary, a pseudorandom binary phase operator, and a pseudorandom permutation operator. We prove that this PRU construction is secure against non-adaptive distinguishers assuming the existence of quantum-secure one-way functions. This means that no efficient quantum query algorithm that is allowed a single application of $U^{\otimes \mathrm{poly}(n)}$ can distinguish whether an $n$-qubit unitary $U$ was drawn from the Haar measure or our PRU ensemble. We conjecture that our PRU construction remains secure against adaptive distinguishers, i.e., secure against distinguishers that can query the unitary polynomially many times in sequence, not just in parallel.
Last updated:  2024-02-22
BaseFold: Efficient Field-Agnostic Polynomial Commitment Schemes from Foldable Codes
Hadas Zeilberger, Binyi Chen, and Ben Fisch
This work introduces Basefold, a new $\textit{field-agnostic}$ Polynomial Commitment Scheme (PCS) for multilinear polynomials that has $O(\log^{2}(n))$ verifier costs and $O(n \log n)$ prover time. An important application of a multilinear PCS is constructing Succinct Non-interactive Arguments (SNARKs) from multilinear polynomial interactive oracle proofs (PIOPs). Furthermore, field-agnosticism is a major boon to SNARK efficiency in applications that require (or benefit from) a certain field choice. Our inspiration for Basefold is the Fast Reed-Solomon Interactive-Oracle Proof of Proximity (FRI IOPP), which leverages two properties of Reed-Solomon (RS) codes defined over ``FFT-friendly'' fields: $O(n \log n)$ encoding time, and a second property that we call foldability. We first introduce a generalization of the FRI IOPP that works over any foldable linear code in linear time. Second, we construct a new family of linear codes which we call $\textit{random foldable codes}$, which are a special type of punctured Reed-Muller codes, and prove tight bounds on their minimum distance. Unlike RS codes, our new codes are foldable and have $O(n \log n)$ encoding time over $\textit{any}$ sufficiently large field. Finally, we construct a new multilinear PCS by carefully interleaving our IOPP with the classical sumcheck protocol, which also gives a new multilinear PCS from FRI. Basefold is 2-3 times faster than prior multilinear PCS constructions from FRI when defined over the same finite field. More significantly, using Hyperplonk (Eurocrypt 2022) as a multilinear PIOP backend for an apples-to-apples comparison, we show that Basefold results in a SNARK that has better concrete efficiency across a range of field choices than with any prior multilinear PCS in the literature. Hyperplonk with Basefold has a proof size that is more than $10$ times smaller than Hyperplonk with Brakedown, and its verifier is over $30$ times faster for circuits with more than $2^{20}$ gates. Compared to FRI, Hyperplonk with Basefold retains efficiency over any sufficiently large field. For illustration, with Basefold we can prove ECDSA signature verification over the secp256k1 curve more than $20$ times faster than Hyperplonk with FRI, and the verifier is also twice as fast. Proofs of signature verification have many useful applications, including offloading blockchain transactions and enabling anonymous credentials over the web.
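The foldability property that Basefold generalizes can be illustrated on Reed-Solomon codewords; the toy sketch below (ours, with an illustrative field $\mathbb{F}_{97}$ and domain size 8, not the paper's random foldable codes) performs one FRI-style folding round and checks that the result is again a codeword of half the length:

# Toy illustration of Reed-Solomon "foldability" (one FRI-style round).
p = 97                                        # small prime with 2^5 | p - 1

def root_of_unity(n):                         # element of exact order n (n a power of 2)
    return next(w for w in range(2, p)
                if pow(w, n, p) == 1 and pow(w, n // 2, p) != 1)

def evaluate(cs, x):                          # Horner evaluation mod p
    acc = 0
    for c in reversed(cs):
        acc = (acc * x + c) % p
    return acc

n = 8
w = root_of_unity(n)
domain = [pow(w, i, p) for i in range(n)]     # note -w^i = w^(i + n/2)
coeffs = [3, 1, 4, 1, 5, 9, 2, 6]             # a polynomial of degree < 8
f = [evaluate(coeffs, x) for x in domain]     # the RS codeword

# Fold with verifier randomness r: f'(x^2) = f_even(x^2) + r * f_odd(x^2),
# recovered from the pair (f(x), f(-x)).
r, inv2 = 7, pow(2, -1, p)
folded = []
for i in range(n // 2):
    x, fx, fmx = domain[i], f[i], f[i + n // 2]
    even = (fx + fmx) * inv2 % p
    odd = (fx - fmx) * inv2 * pow(x, -1, p) % p
    folded.append((even + r * odd) % p)

# The result is the codeword of f_even + r*f_odd (half the degree)
# over the squared, half-size domain.
half = [(coeffs[2 * i] + r * coeffs[2 * i + 1]) % p for i in range(n // 2)]
assert folded == [evaluate(half, pow(x, 2, p)) for x in domain[:n // 2]]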
Last updated:  2024-02-22
Symmetric and Dual PRFs from Standard Assumptions: A Generic Validation of a Prevailing Assumption
Mihir Bellare and Anna Lysyanskaya
A two-input function is a dual PRF if it is a PRF when keyed by either of its inputs. Dual PRFs are assumed in the design and analysis of numerous primitives and protocols including HMAC, AMAC, TLS 1.3 and MLS. But, not only do we not know whether particular functions on which the assumption is made really are dual PRFs; we do not know if dual PRFs even exist. What if the goal is impossible? This paper addresses this with a foundational treatment of dual PRFs, giving constructions based on standard assumptions. This provides what we call a generic validation of the dual PRF assumption. Our approach is to introduce and construct symmetric PRFs, which imply dual PRFs and may be of independent interest. We give a general construction of a symmetric PRF based on a function having a weak form of collision resistance coupled with a leakage hardcore function, a strengthening of the usual notion of hardcore functions we introduce. We instantiate this general construction in two ways to obtain two specific symmetric and dual PRFs, the first assuming any collision-resistant hash function, and the second assuming any one-way permutation. A construction based on any one-way function evades us and is left as an intriguing open problem.
Last updated:  2024-02-22
Recommendations for the Design and Validation of a Physical True Random Number Generator Integrated in an Electronic Device
David Lubicz and Viktor Fischer
These Recommendations describe essential elements of the design of a secure physical true random number generator (PTRNG) integrated in an electronic device. Based on these elements, we describe and justify requirements for the design, validation and testing of PTRNGs, which are intended to guarantee the security of generators aimed at cryptographic applications.
Last updated:  2024-02-22
Security of Symmetric Ratchets and Key Chains - Implications for Protocols like TLS 1.3, Signal, and PQ3
John Preuß Mattsson
Symmetric ratchets and one-way key chains play a vital role in numerous important security protocols such as TLS 1.3, DTLS 1.3, QUIC, Signal, MLS, EDHOC, OSCORE, and Apple PQ3. Despite the crucial role they play, very little is known about their security properties. This paper categorizes and examines different ratchet constructions, offering a comprehensive overview of their security. Our analysis reveals notable distinctions between different types of one-way key chains. Notably, the type of ratchet used by TLS 1.3, Signal, and PQ3 exhibits a significant number of weak keys, an unexpectedly high rate of key collisions surpassing birthday attack expectations, and a predictable shrinking key space susceptible to novel Time-Memory Trade-Off (TMTO) attacks with complexity $\approx N^{1/4}$. Consequently, the security level provided by, e.g., TLS 1.3 is significantly lower than anticipated. To address these concerns, we analyze the aforementioned protocols and provide numerous concrete recommendations for enhancing their security, as well as guidance for future security protocol design.
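For readers unfamiliar with the object under analysis, a one-way key chain is simply an iterated one-way function; the toy sketch below (ours, with SHA-256 standing in for the protocols' KDFs) builds such a chain and then measures, on a scaled-down 16-bit version, the key-space shrinkage that drives the TMTO attacks described above:

# A symmetric ratchet / one-way key chain, plus a toy shrinkage measurement.
import hashlib, secrets

def ratchet(key: bytes) -> bytes:
    return hashlib.sha256(key).digest()       # k_{i+1} = H(k_i)

k0 = secrets.token_bytes(32)
chain = [k0]
for _ in range(4):
    chain.append(ratchet(chain[-1]))          # forward-only: no way back to k0

# Iterating a random-looking function shrinks the reachable key space:
# on a scaled-down 16-bit chain, one step already loses about 1/e of all keys.
def tiny(x: int) -> int:
    return int.from_bytes(hashlib.sha256(x.to_bytes(2, "big")).digest()[:2], "big")

image = {tiny(x) for x in range(2 ** 16)}
print(f"{len(image)}/{2 ** 16} keys reachable after one step")  # about 63%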
Last updated:  2024-02-22
Fork-Resilient Continuous Group Key Agreement
Joël Alwen, Marta Mularczyk, and Yiannis Tselekounis
Continuous Group Key Agreement (CGKA) lets an evolving group of clients agree on a sequence of group keys. An important application of CGKA is scalable asynchronous end-to-end (E2E) encrypted group messaging. A major problem preventing the use of CGKA over unreliable infrastructure is the occurrence of so-called forks. A fork occurs when group members have diverging views of the group's history (and thus its current state), e.g., due to network or server failures. Once communication channels are restored, members resolve a fork by agreeing on the state of the group again. Today's CGKA protocols make fork resolution challenging, as natural resolution strategies seem to conflict with the way the protocols enforce group state agreement and forward secrecy. Meanwhile, secure group messaging protocols which do support fork resolution do not scale nearly as well as CGKA does. In this work, we pave the way to practical scalable E2E messaging over unreliable infrastructure. To that end, we generalize CGKA to Fork-Resilient CGKA (FR-CGKA), which allows clients to process significantly more types of out-of-order network traffic. This is important for many natural fork resolution procedures as they are based, in part, on replaying missed traffic. Next, we give two FR-CGKA constructions: a practical one based on the CGKA underlying the MLS messaging standard, and an optimally secure one (albeit with only theoretical efficiency). To further assist with fork resolution, we introduce a simple new abstraction to describe a client's local protocol state. The abstraction describes all and only the information relevant to natural fork resolution, making it easier for higher-level fork resolution procedures to work with and reason about. We define a black-box extension of an FR-CGKA which maintains such a description of a client's internal state. Finally, as a proof of concept, we give a basic fork resolution protocol.
Last updated:  2024-02-22
Diving Deep into the Preimage Security of AES-like Hashing
Shiyao Chen, Jian Guo, Eik List, Danping Shi, and Tianyu Zhang
Since the seminal works by Sasaki and Aoki, Meet-in-the-Middle (MITM) attacks have been recognized as an effective technique for preimage and collision attacks on hash functions. At Eurocrypt 2021, Bao et al. automated MITM attacks on AES-like hashing and improved upon the best manual results. The attack framework has been refined by subsequent works, yet it remains far from complete. This paper presents three key contributions dedicated to further generalizing the idea of MITM and refining the automatic model for AES-like hashing. (1) We introduce S-box linearization to MITM pseudo-preimage attacks on AES-like hashing. The technique pairs well with superposition states, preserving information after the S-box at an affordable cost. (2) We propose distributed initial structures, an extension of the original concept of initial states, which selects initial degrees of freedom in a more versatile manner to enlarge the search space. (3) We exploit the structural similarities between encryption and key schedule in constructions (e.g., Whirlpool and Streebog) to model propagations more accurately and avoid repeated costs. Armed with these techniques, we further empower the MITM framework and improve the attack results on AES-like designs for preimages and collisions. We obtain the first preimage attacks on 10-round AES-192, 10-round Rijndael-192/256, and 7.75-round Whirlpool, reduced time and/or memory complexities for preimage attacks on 5- and 6-round Whirlpool and 7.5- and 8.5-round Streebog, as well as improved collision attacks on 6- and 6.5-round Whirlpool.
Last updated:  2024-02-22
The Multi-user Constrained PRF Security of Generalized GGM Trees for MPC and Hierarchical Wallets
Chun Guo, Xiao Wang, Xiang Xie, and Yu Yu
Multi-user (mu) security considers large-scale attackers that, given access to a number of cryptosystem instances, attempt to compromise at least one of them. We initiate the study of the mu security of the so-called GGM tree that stems from the PRG-to-PRF transformation of Goldreich, Goldwasser, and Micali, with the goal of providing guidance for its recently popularized use in applied cryptography. We propose a generalized model for GGM trees and analyze its mu prefix-constrained PRF security in the random oracle model. Our model allows us to derive concrete bounds and improvements for various protocols, and we showcase this on the Bitcoin-Improvement-Proposal standard Bip32 for hierarchical wallets and on function secret sharing (FSS) protocols. In both scenarios, we propose improvements with better performance and concrete security bounds at the same time. Compared with state-of-the-art designs, our SHACAL3- and Keccak-$p$-based Bip32 variants reduce the communication cost of MPC-based implementations by 73.3%-93.8%, while our AES-based FSS substantially improves mu security while reducing computation by 50%.
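For reference, the classical object being generalized is sketched below (ours; SHA-256 models the length-doubling PRG), including the prefix-constrained keys that the mu analysis concerns:

# The GGM PRG-to-PRF tree and a prefix-constrained key (illustrative;
# SHA-256 models the length-doubling PRG G(s) = (G_0(s), G_1(s))).
import hashlib

def prg(seed: bytes) -> tuple[bytes, bytes]:
    return (hashlib.sha256(b"\x00" + seed).digest(),
            hashlib.sha256(b"\x01" + seed).digest())

def ggm_prf(key: bytes, x: int, nbits: int) -> bytes:
    node = key
    for i in reversed(range(nbits)):          # descend the tree per input bit
        node = prg(node)[(x >> i) & 1]
    return node

key = bytes(32)
# The internal node at path (1, 0) is a constrained key for prefix 10:
# it evaluates every input starting with 10 and reveals nothing else.
constrained = prg(prg(key)[1])[0]
assert ggm_prf(key, 0b1011, 4) == ggm_prf(constrained, 0b11, 2)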
Last updated:  2024-02-21
Provable Dual Attacks on Learning with Errors
Amaury Pouly and Yixin Shen
Learning with Errors (LWE) is an important problem for post-quantum cryptography (PQC); it underlies the security of several algorithms selected in the NIST PQC process. Several recent papers have claimed improvements on the complexity of so-called dual attacks on LWE. These improvements make dual attacks comparable to or even better than primal attacks in certain parameter regimes. Unfortunately, those improvements rely on a number of untested and hard-to-test statistical assumptions. Furthermore, a recent paper claims that the whole premise of those improvements might be incorrect. The goal of this paper is to improve the situation by proving the correctness of a dual attack without relying on any statistical assumption. Although our attack is greatly simplified compared to the recent ones, it shares many important technical elements with those attacks and can serve as a basis for the analysis of more advanced attacks. We provide some rough estimates on the complexity of our simplified attack on Kyber using a Markov chain Monte Carlo discrete Gaussian sampler. Our main contribution is to clearly identify a set of parameters under which our attack (and presumably other recent dual attacks) can work. Furthermore, our analysis completely departs from the existing statistics-based analyses and is instead rooted in geometry. We also compare the regime in which our algorithm works to the ``contradictory regime'' of [Ducas and Pulles, 2023]. We observe that those two regimes are essentially complementary. Finally, we give a quantum version of our algorithm to speed up the computation. The algorithm is inspired by [Albrecht and Shen, 2022] but is completely formal and does not rely on any heuristics.
Last updated:  2024-02-21
On Optimal Tightness for Key Exchange with Full Forward Secrecy via Key Confirmation
Kai Gellert, Kristian Gjøsteen, Håkon Jacobsen, and Tibor Jager
A standard paradigm for building key exchange protocols with full forward secrecy (and explicit authentication) is to add key confirmation messages to an underlying protocol having only weak forward secrecy (and implicit authentication). Somewhat surprisingly, we show through an impossibility result that this simple trick must nevertheless incur a linear tightness loss in the number of parties for many natural protocols. This includes Krawczyk's HMQV protocol (CRYPTO 2005) and the protocol of Cohn-Gordon et al. (CRYPTO 2019). Cohn-Gordon et al. gave a very efficient underlying protocol with weak forward secrecy having a linear security loss, and showed that this is optimal for certain reductions. However, they also claimed that full forward secrecy could be achieved by adding key confirmation messages, and without any additional loss. Our impossibility result disproves this claim, showing that their approach, in fact, has an overall quadratic loss. Motivated by this predicament we seek to restore the original linear loss claim of Cohn-Gordon et al. by using a different proof strategy. Specifically, we start by lowering the goal for the underlying protocol with weak forward secrecy, to a selective security notion where the adversary must commit to a long-term key it cannot reveal. This allows a tight reduction rather than a linear loss reduction. Next, we show that the protocol can be upgraded to full forward secrecy using key confirmation messages with a linear tightness loss, even when starting from the weaker selective security notion. Thus, our approach yields an overall tightness loss for the fully forward-secret protocol that is only linear, as originally claimed. Finally, we confirm that the underlying protocol of Cohn-Gordon et al. can indeed be proven selectively secure, tightly.
Last updated:  2024-02-21
Divide and Surrender: Exploiting Variable Division Instruction Timing in HQC Key Recovery Attacks
Robin Leander Schröder, Stefan Gast, and Qian Guo
We uncover a critical side-channel vulnerability in the Hamming Quasi-Cyclic (HQC) round-4 optimized implementation, arising from the use of the modulo operator. In some cases, compilers optimize uses of the modulo operator with compile-time-known divisors into constant-time Barrett reductions. However, this optimization is not guaranteed: for example, when a modulo operation is used in a loop, the compiler may emit division (div) instructions, which have variable execution time depending on the numerator. When the numerator depends on secret data, this may yield a timing side-channel. We name vulnerabilities of this kind Divide and Surrender (DaS) vulnerabilities. For processors supporting Simultaneous Multithreading (SMT), we propose a new approach called DIV-SMT which enables precisely measuring small division timing variations using scheduler and/or execution unit contention. We show that using only 100 such side-channel traces we can build a Plaintext-Checking (PC) oracle with above 90% accuracy. Our approach may also prove applicable to other instances of the DaS vulnerability, such as KyberSlash. We stress that exploitation with DIV-SMT requires co-location of the attacker on the same physical core as the victim. We then apply our methodology to HQC and present a novel way to recover HQC secret keys faster, achieving an 8-fold decrease in the number of idealized oracle queries compared to previous approaches. Our new PC-oracle attack uses our newly developed Zero Tester method to quickly determine whether an entire block of bits contains only zero-bits. The Zero Tester method enables the DIV-SMT-powered attack on HQC-128 to complete in under 2 minutes on our targeted AMD Zen2 machine.
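For context, the constant-time optimization whose absence causes the leak is Barrett reduction; the sketch below (ours, with an arbitrary modulus, not the HQC code itself) shows the multiply-and-shift identity a compiler can substitute for x % m when m is known at compile time:

# Barrett reduction: the mul-and-shift replacement for `x % m` that
# compilers sometimes emit for a compile-time-known m. (Illustrative
# sketch; real constant-time code uses a fixed number of conditional
# subtractions instead of the loop below.)
def barrett_reduce(x: int, m: int, k: int = 32) -> int:
    mu = (1 << k) // m          # precomputed at "compile time"
    q = (x * mu) >> k           # q <= x // m, and close to it for x < 2^k
    r = x - q * m               # non-negative, off by a small multiple of m
    while r >= m:
        r -= m
    return r

m = 12289                       # an arbitrary odd modulus
assert all(barrett_reduce(x, m) == x % m for x in range(0, 1 << 20, 997))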
Last updated:  2024-02-21
New Models for the Cryptanalysis of ASCON
Mathieu Degré, Patrick Derbez, Lucie Lahaye, and André Schrottenloher
This paper focuses on the cryptanalysis of the ASCON family using automatic tools. We analyze two different problems with the goal of obtaining new models, both simpler and less computationally heavy than those of previous works (all our models require only a small amount of code and run on regular desktop computers). The first problem is the search for Meet-in-the-Middle attacks on reduced-round ASCON-Hash. Starting from the MILP modeling of Qin et al. (EUROCRYPT 2023 & ePrint 2023), we rephrase the problem in SAT, which significantly accelerates the solving time and removes the need for the ``weak diffusion structure'' heuristic. This allows us to reduce the memory complexity of Qin et al.'s attacks and to prove some optimality results. The second problem is the search for lower bounds on the probability of differential characteristics for the ASCON permutation. We introduce a lossy MILP encoding of the propagation rules based on the Hamming weight, in order to quickly find lower bounds comparable to the state of the art. We find a small improvement over the existing bound on 7 rounds.
Last updated:  2024-02-21
MetaDORAM: Info-Theoretic Distributed ORAM with Less Communication
Brett Hemenway Falk, Daniel Noble, and Rafail Ostrovsky
This paper presents a Distributed Oblivious RAM (DORAM) protocol, MetaDORAM, that is information-theoretically secure and has lower communication cost than all previous info-theoretically secure DORAM protocols for small block sizes. Specifically, given a memory of $n$ locations, each of size $d$ bits, MetaDORAM requires only $O( (d+\log^2(n)) \log(n)/\log(\log(n)) )$ bits of communication per query. When $d = \Theta(\log^2(n))$, this is a $\Theta(\log(n)/\log \log(n))$ \emph{overhead}, compared to the cost of reading one memory location directly. By comparison, the only existing statistically secure DORAM with sub-logarithmic overhead has communication cost $O( \log_a(n) d + a \omega(1) \log^2(n) \log_a(n))$ (Abraham et al. PKC '17), where $\omega(1)$ is any super-constant function in $n$ and $a \geq 2$ is a free parameter. MetaDORAM obtains sub-logarithmic communication overhead for smaller block sizes than previously achieved (any $d = \omega(\log^2(n)/\log(\log(n)))$) while providing statistical security, i.e., no computational assumptions. We circumvent the Goldreich-Ostrovsky lower bound by allowing servers to perform poly(log(n)) work, but without computational assumptions. By a standard transformation, our protocol also implies a 3-server active ORAM, Meta3ORAM, with information-theoretic security and $O( (d+\log^2(n)) \log(n)/\log(\log(n)) )$ communication per query. For small $d$, this is lower than all previous statistically-secure multi-server ORAMs. MetaDORAM and Meta3ORAM also have low communication costs relative to DORAM and multi-server ORAM protocols which make use of computational assumptions. Even compared to several recent works that make use of $O(n)$ computation, our protocols have lower communication cost. Our protocols are secure in the semi-honest honest-majority setting. We also show that perfectly secure DORAM/multi-server ORAM with the same efficiency can be obtained using a computationally-expensive once-off setup phase.
Last updated:  2024-02-21
Fiat-Shamir for Proofs Lacks a Proof Even in the Presence of Shared Entanglement
Frédéric Dupuis, Philippe Lamontagne, and Louis Salvail
We explore the cryptographic power of arbitrary shared physical resources. The most general such resource is access to a fresh entangled quantum state at the outset of each protocol execution. We call this the Common Reference Quantum State (CRQS) model, in analogy to the well-known Common Reference String (CRS). The CRQS model is a natural generalization of the CRS model but appears to be more powerful: in the two-party setting, a CRQS can sometimes exhibit properties associated with a Random Oracle queried once, by measuring a maximally entangled state in one of many mutually unbiased bases. We formalize this notion as a Weak One-Time Random Oracle (WOTRO), where we only require the $m$-bit output to have some randomness when conditioned on the $n$-bit input. We show that when $n-m\in\omega(\lg n)$, any protocol for WOTRO in the CRQS model can be attacked by an (inefficient) adversary. Moreover, our adversary is efficiently simulatable, which rules out the possibility of proving the computational security of a scheme by a fully-black-box reduction to a cryptographic game assumption. On the other hand, we introduce a non-game quantum assumption for hash functions that implies WOTRO in the CRQ\$ model (where the CRQS consists only of EPR pairs). We first build a statistically secure WOTRO protocol where $m=n$, then hash the output. The impossibility of WOTRO has the following consequences. First, we show the fully-black-box impossibility of a quantum Fiat-Shamir transform, extending the impossibility result of Bitansky et al. (TCC '13) to the CRQS model. Second, we show a black-box impossibility result for a strengthened version of quantum lightning (Zhandry, Eurocrypt '19) where quantum bolts have an additional parameter that cannot be changed without generating new bolts. Our results also apply to $2$-message protocols in the plain model.
Last updated:  2024-02-21
Accelerating Training and Enhancing Security Through Message Size Optimization in Symmetric Cryptography
ABHISAR, Madhav Yadav, and Girish Mishra
This research extends Abadi and Andersen's exploration of neural networks using secret keys for information protection in multiagent systems. Focusing on enhancing confidentiality properties, we employ end-to-end adversarial training with the neural networks Alice, Bob, and Eve. Unlike prior work limited to 64-bit messages, our study spans message sizes from 4 to 1024 bits, with varying batch sizes and training steps. One innovative aspect involves training the model Bob to approach a minimal error value close to zero and examining the effect this has on the feasibility of the model. This research unveils the neural networks' adaptability and scalability in encryption and decryption across diverse scenarios, offering valuable insights into their optimization potential for secure communication.
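A compact sketch of this adversarial setup (ours, in PyTorch, with illustrative sizes, architectures and hyperparameters rather than the paper's) is:

# Alice/Bob/Eve adversarial training (illustrative sketch; toy sizes).
import torch
import torch.nn as nn

N = 16                                          # message/key length in bits

def net(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 2 * N), nn.Tanh(),
                         nn.Linear(2 * N, out_dim), nn.Tanh())

alice = net(2 * N, N)                           # (message, key) -> ciphertext
bob = net(2 * N, N)                             # (ciphertext, key) -> message
eve = net(N, N)                                 # ciphertext -> message guess

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_e = torch.optim.Adam(eve.parameters(), lr=1e-3)

for step in range(500):
    msg = torch.randint(0, 2, (64, N)).float() * 2 - 1    # bits in {-1, +1}
    key = torch.randint(0, 2, (64, N)).float() * 2 - 1
    cipher = alice(torch.cat([msg, key], 1))

    # Eve minimizes her reconstruction error from the ciphertext alone.
    eve_err = (eve(cipher.detach()) - msg).abs().mean()
    opt_e.zero_grad(); eve_err.backward(); opt_e.step()

    # Alice and Bob minimize Bob's error while pushing Eve's error
    # toward that of an uninformative guess (per-bit L1 error of 1).
    bob_err = (bob(torch.cat([cipher, key], 1)) - msg).abs().mean()
    loss = bob_err + (1.0 - (eve(cipher) - msg).abs().mean()) ** 2
    opt_ab.zero_grad(); loss.backward(); opt_ab.step()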
Last updated:  2024-02-21
Attacking ECDSA with Nonce Leakage by Lattice Sieving: Bridging the Gap with Fourier Analysis-based Attacks
Yiming Gao, Jinghui Wang, Honggang Hu, and Binang He
The Hidden Number Problem (HNP) has found extensive applications in side-channel attacks against cryptographic schemes, such as ECDSA and Diffie-Hellman. There are two primary algorithmic approaches to solving the HNP: lattice-based attacks and Fourier analysis-based attacks. Lattice-based attacks exhibit better efficiency and require fewer samples when sufficiently long substrings of the nonces are known. However, they face significant challenges when only a small fraction of the nonce is leaked, such as 1-bit leakage, and their performance degrades in the presence of errors. In this paper, we address an open question by introducing an algorithmic tradeoff that significantly bridges the gap between these two approaches. By introducing a parameter $x$ into Albrecht and Heninger's lattice, we reduce the lattice dimension by approximately $(\log_2 x)/l$, where $l$ represents the number of leaked bits. We present a series of new methods, including the interval reduction algorithm, several predicates, and the pre-screening technique. Furthermore, we extend our algorithms to solve the HNP with erroneous input. Our attack outperforms existing state-of-the-art lattice-based attacks against ECDSA. We break several records, including 1-bit and less-than-1-bit leakage on a 160-bit curve, whereas the best previous lattice-based attack for 1-bit leakage was conducted only on a 112-bit curve.
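For orientation, lattice-based HNP attacks of this kind (including the Albrecht-Heninger construction that the parameter $x$ modifies) reduce a lattice of roughly the following textbook shape, which we reproduce here as background rather than as the paper's exact construction: given $m$ samples $k_i \equiv t_i + a_i \alpha \pmod{q}$ with secret key $\alpha$ and nonces bounded by $K$, one reduces the lattice generated by the rows of
\[
\begin{pmatrix}
q & & & & \\
& \ddots & & & \\
& & q & & \\
a_1 & \cdots & a_m & K/q & \\
t_1 & \cdots & t_m & & K
\end{pmatrix},
\]
in which $(k_1, \ldots, k_m, K\alpha/q, K)$ is an unusually short vector revealing $\alpha$.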
Last updated:  2024-02-21
An Efficient Hash Function for Imaginary Class Groups
Kostas Kryptos Chalkias, Jonas Lindstrøm, and Arnab Roy
This paper presents a new efficient hash function for imaginary class groups. Many class-group-based protocols, such as verifiable delay functions, timed commitments and accumulators, rely on the existence of an efficient and secure hash function, but there are not many concrete constructions available in the literature, and existing constructions are too inefficient for practical use cases. Our novel approach, building on Wesolowski's initial scheme, achieves a 500-fold increase in computation speed, making it exceptionally practical for real-world applications. This optimisation comes at the cost of a smaller image of the hash function, but we show that the image is still sufficiently large for the hash function to be secure. Additionally, our construction parallelizes almost linearly, which significantly enhances its computational efficiency on multi-processor systems, making it highly suitable for modern computing environments.
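The baseline approach that such constructions refine hashes a message to a prime form of the class group's discriminant $D$; a minimal sketch (ours, using sympy and a toy discriminant, not the paper's optimized scheme):

# Hash to a prime form (a, b, c) of discriminant D < 0 (toy sketch).
import hashlib
from sympy import nextprime
from sympy.ntheory import jacobi_symbol, sqrt_mod

D = -103                                   # toy fundamental discriminant, D = 1 mod 4

def hash_to_form(msg: bytes):
    seed = int.from_bytes(hashlib.sha256(msg).digest()[:8], "big")
    a = nextprime(seed)
    while jacobi_symbol(D, a) != 1:        # need D to be a square mod a
        a = nextprime(a)
    b = sqrt_mod(D % a, a)                 # b^2 = D (mod a)
    if b % 2 != D % 2:                     # fix parity so that b^2 = D (mod 4a)
        b = a - b
    c = (b * b - D) // (4 * a)
    return (a, b, c)                       # form of discriminant b^2 - 4ac = D

a, b, c = hash_to_form(b"hello")
assert b * b - 4 * a * c == D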
Last updated:  2024-02-21
Multiplex: TBC-based Authenticated Encryption with Sponge-Like Rate
Thomas Peters, Yaobin Shen, and François-Xavier Standaert
Authenticated Encryption (AE) modes of operation based on Tweakable Block Ciphers (TBC) usually measure efficiency in the number of calls to the underlying primitive per message block. On the one hand, many existing solutions reach a primitive-rate of 1, meaning that each n-bit block of message asymptotically needs a single call to the TBC with output length n. On the other hand, while these modes look optimal in a blackbox setting, they become less attractive when leakage comes into play, since all these calls must then be equally well protected to maintain security. Leakage-resistant modes improve this situation, by generating ephemeral keys every constant number of calls. However, rekeying is inherently suboptimal in primitive-rate, since a TBC call can only be used either to refresh a key or to encrypt a block. Even worse, existing solutions achieving almost n bits of security for n-bit secret keys have at most a primitive-rate 2/3. Hence the question: Can we design a highly-secure TBC-based rekeying mode with ``nearly optimal'' primitive-rate? We answer this question positively with Multiplex, a new mode that has primitive-rate d/(d+1) given a TBC with a dn-bit tweak. Multiplex achieves $n-\log_2(dn)$ bits of security for both (i) misuse-resilience CCA security in the blackbox setting and (ii) Ciphertext Integrity with Misuse-resistant and unbounded Leakage in encryption and decryption (CIML2). It also provides (iii) confidentiality with leakage up to the birthday bound. Furthermore, Multiplex can run d+1 calls in parallel in each iteration. The combination of these features gives a mode of operation that inherits most of the good implementation features and flexibility of a Duplex sponge -- therefore paving the way towards sound comparisons between TBC-based and permutation-based AE.
Last updated:  2024-02-21
Reducing the Number of Qubits in Quantum Factoring
Clémence Chevignard, Pierre-Alain Fouque, and André Schrottenloher
This paper focuses on the optimization of the number of logical qubits in Shor's quantum factoring algorithm. As in previous works, we target the implementation of the modular exponentiation, which is the most costly component of the algorithm, both in qubits and operations. In this paper, we show that using only $o(n)$ work qubits, one can obtain the first bit of the modular exponentiation output. We combine this result with May and Schlieper's truncation technique (ToSC 2022) and the Ekerå-Håstad variant of Shor's algorithm (PQCrypto 2017) to obtain a quantum factoring algorithm requiring only $n/2 + o(n)$ qubits in the case of an $n$-bit RSA modulus, while currently envisioned implementations require about $2n$ qubits. Our algorithm uses a Residue Number System and succeeds with a parametrizable probability. Since the underlying arithmetic is completely classical, we have implemented and tested it. Among possible trade-offs, we can reach a gate count $\mathcal{O}(n^3)$ for a depth $\mathcal{O}(n^2 \log^3 n)$, which then has to be multiplied by $\mathcal{O}(\log n)$ (the number of measurement results required by Ekerå-Håstad). Preliminary logical resource estimates suggest that this circuit could be engineered to use less than 1700 qubits and $2^{36}$ Toffoli gates, and would require 60 independent runs to factor an RSA-2048 instance.
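The residue number system at the heart of the arithmetic can be illustrated in a few lines (ours, with toy moduli): numbers are represented by their residues modulo small pairwise-coprime moduli, multiplication acts component-wise, and the Chinese Remainder Theorem reconstructs the result:

# Toy residue number system (RNS): component-wise arithmetic, CRT to go back.
from math import prod

moduli = [3, 5, 7, 11]                     # pairwise coprime; M = 1155
M = prod(moduli)

def to_rns(x):
    return [x % m for m in moduli]

def rns_mul(u, v):
    return [(a * b) % m for a, b, m in zip(u, v, moduli)]

def from_rns(res):                         # Chinese Remainder Theorem
    return sum(r * (M // m) * pow(M // m, -1, m) for r, m in zip(res, moduli)) % M

x, y = 123, 456
assert from_rns(rns_mul(to_rns(x), to_rns(y))) == (x * y) % M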
Last updated:  2024-02-21
On the (In)Security of the BUFF Transform
Jelle Don, Serge Fehr, Yu-Hsuan Huang, and Patrick Struck
The BUFF transform is a generic transformation for digital signature schemes, with the purpose of obtaining additional security properties beyond standard unforgeability, e.g., exclusive ownership and non-resignability. In the call for additional post-quantum signatures, these were explicitly mentioned by NIST as ``additional desirable security properties'', and some of the submissions indeed refer to the BUFF transform with the purpose of achieving them, while some other submissions follow the design of the BUFF transform without mentioning it explicitly. In this work, we show the following negative results regarding the non-resignability property in general, and the BUFF transform in particular. In the plain model, we observe by means of a simple attack that any signature scheme for which the message has high entropy given the signature does not satisfy the non-resignability property (while non-resignability is trivially not satisfied if the message can be efficiently computed from its signature). Given that the BUFF transform has high entropy in the message given the signature, it follows that the BUFF transform does not achieve non-resignability whenever the random oracle is instantiated with a hash function, no matter which hash function is used. When considering the random oracle model (ROM), the matter becomes slightly more delicate, since prior works did not rigorously define the non-resignability property in the ROM. For the natural extension of the definition to the ROM, we observe that our impossibility result still holds, despite there having been positive claims about the non-resignability of the BUFF transform in the ROM. Indeed, prior claims of the non-resignability of the BUFF transform rely on faulty argumentation. On the positive side, we prove that a salted version of the BUFF transform satisfies a slightly weaker variant of non-resignability in the ROM, covering both classical and quantum attacks, if the entropy requirement in the (weakened) definition of non-resignability is statistical; for the computational variant, we show yet another negative result.
Last updated:  2024-02-21
Registered Attribute-Based Signature
Yijian Zhang, Jun Zhao, Ziqi Zhu, Junqing Gong, and Jie Chen
This paper introduces the notion of registered attribute-based signature (registered ABS). Distinctly different from classical attribute-based signature (ABS), registered ABS allows any user to generate their own public/secret key pair and register it with the system. The key curator, a fully transparent entity that retains no secrets, is critical to keeping the system running. Our results can be summarized as follows. - This paper provides the first definition of registered ABS. - This paper presents the first generic fully secure registered ABS over a prime-order group from the $k$-Lin assumption in the standard model, supporting various classes of predicates. - This paper gives the first concrete registered ABS scheme for arithmetic branching programs (ABP), achieving full security in the standard model. Technically, our registered ABS is inspired by the blueprint of Okamoto and Takashima [PKC'11]. We convert the prime-order registered attribute-based encryption (registered ABE) scheme of Zhu et al. [ASIACRYPT'23] via predicate encoding to registered ABS by employing the technique of re-randomization with specialized delegation, while we employ a different dual-system method that accounts for the registration property. Prior to our work, the key-escrow issue was addressed by Okamoto and Takashima [PKC'13], but their work considered a weak adversary in the random oracle model.
Last updated:  2024-02-21
IDEA-DAC: Integrity-Driven Editing for Accountable Decentralized Anonymous Credentials via ZK-JSON
Shuhao Zheng, Zonglun Li, Junliang Luo, Ziyue Xin, and Xue Liu
Decentralized Anonymous Credential (DAC) systems are increasingly relevant, especially when enhancing revocation mechanisms in the face of complex traceability challenges. This paper introduces IDEA-DAC, a paradigm shift from the conventional revoke-and-reissue methods, promoting direct and Integrity-Driven Editing (IDE) for Accountable DACs, which results in better integrity accountability, traceability, and system simplicity. We further incorporate an Edit-bound Conformity Check that ensures tailored integrity standards during credential amendments using R1CS-based ZK-SNARKs. Delving deeper, we propose ZK-JSON, a unique R1CS circuit design tailored for IDE over generic JSON documents. This design imposes strictly $O(N)$ rank-1 constraints for variable-length JSON documents of up to $N$ bytes in length, encompassing serialization, encryption, and edit-bound conformity checks. Additionally, our circuits only necessitate a one-time compilation, setup, and smart contract deployment for homogeneous JSON documents up to a specified size. While preserving core DAC features such as selective disclosure, anonymity, and predicate provability, IDEA-DAC achieves precise data modification checks without revealing private content, ensuring only authorized edits are permitted. In summary, IDEA-DAC offers an enhanced methodology for large-scale JSON-formatted credential systems, setting a new standard in decentralized identity management efficiency and precision.
Last updated:  2024-02-20
The Price of Active Security in Cryptographic Protocols
Carmit Hazay, Muthuramakrishnan Venkitasubramaniam, and Mor Weiss
We construct the first actively-secure Multi-Party Computation (MPC) protocols with an arbitrary number of parties in the dishonest-majority setting, for an arbitrary field F, with constant communication overhead over the "passive-GMW" protocol (Goldreich, Micali and Wigderson, STOC '87). Our protocols rely on passive implementations of Oblivious Transfer (OT) in the boolean setting and Oblivious Linear function Evaluation (OLE) in the arithmetic setting. Previously, such protocols were only known over sufficiently large fields (Genkin et al. STOC '14) or for a constant number of parties (Ishai et al. CRYPTO '08). Conceptually, our protocols are obtained via a new compiler from a passively-secure protocol for a distributed multiplication functionality $F_{mult}$ to an actively-secure protocol for general functionalities. Roughly, $F_{mult}$ is parameterized by a linear secret-sharing scheme $S$, where it takes $S$-shares of two secrets and returns $S$-shares of their product. We show that our compilation is concretely efficient for sufficiently large fields, resulting in an overhead of 2 when securely computing natural circuits. Our compiler has two additional benefits: (1) it can rely on any passive implementation of $F_{mult}$, which, besides the standard implementation based on OT (for boolean) and OLE (for arithmetic), allows us to rely on implementations based on threshold cryptosystems (Cramer et al. Eurocrypt '01); and (2) it can rely on weaker-than-passive (i.e., imperfect/leaky) implementations, which in some parameter regimes yield actively-secure protocols with overhead less than 2. Instantiating this compiler with an "honest-majority" implementation of $F_{mult}$, we obtain the first honest-majority protocol with optimal corruption threshold for boolean circuits with constant communication overhead over the best passive protocol (Damgård and Nielsen, CRYPTO '07).
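As a reference point for what $F_{mult}$ does, the following sketch (ours) instantiates it for additive sharing over a prime field with a dealer-supplied Beaver triple, the kind of passive building block the compiler consumes; the opening of the masked values $d$ and $e$ is simulated locally here:

# F_mult for additive sharing via a Beaver triple (passive, illustrative).
import secrets

P = 2 ** 61 - 1                             # a prime field

def share(x, n=3):
    rs = [secrets.randbelow(P) for _ in range(n - 1)]
    return rs + [(x - sum(rs)) % P]

def reveal(shares):
    return sum(shares) % P

def f_mult(x_sh, y_sh):
    # Dealer-supplied triple (a, b, c) with c = a*b; in the compiler this
    # comes from a passive OT/OLE-based (or threshold-crypto) protocol.
    a, b = secrets.randbelow(P), secrets.randbelow(P)
    a_sh, b_sh, c_sh = share(a), share(b), share(a * b % P)
    d = (reveal(x_sh) - a) % P              # parties would open [x] - [a]
    e = (reveal(y_sh) - b) % P              # parties would open [y] - [b]
    # [x*y] = [c] + d*[b] + e*[a] + d*e  (the constant added by one party)
    return [(ci + d * bi + e * ai + (d * e if i == 0 else 0)) % P
            for i, (ai, bi, ci) in enumerate(zip(a_sh, b_sh, c_sh))]

x_sh, y_sh = share(20), share(24)
assert reveal(f_mult(x_sh, y_sh)) == 480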
Last updated:  2024-02-20
Quantum Pseudorandomness Cannot Be Shrunk In a Black-Box Way
Samuel Bouaziz--Ermann and Garazi Muguruza
Pseudorandom Quantum States (PRS) were introduced by Ji, Liu and Song as quantum analogues of Pseudorandom Generators. They are ensembles of states that are efficiently computable but computationally indistinguishable from Haar-random states. Subsequent works have shown that some cryptographic primitives can be constructed from PRSs. Moreover, recent classical and quantum oracle separations of PRS from One-Way Functions strengthen the interest in a purely quantum alternative building block for quantum cryptography, potentially weaker than OWFs. However, our limited understanding of how to extend or shrink the number of qubits of the PRS output still makes it difficult to reproduce some of the classical proof techniques and results. Short-PRSs, that is, PRSs with logarithmic-size output, have been introduced in the literature along with cryptographic applications, but we still do not know how they relate to PRSs. Here we answer half of this question, by showing that it is not possible, in a relativized way, to shrink the output of a PRS from polynomial to logarithmic qubit length while still preserving the pseudorandomness property. More precisely, we show that relative to Kretschmer's quantum oracle (TQC 2021) short-PRSs cannot exist (while PRSs exist, as shown by Kretschmer's work).
Last updated:  2024-02-20
Secure Integrated Sensing and Communication under Correlated Rayleigh Fading
Martin Mittelbach, Rafael F. Schaefer, Matthieu Bloch, Aylin Yener, and Onur Gunlu
We consider a secure integrated sensing and communication (ISAC) scenario, in which a signal is transmitted through a state-dependent wiretap channel with one legitimate receiver with which the transmitter communicates and one honest-but-curious target that the transmitter wants to sense. The secure ISAC channel is modeled as two state-dependent fast-fading channels with correlated Rayleigh fading coefficients and independent additive Gaussian noise components. Delayed channel outputs are fed back to the transmitter to improve the communication performance and to estimate the channel state sequence. We establish and illustrate an achievable secrecy-distortion region for degraded secure ISAC channels under correlated Rayleigh fading. We also evaluate the inner bound for a large set of parameters to derive practical design insights for secure ISAC methods. The presented results include in particular parameter ranges for which the secrecy capacity of a classical wiretap channel setup is surpassed and for which the channel capacity is approached.
Last updated:  2024-02-20
SoK: Parameterization of Fault Adversary Models - Connecting Theory and Practice
Dilara Toprakhisar, Svetla Nikova, and Ventzislav Nikov
Since the first fault attack by Boneh et al. in 1997, various physical fault injection mechanisms have been explored to induce errors in electronic systems. Fault analysis methods exploiting these errors have subsequently been studied and successfully used to attack many cryptographic implementations. This poses a significant challenge to the secure implementation of cryptographic algorithms. To address this, numerous countermeasures have been proposed. Nevertheless, these countermeasures are primarily designed to protect against the particular assumptions made by the fault analysis methods. These assumptions, however, encompass only a limited range of the capabilities inherent to physical fault injection mechanisms. In this paper, we narrow our focus to fault attacks and countermeasures specific to ASICs, and introduce a novel parameterized fault adversary model capturing an adversary's control over an ASIC. We systematically map (a) the physical fault injection mechanisms, (b) the adversary models assumed in fault analysis, and (c) the adversary models used to design countermeasures into our introduced model. This model forms the basis for a comprehensive survey covering a broad spectrum of fault attacks and countermeasures within symmetric-key cryptography. Furthermore, our investigation highlights a notable misalignment among the adversary models assumed in countermeasures, fault attacks, and the intrinsic capabilities of the physical fault injection mechanisms. Through this study, we emphasize the need to reevaluate existing fault adversary models, and advocate for the development of a unified model.
Last updated:  2024-02-20
CAPABARA: A Combined Attack on CAPA
Dilara Toprakhisar, Svetla Nikova, and Ventzislav Nikov
Physical attacks pose a substantial threat to the secure implementation of cryptographic algorithms. While considerable research efforts are dedicated to protecting against passive physical attacks (e.g., side-channel analysis (SCA)), the landscape of protection against other types of physical attacks remains a challenge. Fault attacks (FA), though attracting growing attention in research, still lack the prevalence of provably secure designs when compared to SCA. The realm of combined attacks, which leverage the capabilities of both SCA and FA adversaries, introduces powerful adversarial models, rendering protection against them challenging. This challenge has consequently left a relatively unexplored area of research, resulting in a notable gap in understanding combined attacks and protecting against them efficiently. The CAPA countermeasure, published at CRYPTO 2018, addresses this challenge with a robust adversarial model that goes beyond conventional SCA and FA adversarial models. Drawing inspiration from the principles of Multiparty Computation (MPC), CAPA claims security against higher-order SCA, higher-order fault attacks, and their combination. In this work, we present a combined attack that breaks CAPA within the constraints of its assumed adversarial model. In response, we propose potential fixes to the design of CAPA that increase the complexity of the proposed attack, although they do not provably thwart it. With this combined attack, we highlight the difficulty of effectively protecting against combined attacks.
Last updated:  2024-02-20
Efficient Zero-Knowledge Arguments and Digital Signatures via Sharing Conversion in the Head
Jules Maire and Damien Vergnaud
We present a novel technique within the MPC-in-the-Head framework, aiming to design efficient zero-knowledge protocols and digital signature schemes. The technique allows for the simultaneous use of additive and multiplicative sharings of secret information, enabling efficient proofs of linear and multiplicative relations. The applications of our technique are manifold. It is first applied to construct zero-knowledge arguments of knowledge for the Double Discrete Logarithm Problem (DDLP). The resulting protocol achieves improved communication complexity without compromising efficiency. We also propose a new zero-knowledge argument of knowledge for the Permuted Kernel Problem. Furthermore, we suggest a short (candidate) post-quantum digital signature scheme constructed from a new one-way function based on simple polynomials known as fewnomials. This scheme offers simplicity and ease of implementation. Finally, we present two additional results inspired by this work but using alternative approaches. We propose a zero-knowledge argument of knowledge of an RSA plaintext for a small public exponent that significantly improves on the state-of-the-art communication complexity. We also detail a more efficient forward-backward construction for the DDLP.
Last updated:  2024-02-20
Mirrored Commitment: Fixing ``Randomized Partial Checking'' and Applications
Paweł Lorek, Moti Yung, and Filip Zagórski
Randomized Partial Checking (RPC) was proposed by Jakobsson, Juels, and Rivest and attracted attention as an efficient method of verifying the correctness of the mixing process in numerous applied scenarios. In fact, RPC is a building block for many electronic voting schemes, including Prêt à Voter, Civitas, and Scantegrity II, as well as voting systems used in real-world elections (e.g., in Australia). Mixing is also used in anonymous transfers of cryptocurrencies. A series of works, however, revealed subtle issues with the analyses behind RPC. First, the actual security level of the RPC protocol is far from the claimed bounds: the probability of successfully manipulating $k$ votes is $(\frac{3}{4})^k$ instead of the claimed $\frac{1}{2^k}$. This difference, in turn, negatively affects actual implementations of the notion within existing election systems, since concretely implemented procedures of a given length were directly based on this parameter. Further, the claim that a constant number of mix-servers provides sufficient privacy also turned out to be incorrect. We can conclude from the above that these analyses of mixing processes are not trivial. In this paper, we review the relevant attacks, and we present Mirrored-RPC -- a fix to RPC based on ``mirrored commitment'' which makes it optimally secure; namely, the probability of successfully manipulating $k$ votes is $\frac{1}{2^k}$. Then, we present an analysis of the privacy level of both RPC and mRPC. We show that for $n$ messages, the number of mix-servers (rounds) needed to be $\varepsilon$-close to the uniform distribution in total variation distance is lower bounded by: \[ r(n, \varepsilon) \geq \log_{2}{n \choose 2}/\varepsilon. \] This proof of privacy, in turn, gives insights into the anonymity of various cryptocurrencies (e.g., Zerocash) using anonymizing pools. If a random fraction $q$ of $n$ existing coins is mixed (in each block), then to achieve full anonymity, the number of blocks one needs to run the protocol for is: \[ rb(n, q, \varepsilon) \geq - \frac{\log n + \log (n-1) - \log (2\varepsilon)}{\log(1-q^2)}. \]
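For concreteness, the quoted bounds are easy to evaluate numerically. Below is a minimal Python sketch (our rendering of the stated formulas; the logarithm base in the block bound is taken as natural, since the abstract leaves it unspecified):

    import math

    def rpc_manip(k):
        """Probability of successfully manipulating k votes under plain RPC."""
        return (3 / 4) ** k

    def mrpc_manip(k):
        """The same probability under Mirrored-RPC (the optimal bound)."""
        return (1 / 2) ** k

    def min_rounds(n, eps):
        """Lower bound on mix rounds: r(n, eps) >= log2(C(n, 2)) / eps."""
        return math.log2(math.comb(n, 2)) / eps

    def min_blocks(n, q, eps):
        """Lower bound on blocks when a random fraction q of n coins mixes per block."""
        return -(math.log(n) + math.log(n - 1) - math.log(2 * eps)) / math.log(1 - q * q)

    print(rpc_manip(10), mrpc_manip(10))   # ~5.6% vs ~0.1% for k = 10 votes
    print(min_rounds(10**6, 0.01))         # rounds for a million messages
    print(min_blocks(10**6, 0.1, 0.01))    # blocks when 10% of coins mix per block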
Last updated:  2024-02-20
Practical Improvements to Statistical Ineffective Fault Attacks
Barış Ege, Bob Swinkels, Dilara Toprakhisar, and Praveen Kumar Vadnala
Statistical Fault Attacks (SFA), introduced by Fuhr et al., exploit the statistical bias resulting from injected faults. Unlike prior fault analysis attacks, which require both faulty and correct ciphertexts under the same key, SFA leverages only faulty ciphertexts. At CHES 2018, more powerful attacks called Statistical Ineffective Fault Attacks (SIFA) were proposed. In contrast to previous fault attacks that utilize faulty ciphertexts, SIFA exploits the distribution of the intermediate values leading to fault-free ciphertexts. As a result, SIFA was shown to be effective even in the presence of widely used fault injection countermeasures based on detection and infection. In this work, we build upon the core idea of SIFA and provide two main practical improvements over the previously proposed analysis methods. First, we show how to perform SIFA from the input side, which, in contrast to the original SIFA, requires injecting faults in the earlier rounds of an encryption or decryption operation. If we consider the start of the operation as the trigger for fault injection, the cumulative jitter in the first few rounds of a cipher is much lower than in the last rounds. Hence, performing the attack in the first or second round requires a narrower parameter range for fault injection, and hence fewer fault injection attempts, to recover the secret key. Second, in comparison to the straightforward SIFA approach of guessing 32 bits at a time, we propose a chosen-input approach that reduces the guessing effort to 16 bits at a time. This decreases the key search space for full key recovery of an AES-128 implementation from $2^{34}$ to $2^{19}$.
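The final numbers follow from simple counting, under the reading (our assumption; the abstract does not spell out the chunk counts) that an AES-128 key is recovered in four 32-bit chunks in the straightforward approach and eight 16-bit chunks in the chosen-input approach:

    # Straightforward SIFA: four independent 32-bit subkey guesses
    naive = 4 * 2**32            # = 2^34 total guesses
    # Chosen-input variant: eight independent 16-bit subkey guesses
    chosen = 8 * 2**16           # = 2^19 total guesses
    assert naive == 2**34 and chosen == 2**19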
Last updated:  2024-02-20
On Efficient and Secure Compression Modes for Arithmetization-Oriented Hashing
Elena Andreeva, Rishiraj Bhattacharyya, Arnab Roy, and Stefano Trevisani
ZK-SNARKs are advanced cryptographic protocols used in private verifiable computation: modern SNARKs allow encoding the invariants of an arithmetic circuit over some large prime field in an appropriate NP language, from which a zero-knowledge short non-interactive argument of knowledge is built. Due to the high cost of proof generation, ZK-SNARKs for large constraint systems are impractical. ZK-SNARKs are used in privacy-oriented blockchains such as Filecoin, ZCash, and Monero to verify Merkle tree opening proofs, which in turn requires computing a fixed-input-length (FIL) cryptographic compression function. As classical, bit-oriented hash functions like SHA-2 require huge constraint systems, Arithmetization-Oriented (AO) compression functions have emerged to fill the gap. Usually, AO compression functions are obtained by applying the Sponge hashing mode to a fixed-key permutation: while this avoids the cost of dynamic key scheduling, AO key schedules are often cheap to compute, making the exploration of AO compression functions based directly on blockciphers a topic of practical interest. In this work, we first adapt notions related to classical hash functions and their security notions to the AO syntax, and, inspired by the classical PGV modes, we propose AO PGV-LC and AO PGV-ELC, two blockcipher-based FIL compression modes with parametrizable input and output sizes. In the ideal cipher model, we prove the collision and preimage resistance of both our modes, and give bounds for collision and opening resistance over Merkle trees of arbitrary arity. We then experimentally compare the AO PGV-LC mode over the Hades-MiMC blockcipher with its popular Sponge instantiation, Poseidon. The resulting construction, called Poseidon-DM, is $2$-$5\times$ faster than Poseidon in native computations, and $15$-$35\%$ faster in generating Merkle tree proofs over the Groth16 SNARK framework, depending on the tree arity. In particular, proof generation for an $8$-ary tree over Poseidon-DM is $2.5\times$ faster than for a binary tree of the same capacity over Poseidon. Finally, in an effort to further exploit the benefits of wide trees, we propose a new strategy to obtain a compact R1CS constraint system for Merkle trees of arbitrary arity.
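The PGV modes referenced here turn a blockcipher into a compression function. As background only, a toy Davies-Meyer-style instance over a prime field looks as follows (a minimal sketch with a placeholder permutation; this is neither the paper's AO PGV-LC/ELC modes nor a secure cipher):

    P = 2**64 - 59                      # a prime field for illustration

    def toy_cipher(key, x):
        # Placeholder keyed permutation over F_P: cubing is a bijection here
        # because gcd(3, P - 1) = 1.  A real AO design would use Hades-MiMC.
        return pow((x + key) % P, 3, P)

    def compress(chaining, msg_block):
        # Davies-Meyer shape: E_m(h) + h, with field addition replacing XOR.
        return (toy_cipher(msg_block, chaining) + chaining) % P

    print(compress(123456789, 987654321))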
Last updated:  2024-02-20
On Generalizations of the Lai-Massey Scheme
Lorenzo Grassi
In this paper, we re-investigate the Lai-Massey scheme, originally proposed in the cipher IDEA. Due to its similarity with Feistel networks, and due to the existence of invariant subspace attacks, as originally pointed out by Vaudenay at FSE 1999, the Lai-Massey scheme has received little attention from the community. As a first contribution, we propose two new generalizations of this scheme that are not (extended) affine equivalent to any generalized Feistel network proposed in the literature so far. Then, inspired by the recent Horst construction, we propose the Amaryllises structure as a generalization of the Lai-Massey scheme, in which the linear combination in the Lai-Massey scheme can be replaced by a non-linear one. Besides proposing concrete examples of the Amaryllises construction, we analyze its cryptographic properties and compare them with those of other existing schemes/constructions published in the literature. Our results show that the Amaryllises construction can have concrete advantages, especially in the context of MPC-/FHE-/ZK-friendly primitives.
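For readers unfamiliar with the scheme, one textbook Lai-Massey round over $\mathbb{F}_p$ can be sketched as follows (toy round function and orthomorphism chosen purely for illustration; the paper's generalizations replace, among other things, the linear combination $x - y$ by non-linear ones):

    P = 2**31 - 1                       # toy prime field

    def F(v, round_key):
        return pow((v + round_key) % P, 3, P)   # toy round function

    def sigma(v):
        # v -> 2v is an orthomorphism of F_P: both v -> 2v and v -> 2v - v
        # are bijections, which is what blocks trivial invariant subspaces.
        return (2 * v) % P

    def lai_massey_round(x, y, round_key):
        d = F((x - y) % P, round_key)           # both halves absorb F(x - y)
        return sigma((x + d) % P), (y + d) % P

    print(lai_massey_round(111, 222, 333))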
Last updated:  2024-02-20
Toward Malicious Constant-Rate 2PC via Arithmetic Garbling
Carmit Hazay and Yibin Yang
A recent work by Ball, Li, Lin, and Liu [Eurocrypt'23] presented a new instantiation of the arithmetic garbling paradigm introduced by Applebaum, Ishai, and Kushilevitz [FOCS'11]. In particular, Ball et al.'s garbling scheme is the first constant-rate garbled circuit over large enough bounded integer computations, implying the first constant-round constant-rate secure two-party computation (2PC) over bounded integer computations in the presence of semi-honest adversaries. The main source of difficulty in lifting the security of garbling-scheme-based protocols to the malicious setting lies in proving the correctness of the underlying garbling scheme. In this work, we analyze the security of Ball et al.'s scheme in the presence of malicious attacks. - We demonstrate an overflow attack, which is inevitable in this computational model, even if the garbled circuit is fully correct. Our attack proceeds by defining an adversary, corrupting either the garbler or the evaluator, that chooses a bad input and causes the computation to overflow, thus leaking information about the honest party's input. By utilizing overflow attacks, we show that $1$-bit leakage is necessary for achieving security against a malicious garbler, discarding the possibility of achieving full malicious security in this model. We further demonstrate a wider range of overflow attacks against a malicious evaluator with more than $1$ bit of leakage. - We boost the security level of Ball et al.'s scheme by utilizing two variants of Vector Oblivious Linear Evaluation, denoted by VOLEc and aVOLE. We present the first constant-round constant-rate 2PC protocol over bounded integer computations, in the presence of a malicious garbler with $1$-bit leakage and a semi-honest evaluator, in the {VOLEc,aVOLE}-hybrid model, making black-box use of the underlying group and ring. Compared to the semi-honest variant, our protocol incurs only a constant factor overhead, both in computation and communication. The constant-round and constant-rate properties hold even in the plain model.
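The core principle of an overflow attack can be illustrated with a toy example (ours, not the paper's actual garbling): bounded-integer arithmetic is carried out modulo some $M$, and a bound-violating input makes the wraparound behavior depend on the honest party's input.

    M = 2**16    # modulus the arithmetic is actually carried out over (assumed)
    B = 2**8     # bound that honest inputs are promised to respect

    def add_gate(a, b):
        return (a + b) % M

    # A malicious party feeds x = M - 1, violating the bound.  The sum now
    # wraps iff the honest input y >= 1, so the output reveals whether y = 0,
    # which the ideal bounded computation would never disclose.
    x_bad = M - 1
    for y in (0, 5):
        out = add_gate(x_bad, y)
        print(y, out, "wrapped" if out < x_bad else "no wrap")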
Last updated:  2024-02-20
QFESTA: Efficient Algorithms and Parameters for FESTA using Quaternion Algebras
Kohei Nakagawa and Hiroshi Onuki
In 2023, Basso, Maino, and Pope proposed FESTA (Fast Encryption from Supersingular Torsion Attacks), an isogeny-based public-key encryption (PKE) protocol that uses the SIDH attack for decryption. In the same paper, they proposed a parameter set for the protocol, but that parameter set requires high-degree isogeny computations. In this paper, we introduce QFESTA (Quaternion Fast Encapsulation from Supersingular Torsion Attacks), a new variant of FESTA that works with better parameters using quaternion algebras and achieves IND-CCA security in the QROM. To realize our protocol, we construct a new algorithm to compute an isogeny of non-smooth degree using quaternion algebras and the SIDH attack. Our protocol relies solely on $(2,2)$-isogeny and $3$-isogeny computations, promising a substantial reduction in computational costs. In addition, our protocol has significantly smaller public keys and ciphertexts, approximately half the size of those of the original FESTA.
Last updated:  2024-02-19
Logstar: Efficient Linear* Time Secure Merge
Suvradip Chakraborty, Stanislav Peceny, Srinivasan Raghuraman, and Peter Rindal
Secure merge considers the problem of combining two sorted lists into a single sorted secret-shared list. Merge is a fundamental building block for many real-world applications. For example, secure merge can implement a large number of SQL-like database joins, which are essential for almost any data processing task, such as privacy-preserving fraud detection, ad conversion rates, data deduplication, and many more. We present two constructions offering a tradeoff between communication bandwidth and rounds. Logstar, our bandwidth-optimized construction, takes inspiration from Falk and Ostrovsky (ITC, 2021) and runs in $O(n\log^*n)$ time and communication with $O(\log n)$ rounds. In particular, for all conceivable $n$, the $\log^*n$ factor will be equal to the constant $2$, and therefore we achieve a near-linear running time. Median, our rounds-optimized construction, builds on the classic parallel medians-based insecure merge approach of Valiant (SIAM J. Comput., 1975), later explored in the secure setting by Blunk et al. (2022), and requires $O(n \log^c n)$, $1<c<2$, communication with $O(\log \log n)$ rounds. We introduce two additional constructions that merge input lists of different sizes. SquareRootMerge merges lists of sizes $n^{\frac{1}{2}}$ and $n$, and runs in $O(n)$ time and communication with $O(\log n)$ rounds. CubeRootMerge is closely inspired by Blunk et al.'s (2022) construction and merges lists of sizes $n^{\frac{1}{3}}$ and $n$. It runs in $O(n)$ time and communication with $O(1)$ rounds. We optimize our constructions for concrete efficiency. Today, concretely efficient secure merge protocols rely on standard techniques such as GMW or generic sorting. These approaches require an $O(n \log n)$-size circuit of $O(\log n)$ depth. In contrast, our constructions are more efficient and also achieve superior asymptotics. We benchmark our constructions and obtain significant improvements. For example, Logstar reduces bandwidth costs $\approx 3.3\times$ and Median reduces rounds $\approx2.22\times$.
Last updated:  2024-02-19
A Concrete Analysis of Wagner's $k$-List Algorithm over $\mathbb{Z}_p$
Antoine Joux, Hunter Kippen, and Julian Loss
Since its introduction by Wagner (CRYPTO `02), the $k$-list algorithm has found significant utility in cryptanalysis. One important application is in computing forgeries on several interactive signature schemes that implicitly rely on the hardness of the ROS problem formulated by Schnorr (ICICS `01). The current best attack strategy for these schemes relies on the conjectured runtime of the $k$-list algorithm over $\mathbb{Z}_p$. The tightest known analysis of Wagner's algorithm over $\mathbb{Z}_p$ is due to Shallue (ANTS `08). However, it hides large polynomial factors and leaves a gap with respect to desirable concrete parameters for the attack. In this work, we develop a degraded version of the $k$-list algorithm which provably enforces the heuristic invariants of Wagner's original algorithm. In the process, we devise and analyze a new list merge procedure that we dub the interval merge. We give a thorough analysis of the runtime and success probability of our degraded algorithm, and show that it beats the projected runtime of Shallue's analysis for parameters relevant to the generalized ROS attack of Benhamouda et al. (EUROCRYPT `21). For a $256$-bit prime $p$ and $k = 8$, our degraded $k$-list algorithm runs in time $\approx 2^{70.4}$, while Shallue's analysis states that Wagner's original algorithm runs in time $\approx 2^{98.3}$.
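As background, the classic XOR-based 4-list version of Wagner's algorithm fits in a few lines; over $\mathbb{Z}_p$ the bit-matching joins below are replaced by interval-based joins, which is where the paper's interval merge enters (sketch ours, parameters toy-sized):

    import random

    N_BITS, LEVEL_BITS, SIZE = 24, 8, 1 << 9

    def rand_list(tag):
        return [(random.getrandbits(N_BITS), ((tag, i),)) for i in range(SIZE)]

    def merge(L1, L2, low_bits):
        """Keep pairs whose XOR vanishes on the low `low_bits` bits."""
        mask = (1 << low_bits) - 1
        table = {}
        for v, src in L1:
            table.setdefault(v & mask, []).append((v, src))
        out = []
        for v, src in L2:
            for u, src2 in table.get(v & mask, []):
                out.append((u ^ v, src2 + src))
        return out

    L = [rand_list(t) for t in range(4)]
    left = merge(L[0], L[1], LEVEL_BITS)
    right = merge(L[2], L[3], LEVEL_BITS)
    roots = [src for v, src in merge(left, right, N_BITS) if v == 0]
    print(len(roots), "quadruples with x1 ^ x2 ^ x3 ^ x4 = 0")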
Last updated:  2024-02-19
Polynomial Commitments from Lattices: Post-Quantum Security, Fast Verification and Transparent Setup
Valerio Cini, Giulio Malavolta, Ngoc Khanh Nguyen, and Hoeteck Wee
A polynomial commitment scheme allows a prover to commit to a polynomial $f \in \mathcal{R}[X]$ of degree $L$, and later prove that the committed function was correctly evaluated at a specified point $x$; in other words, that $f(x)=u$ for public $x,u \in\mathcal{R}$. Most applications of polynomial commitments, e.g., succinct non-interactive arguments of knowledge (SNARKs), require that (i) both the commitment and the evaluation proof are succinct (i.e., polylogarithmic in the degree $L$), with the latter being efficiently verifiable, and (ii) no pre-processing step is allowed. Surprisingly, as far as plausibly quantum-safe polynomial commitments are concerned, the currently most efficient constructions rely only on weak cryptographic assumptions, such as the security of hash functions. Indeed, despite making use of the underlying algebraic structure, prior lattice-based polynomial commitments still seem to lag far behind the hash-based ones. Moreover, the security of the aforementioned lattice constructions against quantum adversaries was never formally discussed. In this work, we bridge the gap and propose the first (asymptotically and concretely) efficient lattice-based polynomial commitment with transparent setup and post-quantum security. Our interactive variant relies on the standard (Module-)SIS problem and can be made non-interactive in the random oracle model using the Fiat-Shamir transformation. In addition, we equip the scheme with a knowledge soundness proof against quantum adversaries, which can be of independent interest. In terms of concrete efficiency, for $L=2^{20}$ our scheme yields proofs of size $2\times$ smaller than the hash-based \textsf{FRI} commitment (Block et al., Asiacrypt 2023), and $70\times$ smaller than the very recent lattice-based construction by Albrecht et al. (Eurocrypt 2024).
Last updated:  2024-02-19
Extractable Witness Encryption for KZG Commitments and Efficient Laconic OT
Nils Fleischhacker, Mathias Hall-Andersen, and Mark Simkin
We present a concretely efficient and simple extractable witness encryption scheme for KZG polynomial commitments. It allows encrypting a message towards a triple $(\mathsf{com}, \alpha, \beta)$, where $\mathsf{com}$ is a KZG commitment for some polynomial $f$. Anyone with an opening for the commitment attesting $f(\alpha) = \beta$ can decrypt, but without knowledge of a valid opening the message is computationally hidden. Our construction is simple and highly efficient. The ciphertext is only a single group element. Encryption and decryption both require a single pairing evaluation and a constant number of group operations. Using our witness encryption scheme, we construct a simple and highly efficient laconic OT protocol, which significantly outperforms the state of the art in most important metrics.
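For orientation, decryption here hinges on a standard KZG opening. In the usual notation (where $[\cdot]_1, [\cdot]_2$ denote scalar multiples of the two source-group generators and $\tau$ is the structured-setup secret), a proof $\pi$ that $f(\alpha) = \beta$ is checked via \[ e\big(\mathsf{com} - [\beta]_1,\, [1]_2\big) = e\big(\pi,\, [\tau - \alpha]_2\big), \] which is the pairing equation a valid opening must satisfy; how the scheme builds encryption on top of this relation is detailed in the paper.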
Last updated:  2024-02-19
HARTS: High-Threshold, Adaptively Secure, and Robust Threshold Schnorr Signatures
Renas Bacho, Julian Loss, Gilad Stern, and Benedikt Wagner
Threshold variants of the Schnorr signature scheme have recently been at the center of attention due to their applications to Bitcoin, Ethereum, and other cryptocurrencies. However, existing constructions for threshold Schnorr signatures among a set of $n$ parties with corruption threshold $t_c$ suffer from at least one of the following drawbacks: (i) security only against static (i.e., non-adaptive) adversaries, (ii) cubic or higher communication cost to generate a single signature, (iii) strong synchrony assumptions on the network, or (iv) $t_c+1$ signers are sufficient to generate a signature, i.e., the corruption threshold of the scheme equals its reconstruction threshold. Especially (iv) turns out to be a severe limitation for many asynchronous real-world applications where $t_c < n/3$ is necessary to maintain liveness, but a higher signing threshold of $n-t_c$ is needed. A recent scheme, ROAST, proposed by Ruffing et al. (ACM CCS `22), addresses (iii) and (iv), but still falls short of obtaining subcubic complexity and adaptive security. In this work, we present HARTS, the first threshold Schnorr signature scheme to incorporate all these desiderata. More concretely: - HARTS is adaptively secure and remains fully secure and operational even under asynchronous network conditions in the presence of up to $t_c < n/3$ malicious parties. This is optimal. - HARTS outputs a Schnorr signature of size $\lambda$ with a near-optimal amortized communication cost of $O(\lambda n^2 \log{n})$ bits and $O(1)$ rounds per signature. - HARTS is a high-threshold scheme: no fewer than $t_r+1$ signature shares can be combined to yield a full signature, where $t_r\geq 2n/3 > 2t_c$. This is optimal. We prove our result in a modular fashion in the algebraic group model. At the core of our construction, we design a new, simple, and adaptively secure high-threshold AVSS scheme, which may be of independent interest.
Last updated:  2024-02-19
Polynomial-Time Key-Recovery Attack on the ${\tt NIST}$ Specification of ${\tt PROV}$
River Moreira Ferreira and Ludovic Perret
In this paper, we present an efficient attack against ${\tt PROV}$, a recent variant of the popular Unbalanced Oil and Vinegar (${\tt UOV}$) multivariate signature scheme that has been submitted to the ongoing ${\tt NIST}$ standardization process for additional post-quantum signature schemes. A notable feature of ${\tt PROV}$ is its proof of security, namely, existential unforgeability under a chosen-message attack (${\tt EUF-CMA}$), assuming the hardness of solving the system formed by the public-key non-linear equations. We present a polynomial-time key-recovery attack against the first specification of ${\tt PROV}$ (v$1.0$). To do so, we remark that a small fraction of the ${\tt PROV}$ secret-key is leaked during the signature process. Adapting and extending previous works on basic ${\tt UOV}$, we show that the entire secret-key can then be recovered from such a small fraction in polynomial time. This leads to an efficient attack against ${\tt PROV}$ that we have validated in practice. For all the security parameters suggested by the authors of ${\tt PROV}$, our attack recovers the secret-key in at most $8$ seconds. We conclude the paper by discussing the apparent mismatch between such a practical attack and the theoretical security claimed by the ${\tt PROV}$ designers. Our attack is not structural but exploits the fact that the current specification of ${\tt PROV}$ differs from the required security model. A simple countermeasure makes ${\tt PROV}$ immune to the attack presented here and led the designers to update the specification of ${\tt PROV}$ (v$1.1$).
Last updated:  2024-02-19
XHash8 and XHash12: Efficient STARK-friendly Hash Functions
Tomer Ashur, Al Kindi, and Mohammad Mahzoun
Zero-knowledge proof systems are widely used as building blocks of different protocols, such as those supporting blockchains. A core element in zero-knowledge proof systems is the underlying PRF, usually modeled as a hash function that needs to be efficient over finite fields of prime order. Such hash functions are part of a newly developed paradigm known as Arithmetization-Oriented (AO) designs. In this paper, we propose two new AO hash functions, XHash8 and XHash12, which are designed by improving the bottlenecks in RPO [ePrint 2022/1577]. Based on our experiments, XHash8 performs $\approx2.75$ times faster than RPO, and XHash12 performs $\approx2$ times faster than RPO, while at the same time inheriting the security and robustness of the battle-tested Marvellous design strategy.
Last updated:  2024-02-19
Non-Interactive Threshold BBS+ From Pseudorandom Correlations
Sebastian Faust, Carmit Hazay, David Kretzler, Leandro Rometsch, and Benjamin Schlosser
The BBS+ signature scheme is one of the most prominent solutions for realizing anonymous credentials. Its prominence is due to properties like selective disclosure and efficient protocols for creating and showing possession of credentials. Traditionally, a single credential issuer produces BBS+ signatures, which poses a significant risk due to a single point of failure. In this work, we address this threat via a novel $t$-out-of-$n$ threshold BBS+ protocol. Our protocol supports an arbitrary security threshold $t \leq n$ and works in the so-called preprocessing setting. In this setting, we achieve non-interactive signing in the online phase and sublinear communication complexity in the number of signatures in the offline phase, which, as we show in this work, are important features from a practical point of view. As it stands today, none of the widely studied signature schemes, such as threshold ECDSA and threshold Schnorr, achieve both properties simultaneously. To this end, we design specifically tailored presignatures that can be directly computed from pseudorandom correlations and allow servers to create signature shares without additional cross-server communication. Both our offline and online protocols are actively secure in the Universal Composability model. Finally, we evaluate the concrete efficiency of our protocol, including an implementation of the online phase and of the expansion algorithm of the pseudorandom correlation generator (PCG) used during the offline phase. The online protocol without network latency takes less than $15$ ms for $t \leq 30$ and credential sizes up to $10$. Further, our results indicate that the influence of $t$ on online signing is insignificant, $< 6\%$ for $t \leq 30$, and that the overhead of thresholdization occurs almost exclusively in the offline phase. Our implementation of the PCG expansion is the first to consider correlations between more than $3$ parties, and shows that even for a committee size of $10$ servers, each server can expand a correlation of up to $2^{16}$ presignatures in about $600$ ms per presignature.
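For context, a BBS+ signature on messages $(m_1, \dots, m_L)$ under secret key $x$ has the standard textbook form \[ \sigma = (A, e, s), \qquad A = \Big(g_1\, h_0^{s} \prod_{i=1}^{L} h_i^{m_i}\Big)^{\frac{1}{x+e}}, \] with random $e$ and $s$. We recall this shape only as background: roughly speaking, the exponent inversion $1/(x+e)$ is what makes distributing BBS+ signing non-trivial, and the paper's presignatures are designed so that signature shares can be produced without cross-server interaction.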
Last updated:  2024-02-19
Circle STARKs
Ulrich Haböck, David Levit, and Shahar Papini
Traditional STARKs require a cyclic group of smooth order in the field. This allows efficient interpolation of points using the FFT algorithm, and writing constraints that involve neighboring rows. The Elliptic Curve FFT (ECFFT, Parts I and II) introduced a way to make efficient STARKs for any finite field by using a cyclic group of an elliptic curve. We show a simpler construction along the lines of ECFFT, over the circle curve $x^2 + y^2 = 1$. When $p + 1$ is divisible by a large power of $2$, this construction is as efficient as traditional STARKs and ECFFT. Applied to the Mersenne prime $p = 2^{31} - 1$, which has recently been advertised in IACR ePrint 2023/824, our preliminary benchmarks indicate a speed-up by a factor of $1.4$ compared to a traditional STARK using the BabyBear prime $p = 2^{31} - 2^{27} + 1$.
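Concretely, the circle group underlying the construction is easy to experiment with (a minimal sketch of ours; the FFT layers built on top are where the actual construction lives):

    # Points on x^2 + y^2 = 1 over F_P form a cyclic group under the rotation
    # law below; for P = 2^31 - 1 (P = 3 mod 4) its order is P + 1 = 2^31,
    # exactly the smooth two-power order that STARK-style FFTs need.
    P = 2**31 - 1

    def circle_mul(a, b):
        (x1, y1), (x2, y2) = a, b
        return ((x1 * x2 - y1 * y2) % P, (x1 * y2 + y1 * x2) % P)

    def circle_pow(g, k):
        acc, base = (1, 0), g                   # (1, 0) is the identity
        while k:
            if k & 1:
                acc = circle_mul(acc, base)
            base = circle_mul(base, base)
            k >>= 1
        return acc

    t = 2                                       # rational parametrization
    inv = pow(1 + t * t, -1, P)
    g = ((1 - t * t) * inv % P, 2 * t * inv % P)
    assert (g[0] ** 2 + g[1] ** 2) % P == 1     # g lies on the circle
    assert circle_pow(g, P + 1) == (1, 0)       # its order divides P + 1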
Last updated:  2024-02-19
LaKey: Efficient Lattice-Based Distributed PRFs Enable Scalable Distributed Key Management
Matthias Geihs and Hart Montgomery
Distributed key management (DKM) services are multi-party services that allow their users to outsource the generation, storage, and usage of cryptographic private keys, while guaranteeing that none of the involved service providers learn the private keys in the clear. This is typically achieved through distributed key generation (DKG) protocols, where the service providers generate the keys on behalf of the users in an interactive protocol, with each of the servers storing a share of each key as the result. However, with traditional DKM systems, the key material stored by each server grows linearly with the number of users. An alternative approach to DKM is distributed key derivation (DKD), where the user key shares are derived on demand from a constant-size (in the number of users) secret-shared master key and the corresponding user's identity, which is achieved by employing a suitable distributed pseudorandom function (dPRF). However, existing suitable dPRFs require on the order of 100 interaction rounds between the servers and are therefore insufficient for settings with high network latency and where users demand real-time interaction. To resolve the situation, we initiate the study of lattice-based distributed PRFs, with a particular focus on their application to DKD. Concretely, we show that the LWE-based PRF presented by Boneh et al. at CRYPTO'13 can be turned into a distributed PRF suitable for DKD that runs in only 8 online rounds, an improvement over the state-of-the-art by an order of magnitude. We further present optimizations of this basic construction. We show a new construction with improved communication efficiency, proven secure under the same ``standard'' assumptions. Then, we present even more efficient constructions, running in as low as 5 online rounds, from non-standard, new lattice-based assumptions. We support our findings by implementing and evaluating our protocol using the MP-SPDZ framework (Keller, CCS '20). Finally, we give a formal definition of our DKD in the UC framework and prove a generic construction (for which our construction qualifies) secure in this model.
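The CRYPTO'13 PRF referred to here is, to our reading, the key-homomorphic construction of Boneh, Lewi, Montgomery, and Raghunathan, which on input a bit string $x = x_1 \cdots x_\ell$ computes \[ F_{\mathbf{s}}(x) = \Big\lfloor \prod_{i=1}^{\ell} \mathbf{A}_{x_i} \cdot \mathbf{s} \Big\rceil_p, \] where $\mathbf{A}_0, \mathbf{A}_1$ are public matrices, $\mathbf{s}$ is the secret key, and $\lfloor \cdot \rceil_p$ denotes component-wise rounding from $\mathbb{Z}_q$ down to $\mathbb{Z}_p$. Its near-linearity in $\mathbf{s}$ is what makes evaluation on a secret-shared key with few interaction rounds plausible.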
Last updated:  2024-02-19
Fault Attacks on UOV and Rainbow
Juliane Krämer and Mirjam Loiero
Multivariate cryptography is one of the main candidates for creating post-quantum public key cryptosystems. Especially in the area of digital signatures, there exist many practical and secure multivariate schemes. The signature schemes UOV and Rainbow are two of the most promising and best studied multivariate schemes, having proven secure for more than a decade. However, so far the security of multivariate signature schemes against physical attacks has not been appropriately assessed. Towards a better understanding of the physical security of multivariate signature schemes, this paper presents fault attacks against SingleField schemes, especially UOV and Rainbow. Our analysis shows that although promising attack vectors exist, multivariate signature schemes inherently offer good protection against fault attacks.
Last updated:  2024-02-19
Generic-Group Lower Bounds via Reductions Between Geometric-Search Problems: With and Without Preprocessing
Benedikt Auerbach, Charlotte Hoffmann, and Guillermo Pascual-Perez
The generic-group model (GGM) aims to capture algorithms working over groups of prime order that rely only on the group operation and do not exploit any additional structure given by the concrete implementation of the group. Within this model, it is possible to prove information-theoretic lower bounds on the hardness of problems like the discrete logarithm (DL) or computational Diffie-Hellman (CDH). Thus, since its introduction, it has served as a valuable tool to assess the concrete security provided by cryptographic schemes based on such problems. A work on the related algebraic-group model (AGM) introduced a method, used by many subsequent works, to adapt GGM lower bounds for one problem to another by means of conceptually simple reductions. In this work, we propose an alternative approach to extending GGM bounds from one problem to another. Following an idea by Yun (Eurocrypt '15), we show that, in the GGM, the security of a large class of problems can be reduced to that of geometric search-problems. By reducing the security of the resulting geometric search-problems to variants of the search-by-hypersurface problem, for which information-theoretic lower bounds exist, we give alternative proofs of several results that used the AGM approach. The main advantage of our approach is that our reduction from geometric search-problems works as well for the GGM with preprocessing (more precisely, the bit-fixing GGM introduced by Coretti, Dodis, and Guo (Crypto '18)). As a consequence, this opens up the possibility of transferring preprocessing GGM bounds from one problem to another, also by means of simple reductions. Concretely, we prove novel preprocessing bounds on the hardness of the d-strong discrete logarithm, the d-strong Diffie-Hellman inversion, and multi-instance CDH problems, as well as a large class of Uber assumptions. Additionally, our approach applies to Shoup's GGM without additional restrictions on the query behavior of the adversary, while the recent works of Zhang, Zhou, and Katz (Asiacrypt '22) and Zhandry (Crypto '22) highlight that this is not the case for the AGM approach.
Last updated:  2024-02-19
Zombies and Ghosts: Optimal Byzantine Agreement in the Presence of Omission Faults
Julian Loss and Gilad Stern
Studying the feasibility of Byzantine Agreement (BA) in realistic fault models is an important question in the area of distributed computing and cryptography. In this work, we revisit the mixed fault model with Byzantine (malicious) faults and omission faults put forth by Hauser, Maurer, and Zikas (TCC 2009), who showed that BA (and MPC) is possible with $t$ Byzantine faults, $s$ send faults (whose outgoing messages may be dropped) and $r$ receive faults (whose incoming messages may be lost) if $n>3t+r+s$. We generalize their techniques and results by showing that BA is possible if $n>2t+r+s$, given the availability of a cryptographic setup. Our protocol is the first to match the recent lower bound of Eldefrawy, Loss, and Terner (ACNS 2022) for this setting.
Last updated:  2024-02-19
Reduce and Prange: Revisiting Prange's Information Set Decoding for LPN and RSD
Jiseung Kim and Changmin Lee
The learning parity with noise (LPN) problem has been widely utilized in classical cryptography to construct cryptographic primitives. Various variants of LPN have been proposed, including LPN over large fields and LPN with regular noise, depending on the underlying space and the noise regularity; these variants have proven useful in constructing cryptographic primitives. We propose an improvement to the Gaussian elimination attack, also known as Prange's information set decoding algorithm, for solving the LPN problem. Contrary to prevailing knowledge, we find that the Gaussian elimination attack is highly competitive and currently the best method for solving LPN over large fields. Our improvement involves applying partial Gaussian elimination repeatedly, rather than the whole Gaussian algorithm, which we have named ``Reduce and Prange's algorithm''. Moreover, we provide two applications of the Reduce and Prange algorithm: one is a hybrid of our algorithm and the algorithm of Bernstein, Lange, and Peters from PQCrypto'08, and the other is Reduce and Prange for LPN with regular noise. Finally, we provide a concrete estimation of the bit-security of LPN variants using our Reduce and Prange framework. Our results show that the bit-security of LPN over $\mathbb{F}_q$ is reduced by 5-11 bits when $\log q = 128$ compared to the previous analysis by Liu et al. (to appear at Eurocrypt'24). Furthermore, we show that our algorithm outperforms recent work by Briaud and Øygard (Eurocrypt'23) and Liu et al. for certain parameters, reducing the bit-security of LPN with regular noise by 5-28 bits.
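As a baseline for what is being improved, textbook Prange ISD over $\mathbb{F}_2$ fits in a few lines (a binary toy sketch of ours; the paper targets large fields and replaces the full elimination below with repeated partial elimination):

    import random

    def prange_iteration(H, s, n):
        """One Prange trial: full Gaussian elimination on a random column
        order; returns an error vector supported on the pivot columns, or
        None if H is rank-deficient on this trial."""
        cols = list(range(n)); random.shuffle(cols)
        rows = list(zip(H, s)); pivots = []
        r = 0
        for c in cols:
            if r == len(rows): break
            p = next((i for i in range(r, len(rows)) if (rows[i][0] >> c) & 1), None)
            if p is None: continue
            rows[r], rows[p] = rows[p], rows[r]
            for i in range(len(rows)):
                if i != r and (rows[i][0] >> c) & 1:
                    rows[i] = (rows[i][0] ^ rows[r][0], rows[i][1] ^ rows[r][1])
            pivots.append(c); r += 1
        if r < len(rows): return None
        e = 0
        for (_, bit), c in zip(rows, pivots):
            if bit: e |= 1 << c
        return e

    def prange(H, s, n, w, tries=20000):
        for _ in range(tries):
            e = prange_iteration(H, s, n)
            if e is not None and bin(e).count("1") <= w:
                return e                      # He = s and weight(e) <= w
        return None

    # Demo: random parity checks (rows as n-bit ints), planted weight-2 error.
    n, w = 24, 2
    H = [random.getrandbits(n) for _ in range(12)]
    e_true = (1 << 3) | (1 << 17)
    s = [bin(h & e_true).count("1") & 1 for h in H]
    sol = prange(H, s, n, w)
    print(bin(sol) if sol is not None else "no solution found")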
Last updated:  2024-02-19
On the Untapped Potential of the Quantum FLT-based Inversion
Ren Taguchi and Atsushi Takayasu
Thus far, several papers have estimated the concrete quantum resources of Shor's algorithm for solving the binary elliptic curve discrete logarithm problem. In particular, the complexity of computing quantum inversions over a binary field $\mathbb{F}_{2^n}$ dominates the cost of running the algorithm, where $n$ is the extension degree of the binary field over which the curve is defined. There are two major methods for quantum inversion, i.e., the quantum GCD-based inversion and the quantum FLT-based inversion. Among them, the latter method is known to require more qubits; however, it is valuable since it requires far fewer Toffoli gates and less depth. When $n = 571$, Kim-Hong's quantum GCD-based inversion algorithm (Quantum Information Processing 2023) and Taguchi-Takayasu's quantum FLT-based inversion algorithm (CT-RSA 2023) require 3,473 qubits and 8,566 qubits, respectively. In contrast, for the same $n = 571$, the latter algorithm requires only 2.3% of the Toffoli gates and 84% of the depth of the former one. In this paper, we modify Taguchi-Takayasu's quantum FLT-based inversion algorithm to reduce the required qubits. While Taguchi-Takayasu's FLT-based inversion algorithm takes an addition chain for $n-1$ as input and computes a sequence whose length is the same as that of the chain, our proposed algorithm employs an uncomputation step and stores a shorter one. As a result, our proposed algorithm requires only 3,998 qubits for $n = 571$, which is only 15% more than Kim-Hong's GCD-based inversion algorithm. Furthermore, our proposed algorithm preserves the advantage of FLT-based inversion since it requires only 3.7% of the Toffoli gates and 77% of the depth compared to Kim-Hong's GCD-based inversion algorithm for $n = 571$.
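The classical identity behind every FLT-based inverter is $a^{-1} = a^{2^n - 2}$ in $\mathbb{F}_{2^n}$. A classical toy version over the AES field $\mathbb{F}_{2^8}$ (ours, for orientation only; the paper's contribution is the qubit layout of the addition chain, not the identity) looks as follows:

    def gf_mul(a, b, poly=0x11B, n=8):
        """Carry-less multiplication in F_{2^8} modulo the AES polynomial."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a >> n:
                a ^= poly
            b >>= 1
        return r

    def gf_inv_flt(a, n=8):
        """a^{-1} = a^(2^n - 2) by square-and-multiply; Itoh-Tsujii-style
        addition chains (what the paper optimizes) shorten exactly this."""
        r, base, e = 1, a, (1 << n) - 2
        while e:
            if e & 1:
                r = gf_mul(r, base)
            base = gf_mul(base, base)
            e >>= 1
        return r

    assert gf_mul(0x53, gf_inv_flt(0x53)) == 1   # 0x53^{-1} = 0xCA in the AES field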
Last updated:  2024-02-19
Understanding User-Perceived Security Risks and Mitigation Strategies in the Web3 Ecosystem
Janice Jianing Si, Tanusree Sharma, and Kanye Ye Wang
The advent of Web3 technologies promises unprecedented levels of user control and autonomy. However, this decentralization shifts the burden of security onto the users, making it crucial to understand their security behaviors and perceptions. To address this, our study introduces a comprehensive framework that identifies four core components of user interaction within the Web3 ecosystem: blockchain infrastructures, Web3-based Decentralized Applications (DApps), online communities, and off-chain cryptocurrency platforms. We delve into the security concerns perceived by users in each of these components and analyze the mitigation strategies they employ, ranging from risk assessment and aversion to diversification and acceptance. We further discuss the landscape of both technical and human-induced security risks in the Web3 ecosystem, identify the unique security differences between Web2 and Web3, and highlight key challenges that render users vulnerable, to provide implications for security design in Web3.
Last updated:  2024-02-19
Phantom: A CUDA-Accelerated Word-Wise Homomorphic Encryption Library
Hao Yang, Shiyu Shen, Wangchen Dai, Lu Zhou, Zhe Liu, and Yunlei Zhao
Homomorphic encryption (HE) is a promising technique for privacy-preserving computations, especially the word-wise HE schemes that allow batching. However, the high computational overhead hinders the deployment of HE in real-world applications. GPUs are often used to accelerate execution, but a comprehensive performance comparison of different schemes on the same platform is still missing. In this work, we fill this gap by implementing three word-wise HE schemes, BGV, BFV, and CKKS, on GPU, with both theoretical and engineering optimizations. We enhance the hybrid key-switching technique, significantly reducing the computational and memory overhead. We explore several kernel-fusing strategies to reuse data, resulting in reduced memory access and IO latency and enhancing the overall performance. By comparing with state-of-the-art works, we demonstrate the effectiveness of our implementation. Meanwhile, we introduce a unified framework that finely integrates our implementation of the three schemes, covering almost all scheme functions and homomorphic operations. We optimize the management of pre-computation, RNS bases, and memory in the framework to provide efficient and low-latency data access and transfer. Based on this framework, we provide a thorough benchmark of the three schemes, which can serve as a reference for scheme selection and implementation in constructing privacy-preserving applications. Our library is available at https://github.com/encryptorion-lab/phantom-fhe, released under the GPLv3 license.
Last updated:  2024-02-19
Amortized Large Look-up Table Evaluation with Multivariate Polynomials for Homomorphic Encryption
Heewon Chung, Hyojun Kim, Young-Sik Kim, and Yongwoo Lee
We present a new method for efficient look-up table (LUT) evaluation in homomorphic encryption (HE), based on Ring-LWE-based HE schemes, including both integer-message schemes such as Brakerski-Gentry-Vaikuntanathan (BGV) and Brakerski/Fan-Vercauteren (BFV), and complex-number-message schemes like the Cheon-Kim-Kim-Song (CKKS) scheme. Our approach encodes bit streams into codewords and translates LUTs into low-degree multivariate polynomials, allowing for the simultaneous evaluation of multiple independent LUTs with minimal overhead. To mitigate noise accumulation in the CKKS scheme, we propose a novel noise-reduction technique, accompanied by a proof demonstrating its effectiveness in asymptotically decreasing noise levels. We demonstrate our algorithm's effectiveness through a proof-of-concept implementation, showcasing significant efficiency gains, including a 0.029 ms per-slot evaluation for 8-input, 8-output LUTs and a 280 ms amortized decryption time for AES-128 using CKKS on a single GPU. This work not only advances LUT evaluation in HE but also introduces a transciphering method for the CKKS scheme utilizing standard symmetric-key encryption, bridging the gap between discrete bit strings and numerical data.
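One standard way to see a LUT as a low-degree polynomial is its multilinear extension; the sketch below (ours; the paper's codeword encoding and slot packing go further) interpolates an arbitrary 8-input table:

    from itertools import product

    def multilinear_eval(table, x):
        """Evaluate the unique multilinear polynomial that agrees with
        `table` on {0,1}^n at the point x (coordinates in any ring)."""
        n = len(x)
        acc = 0
        for bits in product((0, 1), repeat=n):
            idx = int("".join(map(str, bits)), 2)
            term = table[idx]
            for xi, bi in zip(x, bits):
                term *= xi if bi else (1 - xi)
            acc += term
        return acc

    table = [i % 2 for i in range(256)]               # toy LUT: parity of index
    point = [1, 0, 1, 1, 0, 0, 1, 0]                  # = index 0b10110010 = 178
    assert multilinear_eval(table, point) == table[178]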
Last updated:  2024-02-18
A New Algebraic Approach to the Regular Syndrome Decoding Problem and Implications for PCG Constructions
Pierre Briaud and Morten Øygarden
The Regular Syndrome Decoding (RSD) problem, a variant of the Syndrome Decoding problem with a particular error distribution, was introduced almost 20 years ago by Augot et al. In this problem, the error vector is divided into equally sized blocks, each containing a single noisy coordinate. The last five years have seen increased interest in this assumption due to its use in MPC and ZK applications. Generally referred to as ``LPN with regular noise'' in this context, the assumption allows for better efficiency compared to plain LPN. We present the first attack on RSD relying on Gröbner basis techniques. After a careful theoretical analysis of the underlying polynomial system, we propose concrete attacks that are able to take advantage of the regular noise distribution. In particular, we identify several examples of concrete parameters where our techniques outperform other algorithms.
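To see where the algebraic structure comes from, note (our rendering; the paper's modeling is more refined) that over $\mathbb{F}_2$ a regular block $(e_1, \dots, e_b)$ with exactly one noisy coordinate is captured by the quadratic system \[ e_i e_j = 0 \quad (1 \le i < j \le b), \qquad \sum_{i=1}^{b} e_i = 1, \] and combining these block constraints with the linear syndrome equations yields the polynomial system whose Gröbner basis behavior the paper analyzes.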
Last updated:  2024-02-18
Beyond MPC-in-the-Head: Black-Box Constructions of Short Zero-Knowledge Proofs
Carmit Hazay, Muthuramakrishnan Venkitasubramaniam, and Mor Weiss
In their seminal work, Ishai, Kushilevitz, Ostrovsky, and Sahai (STOC`07) presented the MPC-in-the-Head paradigm, which shows how to design Zero-Knowledge Proofs (ZKPs) from secure Multi-Party Computation (MPC) protocols. This paradigm has since then revolutionized and modularized the design of efficient ZKP systems, with far-reaching applications beyond ZKPs. However, to the best of our knowledge, all previous instantiations relied on fully-secure MPC protocols, and have not been able to leverage the fact that the paradigm only imposes relatively weak privacy and correctness requirements on the underlying MPC. In this work, we extend the MPC-in-the-Head paradigm to game-based cryptographic primitives supporting homomorphic computations (e.g., fully-homomorphic encryption, functional encryption, randomized encodings, homomorphic secret sharing, and more). Specifically, we present a simple yet generic compiler from these primitives to ZKPs which use the underlying primitive as a black box. We also generalize our paradigm to capture commit-and-prove protocols, and use it to devise tight black-box compilers from Interactive (Oracle) Proofs to ZKPs, assuming One-Way Functions (OWFs). We use our paradigm to obtain several new ZKP constructions: 1. The first ZKPs for NP relations $\mathcal{R}$ computable in (polynomial-time uniform) $NC^1$, whose round complexity is bounded by a fixed constant (independent of the depth of $\mathcal{R}$'s verification circuit), with communication approaching witness length (specifically, $n\cdot poly\left(\kappa\right)$, where $n$ is the witness length, and $\kappa$ is a security parameter), assuming DCR. Alternatively, if we allow the round complexity to scale with the depth of the verification circuit, our ZKPs can make black-box use of OWFs. 2. Constant-round ZKPs for NP relations computable in bounded polynomial space, with $O\left(n\right)+o\left(m\right)\cdot poly\left(\kappa\right)$ communication assuming OWFs, where $m$ is the instance length. This gives a black-box alternative to a recent non-black-box construction of Nassar and Rothblum (CRYPTO`22). 3. ZKPs for NP relations computable by a logspace-uniform family of depth-$d\left(m\right)$ circuits, with $n\cdot poly\left(\kappa,d\left(m\right)\right)$ communication assuming OWFs. This gives a black-box alternative to a result of Goldwasser, Kalai and Rothblum (JACM).
Last updated:  2024-02-18
Accountable Multi-Signatures with Constant Size Public Keys
Dan Boneh, Aditi Partap, and Brent Waters
A multisignature scheme is used to aggregate signatures by multiple parties on a common message $m$ into a single short signature on $m$. Multisignatures are used widely in practice, most notably, in proof-of-stake consensus protocols. In existing multisignature schemes, the verifier needs the public keys of all the signers in order to verify a multisignature issued by some subset of signers. We construct new practical multisignature schemes with three properties: (i) the verifier only needs to store a constant size public key in order to verify a multisignature by an arbitrary subset of parties, (ii) signature size is constant beyond the description of the signing set, and (iii) signers generate their secret signing keys locally, that is, without a distributed key generation protocol. Existing schemes satisfy properties (ii) and (iii). The new capability is property (i) which dramatically reduces the verifier's memory requirements from linear in the number of signers to constant. We give two pairing-based constructions: one in the random oracle model and one in the plain model. We also show that by relaxing property (iii), that is, allowing for a simple distributed key generation protocol, we can further improve efficiency while continuing to satisfy properties (i) and (ii). We give a pairing-based scheme and a lattice-based scheme in this relaxed model.
Last updated:  2024-02-18
YPIR: High-Throughput Single-Server PIR with Silent Preprocessing
Samir Jordan Menon and David J. Wu
We introduce YPIR, a single-server private information retrieval (PIR) protocol that achieves high throughput (up to 75% of the memory bandwidth of the machine) without any offline communication. For retrieving a 1-bit (or 1-byte) record from a 32-GB database, YPIR achieves 10.9 GB/s/core server throughput and requires 2.5 MB of total communication. On the same setup, the state-of-the-art SimplePIR protocol achieves a 12.6 GB/s/core server throughput, requires 1.5 MB total communication, but additionally requires downloading a 724 MB hint in an offline phase. YPIR leverages a new lightweight technique to remove the hint from high-throughput single-server PIR schemes with small overhead. We also show how to reduce the server preprocessing time in the SimplePIR family of protocols by a factor of $10$-$15\times$. By removing the need for offline communication, YPIR significantly reduces the server-side costs for private auditing of Certificate Transparency logs. Compared to the best previous PIR-based approach, YPIR reduces the server-side costs by a factor of $5.6\times$. Note that to reduce communication costs, the previous approach assumed that updates to the Certificate Transparency log servers occurred in weekly batches. Since there is no offline communication in YPIR, our approach allows clients to always audit the most recent Certificate Transparency logs (e.g., updating once a day). Supporting daily updates using the prior scheme would cost $30\times$ more than YPIR (based on current AWS compute costs).