All papers in 2022 (Page 8 of 1781 results)

Last updated:  2022-08-19
Pairing-free secure-channel establishment in mobile networks with fine-grained lawful interception
Xavier Bultel, Cristina Onete
Modern-day mobile communications allow users to connect from any place, at any time. However, this ubiquitous access comes at the expense of their privacy. Currently, the operators providing mobile service to users learn call- and SMS-metadata, and even the contents of those exchanges. A main reason behind this is the Lawful-Interception (LI) requirement, by which serving networks must provide this (meta-)data to authorities, given a warrant. At ESORICS 2021, Arfaoui et al. pioneered a primitive called Lawful-Interception Key-Exchange (LIKE), which achieves the best of both worlds: (provably) privacy-enhanced communications, and fine-grained, limited access to user data. Their work had two important shortcomings. First, their protocol required pairings which, while sufficiently efficient, might not always be available in the mobile setting. More importantly, that scheme was only applicable in a domestic setting, where the concerned users (Alice and Bob) were subject to the same LI authorities. The case of roaming was left as an open question. In this paper we answer that open question. We extend the framework of Arfaoui et al. to allow Alice and Bob (now subject to potentially two sets of authorities) to establish a secure channel that guarantees the strong properties afforded by the LIKE schemes of ESORICS 2021. Our construction is pairing-free, faster than that of Arfaoui et al., and its security relies on standard assumptions.
Last updated:  2023-01-25
A Lightweight, Secure Big data-based Authentication and Key-agreement Scheme for IoT with Revocability
Behnam Zahednejad
With the rapid development of the Internet of Things (IoT), designing a secure two-factor authentication scheme for these networks is increasingly in demand. Recently, historical big data has gained interest as a novel authentication factor in this area. In this paper, we focus on a recent authentication scheme using big data (Liu et al.’s scheme) which claims to provide additional security properties such as Perfect Forward Secrecy (PFS), Key Compromise Impersonation (KCI) resilience and Server Compromise Impersonation (SCI) resilience. However, assuming a realistic strong attacker rather than a weak one, we show that their scheme not only fails to provide KCI and SCI resilience, but also does not provide real two-factor security or revocability, and suffers from insider attacks. We then propose a novel scheme which indeed provides two-factor security, PFS, KCI and insider-attack resilience, and revocability of the client. Further, our performance analysis shows that our scheme reduces the modular exponentiation and multiplication operations for both client and server compared to Liu et al.’s scheme, which reduces the execution time by one third, i.e., 6 ms and 30 ms (0.3 ms and 4 ms) for the IoT device (server) at security levels λ = 128 and λ = 256, respectively.
Last updated:  2023-02-08
The inspection model for zero-knowledge proofs and efficient Zerocash with secp256k1 keys
Huachuang Sun, Haifeng Sun, Kevin Singh, Akhil Sai Peddireddy, Harshad Patil, Jianwei Liu, Weikeng Chen
Proving discrete log equality for groups of the same order is addressed by Chaum and Pedersen's seminal work. However, there has not been much work on proving discrete log equality for groups of different orders. This paper presents an efficient solution, which leverages a technique we call delegated Schnorr. The discovery of this technique is guided by a design methodology that we call the inspection model, and we find it useful for protocol designs. We show two applications of this technique on the Findora blockchain: **Maxwell-Zerocash switching:** There are two privacy-preserving transfer protocols on the Findora blockchain; one follows the Maxwell construction and uses Pedersen commitments over Ristretto, the other follows the Zerocash construction and uses Rescue over BLS12-381. We present an efficient protocol to convert assets between these two constructions while preserving privacy. **Zerocash with secp256k1 keys:** Bitcoin, Ethereum, and many other chains do signatures on secp256k1. There is a strong need for ZK applications to not depend on special curves like Jubjub, but be compatible with secp256k1. Due to the FFT-unfriendliness of secp256k1, many proof systems (e.g., Groth16, Plonk, FRI) are infeasible. We present a solution using Bulletproofs over the curve secq256k1 ("q") and delegated Schnorr, which connects Bulletproofs to TurboPlonk over BLS12-381. We conclude the paper with (im)possibility results about Zerocash with only access to a deterministic ECDSA signing oracle, which is the case when working with MetaMask. This result shows the limitations of the techniques in this paper. This paper is under a bug bounty program through a grant from the Findora Foundation.
Last updated:  2022-08-19
Skip Ratchet: A Hierarchical Hash System
Brooklyn Zelenka
Hash chains are a simple way to generate pseudorandom data, but are inefficient in situations that require long chains. This can cause unnecessary overhead for use cases including logical clocks, synchronizing the heads of a pseudorandom stream, or non-interactive key agreement. This paper presents the “skip ratchet”, a novel pseudorandom function that can be efficiently incremented by arbitrary intervals.
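As a point of reference for the efficiency problem described in this abstract, the sketch below shows the plain hash-chain baseline in Python: advancing the chain by n positions costs n hash evaluations, which is exactly the linear catch-up cost a skip ratchet avoids. This is only the naive baseline, not the hierarchical construction from the paper.

```python
# Minimal illustration of the hash-chain baseline (not the skip-ratchet
# construction itself): every step is one hash evaluation, so jumping ahead
# by `steps` positions costs `steps` invocations of SHA-256.
import hashlib

def advance(state: bytes, steps: int) -> bytes:
    """Advance a plain hash chain by `steps` positions (linear cost)."""
    for _ in range(steps):
        state = hashlib.sha256(state).digest()
    return state

seed = b"\x00" * 32
head = advance(seed, 1_000_000)  # 10^6 hash calls just to catch up to the head
print(head.hex())
```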
Last updated:  2022-08-25
New Bounds on the Multiplicative Complexity of Boolean Functions
Meltem Sonmez Turan
Multiplicative Complexity (MC) is defined as the minimum number of AND gates required to implement a function with a circuit over the basis AND, XOR, NOT. This complexity measure is relevant for many advanced cryptographic protocols such as fully homomorphic encryption, multi-party computation, and zero-knowledge proofs, where processing AND gates is more expensive than processing XOR gates. Although there is no known asymptotically efficient technique to compute the MC of a random Boolean function, bounds on the MC of Boolean functions are successfully used to show the existence of Boolean functions with a particular MC. In 2000, Boyar et al. showed that, for all $n\geq 0$, at most $2^{k^2+2k+2kn+n+1}$ $n$-variable Boolean functions can be computed with $k$ AND gates. This bound is used to prove the existence of an 8-variable Boolean function with MC greater than 7. In this paper, we improve the Boyar et al. bound.
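The counting argument behind that existence statement can be checked directly: for n = 8 and k = 7 the stated bound is $2^{184}$, far below the $2^{2^8} = 2^{256}$ Boolean functions on 8 variables, so some 8-variable function must need more than 7 AND gates. The small Python sketch below is my own illustration of that arithmetic, using only the exponent formula quoted above.

```python
# Check the counting argument: if at most 2^(k^2 + 2k + 2kn + n + 1) n-variable
# functions are computable with k AND gates, then for n = 8, k = 7 the bound
# 2^184 is smaller than the 2^(2^8) = 2^256 Boolean functions on 8 variables,
# so some 8-variable function must have MC > 7.
def log2_bound(n: int, k: int) -> int:
    return k * k + 2 * k + 2 * k * n + n + 1

n, k = 8, 7
print(log2_bound(n, k))            # 184
print(2 ** n)                      # 256 = log2 of the number of 8-variable functions
assert log2_bound(n, k) < 2 ** n   # hence a function with MC > 7 exists
```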
Last updated:  2022-12-12
Range Search over Encrypted Multi-Attribute Data
Francesca Falzon, Evangelia Anna Markatou, Zachary Espiritu, Roberto Tamassia
This work addresses expressive queries over encrypted data by presenting the first systematic study of multi-attribute range search on a symmetrically encrypted database outsourced to an honest-but-curious server. Prior work includes a thorough analysis of single-attribute range search schemes (e.g. Demertzis et al. 2016) and a proposed high-level approach for multi-attribute schemes (De Capitani di Vimercati et al. 2021). We first introduce a flexible framework for building secure range search schemes over multiple attributes (dimensions) by adapting a broad class of geometric search data structures to operate on encrypted data. Our framework encompasses widely used data structures such as multi-dimensional range trees and quadtrees, and has strong security properties that we formally prove. We then develop six concrete highly parallelizable range search schemes within our framework that offer a sliding scale of efficiency and security tradeoffs to suit the needs of the application. We evaluate our schemes with a formal complexity and security analysis, a prototype implementation, and an experimental evaluation on real-world datasets.
Last updated:  2022-08-18
Secure Branching Program Evaluation
Jonas Janneck, Anas Boudi, Anselme Tueno, Matthew Akram
We address the problem of privately evaluating a branching program on encrypted data. This scenario is a 2-party protocol consisting of a server and a client. The server privately holds a branching program, which is a representation of a boolean function using a directed acyclic graph. The client holds a secret input to the branching program. The goal of the computation is to evaluate the client's input on the server's program such that only the result is revealed to the client, and nothing is revealed to the server. To solve this problem, Ishai and Paskin introduced a public-key encryption scheme that is based on the Damgård-Jurik additively homomorphic encryption and has the property that, given a branching program $P$ and an encryption $c$ of an input $y$, it is possible to efficiently compute a succinct ciphertext $c'$ corresponding to $P(y)$. The entire computation is done by the server, relying on the fact that the Damgård-Jurik scheme has length-flexible ciphertexts, which allows multiplications between ciphertexts of different sizes under the same encryption key. Although the decryption of the Damgård-Jurik scheme is theoretically efficient, the size of $c'$ and the decoding time depend on the depth of the branching program. In this paper, we propose a new scheme where the input is instead encrypted using fully homomorphic encryption, and we discuss different variants and optimizations. The entire computation is also done by the server, but the size of the resulting ciphertext is independent of the depth of the program. We implement Ishai-Paskin and our scheme and show that the running time of our scheme is an order of magnitude smaller.
Last updated:  2022-08-18
On Quantum Ciphertext Indistinguishability, Recoverability, and OAEP
Juliane Krämer, Patrick Struck
The qINDqCPA security notion for public-key encryption schemes by Gagliardoni et al. (PQCrypto'21) models security against adversaries which are able to obtain ciphertexts in superposition. Defining this security notion requires a special type of quantum operator. Known constructions differ in which keys are necessary to construct this operator, depending on properties of the encryption scheme. We argue—for the typical setting of securing communication between Alice and Bob—that in order to apply the notion, the quantum operator should be realizable for challengers knowing only the public key. This is already known to be the case for a wide range of public-key encryption schemes, in particular, those exhibiting the so-called recoverability property which allows recovering the message from a ciphertext using the randomness instead of the secret key. The open question is whether there are real-world public-key encryption schemes for which the notion is not applicable, considering the aforementioned observation on the keys known by the challenger. We answer this question in the affirmative by showing that applying the qINDqCPA security notion to the OAEP construction requires the challenger to know the secret key. We conclude that the qINDqCPA security notion might need to be refined to eventually yield a universally applicable PKE notion of quantum security with a quantum indistinguishability phase.
Last updated:  2022-08-24
Fixing Issues and Achieving Maliciously Secure Verifiable Aggregation in ``VeriFL: Communication-Efficient and Fast Verifiable Aggregation for Federated Learning''
Xiaojie Guo
This work addresses the security flaw in the original VeriFL protocol and proposes a patched protocol. The patched protocol is secure against any static malicious adversary with a certain threshold and only introduces moderate modifications to the original protocol.
Last updated:  2023-09-04
Recursion over Public-Coin Interactive Proof Systems; Faster Hash Verification
Alexandre Belling, Azam Soleimanian, and Olivier Bégassat
SNARKs are a well-known family of cryptographic tools increasingly used in the field of computation integrity at scale. In this area, multiple works have introduced SNARK-friendly cryptographic primitives: hashing, but also encryption and signature verification. Despite all the efforts to create cryptographic primitives that can be proved faster, proving such primitives remains a major performance bottleneck in practice. In this paper, we present a recursive technique that can improve the efficiency of the prover by an order of magnitude compared to proving MiMC hashes (a SNARK-friendly hash function, Albrecht et al. 2016) with a Groth16 (Eurocrypt 2016) proof. We use GKR (a well-known public-coin argument system by Goldwasser et al., STOC 2008) to prove the integrity of hash computations and embed the GKR verifier inside a SNARK circuit. The challenge comes from the fact that GKR is a public-coin interactive protocol, and applying Fiat-Shamir naively may result in worse performance than applying existing techniques directly. This is because Fiat-Shamir itself involves hash computation over a large string. We take advantage of a property that SNARK schemes commonly have in order to build a protocol in which the Fiat-Shamir hashes have very short inputs. The technique we present is generic and can be applied over any SNARK-friendly hash, most known SNARK schemes, and any (one-round) public-coin argument system in place of GKR. We emphasize that while our general compiler is secure in the random oracle model, our concrete instantiation (i.e., GKR plus outer SNARK) is only proved to be heuristically secure. This is due to the fact that we first need to convert the GKR protocol to a one-round protocol. Thus, the random oracle of GKR, starting from the second round, is replaced with a concrete hash inside the outer-layer SNARK, which makes the security proof heuristic.
Last updated:  2022-08-18
Performance Evaluation of NIST LWC Finalists on AVR ATmega and ARM Cortex-M3 Microcontrollers
Yuhei Watanabe, Hideki Yamamoto, Hirotaka Yoshida
This paper presents the results of our performance evaluation of NIST Lightweight Cryptography standardization finalists, which we implemented ourselves. Our implementation method targets reducing RAM consumption on embedded devices. Our target microcontrollers are the AVR ATmega 128 and the ARM Cortex-M3. We apply our implementation method to five AEAD schemes, which include four finalists of the NIST lightweight cryptography standardization, and demonstrate the performance evaluation on the target microcontrollers. In terms of RAM size, we achieve a 117-byte TinyJAMBU-128 implementation on the ATmega 128 and a 140-byte TinyJAMBU-128 implementation on the ARM Cortex-M3. Our implementation of TinyJAMBU-128 has the smallest RAM footprint of all the target AEAD schemes.
Last updated:  2022-08-18
Efficient Unique Ring Signatures From Lattices
Tuong Ngoc Nguyen, Anh The Ta, Huy Quoc Le, Dung Hoang Duong, Willy Susilo, Fuchun Guo, Kazuhide Fukushima, Shinsaku Kiyomoto
Unique ring signatures (URS) were introduced by Franklin and Zhang (FC 2012) as a unification of linkable and traceable ring signatures. In URS, each member within a ring can only produce, on behalf of the ring, at most one signature for a message. Potential applications of URS include e-voting systems and e-token systems. In blockchain technology, URS has been implemented for mixing contracts. However, existing URS schemes are based on the Discrete Logarithm Problem, which is insecure in the post-quantum setting. In this paper, we design a new lattice-based URS scheme where the signature size is logarithmic in the number of ring members. The proposed URS exploits a Merkle tree-based accumulator as a building block in the lattice setting. Our scheme is secure under the Short Integer Solution and Learning With Rounding assumptions in the random oracle model.
Last updated:  2022-11-30
A Theoretical Framework for the Analysis of Physical Unclonable Function Interfaces and its Relation to the Random Oracle Model
Marten van Dijk, Chenglu Jin
Analysis of advanced Physical Unclonable Function (PUF) applications and protocols relies on assuming that a PUF behaves like a random oracle; that is, upon receiving a challenge, a uniform random response with replacement is selected, measurement noise is added, and the resulting response is returned. In order to justify such an assumption, we need to rely on digital interface computation that to some extent remains confidential -- otherwise, information about PUF challenge-response pairs leaks, with which the adversary can train a prediction model for the PUF. We introduce a theoretical framework that allows the adversary to have a prediction model (with a typical accuracy of 75\% for predicting response bits for state-of-the-art silicon PUF designs). We do not require any confidential digital computing or digital secrets, while we can still prove rigorous statements about the bit security of a system that interfaces with the PUF. In particular, we prove the bit security of a PUF-based random oracle construction; this merges the PUF framework with fuzzy extractors.
Last updated:  2022-08-17
Evaluating isogenies in polylogarithmic time
Damien Robert
Let $f : E \to E'$ be an $N$-isogeny between elliptic curves (or abelian varieties) over a finite field $\mathbb{F}_q$. We show that there always exists an efficient representation of $f$ that takes polylogarithmic $O(\log^{O(1)} N \log q)$ space and which can evaluate $f$ at any point $P \in E(\mathbb{F}_{q^k})$ in polylogarithmic $O(\log^{O(1)} N)$ arithmetic operations in $\mathbb{F}_{q^k}$. Furthermore, this efficient representation can be computed by evaluating $f$ on $O(\log N)$ points defined over extensions of degree $O(\log N)$ over $\mathbb{F}_q$. In particular, if $f$ is represented by the equation $H(x) = 0$ of its kernel $K$, then using Vélu's formula the efficient representation can be computed in time $\tilde{O}(N \log q + \log^2 q)$.
Last updated:  2022-08-17
Lattice Enumeration with Discrete Pruning: Improvement, Cost Estimation and Optimal Parameters
Luan Luan, Chunxiang Gu, Yonghui Zheng, Yanan Shi
Lattice enumeration is a linear-space algorithm for solving the shortest lattice vector problem (SVP). Extreme pruning is a practical technique for accelerating lattice enumeration, with mature theoretical analysis and practical implementations. However, such analysis and implementation remain to be done for discrete pruning. In this paper, we improve discrete pruned enumeration (DP enumeration), and give a solution to the problem proposed by Léo Ducas and Damien Stehlé about the cost estimation of discrete pruning. Our contribution covers the following three aspects: First, we refine the algorithm from both theoretical and practical aspects. Discrete pruning using the natural number representation relies on a randomness assumption about the lattice point distribution, which has an obvious paradox in the original analysis. We rectify this assumption to fix the problem, and correspondingly modify some details of DP enumeration. We also improve the binary search algorithm for the cell enumeration radius with polynomial time complexity, and refine the cell decoding algorithm. Besides, we propose to use a truncated lattice reduction algorithm -- k-tours-BKZ -- as the reprocessing method when a round of enumeration fails. Second, we propose a cost estimation simulator for DP enumeration. Based on an investigation of lattice basis stability during reprocessing, we give a method to quickly simulate the squared lengths of the Gram-Schmidt orthogonalized basis, and give fitted cost estimation formulae for the sub-algorithms in CPU cycles through intensive experiments. The success probability model is also modified based on the rectified assumption. We verify the cost estimation simulator on middle-size SVP challenge instances, and the simulation results are very close to the actual performance of DP enumeration. Third, we give a method to calculate the optimal parameter setting to minimize the running time of DP enumeration. We compare the efficiency of our optimized DP enumeration with extreme pruning enumeration in solving SVP challenge instances. The experimental results in medium dimensions and the simulation results in high dimensions both show that the discrete pruning method can outperform extreme pruning. An open-source implementation of DP enumeration with its simulator is also provided.
Last updated:  2022-08-16
FairBlock: Preventing Blockchain Front-running with Minimal Overheads
Peyman Momeni, Sergey Gorbunov, Bohan Zhang
While blockchain systems are quickly gaining popularity, front-running remains a major obstacle to fair exchange. In this paper, we show how to apply identity-based encryption (IBE) to prevent front-running with minimal bandwidth overheads. In our approach, to decrypt a block of N transactions, the number of messages sent across the network only grows linearly with the size of the decrypting committee, S. That is, to decrypt a set of N transactions sequenced at a specific block, a committee only needs to exchange S decryption shares (independent of N). In comparison, previous solutions are based on threshold decryption schemes, where each transaction in a block must be decrypted separately by the committee, resulting in a bandwidth overhead of N × S. Along the way, we present a model for fair block processing and build a prototype implementation. We show that on a sample of 1000 messages with 1000 validators, our system saves 42.53 MB of bandwidth, which is 99.6% less compared with the standard threshold decryption paradigm.
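To make the stated overhead comparison concrete, the toy computation below counts decryption shares only (S versus N × S). The actual byte savings reported above, such as the 42.53 MB figure, also depend on per-share sizes and other protocol traffic, so this sketch only approximates the 99.6% number.

```python
# Count decryption shares exchanged per block: the threshold-decryption
# baseline needs one share per transaction per committee member (N * S),
# while the IBE-based approach needs one share per committee member (S).
N = 1000  # transactions in the block (sample size from the abstract)
S = 1000  # size of the decrypting committee

baseline_shares = N * S   # 1,000,000
fairblock_shares = S      # 1,000

reduction = 1 - fairblock_shares / baseline_shares
print(f"share-count reduction: {reduction:.1%}")
# The paper's 99.6% bandwidth figure additionally accounts for per-share
# sizes and the remaining protocol traffic, which this toy count ignores.
```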
Last updated:  2022-08-16
A Note on the Theoretical and Practical Security of Block Ciphers
Öznur MUT SAĞDIÇOĞLU, Serhat Sağdıçoğlu, Ebru Küçükkubaş
Differential cryptanalysis is one of the most effective methods for evaluating the security level of block ciphers. For this, an attacker tries to find a differential or a characteristic with a high probability that distinguishes a block cipher from a random permutation, in order to obtain the secret key. Although it is theoretically possible to compute the probability of a differential for a block cipher, there are two problems in computing it in practice. The first problem is that it is computationally impossible to compute the differential probability by trying all plaintext pairs. The second problem is that the probability of a differential over all choices of the plaintext and key might be different from the probability of the differential over all plaintexts for a fixed key. Thus, to evaluate the security against differential cryptanalysis, one must assume both the hypothesis of stochastic equivalence and the Markov model. However, the hypothesis of stochastic equivalence does not hold in general. Indeed, we show on simple ciphers that the hypothesis of stochastic equivalence does not hold. Moreover, we observe that the differential probability is not equal to the expected differential probability. For these results, we study plateau characteristics for a 4-bit cipher and a 16-bit super box. As a result, when considering differential cryptanalysis, one must be careful about the gap between the theoretical and the practical security of block ciphers.
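For a single small S-box, the exhaustive computation that this abstract calls infeasible for a full cipher is easy to carry out. The sketch below (using the 4-bit PRESENT S-box purely as a familiar example, not a component analyzed in the paper) tabulates the differential distribution table, i.e. the probability of each output difference for each input difference over all input pairs.

```python
# Exhaustive differential probabilities for a single 4-bit S-box (here the
# PRESENT S-box, chosen only as a familiar example). For one toy component
# this enumeration over all input pairs is trivial; for a full block cipher
# the same exhaustive computation is infeasible, which is the abstract's point.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for dx in range(n):
            dy = sbox[x] ^ sbox[x ^ dx]
            table[dx][dy] += 1
    return table

table = ddt(SBOX)
# The differential probability of (dx -> dy) is table[dx][dy] / 16.
best = max((table[dx][dy], dx, dy) for dx in range(1, 16) for dy in range(16))
print(f"best nontrivial differential: {best[1]:#x} -> {best[2]:#x} "
      f"with probability {best[0]}/16")
```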
Last updated:  2022-08-16
Lattice Reduction Meets Key-Mismatch: New Misuse Attack on Lattice-Based NIST Candidate KEMs
Ruiqi Mi, Haodong Jiang, Zhenfeng Zhang
Resistance to key misuse attacks is a vital property for key encapsulation mechanisms (KEMs) in the NIST-PQC standardization process. In a key mismatch attack, the adversary recovers a reused secret key with the help of an oracle $\mathcal{O}$ that indicates whether the shared key matches or not. A key mismatch attack is more powerful when fewer oracle queries are required. A series of works tried to reduce the number of queries; Qin et al. [ASIACRYPT 2021] gave a systematic approach to finding the lower bound on oracle queries for a category of KEMs, including NIST’s third-round candidates Kyber and Saber. In this paper, we found that the aforementioned bound can be bypassed by combining Qin et al.’s (ASIACRYPT 2021) key mismatch attack with a standard lattice attack. In particular, we explicitly build the relationship between the number of queries to the oracle and the bit security of lattice-based KEMs. Our attack is inspired by the fact that each oracle query reveals partial information about the reused secret, and affects the mean and the covariance parameters of the secret, making the attack on the lattice easier. In addition, we quantify this effect in theory and estimate the security loss for all NIST second-round candidate KEMs. Specifically, our improved attack reduces the number of queries for Kyber512 by 34%, from 1312 queries with bit security 107 to 865 with bit security 32. For Kyber768 and Kyber1024, our improved attack reduces the number of queries by 29% and 27%, respectively, with bit security 32.
Last updated:  2023-06-30
Rapidash: Foundations of Side-Contract-Resilient Fair Exchange
Hao Chung, Elisaweta Masserova, Elaine Shi, Sri AravindaKrishnan Thyagarajan
Fair exchange is a fundamental primitive enabled by blockchains, and is widely adopted in applications such as atomic swaps, payment channels, and DeFi. Most existing designs of blockchain-based fair exchange protocols consider only the participating users as strategic players, and assume the miners are honest and passive. However, recent works revealed that the fairness of commonly deployed fair exchange protocols can be broken entirely in the presence of user-miner collusion. In particular, a user can bribe the miners to help it cheat — a phenomenon also referred to as Miner Extractable Value (MEV). In this work, we provide the first formal treatment of side-contract-resilient fair exchange where users and miners may enter into arbitrary contracts on the side. We propose a new fair exchange protocol called Rapidash, and prove that the protocol is incentive compatible in the presence of user-miner collusion. In particular, we show that Rapidash satisfies a coalition-resistant Nash equilibrium absent external incentives. Further, even when there exist arbitrary but bounded external incentives, Rapidash still protects honest players and ensures that they cannot be harmed. Last but not least, our game-theoretic formulations also lay the theoretical groundwork for studying side-contract-resilient fair exchange protocols. Finally, to showcase the instantiability of Rapidash with a wide range of blockchain systems, we present instantiations of Rapidash that are compatible with Bitcoin and Ethereum while incurring only a minimal overhead in terms of costs for the users.
Last updated:  2022-11-25
A Password-Based Access Control Framework for Time-Sequence Aware Media Cloudization
Haiyan Wang
Time-sequence-based outsourcing creates continuously growing demands for the corresponding access control in cloud computing. In this paper, we propose a practical password-based access control framework for such media cloudization, relying on content control based on the time-sequence attribute and designed over prime-order groups. First, the scheme supports multi-keyword search simultaneously in any monotonic boolean formulas, and enables the media owner to control the content encryption key for different time periods using an updatable password. Second, the scheme supports key self-retrievability of the content encryption key, which is more suitable for cloud-based media applications with massive numbers of users. Then, we show that the proposed scheme is provably secure in the standard model. Finally, the detailed performance evaluation shows that the proposed scheme is efficient and practical for cloud-based media applications.
Last updated:  2022-08-15
Breaking Category Five SPHINCS+ with SHA-256
Ray Perlner, John Kelsey, David Cooper
SPHINCS$^+$ is a stateless hash-based signature scheme that has been selected for standardization as part of the NIST post-quantum cryptography (PQC) standardization process. Its security proof relies on the distinct-function multi-target second-preimage resistance (DM-SPR) of the underlying keyed hash function. The SPHINCS$^+$ submission offered several instantiations of this keyed hash function, including one based on SHA-256. A recent observation by Sydney Antonov on the PQC mailing list demonstrated that the construction based on SHA-256 did not have DM-SPR at NIST category five, for several of the parameter sets submitted to NIST; however, it remained an open question whether this observation leads to a forgery attack. We answer this question in the affirmative by giving a complete forgery attack that reduces the concrete classical security of these parameter sets by approximately 40 bits of security. Our attack works by applying Antonov's technique to the WOTS$^+$ public keys in SPHINCS$^+$, leading to a new one-time key that can sign a very limited set of hash values. From that key, we construct a slightly altered version of the original hypertree with which we can sign arbitrary messages, yielding signatures that appear valid.
Last updated:  2023-05-17
Programmable Distributed Point Functions
Elette Boyle, Niv Gilboa, Yuval Ishai, Victor I. Kolobov
A distributed point function (DPF) is a cryptographic primitive that enables compressed additive sharing of a secret unit vector across two or more parties. Despite growing ubiquity within applications and notable research efforts, the best 2-party DPF construction to date remains the tree-based construction from (Boyle et al, CCS'16), with no significantly new approaches since. We present a new framework for 2-party DPF construction, which applies in the setting of feasible (polynomial-size) domains. This captures in particular all DPF applications in which the keys are expanded to the full domain. Our approach is motivated by a strengthened notion we put forth, of programmable DPF (PDPF): in which a short, input-independent "offline" key can be reused for sharing many point functions. * PDPF from OWF: We construct a PDPF for feasible domains from the minimal assumption that one-way functions exist, where the second "online" key size is polylogarithmic in the domain size $N$. Our approach offers multiple new efficiency features and applications: * Privately puncturable PRFs: Our PDPF gives the first OWF-based privately puncturable PRFs (for feasible domains) with sublinear keys. * $O(1)$-round distributed DPF Gen: We obtain a (standard) DPF with polylog-size keys that admits an analog of Doerner-shelat (CCS'17) distributed key generation, requiring only $O(1)$ rounds (versus $\log N$). * PCG with 1 short key: Compressing useful correlations for secure computation, where one key is of minimal size. This provides up to exponential communication savings in some application scenarios.
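As background for what a DPF compresses, the sketch below shares a point function additively between two parties in the trivial, uncompressed way, with keys of size linear in the domain size N. The constructions discussed above achieve the same functionality with short (polylogarithmic) online keys; this toy code does not attempt that and is only my own illustration of the target functionality.

```python
# Trivial (uncompressed) two-party additive sharing of a point function
# f_{alpha,beta}: the domain point alpha maps to beta, everything else to 0.
# Each "key" here is a full length-N vector, i.e. O(N) size; a DPF achieves
# the same correctness with short keys, which this baseline does not.
import secrets

P = 2**61 - 1  # prime modulus for the additive shares (arbitrary choice)

def share_point_function(alpha: int, beta: int, N: int):
    k0 = [secrets.randbelow(P) for _ in range(N)]
    k1 = [(-v) % P for v in k0]           # k0[x] + k1[x] = 0 everywhere ...
    k1[alpha] = (k1[alpha] + beta) % P    # ... except at alpha, where it is beta
    return k0, k1

def eval_share(key, x):
    return key[x]

N, alpha, beta = 8, 5, 42
k0, k1 = share_point_function(alpha, beta, N)
# Reconstruction: adding the two parties' evaluations recovers f(x).
assert all((eval_share(k0, x) + eval_share(k1, x)) % P == (beta if x == alpha else 0)
           for x in range(N))
```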
Last updated:  2022-08-15
Classification of all DO planar polynomials with prime field coefficients over GF(3^n) for n up to 7
Diana Davidova, Nikolay Kaleyski
We describe how any function over a finite field $\mathbb{F}_{p^n}$ can be represented in terms of the values of its derivatives. In particular, we observe that a function of algebraic degree $d$ can be represented uniquely through the values of its derivatives of order $(d-1)$ up to the addition of terms of algebraic degree strictly less than $d$. We identify a set of elements of the finite field, which we call the degree $d$ extension of the basis, which has the property that for any choice of values for the elements in this set, there exists a function of algebraic degree $d$ whose values match the given ones. We discuss how to reconstruct a function from the values of its derivatives, and discuss the complexity involved in transitioning between the truth table of the function and its derivative representation. We then specialize to the case of quadratic functions, and show how to directly convert between the univariate and derivative representation without going through the truth table. We thus generalize the matrix representation of quadratic vectorial Boolean functions due to Yu et al. to the case of arbitrary characteristic. We also show how to characterize quadratic planar functions using the derivative representation. Based on this, we adapt the method of Yu et al. for searching for quadratic APN functions with prime field coefficients to the case of planar DO functions. We use this method to find all such functions (up to CCZ-equivalence) over $\mathbb{F}_{3^n}$ for $n \le 7$. We conclude that the currently known planar DO polynomials cover all possible cases for $n \le 7$. We find representatives simpler than the known ones for the Zhou-Pott, Dickson, and Lunardon-Marino-Polverino-Trombetti-Bierbrauer families for $n = 6$. Finally, we discuss the computational resources that would be needed to push this search to higher dimensions.
Last updated:  2023-01-09
Evaluating the Security of Merkle-Damgård Hash Functions and Combiners in Quantum Settings
Zhenzhen Bao, Jian Guo, Shun Li, Phuong Pham
In this work, we evaluate the security of Merkle-Damgård (MD) hash functions and their combiners (XOR and concatenation combiners) in quantum settings. Two main quantum scenarios are considered, including the scenario where a substantial amount of cheap quantum random access memory (qRAM) is available and where qRAM is limited and expensive to access. We present generic quantum attacks on the MD hash functions and hash combiners, and carefully analyze the complexities under both quantum scenarios. The considered securities are fundamental requirements for hash functions, including the resistance against collision and (second-)preimage. The results are consistent with the conclusions in the classical setting, that is, the considered resistances of the MD hash functions and their combiners are far less than ideal, despite the significant differences in the expected security bounds between the classical and quantum settings. Particularly, the generic attacks can be improved significantly using quantum computers under both scenarios. These results serve as an indication that classical hash constructions require careful security re-evaluation before being deployed to the post-quantum cryptography schemes.
Last updated:  2023-01-09
Rebound Attacks on SKINNY Hashing with Automatic Tools
Shun Li, Guozhen Liu, Phuong Pham
In ToSC'20, a new approach combining the Mixed-Integer Linear Programming (MILP) tool and the Constraint Programming (CP) tool to search for boomerang distinguishers was proposed, and it was later used for rebound attacks in ASIACRYPT'21 and CRYPTO'22. In this work, we extend these techniques to mount collision attacks on the SKINNY-128-256 MMO hashing mode in classical and quantum settings. The first 17-round (and 15-round) free-start collision attacks on this variant of the SKINNY hashing mode are presented. Moreover, one more round of the inbound phase is covered, leading to the best existing classical free-start collision attack, on 19 rounds of the SKINNY-128-384 MMO hashing mode.
Last updated:  2022-09-26
Linear-Time Probabilistic Proofs with Sublinear Verification for Algebraic Automata Over Every Field
Jonathan Bootle, Alessandro Chiesa, Ziyi Guan, Siqi Liu
Interactive oracle proofs (IOPs) are a generalization of probabilistically checkable proofs that can be used to construct succinct arguments. Improvements in the efficiency of IOPs lead to improvements in the efficiency of succinct arguments. Key efficiency goals include achieving provers that run in linear time and verifiers that run in sublinear time, where the time complexity is with respect to the arithmetic complexity of proved computations over a finite field $\mathbb{F}$. We consider the problem of constructing IOPs for any given finite field $\mathbb{F}$ with a linear-time prover and polylogarithmic query complexity. Several previous works have achieved these efficiency requirements with $O(1)$ soundness error for NP-complete languages. However, constrained by the soundness error of the sumcheck protocol underlying these constructions, the IOPs achieve linear prover time only for instances in fields of size $\Omega(\log n)$. Recent work (Ron-Zewi and Rothblum, STOC 2022) overcomes this problem, but with linear verification time. We construct IOPs for the algebraic automata problem over any finite field $\mathbb{F}$ with a linear-time prover, polylogarithmic query complexity, and sublinear verification complexity. We additionally prove a similar result to Ron-Zewi and Rothblum for the NP-complete language R1CS using different techniques. The IOPs imply succinct arguments for (nondeterministic) arithmetic computations over any finite field with linear-time proving (given black-box access to a linear-time collision-resistant hash function). Inspired by recent constructions of reverse-multiplication-friendly embeddings, our IOP constructions embed problem instances over small fields into larger fields and adapt previous IOP constructions to the new instances. The IOP provers are modelled as random access machines and use precomputation techniques to achieve linear prover time. In this way, we avoid having to replace the sumcheck protocol.
Last updated:  2022-11-26
Exploring Integrity of AEADs with Faults: Definitions and Constructions
Sayandeep Saha, Mustafa Khairallah, Thomas Peyrin
Implementation-based attacks are major concerns for modern cryptography. For symmetric-key cryptography, a significant amount of exploration has taken place in this regard for primitives such as block ciphers. Concerning symmetric-key operating modes, such as Authenticated Encryption with Associated Data (AEAD), the state-of-the-art mainly addresses the passive Side-Channel Attacks (SCA) in the form of leakage resilient cryptography. So far, only a handful of works address Fault Attacks (FA) in the context of AEADs concerning the fundamental properties – integrity and confidentiality. In this paper, we address this gap by exploring mode-level issues arising due to FAs. We emphasize that FAs can be fatal even in cases where the adversary does not aim to extract the long-term secret, but rather tries to violate the basic security requirements (integrity and confidentiality). Notably, we show novel integrity attack examples on state-of-the-art AEAD constructions and even on a prior fault-resilient AEAD construction called SIV$. On the constructive side, we first present new security notions of fault-resilience, for PRF (frPRF), MAC (frMAC) and AEAD (frAE), the latter of which can be seen as an improved version of the notion introduced by Fischlin and Gunther at CT-RSA’20. Then, we propose new constructions to turn a frPRF into a fault-resilient MAC frMAC (hash-then-frPRF) and into a fault-resilient AEAD frAE (MAC-then-Encrypt-then-MAC or MEM).
Last updated:  2023-02-07
SIDH with masked torsion point images
Tako Boris Fouotsa
We propose a countermeasure to the Castryck-Decru attack on SIDH. The attack heavily relies on the images of torsion points. The main idea of our countermeasure consists in masking the torsion point images in SIDH in such a way that they are not exploitable in the attack, but can still be used to complete the key exchange. This comes with a change in the form of the field characteristic and a considerable increase in the parameter sizes.
Last updated:  2022-12-28
Secure and Private Distributed Source Coding with Private Keys and Decoder Side Information
Onur Gunlu, Rafael F. Schaefer, Holger Boche, H. Vincent Poor
The distributed source coding problem is extended by positing that noisy measurements of a remote source are the correlated random variables that should be reconstructed at another terminal. We consider a secure and private distributed lossy source coding problem with two encoders and one decoder such that (i) all terminals noncausally observe a noisy measurement of the remote source; (ii) a private key is available to each legitimate encoder and all private keys are available to the decoder; (iii) rate-limited noiseless communication links are available between each encoder and the decoder; (iv) the amount of information leakage to an eavesdropper about the correlated random variables is defined as secrecy leakage, and privacy leakage is measured with respect to the remote source; and (v) two passive attack scenarios are considered, where a strong eavesdropper can access both communication links and a weak eavesdropper can only choose one of the links to access. Inner and outer bounds on the rate regions defined under secrecy, privacy, communication, and distortion constraints are derived for both passive attack scenarios. When one or both sources should be reconstructed reliably, the rate region bounds are simplified.
Last updated:  2022-08-13
Double-Odd Jacobi Quartic
Thomas Pornin
Double-odd curves are curves with order equal to 2 modulo 4. A prime order group with complete formulas and a canonical encoding/decoding process could previously be built over a double-odd curve. In this paper, we reformulate such curves as a specific case of the Jacobi quartic. This allows using slightly faster formulas for point operations, as well as defining a more efficient encoding format, so that decoding and encoding have the same cost as classic point compression (decoding is one square root, encoding is one inversion). We define the prime-order groups jq255e and jq255s as the application of that modified encoding to the do255e and do255s groups. We furthermore define an optimized signature mechanism on these groups, that offers shorter signatures (48 bytes instead of the usual 64 bytes, for 128-bit security) and makes signature verification faster (down to less than 83000 cycles on an Intel x86 Coffee Lake core).
Last updated:  2022-08-13
How to Verifiably Encrypt Many Bits for an Election?
Henri Devillez, Olivier Pereira, Thomas Peters
The verifiable encryption of bits is the main computational step that is needed to prepare ballots in many practical voting protocols. Its computational load can also be a practical bottleneck, preventing the deployment of some protocols or requiring the use of computing clusters. We investigate the question of producing many verifiably encrypted bits in an efficient and portable way, using as a baseline the protocol that is in use in essentially all modern voting systems and libraries supporting homomorphic voting, including ElectionGuard, a state-of-the-art open source voting SDK deployed in government elections. Combining fixed base exponentiation techniques and new encryption and ZK proof mechanisms, we obtain speed-ups by more than one order of magnitude against standard implementations. Our exploration requires balancing conflicting optimization strategies, and the use of asymptotically less efficient protocols that turn out to be very effective in practice. Several of our proposed improvements are now on the ElectionGuard roadmap.
Last updated:  2022-09-19
RapidUp: Multi-Domain Permutation Protocol for Lookup Tables
Héctor Masip Ardevol, Jordi Baylina Melé, Daniel Lubarov, José L. Muñoz-Tapia
SNARKs for some standard cryptographic primitives tend to be designed with plenty of SNARK-unfriendly operations such as XOR. Previous protocols such as [GW20] worked around this problem by introducing lookup arguments. However, these protocols were only applicable within the same circuit. RapidUp is a protocol that solves this limitation by unfolding the grand-product polynomial into two (equivalent) polynomials of the same size. Moreover, a generalization of previous protocols is presented through the introduction of selectors.
Last updated:  2022-10-04
Post Quantum Design in SPDM for Device Authentication and Key Establishment
Jiewen Yao, Krystian Matusiewicz, Vincent Zimmer
The Security Protocol and Data Model (SPDM) defines flows to authenticate hardware identity of a computing device. It also allows for establishing a secure session for confidential and integrity protected data communication between two devices. The present version of SPDM, namely version 1.2, relies on traditional asymmetric cryptographic algorithms that are known to be vulnerable to quantum attacks. This paper describes the means by which support for post-quantum (PQ) cryptography can be added to the SPDM protocol in order to enable SPDM for the upcoming world of quantum computing. We examine SPDM 1.2 protocol and discuss how to negotiate the use of post-quantum cryptography algorithms (PQC), how to support device identity reporting, means to authenticate the device, and how to establish a secure session when using PQC algorithms. We consider so called hybrid modes where both classical and PQC algorithms are used to achieve security properties as these modes are important during the transition period. We also share our experience with implementing PQ-SPDM and provide benchmarks for some of the winning NIST PQC algorithms.
Last updated:  2022-08-12
Practical Sublinear Proofs for R1CS from Lattices
Ngoc Khanh Nguyen, Gregor Seiler
We propose a practical sublinear-size zero-knowledge proof system for Rank-1 Constraint Satisfaction (R1CS) based on lattices. The proof size scales asymptotically with the square root of the witness size. Concretely, the size becomes $2$-$3$ times smaller than Ligero (ACM CCS 2017), which also exhibits square root scaling, for large instances of R1CS. At the core lies an interactive variant of the Schwartz-Zippel Lemma that might be of independent interest.
Last updated:  2022-08-12
Perfectly Secure Synchronous MPC with Asynchronous Fallback Guarantees Against General Adversaries
Ananya Appan, Anirudh Chandramouli, Ashish Choudhury
In this work, we study perfectly-secure multi-party computation (MPC) against general (non-threshold) adversaries. Known protocols in a synchronous network are secure against $Q^{(3)}$ adversary structures, while in an asynchronous network, known protocols are secure against $Q^{(4)}$ adversary structures. A natural question is whether there exists a single protocol which remains secure against $Q^{(3)}$ and $Q^{(4)}$ adversary structures in a synchronous and in an asynchronous network respectively, where the parties are not aware of the network type. We design the first such best-of-both-worlds protocol against general adversaries. Our result generalizes the result of Appan, Chandramouli and Choudhury (PODC 2022), which presents a best-of-both-worlds perfectly-secure protocol against threshold adversaries. To design our protocol, we present two important building blocks which are of independent interest. The first building block is a best-of-both-worlds perfectly-secure Byzantine agreement (BA) protocol for $Q^{(3)}$ adversary structures, which remains secure both in a synchronous, as well as an asynchronous network. The second building block is a best-of-both-worlds perfectly-secure verifiable secret-sharing (VSS) protocol, which remains secure against $Q^{(3)}$ and $Q^{(4)}$ adversary structures in a synchronous network and an asynchronous network respectively.
Last updated:  2022-08-12
Post-Quantum Multi-Recipient Public Key Encryption
Joël Alwen, Dominik Hartmann, Eike Kiltz, Marta Mularczyk, Peter Schwabe
A multi-message multi-recipient PKE (mmPKE) encrypts a batch of messages, in one go, to a corresponding set of independently chosen receiver public keys. The resulting "multi-recipient ciphertext" can then be reduced (by any 3rd party) to a shorter, receiver-specific, "individual ciphertext". Finally, to recover the $i$-th message in the batch from their individual ciphertext, the $i$-th receiver only needs their own decryption key. A special case of mmPKE is multi-recipient PKE (mPKE) where all receivers are sent the same message. By treating (m)mPKE and their KEM counterparts as stand-alone primitives we allow for more efficient constructions than trivially composing individual PKE/KEM instances. This is especially valuable in the post-quantum setting, where PKE/KEM ciphertexts and public keys tend to be far larger than their classic counterparts. In this work we describe a collection of new results around batched KEMs and PKE. We provide both classic and post-quantum proofs for all results. Our results are geared towards practical constructions and applications (for example in the domain of PQ-secure group messaging). Concretely, our results include a new non-adaptive to adaptive compiler for CPA-secure mKEMs resulting in public keys roughly half the size of the previous state-of-the-art [Hashimoto et al., CCS'21]. We also prove their FO transform for mKEMs to be secure in the quantum random oracle model. We provide the first mKEM combiner as well as two mmPKE constructions. The first is an arbitrary message-length black-box construction from an mKEM (e.g. one produced by combining a PQ with a classic mKEM). The second is optimized for short messages and achieves hybrid PQ/classic security more directly. When encrypting $n$ short messages (e.g. as in several recent mmPKE applications) at 256 bits of security, the mmPKE ciphertexts are $144 n$ bytes shorter than in the generic construction. Finally, we provide an optimized implementation of the (CCA-secure) mKEM construction based on the NIST PQC winner Kyber and report benchmarks showing a significant speedup for batched encapsulation and up to 79% savings in ciphertext size compared to a naive solution.
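To fix the mmPKE interface in mind, here is a toy sketch of the syntax (batch encryption, extraction of an individual ciphertext, decryption) instantiated with the trivial composition of per-recipient PKE instances that the abstract uses as its baseline. The placeholder `ToyPKE` is an insecure stand-in invented for this sketch, not any scheme from the paper.

```python
# Sketch of the mmPKE interface via the trivial composition baseline: encrypt
# each message under its recipient's PKE key, concatenate, and let "extract"
# project out the i-th component. ToyPKE is an insecure placeholder used only
# to make the interface runnable; it is not a construction from the paper.
import secrets

class ToyPKE:
    """Insecure one-time-pad 'PKE' stand-in (interface illustration only)."""
    @staticmethod
    def keygen():
        sk = secrets.token_bytes(32)
        return sk, sk                     # (pk, sk); obviously not a real PKE

    @staticmethod
    def enc(pk: bytes, msg: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(msg.ljust(32, b"\0"), pk))

    @staticmethod
    def dec(sk: bytes, ct: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(ct, sk)).rstrip(b"\0")

def mm_encrypt(pks, msgs):
    """Batch-encrypt msgs[i] to pks[i]; returns the multi-recipient ciphertext."""
    return [ToyPKE.enc(pk, m) for pk, m in zip(pks, msgs)]

def mm_extract(multi_ct, i):
    """Reduce the multi-recipient ciphertext to receiver i's individual ciphertext."""
    return multi_ct[i]

keys = [ToyPKE.keygen() for _ in range(3)]
cts = mm_encrypt([pk for pk, _ in keys], [b"msg-0", b"msg-1", b"msg-2"])
assert ToyPKE.dec(keys[1][1], mm_extract(cts, 1)) == b"msg-1"
```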
Last updated:  2022-09-22
On UC-Secure Range Extension and Batch Verification for ECVRF
Christian Badertscher, Peter Gaži, Iñigo Querejeta-Azurmendi, Alexander Russell
Verifiable random functions (Micali et al., FOCS'99) allow a key-pair holder to verifiably evaluate a pseudorandom function under that particular key pair. These primitives enable fair and verifiable pseudorandom lotteries, essential in proof-of-stake blockchains such as Algorand and Cardano, and are being used to secure billions of dollars of capital. As a result, there is an ongoing IRTF effort to standardize VRFs, with a proposed ECVRF based on elliptic-curve cryptography appearing as the most promising candidate. In this work, towards understanding the general security of VRFs and in particular the ECVRF construction, we provide an ideal functionality in the Universal Composability (UC) framework (Canetti, FOCS'01) that captures VRF security, and show that ECVRF UC-realizes this functionality. We further show how the range of a VRF can generically be extended in a modular fashion based on the above functionality. This observation is particularly useful for protocols such as Ouroboros since it allows reducing the number of VRF evaluations (per slot) and VRF verifications (per block) from two to one, at the price of additional (but much faster) hash-function evaluations. Finally, we study batch verification in the context of VRFs. We provide a UC-functionality capturing a VRF with batch-verification capability, and propose modifications to ECVRF that allow for this feature. We again prove that our proposal UC-realizes the desired functionality. We provide a performance analysis showing that verification can yield a factor-two speedup for batches with 1024 proofs, at the cost of increasing the proof size from 80 to 128 bytes.
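One way to picture the generic range extension mentioned above is to expand a single VRF output into several pseudorandom blocks with cheap counter-indexed hashing, so that one VRF evaluation per slot can serve multiple purposes. The sketch below is my own hedged illustration of that idea only; it is not the exact construction or the UC security analysis from the paper.

```python
# Illustrative counter-mode expansion of a single VRF output into a wider
# pseudorandom string. This only sketches the "extend the range with extra
# hash evaluations" idea; the paper's construction and its UC analysis are
# more careful than this.
import hashlib

def extend_vrf_output(vrf_output: bytes, out_len: int) -> bytes:
    blocks, counter = [], 0
    while sum(len(b) for b in blocks) < out_len:
        blocks.append(hashlib.sha256(vrf_output + counter.to_bytes(4, "big")).digest())
        counter += 1
    return b"".join(blocks)[:out_len]

beta = bytes(32)                      # stand-in for a 32-byte VRF output
wide = extend_vrf_output(beta, 96)    # e.g. three 32-byte derived values
print(len(wide), wide[:8].hex())
```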
Last updated:  2022-08-11
Oblivious Revocable Functions and Encrypted Indexing
Kevin Lewi, Jon Millican, Ananth Raghunathan, Arnab Roy
Many online applications, such as online file backup services, support the sharing of indexed data between a set of devices. These systems may offer client-side encryption of the data, so that the stored data is inaccessible to the online host. A potentially desirable goal in this setting would be to protect not just the contents of the backed-up files, but also their identifiers. However, as these identifiers are typically used for indexing, a deterministic consistent mapping across devices is necessary. Additionally, in a multi-device setting, it may be desirable to maintain an ability to revoke a device’s access—e.g. through rotating encryption keys for new data. We present a new primitive, called the Oblivious Revocable Function (ORF), which operates in the above setting and allows identifiers to be obliviously mapped to a consistent value across multiple devices, while enabling the server to permanently remove an individual device’s ability to map values. This permits a stronger threat model against metadata, in which metadata cannot be derived from identifiers by a revoked device colluding with the service provider, so long as the service provider was honest at the instant of revocation. We describe a simple Diffie-Hellman-based construction that achieves ORFs and provide a proof of security under the UC framework.
Last updated:  2022-08-11
A Study of Error Floor Behavior in QC-MDPC Codes
Sarah Arpin, Tyler Raven Billingsley, Daniel Rayor Hast, Jun Bo Lau, Ray Perlner, Angela Robinson
We present experimental findings on the decoding failure rate (DFR) of BIKE, a fourth-round candidate in the NIST Post-Quantum Standardization process, at the 20-bit security level. We select parameters according to BIKE design principles and conduct a series of experiments. We directly compute the average DFR on a range of BIKE block sizes and identify both the waterfall and error floor regions of the DFR curve. We then study the influence on the average DFR of three sets $\mathcal{C}$, $\mathcal{N}$, and $2\mathcal{N}$ of near-codewords --- vectors of low weight that induce syndromes of low weight --- defined by Vasseur in 2021. We find that error vectors leading to decoding failures have small maximum support intersection with elements of these sets; further, the distribution of intersections is quite similar to that of sampling random error vectors and counting the intersections with $\mathcal{C}$, $\mathcal{N}$, and $2\mathcal{N}$. Our results indicate that these three sets are not sufficient in classifying vectors expected to cause decoding failures. Finally, we study the role of syndrome weight on the decoding behavior and conclude that the set of error vectors that lead to decoding failures differ from random vectors by having low syndrome weight.
Last updated:  2022-08-11
Weak Subtweakeys in SKINNY
Daniël Kuijsters, Denise Verbakel, Joan Daemen
Lightweight cryptography is characterized by the need for low implementation cost, while still providing sufficient security. This requires careful analysis of building blocks and their composition. SKINNY is an ISO/IEC standardized family of tweakable block ciphers and is used in the NIST lightweight cryptography standardization process finalist Romulus. We present non-trivial linear approximations of two-round SKINNY that have correlation one or minus one and that hold for a large fraction of all round tweakeys. Moreover, we show how these could have been avoided.
Last updated:  2022-08-11
Lattice-Based Cryptography in Miden VM
Alan Szepieniec, Frederik Vercauteren
This note discusses lattice-based cryptography over the field with $p= 2^{64} - 2^{32} + 1$ elements, with an eye to supporting lattice-based cryptography operations in virtual machines such as Miden VM that operate natively over this field. It discusses how to support Dilithium and Falcon, two lattice-based signature schemes recently selected by the NIST PQC project; and proposes parameters for efficient public key encryption and publicly re-randomizable commitments modulo $p$.
Last updated:  2022-08-11
A framework for constructing Single Secret Leader Election from MPC
Michael Backes, Pascal Berrang, Lucjan Hanzlik, Ivan Pryvalov
The emergence of distributed digital currencies has raised the need for a reliable consensus mechanism. In proof-of-stake cryptocurrencies, the participants periodically choose a closed set of validators, who can vote and append transactions to the blockchain. Each validator can become a leader with the probability proportional to its stake. Keeping the leader private yet unique until it publishes a new block can significantly reduce the attack vector of an adversary and improve the throughput of the network. The problem of Single Secret Leader Election (SSLE) was first formally defined by Boneh et al. in 2020. In this work, we propose a novel framework for constructing SSLE protocols, which relies on secure multi-party computation (MPC) and satisfies the desired security properties. Our framework does not use any shuffle or sort operations and has a computational cost for N parties as low as O(N) of basic MPC operations per party. We improve the state-of-the-art for SSLE protocols that do not assume a trusted setup. Moreover, our SSLE scheme efficiently handles weighted elections. That is, for a total weight S of N parties, the associated costs are only increased by a factor of log S. When the MPC layer is instantiated with techniques based on Shamir’s secret-sharing, our SSLE has a communication cost of O(N^2) which is spread over O(log N) rounds, can tolerate up to t < N/2 of faulty nodes without restarting the protocol, and its security relies on DDH in the random oracle model. When the MPC layer is instantiated with more efficient techniques based on garbled circuits, our SSLE requires all parties to participate, up to N − 1 of which can be malicious, and its security is based on the random oracle model.
Last updated:  2023-01-03
Theoretical Limits of Provable Security Against Model Extraction by Efficient Observational Defenses
Ari Karchmer
Can we hope to provide provable security against model extraction attacks? As a step towards a theoretical study of this question, we unify and abstract a wide range of "observational" model extraction defenses (OMEDs) --- roughly, those that attempt to detect model extraction by analyzing the distribution over the adversary's queries. To accompany the abstract OMED, we define the notion of complete OMEDs --- when benign clients can freely interact with the model --- and sound OMEDs --- when adversarial clients are caught and prevented from reverse engineering the model. Our formalism facilitates a simple argument for obtaining provable security against model extraction by complete and sound OMEDs, using (average-case) hardness assumptions for PAC-learning, in a way that abstracts current techniques in the prior literature. The main result of this work establishes a partial computational incompleteness theorem for the OMED: any efficient OMED for a machine learning model computable by a polynomial size decision tree that satisfies a basic form of completeness cannot satisfy soundness, unless the subexponential Learning Parity with Noise (LPN) assumption does not hold. To prove the incompleteness theorem, we introduce a class of model extraction attacks called natural Covert Learning attacks based on a connection to the Covert Learning model of Canetti and Karchmer (TCC '21), and show that such attacks circumvent any defense within our abstract mechanism in a black-box, nonadaptive way. As a further technical contribution, we extend the Covert Learning algorithm of Canetti and Karchmer to work over any "concise" product distribution (albeit for juntas of a logarithmic number of variables rather than polynomial size decision trees), by showing that the technique of learning with a distributional inverter of Binnendyk et al. (ALT '22) remains viable in the Covert Learning setting.
Last updated:  2022-09-11
Breaking SIDH in polynomial time
Damien Robert
We show that we can break SIDH in classical polynomial time, even with a random starting curve $E_0$.
Last updated:  2022-12-22
RPM: Robust Anonymity at Scale
Donghang Lu, Aniket Kate
This work presents RPM, a scalable anonymous communication protocol suite using secure multiparty computation (MPC) with the offline-online model. We generate random, unknown permutation matrices in a secret-shared fashion and achieve improved (online) performance and the lightest communication and computation overhead for the clients compared to the state-of-the-art robust anonymous communication protocols. Using square-lattice shuffling, we make our protocol scale well as the number of clients increases. We provide three protocol variants, each targeting different input volumes and MPC frameworks/libraries. Moreover, due to the modular design, our protocols can be easily generalized to support more MPC functionalities and security properties as they get developed. We also illustrate how to generalize our protocols to support two-way anonymous communication and secure sorting. We have implemented our protocols using the MP-SPDZ library suite and the benchmarks illustrate that our protocols achieve unprecedented online phase performance with practical offline phases.
Last updated:  2022-08-10
MuSig-L: Lattice-Based Multi-Signature With Single-Round Online Phase
Cecilia Boschini, Akira Takahashi, Mehdi Tibouchi
Multi-signatures are protocols that allow a group of signers to jointly produce a single signature on the same message. In recent years, a number of practical multi-signature schemes have been proposed in the discrete-log setting, such as MuSig2 (CRYPTO'21) and DWMS (CRYPTO'21). The main technical challenge in constructing a multi-signature scheme is to achieve a set of several desirable properties, such as (1) security in the plain public-key (PPK) model, (2) concurrent security, (3) low online round complexity, and (4) key aggregation. However, previous lattice-based, post-quantum counterparts to Schnorr multi-signatures fail to satisfy these properties. In this paper, we introduce MuSig-L, a lattice-based multi-signature scheme simultaneously achieving these design goals for the first time. Unlike the recent, round-efficient proposal of Damgård et al. (PKC'21), which had to rely on lattice-based trapdoor commitments, we do not require any additional primitive in the protocol, while being able to prove security from the standard module-SIS and LWE assumptions. The resulting output signature of our scheme therefore looks closer to the usual Fiat--Shamir-with-abort signatures.
Last updated:  2022-08-10
Efficient Pseudorandom Correlation Generators from Ring-LPN
Elette Boyle, Geoffroy Couteau, Niv Gilboa, Yuval Ishai, Lisa Kohl, Peter Scholl
Secure multiparty computation can often utilize a trusted source of correlated randomness to achieve better efficiency. A recent line of work, initiated by Boyle et al. (CCS 2018, Crypto 2019), showed how useful forms of correlated randomness can be generated using a cheap, one-time interaction, followed by only "silent" local computation. This is achieved via a pseudorandom correlation generator (PCG), a deterministic function that stretches short correlated seeds into long instances of a target correlation. Previous works constructed concretely efficient PCGs for simple but useful correlations, including random oblivious transfer and vector-OLE, together with efficient protocols to distribute the PCG seed generation. Most of these constructions were based on variants of the Learning Parity with Noise (LPN) assumption. PCGs for other useful correlations had poor asymptotic and concrete efficiency. In this work, we design a new class of efficient PCGs based on different flavors of the ring-LPN assumption. Our new PCGs can generate OLE correlations, authenticated multiplication triples, matrix product correlations, and other types of useful correlations over large fields. These PCGs are more efficient by orders of magnitude than the previous constructions and can be used to improve the preprocessing phase of many existing MPC protocols.
Last updated:  2023-10-15
Finding All Impossible Differentials When Considering the DDT
Kai Hu, Thomas Peyrin, and Meiqin Wang
Impossible differential (ID) cryptanalysis is one of the most important attacks on block ciphers. The Mixed Integer Linear Programming (MILP) model is a popular method to determine whether a specific difference pair is an ID. Unfortunately, due to the huge search space (approximately $2^{2n}$ for a cipher with a block size $n$ bits), we cannot leverage this technique to exhaust all difference pairs, which is a well-known long-standing problem. In this paper, we propose a systematic method to find all IDs for SPN block ciphers. The idea is to partition the whole difference pair space into lots of small disjoint sets, each of which has a representative difference pair. All difference pairs in one small set are possible if its representative pair is possible, and this can be conveniently checked by the MILP model. In this way, the overall search space is drastically reduced to a practical size by excluding the sets containing no IDs. We then examine the remaining difference pairs to identify all IDs (if some IDs exist). If our method cannot find any ID, the target cipher is proved free of ID distinguishers. Our method works especially well for SPN ciphers with block size 64. We apply our method to SKINNY-64 and successfully find all 432 and 12 truncated IDs (we find all IDs but all of them can be assembled into certain truncated IDs) for 11 and 12 rounds, respectively. We also prove, for the first time, that 13-round SKINNY-64 is free of ID distinguishers even when considering the differential transitions through the Difference Distribution Table (DDT). Similarly, we find all 12 truncated IDs (all IDs are assembled into 12 truncated IDs) for 13-round CRAFT and prove there is no ID for 14 rounds. For SbPN cipher GIFT-64, we prove that there is no ID for 8 rounds. For SPN ciphers with larger block sizes, we show that our idea is also useful to strengthen the current search methods. For example, if we consider the Sbox to be ideal and only consider the branch number information of the diffusion matrix, we can find all 6,750 truncated IDs for 6-round Rijndael-192 in 1 second and prove that there is no truncated ID for 7 rounds. Previously, we need to solve approximately $2^{48}$ MILP models to achieve the same goal. For GIFT-128, we exhausted all difference patterns that have an active superbox in the plaintext and ciphertext and proved there is no ID of such patterns for 8 rounds. Although we have searched for a larger or even full space for IDs, no longer ID distinguishers have been found. This implies the reasonableness of the intuition that a small number (usually one or two) of active bits/words at the beginning and end of an ID will be the longest.
Last updated:  2022-08-10
A Complete Characterization of Security for Linicrypt Block Cipher Modes
Tommy Hollenberg, Mike Rosulek, Lawrence Roy
We give characterizations of IND\$-CPA security for a large, natural class of encryption schemes. Specifically, we consider encryption algorithms that invoke a block cipher and otherwise perform linear operations (e.g., XOR and multiplication by fixed field elements) on intermediate values. This class of algorithms corresponds to the Linicrypt model of Carmer & Rosulek (Crypto 2016). Our characterization for this class of encryption schemes is sound but not complete. We then focus on a smaller subclass of block cipher modes, which iterate over the blocks of the plaintext, repeatedly applying the same Linicrypt program. For these Linicrypt block cipher modes, we are able to give a sound and complete characterization of IND\$-CPA security. Our characterization is linear-algebraic in nature and is easy to check for a candidate mode. Interestingly, we prove that a Linicrypt block cipher mode is secure if and only if it is secure against adversaries who choose all-zeroes plaintexts.
Last updated:  2022-08-09
On Non-uniform Security for Black-box Non-Interactive CCA Commitments
Rachit Garg, Dakshita Khurana, George Lu, Brent Waters
We obtain a black-box construction of non-interactive CCA commitments against non-uniform adversaries. This makes black-box use of an appropriate base commitment scheme for small tag spaces, variants of sub-exponential hinting PRG (Koppula and Waters, Crypto 2019) and variants of keyless sub-exponentially collision-resistant hash function with security against non-uniform adversaries (Bitansky, Kalai and Paneth, STOC 2018 and Bitansky and Lin, TCC 2018). All prior works on non-interactive non-malleable or CCA commitments without setup first construct a "base" scheme for a relatively small identity/tag space, and then build a tag amplification compiler to obtain commitments for an exponential-sized space of identities. Prior black-box constructions either add multiple rounds of interaction (Goyal, Lee, Ostrovsky and Visconti, FOCS 2012) or only achieve security against uniform adversaries (Garg, Khurana, Lu and Waters, Eurocrypt 2021). Our key technical contribution is a novel tag amplification compiler for CCA commitments that replaces the non-interactive proof of consistency required in prior work. Our construction satisfies the strongest known definition of non-malleability, i.e., CCA2 (chosen commitment attack) security. In addition to only making black-box use of the base scheme, our construction replaces sub-exponential NIWIs with sub-exponential hinting PRGs, which can be obtained based on assumptions such as (sub-exponential) CDH or LWE.
Last updated:  2023-06-14
Revisiting Algebraic Attacks on MinRank and on the Rank Decoding Problem
Magali Bardet, Pierre Briaud, Maxime Bros, Philippe Gaborit, Jean-Pierre Tillich
The Rank Decoding problem (RD) is at the core of rank-based cryptography. Cryptosystems such as ROLLO and RQC, which made it to the second round of the NIST Post-Quantum Standardization Process, as well as the Durandal signature scheme, rely on it or its variants. This problem can also be seen as a structured version of MinRank, which is ubiquitous in multivariate cryptography. Recently, [1,2] proposed attacks based on two new algebraic modelings, namely the MaxMinors modeling which is specific to RD and the Support-Minors modeling which applies to MinRank in general. Both significantly improved the complexity of algebraic attacks on these two problems. In the case of RD and contrarily to what was believed up to now, these new attacks were shown to be able to outperform combinatorial attacks, even for very small field sizes. However, we prove here that the analysis performed in [2] for one of these attacks, which consists in mixing the MaxMinors modeling with the Support-Minors modeling to solve RD, is too optimistic and leads to underestimating the overall complexity. This is done by exhibiting linear dependencies between these equations and by considering an $\mathbb{F}_{q^m}$ version of these modelings which turns out to be instrumental for getting a better understanding of both systems. Moreover, by working over $\mathbb{F}_{q^m}$ rather than over $\mathbb{F}_q$, we are able to drastically reduce the number of variables in the system and we (i) still keep enough algebraic equations to be able to solve the system, (ii) are able to analyze rigorously the complexity of our approach. This new approach may improve the older MaxMinors approach on RD from [1,2] for certain parameters. We also introduce a new hybrid approach on the Support-Minors system whose impact is much more general since it applies to any MinRank problem. This technique improves significantly the complexity of the Support-Minors approach for small to moderate field sizes. References: [1] An Algebraic Attack on Rank Metric Code-Based Cryptosystems, Bardet, Briaud, Bros, Gaborit, Neiger, Ruatta, Tillich, EUROCRYPT 2020. [2] Improvements of Algebraic Attacks for solving the Rank Decoding and MinRank problems, Bardet, Bros, Cabarcas, Gaborit, Perlner, Smith-Tone, Tillich, Verbel, ASIACRYPT 2020.
Last updated:  2022-08-09
Oblivious Extractors and Improved Security in Biometric-based Authentication Systems
Ivan De Oliveira Nunes, Peter Rindal, Maliheh Shirvanian
We study the problem of biometric-based authentication with template confidentiality. Typical schemes addressing this problem, such as Fuzzy Vaults (FV) and Fuzzy Extractors (FE), allow a server, aka Authenticator, to store “random looking” Helper Data (HD) instead of biometric templates in the clear. HD hides information about the corresponding biometric while still enabling secure biometric-based authentication. Even though these schemes reduce the risk of storing biometric data, their corresponding authentication procedures typically require sending the HD (stored by the Authenticator) to a client who claims a given identity. The premise here is that only the identity owner - i.e., the person whose biometric was sampled to originally generate the HD - is able to provide the same biometric to reconstruct the proper cryptographic key from HD. As a side effect, the ability to freely retrieve HD, by simply claiming a given identity, allows invested adversaries to perform offline statistical attacks (a biometric analog for dictionary attacks on hashed passwords) or re-usability attacks (if the FE scheme is not reusable) on the HD to eventually recover the user’s biometric. In this work we develop Oblivious Extractors: a new construction that allows an Authenticator to authenticate a user without requiring either the user to send a biometric to the Authenticator or the server to send the HD to the client. Oblivious Extractors provide concrete security advantages for biometric-based authentication systems. From the perspective of secure storage, an oblivious extractor is as secure as its non-oblivious fuzzy extractor counterpart. In addition, it enhances security against the aforementioned statistical and re-usability attacks. To demonstrate the construction’s practicality, we implement and evaluate a biometric-based authentication prototype using Oblivious Extractors.
Last updated:  2022-08-19
FIDO2, CTAP 2.1, and WebAuthn 2: Provable Security and Post-Quantum Instantiation
Nina Bindel, Cas Cremers, Mang Zhao
The FIDO2 protocol is a globally used standard for passwordless authentication, building on an alliance between major players in the online authentication space. While already widely deployed, the standard is still under active development. Since version 2.1 of its CTAP sub-protocol, FIDO2 can potentially be instantiated with post-quantum secure primitives. We provide the first formal security analysis of FIDO2 with the CTAP 2.1 and WebAuthn 2 sub-protocols. Our security models build on work by Barbosa et al. for their analysis of FIDO2 with CTAP 2.0 and WebAuthn 1, which we extend in several ways. First, we provide a more fine-grained security model that allows us to prove more relevant protocol properties, such as guarantees about token binding agreement, the None attestation mode, and user verification. Second, we can prove post-quantum security for FIDO2 under certain conditions and minor protocol extensions. Finally, we show that for some threat models, the downgrade resilience of FIDO2 can be improved, and show how to achieve this with a simple modification.
Last updated:  2022-08-11
New Unbounded Verifiable Data Streaming for Batch Query with Almost Optimal Overhead
Jiaojiao Wu, Jianfeng Wang, Xinwei Yong, Xinyi Huang, Xiaofeng Chen
Verifiable Data Streaming (VDS) enables a resource-limited client to continuously outsource data to an untrusted server in a sequential manner while supporting public integrity verification and efficient update. However, most existing VDS schemes require the client to generate all proofs in advance and store them at the server, which leads to a heavy computation burden on the client. In addition, all the previous VDS schemes can perform batch query (i.e., retrieving multiple data entries at once), but are subject to communication cost linear in $l$, where $l$ is the number of queried data entries. In this paper, we first introduce a new cryptographic primitive named Double-trapdoor Chameleon Vector Commitment (DCVC), and then present an unbounded VDS scheme $\mathsf{VDS_1}$ with optimal communication cost in the random oracle model from an aggregatable cross-commitment variant of DCVC. Furthermore, we propose, to the best of our knowledge, the first unbounded VDS scheme $\mathsf{VDS}_2$ with optimal communication and storage overhead in the standard model by integrating a Double-trapdoor Chameleon Hash Function (DCH) and a Key-Value Commitment (KVC). Both of our schemes enjoy a constant-size public key. Finally, we demonstrate the efficiency of our two VDS schemes with a comprehensive performance evaluation.
Last updated:  2022-08-08
Maliciously Secure Massively Parallel Computation for All-but-One Corruptions
Rex Fernando, Yuval Gelles, Ilan Komargodski, Elaine Shi
The Massively Parallel Computation (MPC) model gained wide adoption over the last decade. By now, it is widely accepted as the right model for capturing the commonly used programming paradigms (such as MapReduce, Hadoop, and Spark) that utilize parallel computation power to manipulate and analyze huge amounts of data. Motivated by the need to perform large-scale data analytics in a privacy-preserving manner, several recent works have presented generic compilers that transform algorithms in the MPC model into secure counterparts, while preserving various efficiency parameters of the original algorithms. The first paper, due to Chan et al. (ITCS ’20), focused on the honest majority setting. Later, Fernando et al. (TCC ’20) considered the dishonest majority setting. The latter work presented a compiler that transforms generic MPC algorithms into ones which are secure against semi-honest attackers that may control all but one of the parties involved. The security of their resulting algorithm relied on the existence of a PKI and also on rather strong cryptographic assumptions: indistinguishability obfuscation and the circular security of certain LWE-based encryption systems. In this work, we focus on the dishonest majority setting, following Fernando et al. In this setting, the known compilers do not achieve the standard security notion called malicious security, where attackers can arbitrarily deviate from the prescribed protocol. In fact, we show that unless very strong setup assumptions are made (such as a programmable random oracle), it is provably impossible to withstand malicious attackers due to the stringent requirements on space and round complexity. As our main contribution, we complement the above negative result by designing the first general compiler for malicious attackers in the dishonest majority setting. The resulting protocols withstand all-but-one corruptions. Our compiler relies on a simple PKI and a (programmable) random oracle, and is proven secure assuming LWE and SNARKs. Interestingly, even with such strong assumptions, it is rather non-trivial to obtain a secure protocol.
Last updated:  2022-08-25
An attack on SIDH with arbitrary starting curve
Luciano Maino, Chloe Martindale
We present an attack on SIDH which does not require any endomorphism information on the starting curve. Our attack has subexponential complexity thus significantly reducing the security of SIDH and SIKE; our analysis and preliminary implementation suggests that our algorithm will be feasible for the Microsoft challenge parameters $p = 2^{110}3^{67}-1$ on a regular computer. Our attack applies to any isogeny-based cryptosystem that publishes the images of points under the secret isogeny, for example Seta and B-SIDH. It does not apply to CSIDH, CSI-FiSh, or SQISign.
Last updated:  2022-08-08
Parallelizable Delegation from LWE
Cody Freitag, Rafael Pass, Naomi Sirkin
We present the first non-interactive delegation scheme for P with time-tight parallel prover efficiency based on standard hardness assumptions. More precisely, in a time-tight delegation scheme, which we refer to as a SPARG (succinct parallelizable argument), the prover's parallel running time is t + polylog(t), while using only polylog(t) processors and where t is the length of the computation. (In other words, the proof is computed essentially in parallel with the computation, with only some minimal additive overhead in terms of time). Our main results show the existence of a publicly-verifiable, non-interactive SPARG for P assuming polynomial hardness of LWE. Our SPARG construction relies on the elegant recent delegation construction of Choudhuri, Jain, and Jin (FOCS'21) and combines it with techniques from Ephraim et al. (EuroCrypt'20). We next demonstrate how to make our SPARG time-independent, meaning the prover and verifier do not need to know the running time t in advance; as far as we know, this yields the first construction of a time-tight delegation scheme with time-independence based on any hardness assumption. We finally present applications of SPARGs to the construction of VDFs (Boneh et al., Crypto'18), resulting in the first VDF construction from standard polynomial hardness assumptions (namely LWE and the minimal assumption of a sequentially hard function).
Last updated:  2022-08-08
Multi-Input Attribute Based Encryption and Predicate Encryption
Shweta Agrawal, Anshu Yadav, Shota Yamada
Motivated by several new and natural applications, we initiate the study of multi-input predicate encryption (${\sf miPE}$) and further develop multi-input attribute based encryption (${\sf miABE}$). Our contributions are: 1. Formalizing Security: We provide definitions for ${\sf miABE}$ and ${\sf miPE}$ in the {symmetric} key setting and formalize security in the standard indistinguishability (IND) paradigm, against unbounded collusions. 2. Two-input ${\sf ABE}$ for ${\sf NC}_1$ from ${\sf LWE}$ and Pairings: We provide the first constructions for two-input key-policy ${\sf ABE}$ for ${\sf NC}_1$ from ${\sf LWE}$ and pairings. Our construction leverages a surprising connection between techniques recently developed by Agrawal and Yamada (Eurocrypt, 2020) in the context of succinct single-input ciphertext-policy ${\sf ABE}$, to the seemingly unrelated problem of two-input key-policy ${\sf ABE}$. Similarly to Agrawal-Yamada, our construction is proven secure in the bilinear generic group model. By leveraging inner product functional encryption and using (a variant of) the KOALA knowledge assumption, we obtain a construction in the standard model analogously to Agrawal, Wichs and Yamada (TCC, 2020). 3. Heuristic two-input ${\sf ABE}$ for ${\sf P}$ from Lattices: We show that techniques developed for succinct single-input ciphertext-policy ${\sf ABE}$ by Brakerski and Vaikuntanathan (ITCS 2022) can also be seen from the lens of ${\sf miABE}$ and obtain the first two-input key-policy ${\sf ABE}$ from lattices for ${\sf P}$. 4. Heuristic three-input ${\sf ABE}$ and ${\sf PE}$ for ${\sf NC}_1$ from Pairings and Lattices: We obtain the first three-input ${\sf ABE}$ for ${\sf NC}_1$ by harnessing the powers of both the Agrawal-Yamada and the Brakerski-Vaikuntanathan constructions. 5. Multi-input ${\sf ABE}$ to multi-input ${\sf PE}$ via Lockable Obfuscation: We provide a generic compiler that lifts multi-input ${\sf ABE}$ to multi-input ${\sf PE}$ by relying on the hiding properties of Lockable Obfuscation (${\sf LO}$) by Wichs-Zirdelis and Goyal-Koppula-Waters (FOCS 2018), which can be based on ${\sf LWE}$. Our compiler generalizes such a compiler for single-input setting to the much more challenging setting of multiple inputs. By instantiating our compiler with our new two and three-input ${\sf ABE}$ schemes, we obtain the first constructions of two and three-input ${\sf PE}$ schemes. Our constructions of multi-input ${\sf ABE}$ provide the first improvement to the compression factor of non-trivially exponentially efficient Witness Encryption defined by Brakerski et al. (SCN 2018) without relying on compact functional encryption or indistinguishability obfuscation. We believe that the unexpected connection between succinct single-input ciphertext-policy ${\sf ABE}$ and multi-input key-policy ${\sf ABE}$ may lead to a new pathway for witness encryption.
Last updated:  2023-04-26
SIM: Secure Interval Membership Testing and Applications to Secure Comparison
Albert Yu, Donghang Lu, Aniket Kate, Hemanta K. Maji
The offline-online model is a leading paradigm for practical secure multi-party computation (MPC) protocol design that has successfully reduced the overhead for several prevalent privacy-preserving computation functionalities common to diverse application domains. However, the prohibitive overheads associated with secure comparison -- one of these vital functionalities -- often bottlenecks current and envisioned MPC solutions. Indeed, an efficient secure comparison solution has the potential for significant real-world impact through its broad applications. This work identifies and presents SIM, a secure protocol for the functionality of interval membership testing. This security functionality, in particular, facilitates secure less-than-zero testing and, in turn, secure comparison. A key technical challenge is to support a fast online protocol for testing in large rings while keeping the precomputation tractable. Motivated by the map-reduce paradigm, this work introduces the innovation of (1) computing a sequence of intermediate functionalities on a partition of the input into input blocks and (2) securely aggregating the output from these intermediate outputs. This innovation allows controlling the size of the precomputation through a granularity parameter representing these input blocks' size -- enabling application-specific automated compiler optimizations. To demonstrate our protocols' efficiency, we implement and test their performance in a high-demand application: privacy-preserving machine learning. The benchmark results show that switching to our protocols yields significant performance improvement, which indicates that using our protocol in a plug-and-play fashion can improve the performance of various security applications. Our new paradigm of protocol design may be of independent interest because of its potential for extensions to other functionalities of practical interest.
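The block-decomposition idea can be illustrated in the clear; the following Python sketch is only a plaintext illustration of the interval-membership functionality built from per-block results (the block size and count are placeholders), not the secure MPC protocol.

```python
# Plaintext illustration only (not the secure protocol): decompose inputs into
# k-bit blocks, compute a small result per block, and aggregate the per-block
# results.  Here the aggregate functionality is a less-than test, from which
# interval membership in [lo, hi) follows.
def blocks(x, k, n_blocks):
    """Split x into n_blocks blocks of k bits, most significant first."""
    return [(x >> (k * i)) & ((1 << k) - 1) for i in reversed(range(n_blocks))]

def less_than(x, y, k=8, n_blocks=4):
    """x < y, decided at the most significant block where they differ."""
    for bx, by in zip(blocks(x, k, n_blocks), blocks(y, k, n_blocks)):
        if bx != by:
            return bx < by
    return False

def in_interval(x, lo, hi, k=8, n_blocks=4):
    """Membership test for [lo, hi) built from the block-wise comparison."""
    return (not less_than(x, lo, k, n_blocks)) and less_than(x, hi, k, n_blocks)

assert in_interval(1000, 500, 2000)
assert not in_interval(2000, 500, 2000)
```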
Last updated:  2022-10-11
New Low-Memory Algebraic Attacks on LowMC in the Picnic Setting
Fukang Liu, Willi Meier, Santanu Sarkar, Takanori Isobe
The security of the post-quantum signature scheme Picnic is highly related to the difficulty of recovering the secret key of LowMC from a single plaintext-ciphertext pair. Since Picnic is one of the alternate third-round candidates in NIST post-quantum cryptography standardization process, it has become urgent and important to evaluate the security of LowMC in the Picnic setting. The best attacks on LowMC with full S-box layers used in Picnic3 were achieved with Dinur's algorithm. For LowMC with partial nonlinear layers, e.g. 10 S-boxes per round adopted in Picnic2, the best attacks on LowMC were published by Banik et al. with the meet-in-the-middle (MITM) method. In this paper, we improve the attacks on LowMC in a model where memory consumption is costly. First, a new attack on 3-round LowMC with full S-box layers with negligible memory complexity is found, which can outperform Bouillaguet et al.'s fast exhaustive search attack and can achieve better time-memory tradeoffs than Dinur's algorithm. Second, we extend the 3-round attack to 4 rounds to significantly reduce the memory complexity of Dinur's algorithm at the sacrifice of a small factor of time complexity. For LowMC instances with 1 S-box per round, our attacks are shown to be much faster than the MITM attacks. For LowMC instances with 10 S-boxes per round, we can reduce the memory complexity from 32GB ($2^{38}$ bits) to only 256KB ($2^{21}$ bits) using our new algebraic attacks rather than the MITM attacks, while the time complexity of our attacks is about $2^{3.2}\sim 2^{5}$ times higher than that of the MITM attacks. A notable feature of our new attacks (apart from the 4-round attack) is their simplicity. Specifically, only some basic linear algebra is required to understand them and they can be easily implemented.
Last updated:  2022-09-19
Practical Statistically-Sound Proofs of Exponentiation in any Group
Charlotte Hoffmann, Pavel Hubáček, Chethan Kamath, Karen Klein, Krzysztof Pietrzak
For a group $\mathbb{G}$ of unknown order, a *Proof of Exponentiation* (PoE) allows a prover to convince a verifier that a tuple $(y,x,q,T)\in \mathbb{G}^2\times\mathbb{N}^2$ satisfies $y=x^{q^T}$. PoEs have recently found exciting applications in constructions of verifiable delay functions and succinct arguments of knowledge. The current PoEs that are practical in terms of proof-size only provide restricted soundness guarantees: Wesolowski's protocol (Journal of Cryptology 2020) is only computationally-sound (i.e., it is an argument), whereas Pietrzak's protocol (ITCS 2019) is statistically-sound only in groups that come with the promise of not having any small subgroups. On the other hand, the only statistically-sound PoE in *arbitrary* groups of unknown order is due to Block et al. (CRYPTO 2021), and it can be seen as an elaborate parallel repetition of Pietrzak's PoE. Therefore, to achieve $\lambda$ bits of security, say $\lambda=80$, the number of repetitions required -- and hence the (multiplicative) overhead incurred in proof-size -- is as large as $\lambda$. In this work, we propose a statistically-sound PoE for arbitrary groups for the case where the exponent $q$ is the product of all primes up to some bound $B$. For such a structured exponent, we show that it suffices to run only $\lambda/\log(B)$ parallel instances of Pietrzak's PoE. This reduces the concrete proof-size compared to Block et al. by an order of magnitude. Furthermore, we show that in the known applications where PoEs are used as a building block such structured exponents are viable. Finally, we also discuss batching of our PoE, showing that many proofs (for the same $\mathbb{G}$ and $q$ but different $x$ and $T$) can be batched by adding only a single element to the proof per additional statement.
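A toy Python sketch of the statement being proven and of the repetition count quoted above (toy group and parameters chosen for illustration only; this is a naive re-computation, not a PoE implementation):

```python
# Illustrative sketch with toy parameters: the PoE statement y = x^(q^T) for a
# structured exponent q = product of all primes up to B, and the repetition
# count lambda / log2(B) from the abstract.
from math import log2

def primes_up_to(B):
    sieve = [True] * (B + 1)
    sieve[:2] = [False, False]
    for i in range(2, int(B ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def primorial(B):
    q = 1
    for p in primes_up_to(B):
        q *= p
    return q

# Toy "group of unknown order": Z_N^* for a small composite N (illustration only).
N, x, T, B = 13 * 17, 5, 3, 10
q = primorial(B)                 # q = 2 * 3 * 5 * 7 = 210
y = pow(x, q ** T, N)            # the statement (y, x, q, T) the prover must justify
assert pow(x, q ** T, N) == y    # naive re-computation; a PoE lets a verifier avoid this cost

# Repetitions needed for 80-bit statistical soundness with B = 2^20:
lam, B_real = 80, 2 ** 20
print(lam / log2(B_real))        # 4.0 parallel instances of Pietrzak's PoE instead of 80
```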
Last updated:  2024-02-17
Uncle Maker: (Time)Stamping Out The Competition in Ethereum
Aviv Yaish, Gilad Stern, and Aviv Zohar
We present an attack on Ethereum's consensus mechanism which can be used by miners to obtain consistently higher mining rewards compared to the honest protocol. This attack is novel in that it does not entail withholding blocks or any behavior which has a non-zero probability of earning less than mining honestly, in contrast with the existing literature. This risk-less attack relies instead on manipulating block timestamps, and carefully choosing whether and when to do so. We present this attack as an algorithm, which we then analyze to evaluate the revenue a miner obtains from it, and its effect on a miner's absolute and relative share of the main-chain blocks. The attack allows an attacker to replace competitors' main-chain blocks after the fact with a block of its own, thus causing the replaced block's miner to lose all transaction fees for the transactions contained within the block, which will be demoted from the main-chain. This block, although ``kicked-out'' of the main-chain, will still be eligible to be referred to by other main-chain blocks, thus becoming what is commonly called in Ethereum an uncle. We proceed by defining multiple variants of this attack, and assessing whether any of these attacks has been performed in the wild. Surprisingly, we find that this is indeed true, making this the first case of a confirmed consensus-level manipulation performed on a major cryptocurrency. Additionally, we implement a variant of this attack as a patch for geth, Ethereum's most popular client, making it the first consensus-level attack on Ethereum which is implemented as a patch. Finally, we suggest concrete fixes for Ethereum's protocol and implement them as a patch for geth which can be adopted quickly to mitigate the attack and its variants.
Last updated:  2023-02-22
Masked-degree SIDH
Tomoki Moriya
Isogeny-based cryptography is one of the candidates for post-quantum cryptography. SIDH is a compact and efficient isogeny-based key exchange, and SIKE, the SIDH-based key encapsulation mechanism, remains in Round 4 of the NIST PQC process. However, due to the brilliant attack of Castryck and Decru, the original SIDH is broken in polynomial time (with heuristics). Breaking the original SIDH relies on three important pieces of information in the public key: information about the endomorphism ring of the starting curve, some image points under a cyclic hidden isogeny, and the degree of the isogeny. In this paper, we propose a new isogeny-based scheme named \textit{masked-degree SIDH}. This scheme is a variant of SIDH that masks most information about the degrees of the hidden isogenies, and it is the first attempt to counter the Castryck--Decru attack. The main idea for masking degrees is to use many primes in the isogeny computation, which allows the degree to be more flexible. Though the prime $p$ for this scheme is slightly larger than that of SIDH, the scheme resists current attacks that exploit the degrees of isogenies, such as the attack of Castryck and Decru. In our analysis, the most effective attack on masked-degree SIDH has $\tilde{O}(p^{1/(8\log_2{(\log_2{p})})})$ time complexity on classical computers and $\tilde{O}(p^{1/(16\log_2{(\log_2{p})})})$ time complexity on quantum computers.
Last updated:  2022-08-06
Time-Deniable Signatures
Gabrielle Beck, Arka Rai Choudhuri, Matthew Green, Abhishek Jain, Pratyush Ranjan Tiwari
In this work we propose time-deniable signatures (TDS), a new primitive that facilitates deniable authentication in protocols such as DKIM-signed email. As with traditional signatures, TDS provide strong authenticity for message content, at least for a sender-chosen period of time. Once this time period has elapsed, however, time-deniable signatures can be forged by any party who obtains a signature. This forgery property ensures that signatures serve a useful authentication purpose for a bounded time period, while also allowing signers to plausibly disavow the creation of older signed content. Most critically, and unlike many past proposals for deniable authentication, TDS do not require interaction with the receiver or the deployment of any persistent cryptographic infrastructure or services beyond the signing process (e.g., APIs to publish secrets or author timestamp certificates.) We first investigate the security definitions for time-deniability, demonstrating that past definitional attempts are insufficient (and indeed, allow for broken signature schemes.) We then propose an efficient construction of TDS based on well-studied assumptions.
Last updated:  2022-08-06
PERKS: Persistent and Distributed Key Acquisition for Secure Storage from Passwords
Gareth T. Davies, Jeroen Pijnenburg
We investigate how users of instant messaging (IM) services can acquire strong encryption keys to back up their messages and media with strong cryptographic guarantees. Many IM users regularly change their devices and use multiple devices simultaneously, ruling out any long-term secret storage. Extending the end-to-end encryption guarantees from just message communication to also incorporate backups has so far required either some trust in an IM or outsourced storage provider, or use of costly third-party encryption tools with unclear security guarantees. Recent works have proposed solutions for password-protected key material, however all require one or more servers to generate and/or store per-user information, inevitably invoking a cost to the users. We define distributed key acquisition (DKA) as the primitive for the task at hand, where a user interacts with one or more servers to acquire a strong cryptographic key, and both user and server are required to store as little as possible. We present a construction framework that we call PERKS---Password-based Establishment of Random Keys for Storage---providing efficient, modular and simple protocols that utilize Oblivious Pseudorandom Functions (OPRFs) in a distributed manner with minimal storage by the user (just the password) and servers (a single global key for all users). Along the way we introduce a formal treatment of DKA, and provide proofs of security for our constructions in their various flavours. Our approach enables key rotation by the OPRF servers, and for this we incorporate updatable encryption. Finally, we show how our constructions fit neatly with recent research on encrypted outsourced storage to provide strong security guarantees for the outsourced ciphertexts.
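A minimal sketch of the kind of OPRF-based key acquisition described above, using a toy 2HashDH-style OPRF over a small safe-prime group (illustrative only; this is not the paper's concrete instantiation, and the parameters are far too small to be secure):

```python
import hashlib, secrets

# Toy 2HashDH-style OPRF over a small safe-prime group (illustration only;
# real instantiations use a proper prime-order group and constant-time code).
p = 10007                      # safe prime: p = 2*q + 1
q = (p - 1) // 2               # order of the subgroup of quadratic residues

def hash_to_group(pw: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(pw).digest(), "big")
    return pow(h % (p - 1) + 1, 2, p)          # square into the order-q subgroup

# Server side: a single global OPRF key k, shared across all users.
k = secrets.randbelow(q - 1) + 1

def server_evaluate(blinded: int) -> int:
    return pow(blinded, k, p)

# Client side: derives the storage key from the password alone.
def client_acquire_key(pw: bytes) -> bytes:
    h = hash_to_group(pw)
    r = secrets.randbelow(q - 1) + 1
    blinded = pow(h, r, p)                     # sent to the server
    z = server_evaluate(blinded)               # server response
    unblinded = pow(z, pow(r, -1, q), p)       # equals h^k, independent of r
    return hashlib.sha256(pw + unblinded.to_bytes(4, "big")).digest()

k1 = client_acquire_key(b"correct horse battery staple")
k2 = client_acquire_key(b"correct horse battery staple")
assert k1 == k2                                # blinding does not change the derived key
```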
Last updated:  2022-09-25
Public Key Authenticated Encryption with Keyword Search from LWE
Leixiao Cheng, Fei Meng
Public key encryption with keyword search (PEKS) inherently suffers from the inside keyword guessing attack. To resist against this attack, Huang et al. proposed the public key authenticated encryption with keyword search (PAEKS), where the sender not only encrypts a keyword, but also authenticates it. To further resist against quantum attacks, Liu et al. proposed a generic construction of PAEKS and the first quantum-resistant PAEKS instantiation based on lattices. Later, Emura pointed out some issues in Liu et al.'s construction and proposed a new generic construction of PAEKS. The basic construction methodology of Liu et al. and Emura is the same, i.e., each keyword is converted into an extended keyword using the shared key calculated by a word-independent smooth projective hash functions (SPHF), and PEKS is used for the extended keyword. In this paper, we first analyze the schemes of Liu et al. and Emura, and point out some issues regarding their construction and security model. In short, in their lattice-based instantiations, the sender and receiver use a lattice-based word independent SPHF to compute the same shared key to authenticate keywords, leading to a super-polynomial modulus $q$; their generic constructions need a trusted setup assumption or the designated-receiver setting; Liu et al. failed to provide convincing evidence that their scheme satisfies their claimed security. Then, we propose two new lattice-based PAEKS schemes with totally different construction methodology from Liu et al. and Emura. Specifically, in our PAEKS schemes, instead of using the shared key calculated by SPHF, the sender and receiver achieve keyword authentication by using their own secret key to sample a set of short vectors related to the keyword. In this way, the modulus $q$ in our schemes could be of polynomial size, which results in much smaller size of the public key, ciphertext and trapdoor. In addition, our schemes need neither a trusted setup assumption nor the designated-receiver setting. Finally, our schemes can be proven secure in stronger security model, and thus provide stronger security guarantee for both ciphertext privacy and trapdoor privacy.
Last updated:  2024-03-18
Quantum Cryptanalysis of 5 rounds Feistel schemes and Benes schemes
Maya Chartouny, Jacques Patarin, and Ambre Toulemonde
In this paper, we provide new quantum cryptanalysis results on 5 rounds (balanced) Feistel schemes and on Benes schemes. More precisely, we give an attack on 5 rounds Feistel schemes in $\Theta(2^{2n/3})$ quantum complexity and an attack on Benes schemes in $\Theta(2^{2n/3})$ quantum complexity, where $n$ is the number of bits of the internal random functions.
Last updated:  2023-03-31
Correlated Pseudorandomness from Expand-Accumulate Codes
Elette Boyle, Geoffroy Couteau, Niv Gilboa, Yuval Ishai, Lisa Kohl, Nicolas Resch, Peter Scholl
A pseudorandom correlation generator (PCG) is a recent tool for securely generating useful sources of correlated randomness, such as random oblivious transfers (OT) and vector oblivious linear evaluations (VOLE), with low communication cost. We introduce a simple new design for PCGs based on so-called expand-accumulate codes, which first apply a sparse random expander graph to replicate each message entry, and then accumulate the entries by computing the sum of each prefix. Our design offers the following advantages compared to state-of-the-art PCG constructions: - Competitive concrete efficiency backed by provable security against relevant classes of attacks; - An offline-online mode that combines near-optimal cache-friendliness with simple parallelization; - Concretely efficient extensions to pseudorandom correlation functions, which enable incremental generation of new correlation instances on demand, and to new kinds of correlated randomness that include circuit-dependent correlations. To further improve the concrete computational cost, we propose a method for speeding up a full-domain evaluation of a puncturable pseudorandom function (PPRF). This is independently motivated by other cryptographic applications of PPRFs.
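The expand-accumulate encoding itself is simple enough to sketch; the following Python illustration uses assumed toy parameters and sparsity, not the paper's concrete choices, and works over GF(2) for simplicity.

```python
# Minimal sketch: encoding with an "expand-accumulate" code over GF(2).
# First a sparse random expansion (each intermediate entry is the XOR of a few
# message entries), then an accumulator (running prefix sums).
import random

def expand_accumulate_encode(msg, out_len, sparsity=3, seed=0):
    rng = random.Random(seed)            # the sparse expander is public and seedable
    # Expansion: out_len sparse XOR combinations of message entries.
    expanded = []
    for _ in range(out_len):
        idxs = rng.sample(range(len(msg)), sparsity)
        expanded.append(sum(msg[i] for i in idxs) % 2)
    # Accumulation: prefix sums over GF(2).
    acc, running = [], 0
    for bit in expanded:
        running ^= bit
        acc.append(running)
    return acc

msg = [random.randint(0, 1) for _ in range(32)]
codeword = expand_accumulate_encode(msg, out_len=128)
print(codeword[:16])
```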
Last updated:  2022-08-05
Dynamic Local Searchable Symmetric Encryption
Brice Minaud, Michael Reichle
In this article, we tackle for the first time the problem of dynamic memory-efficient Searchable Symmetric Encryption (SSE). In the term "memory-efficient" SSE, we encompass both the goals of local SSE, and page-efficient SSE. The centerpiece of our approach is a new connection between those two goals. We introduce a map, called the Generic Local Transform, which takes as input a page-efficient SSE scheme with certain special features, and outputs an SSE scheme with strong locality properties. We obtain several results. 1. First, we build a dynamic SSE scheme with storage efficiency $O(1)$ and page efficiency only $\tilde{O}(\log \log (N/p))$, where $p$ is the page size, called LayeredSSE. The main technique behind LayeredSSE is a new weighted extension of the two-choice allocation process, of independent interest. 2. Second, we introduce the Generic Local Transform, and combine it with LayeredSSE to build a dynamic SSE scheme with storage efficiency $O(1)$, locality $O(1)$, and read efficiency $\tilde{O}(\log\log N)$, under the condition that the longest list is of size $O(N^{1-1/\log \log \lambda})$. This matches, in every respect, the purely static construction of Asharov et al. from STOC 2016: dynamism comes at no extra cost. 3. Finally, by applying the Generic Local Transform to a variant of the Tethys scheme by Bossuat et al. from Crypto 2021, we build an unconditional static SSE with storage efficiency $O(1)$, locality $O(1)$, and read efficiency $O(\log^\varepsilon N)$, for an arbitrarily small constant $\varepsilon > 0$. To our knowledge, this is the construction that comes closest to the lower bound presented by Cash and Tessaro at Eurocrypt 2014.
Last updated:  2023-02-15
Nonce-Misuse Resilience of Romulus-N and GIFT-COFB
Akiko Inoue, Chun Guo, Kazuhiko Minematsu
We analyze nonce-misuse resilience (NMRL) security of Romulus-N and GIFT-COFB, the two finalists of NIST Lightweight Cryptography project for standardizing lightweight authenticated encryption. NMRL, introduced by Ashur et al. at CRYPTO 2017, is a relaxed security notion from a stronger, nonce-misuse resistance notion. We proved that Romulus-N and GIFT-COFB have nonce-misuse resilience. For Romulus-N, we showed the perfect privacy (NMRL-PRIV) and n/2-bit authenticity (NMRL-AUTH) with graceful degradation with respect to nonce repetition. For GIFT-COFB, we showed n/4-bit security for both NMRL-PRIV and NMRL-AUTH notions.
Last updated:  2022-08-05
Structure-Aware Private Set Intersection, With Applications to Fuzzy Matching
Gayathri Garimella, Mike Rosulek, Jaspal Singh
In two-party private set intersection (PSI), Alice holds a set $X$, Bob holds a set $Y$, and they learn (only) the contents of $X \cap Y$. We introduce structure-aware PSI protocols, which take advantage of situations where Alice's set $X$ is publicly known to have a certain structure. The goal of structure-aware PSI is to have communication that scales with the description size of Alice's set, rather than its cardinality. We introduce a new generic paradigm for structure-aware PSI based on function secret-sharing (FSS). In short, if there exists compact FSS for a class of structured sets, then there exists a semi-honest PSI protocol that supports this class of input sets, with communication cost proportional only to the FSS share size. Several prior protocols for efficient (plain) PSI can be viewed as special cases of our new paradigm, with an implicit FSS for unstructured sets. Our PSI protocol can be instantiated from a significantly weaker flavor of FSS, which has not been previously studied. We develop several improved FSS techniques that take advantage of these relaxed requirements, and which are in some cases exponentially better than existing FSS. Finally, we explore in depth a natural application of structure-aware PSI. If Alice's set $X$ is the union of many radius-$\delta$ balls in some metric space, then an intersection between $X$ and $Y$ corresponds to fuzzy PSI, in which the parties learn which of their points are within distance $\delta$. In structure-aware PSI, the communication cost scales with the number of balls in Alice's set, rather than their total volume. Our techniques lead to efficient fuzzy PSI for $\ell_\infty$ and $\ell_1$ metrics (and approximations of the $\ell_2$ metric) in high dimensions. We implemented this fuzzy PSI protocol for 2-dimensional $\ell_\infty$ metrics. For reasonable input sizes, our protocol requires 45--60% less time and 85% less communication than competing approaches that simply reduce the problem to plain PSI.
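A plaintext illustration of the fuzzy-matching functionality (no cryptography; it only shows why the description size of Alice's set is the number of balls rather than their volume):

```python
# Plaintext illustration only: the fuzzy PSI functionality for l_infinity balls.
# Alice's set is the union of radius-delta balls around her centers; it is
# described by the centers, while its cardinality grows with the balls' volume.
def linf_dist(p, q):
    return max(abs(a - b) for a, b in zip(p, q))

def fuzzy_intersection(alice_centers, delta, bob_points):
    """Bob's points that lie within distance delta of some center of Alice's."""
    return [y for y in bob_points
            if any(linf_dist(y, x) <= delta for x in alice_centers)]

alice = [(0, 0), (100, 100)]               # 2 balls describing 2 * (2*10 + 1)^2 = 882 integer points
bob = [(3, -7), (50, 50), (95, 108)]
print(fuzzy_intersection(alice, 10, bob))  # [(3, -7), (95, 108)]
```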
Last updated:  2023-11-02
Orion: Zero Knowledge Proof with Linear Prover Time
Tiancheng Xie, Yupeng Zhang, and Dawn Song
Zero-knowledge proof is a powerful cryptographic primitive that has found various applications in the real world. However, existing schemes with succinct proof size suffer from a high overhead on the proof generation time that is super-linear in the size of the statement represented as an arithmetic circuit, limiting their efficiency and scalability in practice. In this paper, we present Orion, a new zero-knowledge argument system that achieves $O(N)$ prover time of field operations and hash functions and $O(\log^2 N)$ proof size. Orion is concretely efficient and our implementation shows that the prover time is 3.09s and the proof size is 1.5MB for a circuit with $2^{20}$ multiplication gates. The prover time is the fastest among all existing succinct proof systems, and the proof size is an order of magnitude smaller than a recent scheme proposed in Golovnev et al. 2021. In particular, we develop two new techniques leading to the efficiency improvement. (1) We propose a new algorithm to test whether a random bipartite graph is a lossless expander graph or not based on the small set expansion problem. It allows us to sample lossless expanders with an overwhelming probability. The technique improves the efficiency and/or security of all existing zero-knowledge argument schemes with a linear prover time. (2) We develop an efficient proof composition scheme, code switching, to reduce the proof size from square root to polylogarithmic in the size of the computation. The scheme is built on the encoding circuit of a linear code and shows that the witness of a second zero-knowledge argument is the same as the message in the linear code. The proof composition only introduces a small overhead on the prover time.
Last updated:  2023-03-09
Time-Space Tradeoffs for Sponge Hashing: Attacks and Limitations for Short Collisions
Cody Freitag, Ashrujit Ghoshal, Ilan Komargodski
Sponge hashing is a novel alternative to the popular Merkle-Damgård hashing design. The sponge construction has become increasingly popular in various applications, perhaps most notably, it underlies the SHA-3 hashing standard. Sponge hashing is parametrized by two numbers, $r$ and $c$ (bitrate and capacity, respectively), and by a fixed-size permutation on $r+c$ bits. In this work, we study the collision resistance of sponge hashing instantiated with a random permutation by adversaries with arbitrary $S$-bit auxiliary advice input about the random permutation that make $T$ online queries. Recent work by Coretti et al. (CRYPTO '18) showed that such adversaries can find collisions (with respect to a random $c$-bit initialization vector) with advantage $\Theta(ST^2/2^c + T^2/ 2^{r})$. Although the above attack formally breaks collision resistance in some range of parameters, its practical relevance is limited since the resulting collision is very long (on the order of $T$ blocks). Focusing on the task of finding short collisions, we study the complexity of finding a $B$-block collision for a given parameter $B\ge 1$. We give several new attacks and limitations. Most notably, we give a new attack that results in a single-block collision and has advantage $$ \Omega \left(\left(\frac{S^{2}T}{2^{2c}}\right)^{2/3} + \frac{T^2}{2^r}\right). $$ In certain range of parameters (e.g., $ST^2>2^c$), our attack outperforms the previously-known best attack. To the best of our knowledge, this is the first natural application for which sponge hashing is provably less secure than the corresponding instance of Merkle-Damgård hashing. Our attack relies on a novel connection between single-block collision finding in sponge hashing and the well-studied function inversion problem. We also give a general attack that works for any $B\ge 2$ and has advantage $\Omega({STB}/{2^{c}} + {T^2}/{2^{\min\{r,c\}}})$, adapting an idea of Akshima et al. (CRYPTO '20). We complement the above attacks with bounds on the best possible attacks. Specifically, we prove that there is a qualitative jump in the advantage of best possible attacks for finding unbounded-length collisions and those for finding very short collisions. Most notably, we prove (via a highly non-trivial compression argument) that the above attack is optimal for $B=2$ in some range of parameters.
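For reference, a toy Python sketch of the sponge skeleton parametrized by $r$, $c$, and a permutation on $r+c$ bits; the permutation below is a stand-in mixing function chosen only to make the sketch runnable, not a cryptographic one, and the rate/capacity values are illustrative.

```python
# Toy sketch (not SHA-3): the sponge construction with bitrate R, capacity C,
# and a fixed permutation on R + C bits.
R, C = 16, 48                      # illustrative bitrate/capacity; state has R + C bits
MASK = (1 << (R + C)) - 1

def toy_permutation(state):
    # Stand-in permutation: a few rounds of xorshift-style mixing (NOT cryptographic).
    for _ in range(8):
        state ^= (state << 13) & MASK
        state ^= state >> 7
        state ^= (state << 17) & MASK
    return state

def sponge_hash(blocks, out_blocks=1):
    state = 0
    for m in blocks:                       # absorb: XOR each r-bit block into the rate part
        state = toy_permutation(state ^ (m & ((1 << R) - 1)))
    out = []
    for _ in range(out_blocks):            # squeeze: read r bits, permute, repeat
        out.append(state & ((1 << R) - 1))
        state = toy_permutation(state)
    return out

print(sponge_hash([0x1234, 0xBEEF, 0x0042]))
```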
Last updated:  2022-08-05
Multimodal Private Signatures
Khoa Nguyen, Fuchun Guo, Willy Susilo, Guomin Yang
We introduce Multimodal Private Signature (MPS) - an anonymous signature system that offers a novel accountability feature: it allows a designated opening authority to learn some partial information $\mathsf{op}$ about the signer's identity $\mathsf{id}$, and nothing beyond. Such partial information can flexibly be defined as $\mathsf{op} = \mathsf{id}$ (as in group signatures), or as $\mathsf{op} = \mathbf{0}$ (like in ring signatures), or more generally, as $\mathsf{op} = G_j(\mathsf{id})$, where $G_j(\cdot)$ is a certain disclosing function. Importantly, the value of $\mathsf{op}$ is known in advance by the signer, and hence, the latter can decide whether she/he wants to disclose that piece of information. The concept of MPS significantly generalizes the notion of tracing in traditional anonymity-oriented signature primitives, and can enable various new and appealing privacy-preserving applications. We formalize the definitions and security requirements for MPS. We next present a generic construction to demonstrate the feasibility of designing MPS in a modular manner and from commonly used cryptographic building blocks (ordinary signatures, public-key encryption and NIZKs). We also provide an efficient construction in the standard model based on pairings, and a lattice-based construction in the random oracle model.
Last updated:  2022-08-05
zkQMC: Zero-Knowledge Proofs For (Some) Probabilistic Computations Using Quasi-Randomness
Zachary DeStefano, Dani Barrack, Michael Dixon
We initiate research into efficiently embedding probabilistic computations in probabilistic proofs by introducing techniques for capturing Monte Carlo methods and Las Vegas algorithms in zero knowledge and exploring several potential applications of these techniques. We design and demonstrate a technique for proving the integrity of certain randomized computations, such as uncertainty quantification methods, in non-interactive zero knowledge (NIZK) by replacing conventional randomness with low-discrepancy sequences. This technique, known as the Quasi-Monte Carlo (QMC) method, functions as a form of weak algorithmic derandomization to efficiently produce adversarial-resistant worst-case uncertainty bounds for the results of Monte Carlo simulations. The adversarial resistance provided by this approach allows the integrity of results to be verifiable both in interactive and non-interactive zero knowledge without the need for additional statistical or cryptographic assumptions. To test these techniques, we design a custom domain specific language and implement an associated compiler toolchain that builds zkSNARK gadgets for expressing QMC methods. We demonstrate the power of this technique by using this framework to benchmark zkSNARKs for various examples in statistics and physics. Using $N$ samples, our framework produces zkSNARKs for numerical integration problems of dimension $d$ with $O\left(\frac{(\log N)^d}{N}\right)$ worst-case error bounds. Additionally, we prove a new result using discrepancy theory to efficiently and soundly estimate the output of computations with uncertain data with an $O\left(d\frac{\log N}{\sqrt[d]{N}}\right)$ worst-case error bound. Finally, we show how this work can be applied more generally to allow zero-knowledge proofs to capture a subset of decision problems in $\mathsf{BPP}$, $\mathsf{RP}$, and $\mathsf{ZPP}$.
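The QMC idea of replacing random samples with a low-discrepancy sequence can be sketched in a few lines of Python (illustrative only; the paper's framework compiles such computations into zkSNARK circuits, which is not shown here). The example uses a 2-dimensional Halton sequence to estimate an integral deterministically.

```python
# Illustrative sketch: Quasi-Monte Carlo integration with a Halton low-discrepancy
# sequence instead of conventional randomness, the derandomization idea behind zkQMC.
def van_der_corput(n, base):
    x, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        x += rem / denom
    return x

def halton_2d(num_points):
    return [(van_der_corput(i, 2), van_der_corput(i, 3)) for i in range(1, num_points + 1)]

def qmc_estimate(f, num_points):
    pts = halton_2d(num_points)
    return sum(f(x, y) for x, y in pts) / num_points

# Deterministic estimate of the area of the quarter disk (pi / 4 ~ 0.7854).
area = qmc_estimate(lambda x, y: 1.0 if x * x + y * y <= 1.0 else 0.0, 4096)
print(area)
```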
Last updated:  2022-08-04
A Forward-secure Efficient Two-factor Authentication Protocol
Steven J. Murdoch, Aydin Abadi
Two-factor authentication (2FA) schemes that rely on a combination of knowledge factors (e.g., PIN) and device possession have gained popularity. Some of these schemes remain secure even against strong adversaries that (a) observe the traffic between a client and server, and (b) have physical access to the client’s device, or its PIN, or breach the server. However, these solutions have several shortcomings; namely, they (i) require a client to remember multiple secret values to prove its identity, (ii) involve several modular exponentiations, and (iii) are in the non-standard random oracle model. In this work, we present a 2FA protocol that resists such a strong adversary while addressing the above shortcomings. Our protocol requires a client to remember only a single secret value/PIN, does not involve any modular exponentiations, and is in the standard model. It is the first one that offers these features without using trusted chipsets. This protocol also imposes up to 40% lower communication overhead than the state-of-the-art solutions do.
Last updated:  2022-08-10
PUF-COTE: A PUF Construction with Challenge Obfuscation and Throughput Enhancement
Boyapally Harishma, Durba Chatterjee, Kuheli Pratihar, Sayandeep Saha, Debdeep Mukhopadhyay
Physically Unclonable Functions (PUFs) have been a potent choice for enabling low-cost, secure communication. However, state-of-the-art strong PUFs generate only a single-bit response per challenge. We therefore propose PUF-COTE: a high-throughput architecture based on a linear feedback shift register and a strong PUF as the ``base''-PUF. At the same time, we obfuscate the challenges to the ``base''-PUF of the final construction. We experimentally evaluate the quality of the construction by implementing it on Artix 7 FPGAs. We evaluate the statistical quality of the responses (using the NIST SP800-92 test suite and standard PUF metrics: uniformity, uniqueness, reliability, strict avalanche criterion, ML-based modelling), which is a crucial factor for cryptographic applications.
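The abstract does not spell out the internals, so purely as a toy illustration of how an LFSR can expand one seed challenge into a challenge stream for a single-bit-response base PUF, here is a hypothetical software model; the keyed hash merely stands in for the physical function, and nothing here is the PUF-COTE design.

```python
# Toy model: an LFSR expands one seed challenge into a stream of derived
# challenges for a single-bit-response "base PUF" (modelled here by a keyed
# hash, since a real PUF is a physical device). Parameters are illustrative.
import hashlib

def base_puf_bit(device_key: bytes, challenge: int, width: int = 64) -> int:
    """Stand-in for a strong PUF: one response bit per challenge."""
    digest = hashlib.sha256(device_key + challenge.to_bytes(width // 8, "big")).digest()
    return digest[0] & 1

def lfsr_challenges(seed: int, width: int = 64, taps=(63, 62, 60, 59)):
    """Fibonacci LFSR over GF(2); yields successive register states as challenges."""
    mask = (1 << width) - 1
    state = seed & mask
    assert state != 0
    while True:
        yield state
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & mask

device_key = b"\x13" * 16          # models the device-unique physical randomness
gen = lfsr_challenges(seed=0xDEADBEEFCAFEF00D)
response_stream = [base_puf_bit(device_key, next(gen)) for _ in range(128)]
print("".join(map(str, response_stream)))
```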
Last updated:  2022-08-04
Interactive Non-Malleable Codes Against Desynchronizing Attacks in the Multi-Party Setting
Nils Fleischhacker, Suparno Ghoshal, Mark Simkin
Interactive Non-Malleable Codes were introduced by Fleischhacker et al. (TCC 2019) in the two party setting with synchronous tampering. The idea of this type of non-malleable code is that it "encodes" an interactive protocol in such a way that, even if the messages are tampered with according to some class $\mathcal{F}$ of tampering functions, the result of the execution will either be correct, or completely unrelated to the inputs of the participating parties. In the synchronous setting the adversary is able to modify the messages being exchanged but cannot drop messages nor desynchronize the two parties by first running the protocol with the first party and then with the second party. In this work, we define interactive non-malleable codes in the non-synchronous multi-party setting and construct such interactive non-malleable codes for the class $\mathcal{F}^{s}_{\textsf{bounded}}$ of bounded-state tampering functions. The construction is applicable to any multi-party protocol with a fixed message topology.
Last updated:  2022-08-10
Orbis Specification Language: a type theory for zk-SNARK programming
Morgan Thomas
Orbis Specification Language (OSL) is a language for writing statements to be proven by zk-SNARKs. zk-SNARK theories allow for proving wide classes of statements. They usually require the statement to be proven to be expressed as a constraint system, called an arithmetic circuit, which can take various forms depending on the theory. It is difficult to express complex statements in the form of arithmetic circuits. OSL is a language of statements which is similar to type theories used in proof engineering, such as Agda and Coq. OSL has a feature set which is sufficiently limited to make it feasible to compile a statement expressed in OSL to an arithmetic circuit which expresses the same statement. This work builds on Σ¹₁ arithmetization [5] in Halo 2 [3, 4], by defining a frontend for a user-friendly circuit compiler.
Last updated:  2022-08-04
Zswap: zk-SNARK Based Non-Interactive Multi-Asset Swaps
Felix Engelmann, Thomas Kerber, Markulf Kohlweiss, Mikhail Volkhov
Privacy-oriented cryptocurrencies, like Zcash or Monero, provide fair transaction anonymity and confidentiality but lack important features compared to fully public systems, like Ethereum. Specifically, supporting assets of multiple types and providing a mechanism to atomically exchange them, which is critical for e.g. decentralized finance (DeFi), is challenging in the private setting. By combining insights and security properties from Zcash and SwapCT (PETS 21, an atomic swap system for Monero), we present a simple zk-SNARK-based transaction scheme, called Zswap, which is carefully malleable to allow the merging of transactions, while preserving anonymity. Our protocol enables multiple assets and atomic exchanges by making use of sparse homomorphic commitments with aggregated open randomness, together with Zcash-friendly simulation-extractable non-interactive zero-knowledge (NIZK) proofs. This results in a provably secure privacy-preserving transaction protocol, with efficient swaps, and overall performance close to that of existing deployed private cryptocurrencies. It is similar to Zcash Sapling and benefits from existing code bases and implementation expertise.
Last updated:  2022-08-04
Quantum Security of FOX Construction based on Lai-Massey Scheme
Amit Kumar Chauhan, Somitra Sanadhya
The Lai-Massey scheme is an important cryptographic approach to designing block ciphers from secure pseudorandom functions. It has been used in the designs of IDEA and IDEA-NXT. At ASIACRYPT'99, Vaudenay showed that the 3-round and 4-round Lai-Massey schemes are secure against chosen-plaintext attacks (CPAs) and chosen-ciphertext attacks (CCAs), respectively, in the classical setting. At SAC'04, Junod and Vaudenay proposed a new family of block ciphers based on the Lai-Massey scheme, namely FOX. In this work, we analyze the security of the FOX cipher in the quantum setting, where the attacker can make quantum superposed queries to the oracle. Our results are as follows: $-$ The 3-round FOX construction is not a pseudorandom permutation against quantum chosen-plaintext attacks (qCPAs), and the 4-round FOX construction is not a strong pseudorandom permutation against quantum chosen-ciphertext attacks (qCCAs). Essentially, we build quantum distinguishers against the 3-round and 4-round FOX constructions, using Simon's algorithm. $-$ The 4-round FOX construction is a pseudorandom permutation against qCPAs. Concretely, we prove that the 4-round FOX construction is secure up to $O(2^{n/12})$ quantum queries. Our security proofs use the compressed oracle technique introduced by Zhandry. More precisely, we use an alternative formalization of the compressed oracle technique introduced by Hosoyamada and Iwata.
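For readers unfamiliar with the scheme, below is a minimal sketch of an XOR-based Lai-Massey round with a commonly used half-block orthomorphism; the round function F is a toy placeholder, not FOX's.

```python
# Toy Lai-Massey round over bit strings (group operation = XOR).
# One round maps (x, y) to (sigma(x ^ t), y ^ t) with t = F(x ^ y),
# where sigma is an orthomorphism; a common choice splits the branch
# into (l, r) and returns (r, l ^ r). F below is a placeholder, not FOX's.
import hashlib

HALF = 8  # bytes per branch (toy size)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

def F(block: bytes, round_key: bytes) -> bytes:
    return hashlib.sha256(round_key + block).digest()[:HALF]   # toy round function

def sigma(block: bytes) -> bytes:
    l, r = block[:HALF // 2], block[HALF // 2:]
    return r + xor(l, r)

def lai_massey_round(x: bytes, y: bytes, round_key: bytes):
    t = F(xor(x, y), round_key)
    return sigma(xor(x, t)), xor(y, t)

x, y = b"ABCDEFGH", b"01234567"
for i in range(3):
    x, y = lai_massey_round(x, y, bytes([i]) * 4)
print(x.hex(), y.hex())
```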
Last updated:  2022-10-17
Statistical Decoding 2.0: Reducing Decoding to LPN
Kevin Carrier, Thomas Debris-Alazard, Charles Meyer-Hilfiger, Jean-Pierre Tillich
The security of code-based cryptography relies primarily on the hardness of generic decoding with linear codes. The best generic decoding algorithms are all improvements of an old algorithm due to Prange: they are known under the name of information set decoders (ISD). A while ago, a generic decoding algorithm which does not belong to this family was proposed: statistical decoding. It is a randomized algorithm that requires the computation of a large set of parity-checks of moderate weight, and uses some kind of majority voting on these equations to recover the error. This algorithm was long forgotten because even the best variants of it performed poorly when compared to the simplest ISD algorithm. We revisit this old algorithm by using parity-check equations in a more general way. Here the parity-checks are used to get LPN samples whose secret is part of the error, and the LPN noise is related to the weight of the parity-checks we produce. The corresponding LPN problem is then solved by standard Fourier techniques. By properly choosing the method of producing these low-weight equations and the size of the LPN problem, we are able in this way to significantly outperform information set decoding at code rates smaller than $0.3$. This gives, for the first time in $60$ years, a decoding algorithm outside the ISD family that improves on ISD over a significant range of code rates.
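The observation that statistical decoding starts from is easy to check numerically: for any parity-check $h$ of the code and noisy codeword $y = c + e$, one has $\langle h, y\rangle = \langle h, e\rangle$, so each moderate-weight parity-check yields a noisy linear equation in the error. A small toy verification (not the paper's algorithm):

```python
# Sanity check of the statistical-decoding starting point: for any parity-check
# h (a row of H), <h, y> = <h, e> over GF(2), since <h, c> = 0 for codewords c.
# Toy random code; this only shows where the LPN samples come from.
import numpy as np

rng = np.random.default_rng(1)
n, k, t = 64, 32, 4                                   # length, dimension, error weight
G = np.concatenate([np.eye(k, dtype=int), rng.integers(0, 2, (k, n - k))], axis=1)
H = np.concatenate([G[:, k:].T, np.eye(n - k, dtype=int)], axis=1)   # parity-check matrix
assert not (G @ H.T % 2).any()                        # H really is a parity-check matrix

m = rng.integers(0, 2, k)
e = np.zeros(n, dtype=int)
e[rng.choice(n, t, replace=False)] = 1                # sparse error
y = (m @ G + e) % 2                                   # noisy codeword

h = H[0]                                              # one parity-check equation
print((h @ y) % 2 == (h @ e) % 2)                     # True: <h, y> = <h, e>
```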
Last updated:  2022-08-03
PipeMSM: Hardware Acceleration for Multi-Scalar Multiplication
Charles. F. Xavier
Multi-Scalar Multiplication (MSM) is a fundamental computational problem. Interest in this problem was recently prompted by its application to ZK-SNARKs, where it often turns out to be the main computational bottleneck. In this paper we set forth a pipelined design for computing MSM. Our design is based on a novel algorithmic approach and hardware-specific optimizations. At the core, we rely on a modular multiplication technique which we deem to be of independent interest. We implemented and tested our design on an FPGA. We highlight the promise of optimized hardware over state-of-the-art GPU-based MSM solvers in terms of speed and energy expenditure.
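MSM computes $Q = \sum_i k_i \cdot P_i$ for scalars $k_i$ and group elements $P_i$. Most software and hardware MSM engines build on a bucket (Pippenger-style) decomposition, sketched below with plain integer addition standing in for the group so the control flow stays visible; none of this reflects PipeMSM's actual pipeline or modular-multiplication technique.

```python
# Bucket-method (Pippenger-style) multi-scalar multiplication sketch.
# The "group" is plain integer addition so the structure is easy to follow;
# a real implementation uses elliptic-curve point addition instead.
def msm_buckets(scalars, points, window_bits=4):
    num_windows = (max(scalars).bit_length() + window_bits - 1) // window_bits
    total = 0
    for w in reversed(range(num_windows)):            # most significant window first
        for _ in range(window_bits):                   # total <<= window_bits, via doublings
            total = total + total
        buckets = [0] * (1 << window_bits)             # bucket[j] collects points whose digit is j
        for k, p in zip(scalars, points):
            digit = (k >> (w * window_bits)) & ((1 << window_bits) - 1)
            buckets[digit] += p
        running, window_sum = 0, 0                     # sum_j j * bucket[j] via a running suffix sum
        for j in reversed(range(1, 1 << window_bits)):
            running += buckets[j]
            window_sum += running
        total += window_sum
    return total

scalars = [5, 12, 255, 1]
points  = [7, 3, 10, 100]                              # stand-ins for curve points
assert msm_buckets(scalars, points) == sum(k * p for k, p in zip(scalars, points))
print(msm_buckets(scalars, points))
```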
Last updated:  2023-02-20
On the Hardness of the Finite Field Isomorphism Problem
Dipayan Das, Antoine Joux
The finite field isomorphism (FFI) problem was introduced in PKC'18, as an alternative to average-case lattice problems (like LWE, SIS, or NTRU). As an application, the same paper used the FFI problem to construct a fully homomorphic encryption scheme. In this work, we prove that the decision variant of the FFI problem can be solved in polynomial time for any field characteristic $q = \Omega(\beta n^2)$, where $q,\beta,n$ parametrize the FFI problem. We then use our FFI distinguisher to propose polynomial-time attacks on the semantic security of the fully homomorphic encryption scheme. Furthermore, for completeness, we also study the search variant of the FFI problem and show how to state it as a $q$-ary lattice problem, which was previously unknown. As a result, we can solve the search problem for some previously intractable parameters using a simple lattice reduction approach.
Last updated:  2022-08-03
Key-Recovery Attacks on CRAFT and WARP (Full Version)
Ling Sun, Wei Wang, Meiqin Wang
This paper considers the security of CRAFT and WARP. We present a practical key-recovery attack on full-round CRAFT in the related-key setting with only one differential characteristic, and the theoretical time complexity of the attack is $2^{36.09}$ full-round encryptions. The attack is verified in practice. The test result indicates that the theoretical analysis is valid, and it takes about $15.69$ hours to retrieve the key. A full-round key-recovery attack on WARP in the related-key setting is proposed, and the time complexity is $2^{44.58}$ full-round encryptions. The theoretical attack is implemented on a round-reduced version of WARP, which guarantees validity. Besides, we give a 33-round multiple zero-correlation linear attack on WARP, which is the longest attack on the cipher in the single-key attack setting. We note that the attack results in this paper do not threaten the security of CRAFT and WARP as the designers do not claim security under the related-key attack setting.
Last updated:  2023-10-08
Fast Hashing to $\mathbb{G}_2$ on Pairing-friendly Curves with the Lack of Twists
Yu Dai, Fangguo Zhang, and Chang-An Zhao
Pairing-friendly curves with the lack of twists, such as BW13-P310 and BW19-P286, have been receiving attention in pairing-based cryptographic protocols as they provide fast operation in the first pairing subgroup $\mathbb{G}_1$ at the 128-bit security level. However, they also incur a performance penalty for hashing to $\mathbb{G}_2$, since $\mathbb{G}_2$ is totally defined over a full extension field. Furthermore, the previous methods for hashing to $\mathbb{G}_2$ focus on pairing-friendly curves admitting a twist, and thus cannot be employed for our selected curves. In this paper, we propose a general method for hashing to $\mathbb{G}_2$ on curves with the lack of twists. More importantly, we further optimize the general algorithm on curves with non-trivial automorphisms, which is certainly suitable for BW13-P310 and BW19-P286. Theoretical estimations show that the latter would be more efficient than the former. To compare the performance of the two proposed algorithms in detail, a high-speed software implementation over BW13-P310 is also provided on a 64-bit processor. Experimental results show that the general algorithm can be sped up by up to $88\%$ if only the computational cost of cofactor multiplication for $\mathbb{G}_2$ is considered, while the improved method is up to $71\%$ faster than the general one for the whole process.
Last updated:  2022-08-02
Sequential Digital Signatures for Cryptographic Software-Update Authentication
Bertram Poettering, Simon Rastikian
Consider a computer user who needs to update a piece of software installed on their computing device. To do so securely, a commonly accepted ad-hoc method stipulates that the old software version first retrieves the update information from the vendor's public repository, then checks that a cryptographic signature embedded into it verifies with the vendor's public key, and finally replaces itself with the new version. This updating method seems to be robust and lightweight, and to reliably ensure that no malicious third party (e.g., a distribution mirror) can inject harmful code into the update process. Unfortunately, recent prominent news reports (SolarWinds, Stuxnet, TikTok, Zoom, ...) suggest that nation-state adversaries are broadening their efforts related to attacking software supply chains. This calls for a critical re-evaluation of the described signature-based updating method with respect to the real-world security it provides against particularly powerful adversaries. We approach the setting by formalizing a cryptographic primitive that specifically addresses the secure software updating problem. We define strong, rigorous security models that capture forward security (stealing a vendor's key today doesn't allow modifying yesterday's software version) as well as a form of self-enforcement that helps protect vendors against coercion attacks in which they are forced, e.g. by nation-state actors, to misuse or disclose their keys. We note that the common signature-based software authentication method described above meets neither of these goals, and thus represents a suboptimal solution. Hence, after formalizing the syntax and security of the new primitive, we propose novel, efficient, and provably secure constructions.
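The ad-hoc method the abstract describes (fetch the update, verify one vendor signature, install) fits in a few lines; this baseline, which the paper argues is insufficient, is sketched here with Ed25519 from the `cryptography` package purely for concreteness.

```python
# The conventional update check the paper critiques: a single long-lived vendor
# key signs each release; the device verifies before installing. No forward
# security: anyone who steals the key can forge "old" and "new" versions alike.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Vendor side (done once per release).
vendor_sk = ed25519.Ed25519PrivateKey.generate()
vendor_pk = vendor_sk.public_key()
update_blob = b"firmware-v2.4.1 ... contents ..."
signature = vendor_sk.sign(update_blob)

# Device side (baked-in public key, verify-then-install).
def install_if_valid(blob: bytes, sig: bytes) -> bool:
    try:
        vendor_pk.verify(sig, blob)
    except InvalidSignature:
        return False           # reject tampered update
    # ... replace the installed version with `blob` ...
    return True

print(install_if_valid(update_blob, signature))         # True
print(install_if_valid(update_blob + b"!", signature))  # False
```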
Last updated:  2023-02-25
Faster Sounder Succinct Arguments and IOPs
Justin Holmgren, Ron Rothblum
Succinct arguments allow a prover to convince a verifier that a given statement is true, using an extremely short proof. A major bottleneck that has been the focus of a large body of work is reducing the overhead incurred by the prover in order to prove correctness of the computation. By overhead we refer to the cost of proving correctness, divided by the cost of the original computation. In this work, for a large class of Boolean circuits $C=C(x,w)$, we construct succinct arguments for the language $\{ x : \exists w\; C(x,w)=1\}$, with $2^{-\lambda}$ soundness error, and with prover overhead $\mathsf{polylog}(\lambda)$. This result relies on the existence of (sub-exponentially secure) linear-size computable collision-resistant hash functions. The class of Boolean circuits that we can handle includes circuits with a repeated sub-structure, which arise in natural applications such as batch computation/verification, hashing, and related blockchain applications. The succinct argument is obtained by constructing \emph{interactive oracle proofs} for the same class of languages, with $\mathsf{polylog}(\lambda)$ prover overhead, and soundness error $2^{-\lambda}$. Prior to our work, the best IOPs for Boolean circuits either had prover overhead of $\mathsf{polylog}(|C|)$ based on efficient PCPs due to Ben-Sasson et al. (STOC, 2013) or $\mathsf{poly}(\lambda)$ due to Rothblum and Ron-Zewi (STOC, 2022).
Last updated:  2023-07-12
A New Look at Blockchain Leader Election: Simple, Efficient, Sustainable and Post-Quantum
Muhammed F. Esgin, Oguzhan Ersoy, Veronika Kuchta, Julian Loss, Amin Sakzad, Ron Steinfeld, Xiangwen Yang, Raymond K. Zhao
In this work, we study the blockchain leader election problem. The purpose of such protocols is to elect a leader who decides on the next block to be appended to the blockchain, for each block proposal round. Solutions to this problem are vital for the security of blockchain systems. We introduce an efficient blockchain leader election method with security based solely on standard assumptions for cryptographic hash functions (rather than public-key cryptographic assumptions) and that does not involve a racing condition as in Proof-of-Work based approaches. Thanks to the former feature, our solution provides the highest confidence in security, even in the post-quantum era. A particularly scalable application of our solution is in the Proof-of-Stake setting, and we investigate our solution in the Algorand blockchain system. We believe our leader election approach can be easily adapted to a range of other blockchain settings. At the core of Algorand's leader election is a verifiable random function (VRF). Our approach is based on introducing a simpler primitive which still suffices for the blockchain leader election problem. In particular, we analyze the concrete requirements in an Algorand-like blockchain setting to accomplish leader election, which leads to the introduction of indexed VRF (iVRF). An iVRF satisfies modified uniqueness and pseudorandomness properties (versus a full-fledged VRF) that enable an efficient instantiation based on a hash function without requiring any complicated zero-knowledge proofs of correct PRF evaluation. We further extend iVRF to an authenticated iVRF with forward-security, which meets all the requirements to establish an Algorand-like consensus. Our solution is simple, flexible and incurs only a 32-byte additional overhead when combined with the current best solution to constructing a forward-secure signature (in the post-quantum setting). We implemented our (authenticated) iVRF proposal in C language on a standard computer and show that it significantly outperforms other quantum-safe VRF proposals in almost all metrics. Particularly, iVRF evaluation and verification can be executed in 0.02 ms, which is even faster than ECVRF used in Algorand.
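To convey why binding the evaluation to a publicly known index can remove the need for zero-knowledge proofs of correct PRF evaluation, here is a toy, hash-based, index-bound VRF-style sketch; it is emphatically not the paper's iVRF (no forward security, naive key size), just an illustration of the flavour.

```python
# Toy "indexed VRF"-flavoured construction: the key pair pre-commits to one value
# per round index; evaluating at index i just reveals the committed preimage.
# Illustration only of why a fixed index simplifies things; NOT the paper's iVRF.
import hashlib, os

H = lambda *parts: hashlib.sha256(b"".join(parts)).digest()

def keygen(num_rounds: int):
    sk = [os.urandom(32) for _ in range(num_rounds)]        # one secret per index
    pk = [H(b"commit", s) for s in sk]                       # public commitments
    return sk, pk

def evaluate(sk, i: int):
    proof = sk[i]
    output = H(b"output", proof, i.to_bytes(4, "big"))       # pseudorandom round value
    return output, proof

def verify(pk, i: int, output: bytes, proof: bytes) -> bool:
    return H(b"commit", proof) == pk[i] and output == H(b"output", proof, i.to_bytes(4, "big"))

sk, pk = keygen(num_rounds=10)
out, prf = evaluate(sk, i=3)
print(verify(pk, 3, out, prf))   # True
print(verify(pk, 4, out, prf))   # False: the value is bound to its index
```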
Last updated:  2022-10-03
An $\mathcal{O}(n)$ Algorithm for Coefficient Grouping
Fukang Liu
In this note, we study a specific optimization problem arising in the recently proposed coefficient grouping technique, which is used for the algebraic degree evaluation. Specifically, we show that there exists an efficient algorithm running in time $\mathcal{O}(n)$ to solve this basic optimization problem relevant to upper bound the algebraic degree. Moreover, the main technique in this efficient algorithm can also be used to further improve the performance of the off-the-shelf solvers to solve other optimization problems in the coefficient grouping technique. We expect that some results in this note can inspire more studies on the coefficient grouping technique.
Last updated:  2023-02-21
Coefficient Grouping: Breaking Chaghri and More
Fukang Liu, Ravi Anand, Libo Wang, Willi Meier, Takanori Isobe
We propose an efficient technique called coefficient grouping to evaluate the algebraic degree of the FHE-friendly cipher Chaghri, which has been accepted for ACM CCS 2022. It is found that the algebraic degree increases linearly rather than exponentially. As a consequence, we can construct a 13-round distinguisher with time and data complexity of $2^{63}$ and mount a 13.5-round key-recovery attack. In particular, a higher-order differential attack on 8 rounds of Chaghri can be achieved with time and data complexity of $2^{38}$. Hence, it indicates that the full 8 rounds are far from being secure. Furthermore, we also demonstrate the application of our coefficient grouping technique to the design of secure cryptographic components. As a result, a countermeasure is found for Chaghri and it has little overhead compared with the original design. Since more and more symmetric primitives defined over a large finite field are emerging, we believe our new technique can have more applications in the future research.
Last updated:  2022-08-02
Efficient Computation of (2^n,2^n)-Isogenies
Sabrina Kunzweiler
Elliptic curves are abelian varieties of dimension one; their two-dimensional analogues are abelian surfaces. In this work we present an algorithm to compute $(2^n,2^n)$-isogenies of abelian surfaces defined over finite fields. These isogenies are the natural generalization of $2^n$-isogenies of elliptic curves. Our algorithm is designed to be used in higher-dimensional variants of isogeny-based cryptographic protocols such as G2SIDH, which is a genus-$2$ version of the Supersingular Isogeny Diffie-Hellman (SIDH) key exchange. We analyze the performance of our algorithm in cryptographically relevant settings and show that it significantly improves upon previous implementations. Several results deduced in the development of our algorithm are also interesting beyond this application. For instance, we derive a formula for the evaluation of $(2,2)$-isogenies. Given an element in Mumford coordinates, this formula outputs the (unreduced) Mumford coordinates of its image under the $(2,2)$-isogeny. Furthermore, we study $4$-torsion points on Jacobians of hyperelliptic curves and explain how to extract square roots of coefficients of $2$-torsion points from these points.
Last updated:  2022-08-03
Quantum-Resistant Password-Based Threshold Single-Sign-On Authentication with Updatable Server Private Key
Jingwei Jiang, Ding Wang, Guoyin Zhang, Zhiyuan Chen
Passwords are the most prevalent authentication mechanism and proliferate on nearly every new web service. As users are overloaded with the task of managing dozens or even hundreds of passwords, password-based single-sign-on (SSO) schemes have been proposed. In password-based SSO schemes, the authentication server needs to maintain a sensitive password file, which is an attractive target for compromise and poses a single point of failure. Hence, the notion of a password-based threshold authentication (PTA) system has been proposed. However, a static PTA system is threatened by perpetual leakage (e.g., the adversary perpetually compromises servers). In addition, most of the existing PTA schemes are built on the intractability of conventional hard problems and become insecure in the quantum era. In this work, we first propose a threshold oblivious pseudorandom function (TOPRF) to harden the password so that PTA schemes can resist offline password guessing attacks. Then, we employ a threshold homomorphic aggregate signature (THAS) over lattices to construct the first quantum-resistant password-based threshold single-sign-on authentication scheme with an updatable server private key. Our scheme resolves various issues arising from user corruption and server compromise, and it is formally proved secure against quantum adversaries. Comparison results show that our scheme is superior to its counterparts.
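As background on how an OPRF hardens a password against offline guessing (leaving aside the threshold and lattice-based aspects of this paper), the classic 2HashDH blinding flow can be sketched as follows; the toy group parameters are for illustration only and far too small for any real use.

```python
# Background sketch: hardening a password with a Diffie-Hellman OPRF (2HashDH).
# The client learns a key derived from H1(pw)^k without revealing pw; the server
# applies its key k blindly. Tiny toy group (p = 2q + 1), illustration only;
# the paper's actual construction is a threshold, lattice-based TOPRF.
import hashlib, secrets

p, q, g = 23, 11, 4          # g generates the order-q subgroup mod p (toy sizes!)

def hash_to_group(pw: bytes) -> int:
    e = 1 + int.from_bytes(hashlib.sha256(pw).digest(), "big") % (q - 1)
    return pow(g, e, p)       # toy hash-to-group: never the identity element

# --- client: blind the password element ---
pw = b"correct horse"
r = secrets.randbelow(q - 1) + 1
blinded = pow(hash_to_group(pw), r, p)

# --- server: apply its OPRF key k obliviously ---
k = secrets.randbelow(q - 1) + 1
evaluated = pow(blinded, k, p)

# --- client: unblind and derive the hardened secret ---
unblinded = pow(evaluated, pow(r, -1, q), p)
hardened = hashlib.sha256(pw + unblinded.to_bytes(1, "big")).hexdigest()

assert unblinded == pow(hash_to_group(pw), k, p)   # same value a direct evaluation would give
print(hardened)
```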
Last updated:  2022-08-02
Modeling and Simulating the Sample Complexity of solving LWE using BKW-Style Algorithms
Qian Guo, Erik Mårtensson, Paul Stankovski Wagner
The Learning with Errors (LWE) problem receives much attention in cryptography, mainly due to its fundamental significance in post-quantum cryptography. Among its solving algorithms, the Blum-Kalai-Wasserman (BKW) algorithm, originally proposed for solving the Learning Parity with Noise (LPN) problem, performs well, especially for certain parameter settings with cryptographic importance. The BKW algorithm consists of two phases, the reduction phase and the solving phase. In this work, we study the performance of distinguishers used in the solving phase. We show that the Fast Fourier Transform (FFT) distinguisher from Eurocrypt’15 has the same sample complexity as the optimal distinguisher, when making the same number of hypotheses. We also show via simulation that it performs much better than previous theory predicts and develop a sample complexity model that matches the simulations better. We also introduce an improved, pruned version of the FFT distinguisher. Finally, we indicate, via extensive experiments, that the sample dependency due to both LF2 and sample amplification is limited.
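The solving-phase distinguisher works on LPN samples $(\mathbf{a}, b = \langle \mathbf{a}, \mathbf{s}\rangle \oplus e)$ whose vectors are supported on a small number of positions: tabulate $(-1)^b$ indexed by $\mathbf{a}$, and a fast Walsh-Hadamard transform then scores every candidate sub-secret at once. A toy version (not the paper's simulation code, with arbitrary parameters):

```python
# Toy FFT/WHT distinguisher for the solving phase of BKW-style algorithms:
# given LPN samples (a, b) with a restricted to k positions, accumulate
# (-1)^b into a 2^k table indexed by a, apply a Walsh-Hadamard transform,
# and read off the candidate secret with the largest absolute score.
import numpy as np

rng = np.random.default_rng(7)
k, num_samples, noise_rate = 10, 200_000, 0.40

secret = rng.integers(0, 2, k)
A = rng.integers(0, 2, (num_samples, k))
b = (A @ secret + (rng.random(num_samples) < noise_rate)) % 2

table = np.zeros(1 << k)
idx = A @ (1 << np.arange(k))                 # pack each a into an integer index
np.add.at(table, idx, 1 - 2 * b)              # accumulate (-1)^b per value of a

def wht(v):                                   # fast Walsh-Hadamard transform
    v = v.copy()
    h = 1
    while h < len(v):
        for start in range(0, len(v), h * 2):
            x, y = v[start:start + h].copy(), v[start + h:start + 2 * h].copy()
            v[start:start + h], v[start + h:start + 2 * h] = x + y, x - y
        h *= 2
    return v

scores = wht(table)
guess = int(np.argmax(np.abs(scores)))
recovered = np.array([(guess >> i) & 1 for i in range(k)])
print("recovered == secret:", bool((recovered == secret).all()))
```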
Last updated:  2022-08-02
A Signature-Based Gröbner Basis Algorithm with Tail-Reduced Reductors (M5GB)
Manuel Hauke, Lukas Lamster, Reinhard Lüftenegger, Christian Rechberger
Gröbner bases are an important tool in computational algebra and, especially in cryptography, often serve as a boilerplate for solving systems of polynomial equations. Research regarding (efficient) algorithms for computing Gröbner bases spans a large body of dedicated work that stretches over the last six decades. The pioneering work of Bruno Buchberger in 1965 can be considered as the blueprint for all subsequent Gröbner basis algorithms to date. Among the most efficient algorithms in this line of work are signature-based Gröbner basis algorithms, with the first of its kind published in the late 1990s by Jean-Charles Faugère under the name $\texttt{F5}$. In addition to signature-based approaches, Rusydi Makarim and Marc Stevens investigated a different direction to efficiently compute Gröbner bases, which they published in 2017 with their algorithm $\texttt{M4GB}$. The ideas behind $\texttt{M4GB}$ and signature-based approaches are conceptually orthogonal to each other because each approach addresses a different source of inefficiency in Buchberger's initial algorithm by different means. We amalgamate those orthogonal ideas and devise a new Gröbner basis algorithm, called $\texttt{M5GB}$, that combines the concepts of both worlds. In that capacity, $\texttt{M5GB}$ merges strong signature-criteria to eliminate redundant S-pairs with concepts for fast polynomial reductions borrowed from $\texttt{M4GB}$. We provide proofs of termination and correctness and a proof-of-concept implementation in C++ by means of the Mathic library. The comparison with a state-of-the-art signature-based Gröbner basis algorithm (implemented via the same library) validates our expectations of an overall faster runtime for quadratic overdefined polynomial systems that have been used in comparisons before in the literature and are also part of cryptanalytic challenges.
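As a reminder of the role Gröbner bases play as a "boilerplate" for solving polynomial systems, the toy example below computes a lex-order basis with SymPy, which triangularizes the system so variables can be eliminated one by one; it shows only the object being computed, not the M5GB algorithm.

```python
# What a Groebner basis buys you in algebraic cryptanalysis: a lex-order basis
# triangularizes the system, so the last polynomial involves a single variable
# and back-substitution recovers the rest. Textbook illustration only; this is
# not an implementation of M5GB.
from sympy import symbols, groebner

x, y = symbols("x y")
system = [x**2 + y**2 - 4, x*y - 1]          # toy polynomial system
basis = groebner(system, x, y, order="lex")
for g in basis:
    print(g)
# The final element is univariate in y (here y**4 - 4*y**2 + 1).
```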
Last updated:  2022-08-02
Quantum Attacks on Lai-Massey Structure
Shuping Mao, Tingting Guo, Peng Wang, Lei Hu
Aaram Yun et al. considered the Lai-Massey structure to have the same security as the Feistel structure. However, Luo et al. showed that the 3-round Lai-Massey structure can resist quantum attacks based on Simon's algorithm, which is different from the Feistel structure. We give quantum attacks against a typical Lai-Massey structure. The result shows that there exists a quantum CPA distinguisher against the 3-round Lai-Massey structure and a quantum CCA distinguisher against the 4-round Lai-Massey structure, which is the same as for the Feistel structure. We extend the attack on the Lai-Massey structure to the quasi-Feistel structure. We show that if the combiner of a quasi-Feistel structure is linear, there exists a quantum CPA distinguisher against the 3-round balanced quasi-Feistel structure and a quantum CCA distinguisher against the 4-round balanced quasi-Feistel structure.
Last updated:  2022-08-02
Privacy when Everyone is Watching: An SOK on Anonymity on the Blockchain
Roy Rinberg, Nilaksh Agarwal
Blockchain technologies rely on a public ledger, where typically all transactions are pseudonymous and fully traceable. This poses a major obstacle to the large-scale adoption of cryptocurrencies, the primary application of blockchain technologies, as most individuals do not want to disclose their finances to the public. Motivated by the explosive growth in private-blockchain research, this Systematization of Knowledge (SOK) explores the ways to obtain privacy in this public ledger ecosystem. We first look at the technology underlying all zero-knowledge applications on the blockchain: zk-SNARKs (zero-knowledge Succinct Non-interactive ARguments of Knowledge). We then explore the two largest privacy coins as of today, Zcash and Monero, as well as TornadoCash, a popular Ethereum tumbler solution. Finally, we look at the opposing incentives behind privacy solutions and de-anonymization techniques, and the future of privacy on the blockchain.
Last updated:  2022-08-01
ToSHI - Towards Secure Heterogeneous Integration: Security Risks, Threat Assessment, and Assurance
Nidish Vashistha, Md Latifur Rahman, Md Saad Ul Haque, Azim Uddin, Md Sami Ul Islam Sami, Amit Mazumder Shuo, Paul Calzada, Farimah Farahmandi, Navid Asadizanjani, Fahim Rahman, Mark Tehranipoor
The semiconductor industry is entering a new age in which device scaling and cost reduction will no longer follow the decades-long pattern. Packing more transistors on a monolithic IC at each node becomes more difficult and expensive. Companies in the semiconductor industry are increasingly seeking technological solutions to close the gap and enhance cost-performance while providing more functionality through integration. Putting all of the operations on a single chip (known as a system on a chip, or SoC) presents several issues, including increased prices and greater design complexity. Heterogeneous integration (HI), which uses advanced packaging technology to merge components that might be designed and manufactured independently using the best process technology, is an attractive alternative. However, although the industry is motivated to move towards HI, many design and security challenges must be addressed. This paper presents a three-tier security approach for secure heterogeneous integration by investigating supply chain security risks, threats, and vulnerabilities at the chiplet, interposer, and system-in-package levels. Furthermore, we propose possible trust validation methods and attack mitigations for every level of heterogeneous integration. Finally, we share our vision of a roadmap toward developing security solutions for secure heterogeneous integration.
Last updated:  2022-08-01
Do Not Bound to a Single Position: Near-Optimal Multi-Positional Mismatch Attacks Against Kyber and Saber
Qian Guo, Erik Mårtensson
Misuse resilience is an important security criterion in the evaluation of the NIST post-quantum cryptography standardization process. In this paper, we propose new key mismatch attacks against Kyber and Saber, NIST's selected scheme for encryption and one of the finalists in the third round of the NIST competition, respectively. Our novel idea is to recover partial information about multiple secret entries in each mismatch oracle call. These multi-positional attacks greatly reduce the expected number of oracle calls needed to fully recover the secret key. They also have significance in side-channel analysis. From the perspective of lower bounds, our new attacks falsify the Huffman bounds proposed in [Qin et al. ASIACRYPT 2021], where a one-positional mismatch adversary is assumed. Our new attacks can be bounded by the Shannon lower bounds, i.e., the entropy of the distribution generating each secret coefficient times the number of secret entries. We call the new attacks "near-optimal" since their query complexities are close to the Shannon lower bounds.
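The Shannon lower bound referred to here is simply (number of secret coefficients) × (entropy of the per-coefficient distribution), since each binary mismatch-oracle call reveals at most one bit. A quick illustration for a centered binomial distribution; the values of $\eta$ and the coefficient count below are illustrative, not tied to a specific Kyber or Saber parameter set.

```python
# Shannon lower bound on mismatch-oracle queries: n_coeffs * H(per-coefficient
# distribution), one bit per binary oracle call. Illustrated for a centered
# binomial distribution CBD(eta); eta and n_coeffs are illustrative only.
from math import comb, log2

def cbd_entropy(eta: int) -> float:
    """Entropy (bits) of a centered binomial coefficient: sum of eta bits minus sum of eta bits."""
    probs = [comb(2 * eta, eta + k) / 2 ** (2 * eta) for k in range(-eta, eta + 1)]
    return -sum(p * log2(p) for p in probs if p > 0)

eta, n_coeffs = 2, 512
per_coeff = cbd_entropy(eta)
print(f"H(CBD({eta})) = {per_coeff:.3f} bits per coefficient")
print(f"Shannon lower bound ≈ {n_coeffs * per_coeff:.0f} binary oracle calls for {n_coeffs} coefficients")
```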
Last updated:  2022-07-31
Random-Index Oblivious RAM
Shai Halevi, Eyal Kushilevitz
We study the notion of Random-index ORAM (RORAM), which is a weak form of ORAM where the client is limited to asking for (and possibly modifying) random elements of the $N$-item memory, rather than specific ones. That is, whenever the client issues a request, it gets in return a pair $(r,x_r)$ where $r\in_R[N]$ is a random index and $x_r$ is the content of the $r$-th memory item. Then, the client can also modify the content to some new value $x'_r$. We first argue that the limited functionality of RORAM still suffices for certain applications. These include various applications of sampling (or sub-sampling), and in particular the very-large-scale MPC application in the setting of Benhamouda et al. (TCC 2020). Clearly, RORAM can be implemented using any ORAM scheme (by the client selecting the random $r$'s itself), but the hope is that the limited functionality of RORAM can make it faster and easier to implement than ORAM. Indeed, our main contributions are several RORAM schemes (both of the hierarchical type and the tree type) of lighter complexity than that of ORAM.
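Functionally, a RORAM exposes only a "fetch a random slot" operation. A trivial, non-oblivious reference implementation of that interface makes the contract concrete; the point of the paper's schemes, of course, is to realize the same interface while hiding the server-side access pattern.

```python
# Functional (non-oblivious) reference for the RORAM interface: each request
# returns a uniformly random index r together with its contents x_r, which the
# client may then overwrite. Real RORAM schemes realize this while hiding which
# physical locations are touched; this sketch hides nothing.
import secrets

class PlainRORAM:
    def __init__(self, items):
        self.mem = list(items)

    def request(self):
        r = secrets.randbelow(len(self.mem))   # the client cannot choose r
        return r, self.mem[r]

    def write_back(self, r, new_value):
        self.mem[r] = new_value

roram = PlainRORAM(items=[f"item-{i}" for i in range(8)])
r, x_r = roram.request()
roram.write_back(r, x_r + "-touched")
print(r, roram.mem[r])
```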